Syniti Data Migration Tutorial Jobs in USA

11,358 positions found — Page 2

Distribution and Marketing Data Product Manager
Salary not disclosed
Atlanta, GA 2 days ago
General

Job Title: Distribution and Marketing Data Product Manager

Division: Beazley Shared Services - Data Management

Location: Multiple Locations, US

Hybrid Role

Reports To: Head of Data Products

Key Relationships: Chief Data Office, Data Leadership Team, Data Owners, Distribution and Marketing, CRM, Data Governance and Quality, Data Stewards, Data Architects, Delivery Team members, Technology Team, Finance, Underwriting, Operations and other Business Stakeholders

Beazley:

Beazley is a global specialist insurance company with over 30 years' experience helping people, communities, and businesses to manage risk all around the world. Our products are wide ranging, from cyber & tech to marine, healthcare, financial institutions, and contingency, covering risks like the weather, film production or protection from deadly weapons.

We are a flexible and innovative employer offering a friendly, collaborative, diverse and inclusive work environment. We encourage applications from all backgrounds. Collaboration in office spaces is important and we use a hybrid approach with a minimum of 2 days in the office per week.

We have a wonderful mix of cultures, experiences, and backgrounds at Beazley with over 1500 of us working around the world. Employees' diversity, experience and passion allow us to keep innovating and moving forward, delivering the best. We hire people with wide perspectives, and we have set bold diversity targets as we work towards excellence.

Data @ Beazley:

Our Data team supports Beazley's vision by...

* Being bold through pioneering & championing an exciting vision of how people interact with data

* Facilitating innovation by leading the pace of change in data & analytics, and facilitating the latest capabilities and innovative technologies

* Doing the right thing by providing a controlled working data environment that allows all business domains to thrive independently

* Being the single source of truth for enterprise-wide reporting metrics and KPIs

Our Data team is located at multiple offices across the UK, Europe, and the US. The specified home office location options provide the best balance for being co-located with key Data Office colleagues and business stakeholders.

The Role:

Data is one of Beazley's greatest assets and this role is critical to supporting our Distribution and Marketing insights, which include Customer, Broker and Marketing data. We're seeking a strategic and technically savvy Data Product Manager to lead the strategy, development and evolution of data products and insights that empower our distribution and marketing teams. This role is critical to aligning our data, unlocking insights, and informing growth opportunities across our specialty portfolio. In this role, you will also work to mature data literacy and capabilities as Beazley undertakes a significant investment in modernization, enabling you to embed a culture of data excellence and innovation in our delivery.

Key Responsibilities:

  • Partner with the global Distribution and Marketing team to understand, prioritize and develop data products and insights that support their business strategy.

  • Build and own a roadmap to provide regular updates on delivery commitments for data products, insights, enhancements and queries.

  • Manage stakeholder relationships to support the growth strategy for Beazley customers, brokers, teams and products.

  • Produce insights and key data trends that highlight business performance, RoI, efficiencies and game-changing growth opportunities.

  • Inspire the adoption and use of insights to drive decisions in investment and operations that improve efficiency and drive growth by leading demonstrations and hands on training sessions.

  • Lead a team of Product Owners, Product Analysts, Business Analysts and a development team to deliver and maintain data products and insights; maintaining a backlog of work within Jira.

  • Represent the business in data governance discussions, escalating issues as appropriate.

  • Ensure that data product development considers policy, methodology and standards, and ensure these are adhered to during product development.

  • Evaluate the performance of your data product portfolio against KPIs defined by the business and provide feedback on the value delivered.

  • Proactively anticipate business needs and look for opportunities to bring innovation or new approaches into the user design, experience, product development and insights.

  • Relentlessly focus on the Distribution and Marketing team as a customer, delivering high quality data and insights that are clear and inspire action.

  • Partner with the Data Governance Group and CRM solution team (Customer Relationship Management) to drive improvements in our Customer and Broker data quality through MDM and other tools.

  • Provide leadership, direction, development and support to direct reports (including off-shore resources).

Essential Criteria:

  • Bachelor's degree in Business, Marketing, Data Science, Computer Science, Economics, Statistics or related field; Master's degree preferred

  • Proven experience in data product management, marketing analytics or distribution strategy, preferably in insurance or financial services

  • Experience working with data, building data models, and sharing insights

Skills and Abilities:

  • Strategic and curious with the ability to design and develop data and insights that support our Distribution and Marketing team's goals, planning, performance and incentives that drive growth

  • Understand the specialty insurance market, customer segmentation and distribution channels, with experience in North America, Lloyd's, Retail and Wholesale markets preferred

  • Ability to lead workshops that help your stakeholders identify data needs and articulate their desired user experience, with the ability to build dashboards preferred

  • Strong organization and communication skills with the ability to direct work, document requirements and present demos

  • Advanced technical skills with the ability to dive into the data, identify anomalies, and provide high quality, trusted data

  • Understanding of Specialty Insurance principles and key drivers to create opportunities, loyalty and growth

Knowledge and Experience:

  • Experience in Data Products, Data Analytics, Data Science, Statistics, Economics or related fields in Insurance, Financial or sales organizations preferred

  • Strong understanding of MDM and CRM systems and their use with Customer and Broker data

  • Proficiency in data visualization (Power BI), analytics platform (Snowflake), dashboard design and data storytelling

  • Experience working with insurance data, and in particular a strong understanding of pipeline intelligence for sales growth/targeting and performance

  • Ability to use predictive modeling to drive an understanding of performance, customer behavior, and prospective renewals/growth to help the Distribution Sales team focus on the best opportunities

  • Experience managing relationships and teams of stakeholders, business analysts, data analysts, data architects, data modelers, data engineers and testers using agile processes

  • Skills in data engineering technologies like Kafka, Snowflake / Snowpark, Databricks, Jira and Agile principles

  • Experience in managing and manipulating large internal and external datasets

  • Knowledge of relational and dimensional database structures, theories, principles, and practices

  • Driven and proven team player with ability to work with all levels in a highly intellectual, collaborative, and fast paced environment

  • Excellent communication skills, with the ability to tailor them appropriately for different audiences, technical backgrounds, and seniority
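To make the predictive-modeling item above concrete, here is a toy renewal-propensity sketch. Everything in it is hypothetical and invented for illustration (the account fields, the weights, the clamping ranges); a production model would be trained on historical renewal outcomes rather than use hand-set weights like these.

```python
from dataclasses import dataclass

@dataclass
class BrokerAccount:
    name: str
    yoy_premium_growth: float      # e.g. 0.10 = +10% year over year
    submissions_last_quarter: int
    days_since_last_contact: int

def renewal_propensity(acct: BrokerAccount) -> float:
    """Toy score in [0, 1]: growth and recent activity raise it,
    long silences lower it. Weights are illustrative, not calibrated."""
    score = 0.5
    score += 0.3 * max(min(acct.yoy_premium_growth, 1.0), -1.0)
    score += 0.02 * min(acct.submissions_last_quarter, 10)
    score -= 0.002 * min(acct.days_since_last_contact, 180)
    return max(0.0, min(1.0, score))

accounts = [
    BrokerAccount("Broker A", 0.15, 8, 12),     # growing, active
    BrokerAccount("Broker B", -0.20, 1, 150),   # shrinking, quiet
]
# Rank accounts so the Distribution Sales team can focus on the best bets
ranked = sorted(accounts, key=renewal_propensity, reverse=True)
```

The point of a score like this is the ranking it produces, not the absolute number: it lets the sales team sort a long broker list by likelihood of renewal and spend their time at the top of it.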

Who We Are:

Beazley is a specialist insurance company with over 30 years' experience helping people, communities and businesses to manage risk all around the world. Our mission is to inspire our clients and people with the confidence and freedom to explore, create and build - to enable businesses to thrive. Our clients want to live and work freely and fully, knowing they are benefitting from the most advanced thinking in the insurance market. Our goal is to become the highest performing sustainable specialist insurer.

Our products are wide ranging, from cyber & tech insurance to marine, healthcare, financial institutions and contingency; covering risks such as the weather, film production or protection from deadly weapons.

Our Culture

We have a wonderful mix of cultures, experiences, and backgrounds at Beazley with over 2,000 of us working around the world. Employees' diversity, experience and passion allow us to keep innovating and moving forward, delivering the best. We are proud of our family-feel culture at Beazley that empowers our staff to work from when and where they want, in an adult environment that is big on collaboration, diversity of thought and personal accountability. Our three core values inspire the way we work and how we treat our people and customers.

  • Be bold
  • Strive for better
  • Do the right thing

Upholding these values every day has enabled us to become an innovative and responsive organization in touch with the changing world around us - our ambitious inclusion & diversity and sustainability targets are testament to this.

We are a flexible and innovative employer offering a friendly, collaborative, and inclusive working environment. We actively encourage and expect applications from all backgrounds. Our commitment to fostering a supportive and dynamic workplace ensures that every employee can thrive and contribute to our collective success.

Explore a variety of networks to assist with professional and/or personal development. Our Employee Networks include:

  • Beazley RACE - Including, understanding and celebrating People of Colour
  • Beazley SHE - Successful, High potential, Empowered women in insurance
  • Beazley Proud - Our global LGBTQ+ community
  • Beazley Wellbeing - Supporting employees with their mental wellbeing
  • Beazley Families - Supporting families and parents-to-be

We encourage internal career progression at Beazley, giving you all the tools you need to drive your own career here, such as:

  • Internal Pathways (helping you grow into an underwriting role)
  • iLearn (our own learning & development platform)
  • LinkedIn Learning
  • Mentorship program
  • External qualification sponsorship
  • Continuing education and tuition reimbursement
  • Secondment assignments

The Rewards

  • The opportunity to connect and build long-lasting professional relationships while advancing your career with a growing, dynamic organization
  • Attractive base compensation and discretionary performance related bonus
  • Competitively priced medical, dental and vision insurance
  • Company paid life, and short- and long-term disability insurance
  • 401(k) plan with 5% company match and immediate vesting
  • 22 days PTO (prorated for 1st calendar year of employment), 11 paid holidays per year, with the ability to flex the religious bank holidays to suit your religious beliefs
  • Up to $700 reimbursement for home office setup
  • Free in-office lunch, travel reimbursement for travel to office, and monthly lifestyle allowance
  • Up to 26 weeks of fully paid parental leave
  • Up to 2.5 days paid annually for volunteering at a charity of your choice
  • Flexible working policy, trusting our employees to do what works best for them and their teams

Salary for this role will be tailored to the successful individual's location and experience. The expected compensation range for this position is $130,000-$150,000 per year plus discretionary annual bonus.

Don't meet all the requirements? At Beazley we're committed to building a diverse, inclusive, and authentic workplace. If you're excited about this role but your experience doesn't perfectly align with every requirement and qualification in the job specification, we encourage you to apply anyway. You might just be the right candidate for this, or one of our other roles.

We are an equal opportunities employer and as such, we will make reasonable adjustments to our selection process for candidates that indicate that, owing to disability, our arrangements might otherwise disadvantage them. If you have a disability, including dyslexia or other non-visible ones, which you believe may affect your performance in selection, please advise us in good time and we'll make reasonable adjustments to our processes for you.

Not Specified
Institutional Research Data Analyst
✦ New
Salary not disclosed
Atlanta, GA 1 day ago
Job ID: 295329

Location: Atlanta, Georgia

Full/Part Time: Full-Time

Regular/Temporary: Regular


About Us

Overview
Georgia Tech prides itself on its technological resources, collaborations, high-quality student body, and its commitment to building an outstanding and diverse community of learning, discovery, and creation. We strongly encourage applicants whose values align with our institutional values, as outlined in our Strategic Plan. These values include academic excellence, diversity of thought and experience, inquiry and innovation, collaboration and community, and ethical behavior and stewardship. Georgia Tech has policies to promote a healthy work-life balance and is aware that attracting faculty may require meeting the needs of two careers.

About Georgia Tech
Georgia Tech is a top-ranked public research university situated in the heart of Atlanta, a diverse and vibrant city with numerous economic and cultural strengths. The Institute serves more than 45,000 students through top-ranked undergraduate, graduate, and executive programs in engineering, computing, science, business, design, and liberal arts. Georgia Tech's faculty attracted more than $1.4 billion in research awards this past year in fields ranging from biomedical technology to artificial intelligence, energy, sustainability, semiconductors, neuroscience, and national security. Georgia Tech ranks among the nation's top 20 universities for research and development spending and No. 1 among institutions without a medical school.

Georgia Tech's Mission and Values
Georgia Tech's mission is to develop leaders who advance technology and improve the human condition. The Institute has nine key values that are foundational to everything we do:
1. Students are our top priority.
2. We strive for excellence.
3. We thrive on diversity.
4. We celebrate collaboration.
5. We champion innovation.
6. We safeguard freedom of inquiry and expression.
7. We nurture the wellbeing of our community.
8. We act ethically.
9. We are responsible stewards.

Over the next decade, Georgia Tech will become an example of inclusive innovation, a leading technological research university of unmatched scale, relentlessly committed to serving the public good; breaking new ground in addressing the biggest local, national, and global challenges and opportunities of our time; making technology broadly accessible; and developing exceptional, principled leaders from all backgrounds ready to produce novel ideas and create solutions with real human impact.



Department Information

The Office of Institutional Research and Planning (IRP) at Georgia Tech is a research and analytics service unit dedicated to supporting the campus community. Our team of institutional research and data analytics professionals combines technical and creative skills to inform institutional strategic decision-making, planning, and research across campus. In addition to institutional reporting and compliance, IRP provides data education, support, and resources to all campus units.

Visit our website to learn more about what we do:



Job Summary

Data Analysts analyze data, interpret trends and patterns, and provide insights to support decision-making processes. They develop data models, perform data mining and statistical analysis, and collaborate with stakeholders to optimize data-driven strategies.



Responsibilities

Job Duty 1 -
Collect, analyze, and interpret data from various sources, databases, and systems to extract insights, trends, and patterns that inform business decisions, strategies, and operations.

Job Duty 2 -
Develop and maintain data models, queries, and reports using SQL, Python, R, or data analysis tools to perform data cleansing, transformation, and visualization tasks.

Job Duty 3 -
Identify data quality issues, anomalies, and discrepancies in datasets, conduct data validation, data profiling, and data integrity checks to ensure data accuracy and reliability.

Job Duty 4 -
Create data visualizations, dashboards, and data analytics reports to communicate data findings, trends, and key metrics to stakeholders, management, and decision-makers.

Job Duty 5 -
Conduct ad-hoc data analysis, exploratory data analysis, and statistical analysis to support decision-making processes, performance monitoring, and data-driven insights.

Job Duty 6 -
Perform data mining, predictive analytics, and machine learning tasks to uncover hidden patterns, predict outcomes, and drive data-driven decision-making in organizations.

Job Duty 7 -
Utilize data analytics tools, business intelligence platforms, and statistical software packages to conduct data analysis, data modeling, and data visualization tasks efficiently and accurately.

Job Duty 8 -
Stay current on data analytics trends, tools, and methodologies through training, certifications, and industry publications to enhance data analysis skills and knowledge.

Job Duty 9 -
Collaborate with business users, data scientists, and Information Technology teams to define data requirements, analytics requirements, and data-driven solutions for business problems and opportunities.

Job Duty 10 -
Perform other job-related duties as assigned.
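The validation and profiling tasks in Job Duty 3 can be sketched in a few lines. This is a minimal illustration with hypothetical enrollment records (the field names, ranges, and rules are invented, not Georgia Tech's actual checks); real institutional-research pipelines would run similar checks in SQL or a profiling tool.

```python
# Hypothetical enrollment records with deliberate quality problems
records = [
    {"student_id": "S1", "term": "FA24", "credit_hours": 15},
    {"student_id": "S2", "term": "FA24", "credit_hours": None},  # missing value
    {"student_id": "S2", "term": "FA24", "credit_hours": 40},    # duplicate key, out of range
]

def profile(rows, key="student_id", field="credit_hours", lo=0, hi=21):
    """Flag missing values, out-of-range values, and duplicate key/term pairs."""
    seen, issues = set(), []
    for i, row in enumerate(rows):
        if row[field] is None:
            issues.append((i, "missing " + field))
        elif not lo <= row[field] <= hi:
            issues.append((i, field + " out of range"))
        if (row[key], row["term"]) in seen:
            issues.append((i, "duplicate " + key + "/term"))
        seen.add((row[key], row["term"]))
    return issues

issues = profile(records)
```

Each issue carries the offending row index and a reason, which is the shape of output a data-integrity check needs so problems can be traced back to the source record.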



Additional Responsibilities

The Institutional Research Data Analyst will also be expected to perform various duties specific to institutional research, including but not limited to:

  • Responding to intermediate to high difficulty/complexity ad-hoc data and analysis requests
  • Adhering to federal, state, and institutional policies, regulations, and requirements related to data security, privacy, and governance
  • Completing or supporting the completion of externally driven compliance and data-related reporting, including:

    • Federal, e.g., IPEDS, NSF-HERD, NSF-GSS, etc.
    • State, e.g., USG data collections, data requests, etc.
    • Higher education organizations, e.g., AAUDE, SREB, NSC, accrediting bodies, etc.


Required Qualifications

Educational Requirements
Bachelor's Degree in related discipline or equivalent combination of education and experience. Advanced certification may be preferred or required (some profiles may require additional education).

Required Experience
Four or more years of relevant experience.



Proposed Salary

Annual Salary Range: $75,751 to $80,000



Knowledge, Skills, & Abilities

SKILLS

  • Performs all the standard and technical aspects of the job
  • Applies in-depth professional, technical, or industry knowledge to manage significantly complex assignments/projects/programs
  • Advanced knowledge of principles and practices of a particular field of specialization and Institute policies, practices, and procedures



USG Core Values

The University System of Georgia comprises our 25 institutions of higher education and learning as well as the System Office. Our USG Core Values are Integrity, Excellence, Accountability, and Respect. These values serve as the foundation for all that we do as an organization, and each USG community member is responsible for demonstrating and upholding these standards. More details on the USG Statement of Core Values and Code of Conduct are available in USG Board Policy 8.2.18.1.2 and can be found on-line at policymanual/section8/C224/#p8.2.18_personnel_conduct.

Additionally, USG supports Freedom of Expression as stated in Board Policy 6.5 Freedom of Expression and Academic Freedom found on-line at policymanual/section6/C2653.



Equal Employment Opportunity

The Georgia Institute of Technology (Georgia Tech) is an Equal Employment Opportunity Employer. The Institute is committed to maintaining a fair and respectful environment for all. To that end, and in accordance with federal and state law, Board of Regents policy, and Institute policy, Georgia Tech provides equal opportunity to all faculty, staff, students, and all other members of the Georgia Tech community, including applicants for admission and/or employment, contractors, volunteers, and participants in institutional programs, activities, or services. Georgia Tech complies with all applicable laws and regulations governing equal opportunity in the workplace and in educational activities.

Equal opportunity and decisions based on merit are fundamental values of the University System of Georgia ("USG") and Georgia Tech. Georgia Tech prohibits discrimination, including discriminatory harassment, on the basis of an individual's race, ethnicity, ancestry, color, religion, sex (including pregnancy), national origin, age, disability, genetics, or veteran status in its programs, activities, employment, and admissions. Further, Georgia Tech prohibits citizenship status, immigration status, and national origin discrimination in hiring, firing, and recruitment, except where such restrictions are required in order to comply with law, regulation, executive order, or Attorney General directive, or where they are required by Federal, State, or local government contract.



Other Information

This is not a supervisory position.
This position does not have any financial responsibilities.
This position will not be required to drive.
This role is not considered a position of trust.
This position does not require a purchasing card (P-Card).
This position will not require travel.
This position does not require security clearance.



Background Check

Successful candidate must be able to pass a background check. Please visit employment/pre-employment-screening



Junior data analyst/Machine learning engineer
✦ New
Salary not disclosed
Oakland 1 day ago
CS/IT Graduates or About to be Grads.

Get Hired by taking action.

If you just graduated (or you're about to) and the job search is already feeling confusing, you're not imagining it.

A degree proves you can learn—but employers hire for job readiness: projects that look like real work, current tech stacks, interview confidence, and the ability to contribute on day one.

That's why many new grads send hundreds of applications and still hear nothing back.

It's not because you're "not smart enough." It's because most entry-level pipelines are crowded, and hiring teams filter heavily for candidates who look production-ready.

We are actively considering candidates for entry-level software engineering and data roles, especially Java full stack, Java/Python development, DevOps automation, data analytics, data engineering, data science, and ML/AI—full-time opportunities aligned to client needs.

Our core emphasis remains Java/Full Stack/DevOps and Data/Analytics/Engineering/ML.

SynergisticIT focuses on two high-demand lanes: Java / Full Stack / DevOps and Data (Data Analyst, Data Engineer, Data Scientist) + ML/AI—so you don't graduate with scattered skills, you graduate with an employable stack.

Since 2010, SynergisticIT has helped candidates land full-time roles at major organizations (examples often cited include Google, Apple, PayPal, Visa, Western Union, Wells Fargo, Client, Banking, Wayfair, Client, Client, and more) with offers commonly in the $95k–$154k range depending on role and skill depth.

For a new grad, the bigger message isn't the number—it's that results require a structured pathway, not random applications.

Here's a realistic way to think about your advantage as a fresh graduate: you're early enough to build the right foundation before bad habits set in.

If you master fundamentals—coding, debugging, data structures, system thinking—and then layer modern tools on top (frameworks, cloud, CI/CD, analytics stacks), you become the kind of "entry-level" candidate who actually feels like a safe hire.

What roles are companies hiring for right now? The market demand pattern is clear: organizations still need entry-level software programmers, Java full stack developers, Python/Java developers, DevOps-focused engineers, and on the data side data analysts, BI analysts, data engineers, data scientists, and machine learning engineers.

The strongest candidates aren't "tool collectors"—they're people who can show end-to-end capability: build an API, connect a database, deploy a service, analyze data, explain results, and handle interviews calmly.

Why fresh grads get stuck: Fresh grads often struggle for four predictable reasons:

  • Resume doesn't match job keywords (ATS filters you out).
  • Projects look like school assignments (not production-aligned).
  • Interview skills are undertrained (DSA, system design, SQL, behavioral).
  • No structured pipeline (random applying without feedback loops).

A job-placement-first approach addresses these systematically: build the right portfolio, practice the right interview questions, align your tech stack to roles, and keep improving until the market says "yes."

Who this path fits best: If you're a recent graduate, you'll likely fit if you match any of these:

  • New grads in CS, Engineering, Math, or Statistics with limited job experience
  • Students finishing Bachelor's or Master's programs who need a real hiring plan
  • Candidates who apply consistently but don't get callbacks
  • Candidates who reach interviews but struggle to close
  • International students on F-1/OPT who need a job plan for STEM extension/H-1B timing
  • Graduates with strong academics but thin practical experience

SynergisticIT helps with STEM extension and work authorization pathways and, for candidates who need long-term stability, with support related to H-1B and green card processes as part of employer-side realities.

If you're tired of guessing, stop treating your job search like a lottery.

Treat it like a project with milestones: skills → portfolio → interview readiness → targeted applications → scheduled interviews → offer.

If you want to explore, here are the key links:

  • Event videos (OCW, JavaOne, Gartner)
  • USA Today feature
  • Contact & get a roadmap

Please read our blogs:

  • Why do Tech Companies not Hire recent Computer Science Graduates | SynergisticIT
  • What Recruiters Look for in Junior Developers | SynergisticIT
  • Software engineering or Data Science as a career?
  • How OPT Students Can Land Tech Jobs – SynergisticIT

Bottom line for fresh grads: Your degree is the starting line, not the finish line.

If you want to get hired faster, you don't need "more random courses." You need a guided, job-focused path and the right people around you.

In tech, it's not just what you learn—it's how you learn and who you build with that decides how far you go.

Please note: Resume databases are shared with clients and interested clients will reach out directly if they find a qualified candidate for their req.

Resume submissions may be shared with our JOPP team database also.

If you don't want to be contacted, please don't submit your resume; if you have already been contacted, please unsubscribe.
Senior Domain Expert Lead- STEM (Contract), AGI - Data Services
✦ New
🏢 Amazon
Salary not disclosed
Boston, MA 1 day ago
**This is an experimental role to support a business pilot and can potentially span up to 12 months**

Embark on a transformative journey as our Sr. Domain Expert Lead, where intellectual rigor meets technological innovation. As a Sr. Domain Expert Lead, you will blend your advanced analytical skills and domain expertise to provide strategic oversight to our human-in-the-loop and model-in-the-loop data pipelines. You will also provide mentorship and guidance to junior team members. Your responsibilities will ensure data excellence through strategic oversight of high-quality data output, while delivering expert consultation throughout the pipeline and fostering iterative development. This position directly impacts the effectiveness and reliability of our AI solutions by maintaining the highest standards of data quality throughout the development process while building capability within the broader team.

Key job responsibilities
• Serve as a trusted domain advisor to cross-functional teams, providing strategic direction and specialized problem-solving support
• Champion domain knowledge sharing across multiple channels and teams to maintain data quality excellence and standardization
• Drive collaborative efforts with science teams to optimize output of complex data collections in your domain expertise, ensuring data excellence through iterative feedback loops
• Foster team excellence through mentorship and motivation of peers and junior team members
• Make informed decisions on behalf of our customers, ensuring that selected code meets industry standards, best practices, and specific client needs
• Collaborate with AI teams to innovate model-in-the-loop and human-in-the-loop approaches, to ensure the collection of high-quality data, safeguarding data privacy and security for LLM training, and more.
• Stay abreast of the latest developments in how LLMs and GenAI can be applied to your area of expertise to ensure our evaluations remain cutting-edge.
• Develop and write demonstrations to illustrate "what good data looks like" in terms of meeting benchmarks for quality and efficiency
• Provide detailed feedback and explanations for your evaluations, helping to refine and improve the LLM's understanding and output
- 2+ years of data scientist experience
- 3+ years of data querying languages (e.g. SQL), scripting languages (e.g. Python) or statistical/mathematical software (e.g. R, SAS, Matlab, etc.) experience
- 3+ years of machine learning/statistical modeling data analysis tools and techniques, and parameters that affect their performance experience
- 1+ years of guiding and coaching a group of researchers experience
- 1+ years of working with or evaluating AI systems experience
- 1+ years of creating or contributing to mathematical textbooks, research papers, or educational content experience
- Master's degree in Science, Technology, Engineering, or Mathematics (STEM), or experience working in Science, Technology, Engineering, or Mathematics (STEM)
- Experience applying theoretical models in an applied environment
- Ph.D. in Science, Technology, Engineering, or Mathematics (STEM)
- Knowledge of machine learning concepts and their application to reasoning and problem-solving
- Experience in Python, Perl, or another scripting language
- Experience in a ML or data scientist role with a large technology company
- Experience in defining and creating benchmarks for assessing GenAI model performance
- Experience working on multi-team, cross-disciplinary projects
- Experience applying quantitative analysis to solve business problems and making data-driven business decisions
- Experience effectively communicating complex concepts through written and verbal communication

Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

The starting pay for this position is listed below. Final starting pay will be based on factors including experience, qualifications, and location. Starting Day 1 of employment, Amazon offers EAP, Mental Health Support, Medical Advice Line, and 401(k) matching. Learn more about our benefits at

, MA, Boston - 136, ,000.00 USD annually
USA, WA, BELLEVUE - 136, ,000.00 USD annually
Contract
Senior Data Modeler
🏢 Harnham
Salary not disclosed
Phoenix, AZ 3 days ago

Senior Data Modeler

Hybrid 3-4 days onsite

Location: Phoenix, Arizona

Salary: $130,000 - $150,000 base


A large, operationally complex organization is undergoing a major modernization of its data platform and is building a new, cloud-native analytics foundation from the ground up. This is a greenfield opportunity for a senior-level data modeler to establish best practices, influence architecture, and help shape how data is organized and used across the business.

This role sits at the center of a multi-year transformation focused on modern analytics, scalable data products, and strong collaboration between data and business teams.


What You’ll Be Working On

  • Designing and implementing enterprise data models across conceptual, logical, and physical layers
  • Establishing Medallion architecture patterns and reusable modeling assets
  • Building dimensional and semantic models that support analytics and reporting
  • Partnering closely with domain experts and functional leaders to translate business needs into data structures
  • Collaborating with data engineers to align models with ELT pipelines and analytics frameworks
  • Helping define modeling standards and upskilling senior engineers in modern data modeling practices
  • Contributing hands-on to data engineering work where needed (SQL, transformations, optimization)
  • Proactively identifying analytics opportunities and recommending data structures to support them

This role is roughly 40% data modeling, 30% hands-on engineering, and 30% cross-functional collaboration.
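As a rough illustration of the dimensional modeling work described above — table and column names here are hypothetical, not taken from the posting — a minimal star schema and a typical analytics query against it might look like this:

```python
import sqlite3

# Hypothetical star schema: one fact table keyed to a conformed date dimension.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_date (
        date_key   INTEGER PRIMARY KEY,
        full_date  TEXT,
        month_name TEXT
    );
    CREATE TABLE fact_orders (
        order_id  INTEGER PRIMARY KEY,
        date_key  INTEGER REFERENCES dim_date(date_key),
        amount    REAL
    );
""")
con.executemany("INSERT INTO dim_date VALUES (?, ?, ?)",
                [(20240101, "2024-01-01", "January"),
                 (20240201, "2024-02-01", "February")])
con.executemany("INSERT INTO fact_orders VALUES (?, ?, ?)",
                [(1, 20240101, 120.0), (2, 20240101, 80.0), (3, 20240201, 50.0)])

# A typical dimensional query: facts aggregated by a dimension attribute.
rows = con.execute("""
    SELECT d.month_name, SUM(f.amount) AS total
    FROM fact_orders f
    JOIN dim_date d ON d.date_key = f.date_key
    GROUP BY d.month_name
    ORDER BY total DESC
""").fetchall()
print(rows)  # [('January', 200.0), ('February', 50.0)]
```

The same separation of descriptive attributes (dimensions) from measures (facts) is what the conceptual/logical/physical layering above formalizes at enterprise scale.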


Must-Have Experience

  • Strong, hands-on experience with data modeling (dimensional, canonical, semantic)
  • Deep understanding of Medallion architecture
  • Advanced SQL and experience working with a modern cloud data warehouse
  • Experience with dbt for transformations and modeling
  • Hands-on experience in cloud-native data environments (AWS preferred)
  • Ability to work directly with business stakeholders and explain technical concepts clearly
  • Experience collaborating closely with data engineers on execution


Nice to Have

  • Python experience
  • Familiarity with Informatica or reverse-engineering legacy data models
  • Exposure to streaming or near-real-time data pipelines
  • Experience with visualization tools (tool choice is flexible)


Who Will Thrive in This Role

  • A senior individual contributor who enjoys building from scratch
  • Someone who can act as a modeling expert and mentor in an organization formalizing this practice
  • Comfortable working in ambiguity and taking initiative
  • Strong communicator who enjoys partnering with both technical and non-technical teams
  • Equally comfortable discussing business concepts and physical data models


Why This Role Is Unique

  • Greenfield data modeling initiative with real influence
  • Opportunity to define standards that will be used across the organization
  • Work on large-scale, real-world operational and analytical data
  • High visibility within a growing data organization
  • Flexible work setup for individual contributors


If you’re excited about shaping a modern data foundation and want to be the person who defines how data is modeled, understood, and used, this is a rare opportunity to make a lasting impact.

Not Specified
Lead Data Engineer
Salary not disclosed
Atlanta, GA 3 days ago

Job Title – Lead Data Engineer

Please note: this role is not able to offer visa transfer or sponsorship now or in the future.


About the role


As a Lead Data Engineer, you will make an impact by designing, building, and operating scalable, cloud‑native data platforms supporting batch and streaming use cases, with strong focus on governance, performance, and reliability. You will be a valued member of the Data Engineering team and work collaboratively with cross‑functional engineering, cloud, and architecture stakeholders.


In this role, you will:

  • Design, build, and operate scalable cloud‑native data platforms supporting batch and streaming workloads with strong governance, performance, and reliability.
  • Develop and operate data systems on AWS, Azure, and GCP, designing cloud‑native, scalable, and cost‑efficient data solutions.
  • Build modern data architectures including data lakes, data lakehouses, and data hubs, with strong understanding of ingestion patterns, data governance, data modeling, observability, and platform best practices.
  • Develop data ingestion and collection pipelines using Kafka and AWS Glue; work with modern storage formats such as Apache Iceberg and Parquet.
  • Design and develop real‑time streaming pipelines using Kafka, Flink, or similar streaming frameworks, with understanding of event‑driven architectures and low‑latency data processing.
  • Perform data transformation and modeling using SQL‑based frameworks and orchestration tools such as dbt, AWS Glue, and Airflow, including Slowly Changing Dimensions (SCD) and schema evolution.
  • Use Apache Spark extensively for large‑scale data transformations across batch and streaming workloads.
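As a sketch of the Slowly Changing Dimension handling mentioned above (Type 2, where history is preserved by expiring the old row and appending a new current one) — with entirely hypothetical column names, since the posting names only the pattern:

```python
from datetime import date

def scd2_upsert(dimension, incoming, today):
    """Apply a Type 2 SCD change: expire the current row for a changed
    business key and append a new current row. `dimension` is a list of
    dicts with hypothetical columns: key, attrs, valid_from, valid_to, is_current."""
    for row in dimension:
        if row["key"] == incoming["key"] and row["is_current"]:
            if row["attrs"] == incoming["attrs"]:
                return dimension               # unchanged: nothing to do
            row["is_current"] = False          # expire the old version
            row["valid_to"] = today
    dimension.append({"key": incoming["key"], "attrs": incoming["attrs"],
                      "valid_from": today, "valid_to": None, "is_current": True})
    return dimension

dim = [{"key": "C1", "attrs": {"city": "Atlanta"},
        "valid_from": date(2024, 1, 1), "valid_to": None, "is_current": True}]
dim = scd2_upsert(dim, {"key": "C1", "attrs": {"city": "Boston"}}, date(2025, 6, 1))
print([(r["attrs"]["city"], r["is_current"]) for r in dim])
# [('Atlanta', False), ('Boston', True)]
```

In dbt or Spark the same logic typically runs as a MERGE over the dimension table; the invariant is identical: at most one current row per business key, with validity dates covering the full history.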


Work model

We believe hybrid work is the way forward as we strive to provide flexibility wherever possible. Based on this role’s business requirements, this is a hybrid position requiring 4 days a week in a client or Cognizant office in Atlanta, GA. Regardless of your working arrangement, we are here to support a healthy work-life balance through our various wellbeing programs.


The working arrangements for this role are accurate as of the date of posting. This may change based on the project you’re engaged in, as well as business and client requirements. Rest assured, we will always be clear about role expectations.


What you need to have to be considered

  • Hands‑on experience developing and operating data systems on AWS, Azure, and GCP.
  • Proven ability to design cloud‑native, scalable, and cost‑efficient data solutions.
  • Experience building data lakes, data lakehouses, and data hubs with strong understanding of ingestion patterns, governance, modeling, observability, and platform best practices.
  • Expertise in data ingestion and collection using Kafka and AWS Glue, with experience in Apache Iceberg and Parquet.
  • Strong experience designing and developing real‑time streaming pipelines using Kafka, Flink, or similar streaming frameworks.
  • Deep expertise in data transformation and modeling using SQL‑based frameworks and orchestration tools including dbt, AWS Glue, and Airflow, with knowledge of SCD and schema evolution.
  • Extensive experience using Apache Spark for large‑scale batch and streaming data transformations.


These will help you stand out

  • Experience with event‑driven architectures and low‑latency data processing.
  • Strong understanding of schema evolution, SCD modeling, and modern data modeling concepts.
  • Experience with Apache Iceberg, Parquet, and modern ingestion/storage patterns.
  • Strong knowledge of observability, governance, and platform best practices.
  • Ability to partner effectively with cloud, architecture, and engineering teams.



Salary and Other Compensation:

Applications will be accepted until March 17, 2025.

The annual salary for this position is between $81,000 and $135,000, depending on experience and other qualifications of the successful candidate.

This position is also eligible for Cognizant’s discretionary annual incentive program, based on performance and subject to the terms of Cognizant’s applicable plans.

Benefits: Cognizant offers the following benefits for this position, subject to applicable eligibility requirements:

  • Medical/Dental/Vision/Life Insurance
  • Paid holidays plus Paid Time Off
  • 401(k) plan and contributions
  • Long‑term/Short‑term Disability
  • Paid Parental Leave
  • Employee Stock Purchase Plan


Disclaimer: The salary, other compensation, and benefits information is accurate as of the date of this posting. Cognizant reserves the right to modify this information at any time, subject to applicable law.

Not Specified
Data Manager
Salary not disclosed
Minneapolis, MN 3 days ago

Company/Role Overview:

CliftonLarsonAllen (CLA) Search has been retained by Midwestern Higher Education Compact to identify a Data Manager to serve their team. The Midwestern Higher Education Compact (MHEC) brings together leaders from 12 Midwestern states to strengthen postsecondary education, advance student success, and promote regional economic vitality.


MHEC programs and initiatives save member states and students millions of dollars annually through time- and cost-savings opportunities. MHEC research supports workforce readiness and improves the quality, accessibility, and affordability of postsecondary education. MHEC convenings bring together leaders and subject experts to share knowledge, generate ideas, and develop collaborative solutions.




What You’ll Do:

  • Administer and maintain Microsoft Fabric, OneLake, and Azure environments.
  • Design and deliver sophisticated data solutions that are innovative and sustainable.
  • Ensure data infrastructure is secure, reliable, and scalable.
  • Manage and improve how data is brought into the organization from multiple sources.
  • Maintain accurate, well-structured, consistent, and complete data that ensures high quality and usability for internal staff.
  • Develop and oversee standards on how data is collected, stored, and protected across departments.
  • Manage MHEC’s customer relationship management (CRM) system, ensuring data integrity, integration with other platforms, and alignment with organizational needs.
  • Partner with teams across the organization to monitor processes and make recommendations.
  • Partner with research staff to understand data access patterns and develop storage strategies that accelerate research and analytics
  • Develop and maintain Power BI dashboards and reports to deliver clear insights to senior leaders and decision-makers.
  • Ensure staff have access to timely, clear, and meaningful data visualizations.
  • Train staff to use reports and dashboards effectively.
  • Support departments in using data to guide decision-making.
  • Document data pipelines, integrations, and system processes.
  • Recommend tools and practices that help MHEC grow its data capacity.
  • Monitor developments in Microsoft’s data platforms and assess future needs.


What You’ll Need:

  • Bachelor's degree or equivalent experience preferred.
  • 5+ years’ experience, preferably with Microsoft data platforms including Power BI, Azure, and/or Fabric.
  • Experience designing and maintaining data systems and dashboards.
  • Experience in higher education or nonprofit sectors preferred.
  • Strong technical understanding of Microsoft Fabric, OneLake, and Azure.
  • Proficiency demonstrated in Python, R, SAS, SQL or other statistical/data management software
  • Experience with data visualization platforms (Tableau, Power BI, or similar)
  • Experience with Microsoft Dynamics and Power Automate is a plus but not required.
  • Ability to plan, optimize, build, and maintain data pipelines and dashboards.
Not Specified
Senior Manager, Data Science (Marketing)
$137,000
Evanston, Illinois 3 days ago
By clicking the "Apply" button, I understand that my employment application process with Takeda will commence and that the information I provide in my application will be processed in line with Takeda's Privacy Notice and Terms of Use. I further attest that all information I submit in my employment application is true to the best of my knowledge.

Job Description

About BioLife Plasma Services

BioLife Plasma Services, a subsidiary of Takeda Pharmaceutical Company Limited, is an industry leader in the collection of high-quality plasma, which is processed into life-saving plasma-based therapies. Some diseases can only be treated with medicines made with plasma. Since plasma can't be made synthetically, many people rely on plasma donors to live healthier, happier lives. BioLife operates 250+ state-of-the-art plasma donation centers across the United States. Our employees are dedicated to enhancing the quality of life for patients and ensuring that the donation process is safe, straightforward, and rewarding for donors who wish to make a positive impact.

When you work at BioLife, you'll feel good knowing that what we do helps improve the lives of patients with rare diseases. While you focus on our donors, we'll support you. We offer a purpose you can believe in, a team you can count on, opportunities for career growth, and a comprehensive benefits program, all in a fast-paced, friendly environment.

This position is currently classified as "hybrid" in accordance with Takeda's Hybrid and Remote Work policy.

BioLife Plasma Services is a subsidiary of Takeda Pharmaceutical Company Ltd.

OBJECTIVES/PURPOSE
The Sr. Manager of Marketing Science drives and executes strategic initiatives that improve our marketing data and analytics capabilities. This role will leverage advanced analytics techniques and data-driven insights to inform marketing strategies, optimize campaigns, and drive business growth. This role requires a deep understanding of paid, owned, and earned media measurement, strong analytics and insights skills, broad knowledge of marketing technologies, and the ability to communicate complex data insights to senior stakeholders. This role is critically important for the success of the Global Forecasting, Pricing, and Analytics (FPA) team and reports to the Head of Analytics within the team.

ACCOUNTABILITIES

Leadership

* Lead marketing science initiatives in the development and execution of advanced analytics to support marketing strategies and goals.
* Provide thought leadership on marketing measurement techniques, including the trade-offs between controlled experiments, natural experiments, and multivariate statistical models for different situations.

Marketing Science

* Partner with our media agency to ensure we are maximizing the output of our media mix model (MMM) partner.
* Create and manage marketing attribution solutions, i.e., multi-touch attribution (MTA), building and maintaining in-house solutions and/or working with outside partners as necessary.
* Identify and maintain marketing analytics key performance indicators (KPIs) to track and measure performance.
* Partner with data scientists, IT, and consultants to develop advanced analytical models and dashboards related to marketing.
* Perform statistical analyses and tests to quantify the business value of an opportunity.
* Stay current with AI/ML applications in marketing.

Reporting and Data Management

* Ensure the accurate and timely delivery of marketing performance reports and insights.
* Able to translate data into contextualized insights that can be shared across the business
* Know digital media terminology and concepts (e.g., Demand Side Platforms (DSPs), effectiveness vs. efficiency, SEO/SEM, etc.)
* Leverage existing experience with Google Analytics and Google Tag Manager
* Partner with the Data, Digital, and Technology (DD&T) Team to ensure marketing data accuracy, integration, and integrity, and that good data governance practices are in place.
* Develop solutions (dashboards, data visualizations, reports) for real-time operations performance assessment and agile decision-making.
* Design and automate regular data extracts needed by marketing and other partners.

Collaboration and Adaptability

* Build strong relationships with cross-functional partners for efficient alignment, coordination, and information sharing across teams.

DIMENSIONS AND ASPECTS

Technical/Functional Expertise

* Extensive experience across many areas of marketing science: MMM, MTA, loyalty, website, surveys, and paid/owned/earned media.
* Experience with SQL, Python, and R for data analysis and model development.
* Strong analytical skills with a solid foundation in many of the following statistical and AI/ML methods: regression analysis (continuous, categorical, survival, time-series, and count models, etc.); classification (CART, SVM, Neural Networks, etc.), clustering (k-means/medoid, hierarchical, self-organizing maps, etc.), and other AI/ML techniques; experimental design; and forecasting/sensitivity analysis.
* Comfortable working daily in cloud-based data platforms.
* Expert level MS Excel skills, including advanced functions (e.g., Solver), data analysis, pivot tables, macros, and VBA (Visual Basic for Applications), and applicability of these features for developing and managing financial models for business case development and forecasting.
* Experience working with Power BI, Tableau, or other data visualization software.
* Strong foundation in statistical techniques for quantifying the impact of marketing activities.
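As a toy illustration of the regression analysis listed above — quantifying the impact of a marketing activity with a single-variable least-squares fit, using made-up numbers rather than anything from this posting:

```python
def ols_slope_intercept(x, y):
    """Closed-form ordinary least squares for y = a + b*x (one regressor)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# Hypothetical weekly media spend (in $k) vs. responses driven.
spend = [10, 20, 30, 40, 50]
resp  = [120, 140, 160, 180, 200]   # perfectly linear, for illustration only
a, b = ols_slope_intercept(spend, resp)
print(a, b)  # 100.0 2.0 -> each extra $1k of spend adds ~2 responses
```

Real MMM/MTA work layers many regressors, adstock transforms, and controls on top of this, but the core estimand — a coefficient tying spend to response — is the same.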

Communication

* Excellent verbal and written communication. Proven data analysis background with the ability to transform analysis into insights, recommendations, and proposals for senior management.
* Ability to communicate complex concepts simply and succinctly.

Decision-making and Autonomy

* High self-reliance, self-efficacy, initiative, and learning agility.
* Strong at both structured and unstructured problem solving.

Interaction

* Manage and/or partner on projects with vendors and consultants.

EDUCATION, BEHAVIOURAL COMPETENCIES AND SKILLS:

Required

* Bachelor's and/or master's degree in any area of social science, business, marketing, advertising, or a closely related field.
* Experience with data analytics from end-to-end, i.e., including ideation, proposal creation, getting stakeholder buy-in, gathering requirements, designing analytics models/solutions, building prototypes, and working with IT/Data Science teams to deploy and scale solutions.
* 7+ years of experience in advanced analytics and statistical modeling in the areas of business performance analysis, forecasting, promotion and media effectiveness and optimization, and consumer behavior
* Excellent verbal and written communication and presentation skills. Able to communicate effectively to all levels of the organization, including senior leadership.
* Bring a growth mindset, curiosity, positivity, intuitive thinking, and a passion for excellence.

Preferred

* Media agency or retail industry analytics experience a plus.
* Experience with survival analysis (time-to-event, duration, event history analysis, etc.) a plus.
* Knowledge of CRM systems and marketing automation tools a plus.

ADDITIONAL INFORMATION

* Domestic travel required (up to 10%).

BioLife Compensation and Benefits Summary

We understand compensation is an important factor as you consider the next step in your career. We are committed to equitable pay for all employees, and we strive to be more transparent with our pay practices.

For Location: Bannockburn, IL

U.S. Base Salary Range: $137,000.00 - $215,270.00

The estimated salary range reflects an anticipated range for this position. The actual base salary offered may depend on a variety of factors, including the qualifications of the individual applicant for the position, years of relevant experience, specific and unique skills, level of education attained, certifications or other professional licenses held, and the location in which the applicant lives and/or from which they will be performing the job. The actual base salary offered will be in accordance with state or local minimum wage requirements for the job location.

U.S. based employees may be eligible for short-term and/or long-term incentives. U.S. based employees may be eligible to participate in medical, dental, vision insurance, a 401(k) plan and company match, short-term and long-term disability coverage, basic life insurance, a tuition reimbursement program, paid volunteer time off, company holidays, and well-being benefits, among others. U.S. based employees are also eligible to receive, per calendar year, up to 80 hours of sick time, and new hires are eligible to accrue up to 120 hours of paid vacation.

EEO Statement
Takeda is proud of its commitment to creating a diverse workforce and providing equal employment opportunities to all employees and applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, gender expression, parental status, national origin, age, disability, citizenship status, genetic information or characteristics, marital status, status as a Vietnam era veteran, special disabled veteran, or other protected veteran in accordance with applicable federal, state and local laws, and any other characteristic protected by law.

Locations

Bannockburn, IL

Worker Type

Employee

Worker Sub-Type

Regular

Time Type

Full time

Job Exempt

Yes
Not Specified
Databricks Architect/ Senior Data Engineer
✦ New
🏢 OZ
Salary not disclosed
Boca Raton, FL 1 day ago

OZ – Databricks Architect/ Senior Data Engineer


Note: Only applications from U.S. citizens or lawful permanent residents (Green Card Holders) will be considered.


We believe work should be innately rewarding and a team-building venture. Working with our teammates and clients should be an enjoyable journey where we can learn, grow as professionals, and achieve amazing results. Our core values revolve around this philosophy. We are relentlessly committed to helping our clients achieve their business goals, leapfrog the competition, and become leaders in their industry. What drives us forward is the culture of creativity combined with a disciplined approach, passion for learning & innovation, and a ‘can-do’ attitude!


What We're Looking For:

We are seeking a highly experienced Databricks professional with deep expertise in data engineering, distributed computing, and cloud-based data platforms. The ideal candidate is both an architect and a hands-on engineer who can design scalable data solutions while actively contributing to development, optimization, and deployment.


This role requires strong technical leadership, a deep understanding of modern data architectures, and the ability to implement best practices in DataOps, performance optimization, and data governance.


Experience with modern AI/GenAI-enabled data platforms and real-time data processing environments is highly desirable.


Position Overview:

The Databricks Senior Data Engineer will play a critical role in designing, implementing, and optimizing enterprise-scale data platforms using the Databricks Lakehouse architecture. This role combines architecture leadership with hands-on engineering, focusing on building scalable, secure, and high-performance data pipelines and platforms. The ideal candidate will establish coding standards, define data architecture frameworks such as the Medallion Architecture, and guide the end-to-end development lifecycle of modern data solutions.


This individual will collaborate with cross-functional stakeholders, including data engineers, BI developers, analysts, and business leaders, to deliver robust data platforms that enable advanced analytics, reporting, and AI-driven decision-making.


Key Responsibilities:

  • Architecture & Design: Architect and design scalable, reliable data platforms and complex ETL/ELT and streaming workflows for the Databricks Lakehouse Platform (Delta Lake, Spark).
  • Hands-On Development: Write, test, and optimize code in Python, PySpark, and SQL for data ingestion, transformation, and processing.
  • DataOps & Automation: Implement CI/CD, monitoring, and automation (e.g., with Azure DevOps, DBX) for data pipelines.
  • Stakeholder Collaboration: Work with BI developers, analysts, and business users to define requirements and deliver data-driven solutions.
  • Performance Optimization: Tune delta tables, Spark jobs, and SQL queries for maximum efficiency and scalability.
  • GenAI Applications Development: Experience developing GenAI applications is a big plus.


Requirements:

  • 8+ years of experience in data engineering, with strong hands-on expertise in Databricks and Apache Spark.
  • Proven experience designing and implementing scalable ETL/ELT pipelines in cloud environments.
  • Strong programming skills in Python and SQL; experience with PySpark required.
  • Hands-on experience with Databricks Lakehouse, Delta Lake, and distributed data processing.
  • Experience working with cloud platforms such as Microsoft Azure, AWS, or GCP (Azure preferred).
  • Experience with CI/CD pipelines, Git, and DevOps practices for data engineering.
  • Strong understanding of data architecture, data modeling, and performance optimization.
  • Experience working with cross-functional teams to deliver enterprise data solutions.
  • Ability to tackle complex data challenges while ensuring data quality and reliable delivery.


Qualifications:

  • Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related field.
  • Experience designing enterprise-scale data platforms and modern data architectures.
  • Experience with data integration tools such as Azure Data Factory or similar platforms.
  • Familiarity with cloud data warehouses such as Databricks, Snowflake, or Azure Fabric.
  • Experience supporting analytics, reporting, or AI/ML workloads is highly desirable.
  • Databricks, Azure, or cloud certifications are preferred.
  • Strong problem-solving, communication, and technical leadership skills.


Technical Proficiency in:

  • Databricks, Apache Spark, PySpark, Delta Lake
  • Python, SQL, Scala (preferred)
  • Cloud platforms: Azure (preferred), AWS, or GCP
  • Azure Data Factory, Kafka, and modern data integration tools
  • Data warehousing: Databricks, Snowflake, or Azure Fabric
  • DevOps tools: Git, Azure DevOps, CI/CD pipelines
  • Data architecture, ETL/ELT design, and performance optimization


What You’re Looking For:

Join a fast-growing organization that thrives on innovation and collaboration. You’ll work alongside talented, motivated colleagues in a global environment, helping clients solve their most critical business challenges. At OZ, your contributions matter – you’ll have the chance to be a key player in our growth and success. If you’re driven, bold, and eager to push boundaries, we invite you to join a company where you can truly make a difference.


About Us:

OZ is a 28-year-old global technology consulting, services, and solutions leader specializing in creating business-focused solutions for our clients by leveraging disruptive digital technologies and innovation.


OZ is committed to creating a continuum between work and life by allowing people to work remotely. We offer competitive compensation and a comprehensive benefits package. You’ll enjoy our work style within an incredible culture. We’ll give you the tools you need to succeed so you can grow and develop with us and become part of a team that lives by its core values.

Not Specified
Data Engineer
✦ New
Salary not disclosed
Montvale, NJ 1 day ago

Summary

We are seeking a highly skilled Data Engineer to build and manage our data infrastructure. The ideal candidate will be an expert in writing complex SQL queries, designing efficient database schemas, and developing ETL/ELT pipelines. You will ensure data is accurate, accessible, and optimized for performance to support business intelligence, analytics, and reporting needs.


Key Responsibilities

  • Database Design & Management: Design, develop, and maintain relational databases (e.g., SQL Server, PostgreSQL, Oracle) and cloud-based data warehouses.
  • Strategic SQL and Data Engineering: Develop sophisticated, optimized SQL queries, stored procedures, and functions to process and analyze large, complex datasets for actionable business insights.
  • Data Pipeline Automation & Orchestration: Help build, automate, and orchestrate ETL/ELT workflows utilizing SQL, Python, and cloud-native tools to integrate and transform data from diverse, distributed sources.
  • Performance Optimization: Tune queries and optimize database schema (indexing, partitioning, normalization) to improve data retrieval and processing speeds.
  • Data Integrity & Security: Ensure data quality, consistency, and integrity across systems. Implement data masking, encryption, and role-based access control (RBAC).
  • Documentation: Maintain technical documentation for database schemas, data dictionaries, and ETL workflows.


Required Skills and Qualifications

  • Education: Bachelor’s degree in Computer Science, Information Systems, or a related field.
  • SQL Mastery: 5+ years of experience with advanced SQL (window functions, CTEs, query optimization).
  • Database Expertise: Deep understanding of relational database management systems (RDBMS) and data modeling techniques.
  • Cloud Platforms: Demonstrated experience with Azure Data Services and other data warehouse technologies.
  • Programming: Proficiency in Python for scripting and data manipulation.
  • ETL Tools: Familiarity with tools like SSIS or Azure Data Factory.
  • Soft Skills: Strong analytical thinking, problem-solving, and communication skills.
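As a small example of the "advanced SQL" called for above — a CTE feeding a window function, runnable here against an in-memory SQLite database (window functions require SQLite 3.25+) with hypothetical data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE sales (region TEXT, month TEXT, amount REAL);
    INSERT INTO sales VALUES
        ('East', '2024-01', 100), ('East', '2024-02', 150),
        ('West', '2024-01', 200), ('West', '2024-02', 50);
""")

# CTE + window function: running total of sales per region over time.
rows = con.execute("""
    WITH monthly AS (
        SELECT region, month, SUM(amount) AS amt
        FROM sales GROUP BY region, month
    )
    SELECT region, month,
           SUM(amt) OVER (PARTITION BY region ORDER BY month) AS running_total
    FROM monthly
    ORDER BY region, month
""").fetchall()
print(rows)
# [('East', '2024-01', 100.0), ('East', '2024-02', 250.0),
#  ('West', '2024-01', 200.0), ('West', '2024-02', 250.0)]
```

The same pattern — stage with a CTE, then rank, lag, or accumulate with a window — carries over directly to SQL Server and Azure SQL.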

Nice to Have

  • Experience with NoSQL databases (Cosmos DB, MongoDB).
  • Experience with big data frameworks (Apache Spark, Kafka).
  • Relevant certifications (e.g., Microsoft Certified: Azure Data Engineer Associate, Google Professional Data Engineer).

Typical Work Environment

  • Tools Used: SQL IDEs (DBeaver, SSMS), Cloud Consoles, Git, Jira, SSIS.
  • Industry: Leasing.


Salary is $130k-$140k.

Not Specified
Product Data Analyst
✦ New
Salary not disclosed
Dallas, TX 1 day ago

Loloi Rugs is a leading textile brand that designs and crafts rugs, pillows, and throws for the thoughtfully layered home. Family-owned and led since 2004, Loloi is growing more quickly than ever. To date, we’ve expanded our diverse team to hundreds of employees, invested in multiple distribution facilities, introduced thousands of products, and earned the respect and business of retailers and designers worldwide. A testament to our products and our team, Loloi has earned the ARTS Award for “Best Rug Manufacturer” in 2010, 2011, 2015, 2016, 2018, 2023, and 2025.


Security Advisory: Beware of Frauds

Protect yourself from potential fraud and verify the authenticity of any job offer you receive from Loloi. Rest assured that we never request payment or demand any sensitive personal information, such as bank details or social security numbers, at any stage of the recruiting process. To ensure genuine communication, our recruiters will only reach out to applicants using an @ email address. Your security is of paramount importance to us at Loloi, and we are committed to maintaining a safe and trustworthy hiring experience for all candidates.


We are building a Business Operations Center of Excellence, and we need a Product Data Analyst to serve as the "Guardian of the Golden Record." In this role, you are the absolute owner of product data integrity as it relates to the digital customer experience. You ensure that every item we sell is accurately represented across every touchpoint—from our ERP and PIM to our website storefront and marketing feeds. This is not a data entry role; it is a high-impact technical logic and investigation role. You will work directly with our Data Platform and Software Engineering teams to define business rules, audit data health via complex SQL, and troubleshoot data transmission errors before they impact the customer.


Responsibilities

  • Storefront Governance: Serve as the absolute owner of product data integrity within the PIM. Ensure that all storefront-critical attributes (pricing, dimensions, weights, image links) are accurate and standardized for a seamless customer experience.
  • Technical Data Auditing: Write and run complex SQL queries against our centralized database to identify anomalies, "orphan" records, and data hygiene issues that need resolution. You will be expected to query across multiple schemas to validate data consistency between systems.
  • Feed Logic & Mapping: You will manage the logic of how data translates from our PIM to external endpoints. You will ensure that our products appear correctly on Google Shopping, Meta, Amazon, and other marketplaces by managing feed rules and mapping definitions.
  • API Payload Analysis: You will act as the first line of defense for data transmission errors. If a product isn't showing up on the site, you will review the JSON/XML response bodies to determine if it is a data payload error or a software code bug.
  • Cross-Functional Impact Analysis: You will act as the gatekeeper for data changes, predicting downstream impacts (e.g., "If Merchandising changes this Category Name, it will break the Finance reporting filter").
  • Hygiene Logic Definition: You will partner with our IT/Database team to define automated health checks. You identify the "rot" (bad data patterns), and they implement the database constraints to stop it.
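The orphan-record audit described above can be sketched with a small, self-contained example. The `products` and `product_images` tables, their columns, and the sample SKUs are hypothetical, not Loloi's actual schema; the point is the anti-join pattern used to surface child rows with no parent record:

```python
import sqlite3

# In-memory database with hypothetical PIM-style tables (illustrative only).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE products (sku TEXT PRIMARY KEY, name TEXT);
    CREATE TABLE product_images (sku TEXT, url TEXT);
    INSERT INTO products VALUES ('RUG-100', 'Layla Rug'), ('RUG-200', 'Loren Rug');
    INSERT INTO product_images VALUES
        ('RUG-100', 'https://example.com/rug-100.jpg'),
        ('RUG-999', 'https://example.com/rug-999.jpg');  -- no matching product
""")

# "Orphan" images: rows whose SKU has no parent record in products.
orphans = conn.execute("""
    SELECT i.sku
    FROM product_images i
    LEFT JOIN products p ON p.sku = i.sku
    WHERE p.sku IS NULL
""").fetchall()

print(orphans)  # -> [('RUG-999',)]
```

In practice the same LEFT JOIN / IS NULL pattern is run across multiple schemas to validate consistency between systems, as the bullet above describes.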


What You Will NOT Do (The Boundaries)

  • No Web Development: You are not a Front-End Developer. You do not write HTML, CSS, or React code. You ensure the data powering those components is 100% accurate.
  • No Manual Data Entry: Your job is not to copy-paste descriptions. You build the systems, bulk processes, and logic that ensure data quality at scale.
  • No Database Administration: You do not manage server uptime or schema changes (IT owns this). You own the quality of the records inside the database.


Intersection with Technical Teams

  • With IT (Database Mgmt): IT owns the infrastructure and schema; you own the quality of the data within it. When you identify a systemic issue (e.g., "5,000 orphan records"), you partner with IT to implement the technical fix (scripts/constraints).
  • With Software Engineering (Commerce): If a product is missing from the site, you check the data payload. If the data is correct, you hand off to Engineering, confirming it is a code/caching bug rather than a data error.


Experience, Skills, & Ability Requirements

  • 5-8 years of experience in Data Management, PIM Administration, or technical eCommerce Operations.
  • SQL Proficiency: You are comfortable writing queries beyond simple SELECT *. You should be proficient with CTEs (Common Table Expressions), Window Functions (e.g., Rank, Lead/Lag), Subqueries, and complex Joins to act as a forensic data investigator.
  • API Fluency: You can read and understand JSON and XML. You know what a valid payload looks like and can spot formatting errors or missing keys.
  • Data Manipulation: You are an expert at handling large datasets (CSVs, Excel) and understand data types, formatting standards, and normalization concepts.
  • You love hunting down the root cause of an error. You don't just fix the wrong price; you find out why the price was wrong and build a rule to stop it from happening again.
  • You have high standards for accuracy. You understand that a wrong weight in the system means a financial loss on shipping for the business.
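As a flavor of the payload-level spot checks described above, here is a minimal sketch in Python. The required keys and the numeric-price rule are invented for illustration; a real storefront payload would carry many more fields and rules:

```python
import json

# Hypothetical product payload check (field names are illustrative only).
REQUIRED_KEYS = {"sku", "price", "weight_lbs", "image_url"}

def audit_payload(raw: str) -> list[str]:
    """Return a list of problems found in a product JSON payload."""
    try:
        record = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"malformed JSON: {exc.msg}"]
    problems = [f"missing key: {key}" for key in sorted(REQUIRED_KEYS - record.keys())]
    if "price" in record and not isinstance(record["price"], (int, float)):
        problems.append("price is not numeric")
    return problems

print(audit_payload('{"sku": "RUG-100", "price": "199"}'))
# -> ['missing key: image_url', 'missing key: weight_lbs', 'price is not numeric']
```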


Bonus Points (Nice-to-Haves)

  • Familiarity with Visio/Lucidchart to visualize data flows.
  • Ability to build simple dashboards in Tableau to track data health scores.
  • Basic familiarity with Python or R for data manipulation.


What We Offer

  • Health, dental, and vision benefits
  • Paid parental leave
  • 401(k) with employer match
  • A culture of meritocracy that fosters ongoing growth opportunities
  • A stable, growing family-owned company that looks after its employees


Loloi Rugs does not discriminate on the basis of race, sex, color, religion, age, national origin, marital status, disability, veteran status, genetic information, sexual orientation, gender identity or any other reason prohibited by law in provision of employment opportunities and benefits. We seek a diverse pool of applicants and consider all qualified candidates regardless of race, ancestry, color, gender identity or expression, sexual orientation, religion, national origin, citizenship, disability, Veteran status, marital status, or any other protected status. If you have a special need or disability that requires accommodation, please let us know.

Not Specified
Sr. Digital Product Manager (Membership, Customer Data & Loyalty)
✦ New
🏢 Petco
Salary not disclosed
Want to help pets live their best lives?
We’re proud to be where the pets go and where the pet people go. If you want to make a real difference, create an exciting career path, feel welcome to be your whole self and nurture your wellbeing, Petco is the place for you.
Our core values capture that spirit as we work to improve lives by doing what’s right for pets and people.
  • Pet First – Protect & Empower. All pets should Live their Best Life. We put the needs of pets and pet parents at the center of everything we do.
  • Foster the Fun – Connect & Bond. Our Passion for pets brings us together! We celebrate the journey of pet parenthood through distinct experiences, products, and services.
  • Let’s Go! – Own & Commit. We are stronger as One Petco team. We bring our unique superpowers and champion authenticity in everyone to drive success.
About Petco
We’re proud to be "where the pets go" to find everything they need to live their best lives for more than 60 years — from their favorite meals and toys, to trusted supplies and expert support from people who get it, because we live it. We believe in the universal truths of pet parenthood — the boundless boops, missing slippers, late night zoomies and everything in between. And we’re here for it. Every tail wag, every vet visit, every step of the way. We are 29,000+ strong and together we nurture the pet-human bond in more than 1,500 Petco stores across the U.S., Mexico and Puerto Rico, 250+ Vetco Total Care hospitals, hundreds of preventive care clinics and eight distribution centers. In 1999, we founded Petco Love. Together, we support thousands of local animal welfare groups nationwide and have helped find homes for approximately 7 million animals through in-store adoption events.
Membership, Customer Data & Loyalty
Position Overview
The Senior Digital Product Manager will lead digital product initiatives supporting Membership, Customer Data, and Loyalty programs for a $6B specialty retail organization. This role will own the end-to-end product strategy and roadmap for customer identity, data platforms, and loyalty experiences across digital and in-store channels.
The ideal candidate brings deep expertise in customer data platforms (CDPs), identity resolution, loyalty ecosystems, personalization, and privacy governance, combined with strong business acumen and cross-functional leadership skills.
Key Responsibilities
Product Strategy & Vision
  • Define and execute the multi-year product strategy for Membership, Customer Data, and Loyalty platforms.
  • Develop and maintain a prioritized product roadmap aligned with enterprise growth, retention, and customer lifetime value (CLV) objectives.
  • Identify opportunities to leverage customer data to drive personalization, engagement, and revenue growth.
Customer Data & Platform Leadership
  • Lead development and optimization of customer data capabilities, including:
    • Identity resolution and profile unification
    • Data governance and compliance (GDPR, CCPA, etc.)
    • Segmentation and audience management
    • Real-time personalization enablement
  • Partner with Engineering and Data teams to evolve CDP, CRM, and marketing technology stacks.
  • Ensure scalable architecture to support omnichannel retail environments.
Membership & Loyalty Programs
  • Own digital product capabilities supporting loyalty enrollment, rewards management, tiering, promotions, and engagement campaigns.
  • Optimize customer lifecycle journeys from acquisition through retention.
  • Develop features that enhance member value proposition and drive repeat purchase behavior.
  • Measure and improve loyalty program ROI, retention rate, and lifetime value.
Cross-Functional Leadership
  • Lead agile product teams and collaborate closely with:
    • Engineering
    • Data Science & Analytics
    • Marketing & CRM
    • eCommerce
    • Store Operations
    • Finance & Legal
  • Serve as the voice of the customer and translate business objectives into clear product requirements.
  • Align stakeholders around KPIs and measurable outcomes.
Analytics & Performance
  • Define success metrics and KPIs (CLV, retention, engagement, incremental revenue, NPS).
  • Use data and experimentation (A/B testing, cohort analysis) to drive product decisions.
  • Build executive-level reporting and business cases for investment prioritization.
Required Qualifications
  • 5+ years of product management experience, with 3+ years in digital product leadership.
  • Deep expertise in customer data management, CDPs, CRM systems, and loyalty platforms.
  • Experience in retail, specialty retail, consumer brands, or omnichannel environments.
  • Proven track record of delivering data-driven personalization initiatives.
  • Strong understanding of privacy regulations and data governance frameworks.
  • Experience leading agile product teams and influencing cross-functional stakeholders.
  • Demonstrated ability to manage complex platform integrations and enterprise-scale systems.
Preferred Qualifications
  • Experience working in a multi-billion-dollar retail organization.
  • Background in subscription or membership-based business models.
  • Familiarity with leading CDP and CRM ecosystems (e.g., Salesforce, Adobe, Tealium, etc.).
  • MBA or advanced degree in business, technology, or related field.
Leadership Competencies
  • Strategic thinker with strong commercial acumen
  • Data-driven decision maker
  • Influential communicator with executive presence
  • Customer-obsessed mindset
  • Bias for action and measurable impact
  • Ability to operate in fast-paced, matrixed organizations
Impact of the Role
This role directly influences customer retention, personalization maturity, and revenue growth by shaping how the organization leverages its customer data assets. The Senior Digital Product Manager will play a critical role in strengthening membership value, loyalty engagement, and long-term customer relationships.
#CORP
Qualified applications with arrest or conviction records will be considered for employment in accordance with the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act.
The pay ranges outlined below are presented in accordance with state-specific regulations. These ranges may differ in other areas and could be subject to variation based on regulatory minimum wage requirements. Actual pay rates will depend on factors such as position, location, level of experience, and applicable state or local minimum wage laws. If the regulatory minimum wage exceeds the minimum indicated in the pay range below, the regulatory minimum wage will be the minimum rate applied.
Salary Range: $103,800.00 - $155,700.00
Hourly or Salary Range will be reflected above. For a more detailed overview of Petco Total Rewards, including health and financial benefits, 401K, incentives, and PTO - see . Petco Animal Supplies, Inc. is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, age, protected veteran status, or any other protected classification.
Not Specified
Data Integration & AI Engineer
✦ New
Salary not disclosed
Edison, NJ 1 day ago

About Wakefern

Wakefern Food Corp. is the largest retailer-owned cooperative in the United States and supports its co-operative members' retail operations, trading under the ShopRite®, Price Rite®, The Fresh Grocer®, Dearborn Markets®, and Gourmet Garage® banners.


Employing an innovative approach to wholesale business services, Wakefern focuses on helping the independent retailer compete in a big business world. Providing the tools entrepreneurs need to stay a step ahead of the competition, Wakefern’s co-operative members benefit from the company’s extensive portfolio of services, including innovative technology, private label development, and best-in-class procurement practices.


The ideal candidate will have a strong background in designing, developing, and implementing complex projects, with a focus on automating data processes and driving efficiency within the organization. This role requires close collaboration with application developers, data engineers, data analysts, and data scientists to ensure seamless data integration and automation across various platforms. The Data Integration & AI Engineer is responsible for identifying opportunities to automate repetitive data processes, reduce manual intervention, and improve overall data accessibility.


Essential Functions

  • Participate in the development life cycle (requirements definition, project approval, design, development, and implementation) and maintenance of the systems.
  • Implement and enforce data quality and governance standards to ensure accuracy and consistency.
  • Provide input for project plans and timelines to align with business objectives.
  • Monitor project progress, identify risks, and implement mitigation strategies.
  • Work with cross-functional teams and ensure effective communication and collaboration.
  • Provide regular updates to the management team.
  • Follow the standards and procedures according to Architecture Review Board best practices, revising standards and procedures as requirements change and technological advancements are incorporated into the technology structure.
  • Communicate and promote the code of ethics and business conduct.
  • Ensure completion of required company compliance training programs.
  • Be trained – either through formal education or through experience – in software/hardware technologies and development methodologies.
  • Stay current through personal development and professional and industry organizations.

Responsibilities

  • Design, build, and maintain automated data pipelines and ETL processes to ensure scalability, efficiency, and reliability across data operations.
  • Develop and implement robust data integration solutions to streamline data flow between diverse systems and databases.
  • Continuously optimize data workflows and automation processes to enhance performance, scalability, and maintainability.
  • Design and develop end-to-end data solutions utilizing modern technologies, including scripting languages, databases, APIs, and cloud platforms.
  • Ensure data solutions and data sources meet quality, security, and compliance standards.
  • Monitor and troubleshoot automation workflows, proactively identifying and resolving issues to minimize downtime.
  • Provide technical training, documentation, and ongoing support to end users of data automation systems.
  • Prepare and maintain comprehensive technical documentation, including solution designs, specifications, and operational procedures.


Qualifications

  • A bachelor's degree or higher in computer science, information systems, or a related field.
  • Hands-on experience with cloud data platforms (e.g., GCP, Azure, etc.)
  • Strong knowledge and skills in data automation technologies, such as Python, SQL, ETL/ELT tools, Kafka, APIs, cloud data pipelines, etc.
  • Experience in GCP BigQuery, Dataflow, Pub/Sub, and Cloud storage.
  • Experience with workflow orchestration tools such as Cloud Composer or Airflow
  • Proficiency in iPaaS (Integration Platform as a Service) platforms, such as Boomi, SAP BTP, etc.
  • Experience developing and managing data integrations for AI agents, connecting them to internal and external APIs, databases, and knowledge sources to expand their capabilities.
  • Experience building and maintaining scalable Retrieval-Augmented Generation (RAG) pipelines, including the curation and indexing of knowledge bases in vector databases (e.g., Pinecone, Vertex AI Vector Search).
  • Experience leveraging cloud-based AI/ML platforms (e.g., Vertex AI, Azure ML) to build, train, and deploy machine learning models at scale.
  • Ability to establish and enforce data quality and governance standards for AI/ML datasets, ensuring the accuracy, completeness, and integrity of data used for model training and validation.
  • Ability to collaborate closely with data scientists and machine learning engineers to understand data requirements and deliver optimized data solutions that support the entire machine learning lifecycle.
  • Hands-on experience with IBM DataStage and Alteryx is a plus.
  • Strong understanding of database design principles, including normalization, indexing, partitioning, and query optimization.
  • Ability to design and maintain efficient, scalable, and well-structured database schemas to support both analytical and transactional workloads.
  • Familiarity with BI visualization tools such as MicroStrategy, Power BI, Looker, or similar.
  • Familiarity with data modeling tools.
  • Familiarity with DevOps practices for data (CI/CD pipelines)
  • Proficiency in project management software (e.g., JIRA, Clarizen, etc.)
  • Strong knowledge and skills in data management, data quality, and data governance.
  • Strong communication, collaboration, and problem-solving skills.
  • Ability to work on multiple projects and prioritize tasks effectively.
  • Ability to work independently and in a team environment.
  • Ability to learn new technologies and tools quickly.
  • The ability to handle stressful situations.
  • Highly developed business acumen.
  • Strong critical thinking and decision-making skills.
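To give a sense of the retrieval step in the RAG pipelines mentioned above, here is a toy sketch. In production the embeddings would come from a real model and live in a vector database (e.g., Pinecone or Vertex AI Vector Search, as listed); here the vectors are hand-made for illustration:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Toy knowledge base: document name -> hand-made 3-dimensional "embedding".
knowledge_base = {
    "returns policy": [0.9, 0.1, 0.0],
    "store hours":    [0.1, 0.8, 0.2],
    "private label":  [0.0, 0.2, 0.9],
}

def retrieve(query_vec, k=1):
    """Return the k documents whose embeddings are closest to the query."""
    ranked = sorted(knowledge_base,
                    key=lambda doc: cosine(query_vec, knowledge_base[doc]),
                    reverse=True)
    return ranked[:k]

print(retrieve([0.85, 0.15, 0.05]))  # -> ['returns policy']
```

The retrieved documents would then be passed as context to a generative model, which is what makes the pipeline "retrieval-augmented."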


Working Conditions & Physical Demands

This position requires in-person office presence at least 4x a week.


Compensation and Benefits

The salary range for this position is $75,868 - $150,644. Placement in the range depends on several factors, including experience, skills, education, geography, and budget considerations.

Wakefern is proud to offer a comprehensive benefits package designed to support the health, well-being, and professional development of our Associates. Benefits include medical, dental, and vision coverage, life and disability insurance, a 401(k) retirement plan with company match & annual company contribution, paid time off, holidays, and parental leave.


Associates also enjoy access to wellness and family support programs, fitness reimbursement, educational and training opportunities through our corporate university, and a collaborative, team-oriented work environment. Many of these benefits are fully or partially funded by the company, with some subject to eligibility requirements.

Not Specified
Data Architect - Consumer Platform
✦ New
Salary not disclosed

The pay range for this role is $150,000 - $200,000/yr USD.


WHO WE ARE:


Headquartered in Southern California, Skechers—the Comfort Technology Company®—has spent over 30 years helping men, women, and kids everywhere look and feel good. Comfort innovation is at the core of everything we do, driving the development of stylish, high-quality products at a great value. From our diverse footwear collections to our expanding range of apparel and accessories, Skechers is a complete lifestyle brand.


ABOUT THE ROLE:


Skechers Digital Team is seeking a Digital Data Architect reporting to the Director, Digital Architecture, Consumer Domain. This role is responsible for designing and governing Skechers’ Consumer Data 360 ecosystem, enabling identity resolution, high-quality data foundations, personalization, loyalty intelligence, and machine learning capabilities across digital and retail channels.


The ideal candidate will be a strong technical leader, have hands-on full-stack technical knowledge in enterprise technologies related to Skechers’ consumer domain, and have the ability to work in a fast-paced agile environment. You should have knowledge of consumer programs from an architecture/industry perspective, and you should have strong hands-on experience designing solutions on the Salesforce Core Platform (including configuration, integration, and data model best practices).


You will work cross-functionally with Digital Engineering, Data Engineering, Data Science, Loyalty, and Marketing teams to architect scalable, secure, and high-performance data platforms that support advanced personalization and recommender systems.


WHAT YOU’LL DO:


  • Responsible for the full technical life cycle of consumer platform capabilities, which includes:
    • Capability roadmap and technical architecture in alignment to consumer experience
    • Technical planning, design, and execution
    • Operations, analytics/reporting, and adoption
  • Define and evolve Skechers’ Consumer Data 360 architecture, including identity resolution (deterministic and probabilistic matching) and unified customer profiles.
  • Architect scalable data models and pipelines across CDP, CRM, e-commerce, marketing automation, data lake, and warehouse platforms.
  • Establish enterprise data quality frameworks including validation, deduplication, anomaly detection, and observability.
  • Optimize SQL workloads and large-scale distributed queries through performance tuning, partitioning, indexing, and workload management strategies.
  • Design and oversee ML pipelines supporting personalization, churn modeling, and recommender systems.
  • Partner with Data Science teams to productionize models using distributed platforms such as Databricks (Spark, Delta Lake, MLflow preferred).
  • Ensure secure data governance, access control (RBAC/ABAC), and compliance with GDPR, CCPA, and related privacy regulations.
  • Provide architectural oversight ensuring performance, scalability, resilience, and maintainability.
  • Collaborate with stakeholders to translate business objectives (LTV growth, personalization lift, engagement) into scalable data solutions.
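The deterministic side of the identity resolution described above can be sketched as follows. The field names and the single-key match rule are simplifications for illustration; production systems layer several deterministic and probabilistic rules over many identifiers:

```python
from collections import defaultdict

def match_key(record):
    """Deterministic match key: a normalized email address."""
    return record["email"].strip().lower()

# Hypothetical records arriving from different channels.
records = [
    {"source": "ecom",   "email": "Ana@Example.com",  "loyalty_id": None},
    {"source": "retail", "email": "ana@example.com ", "loyalty_id": "L-42"},
    {"source": "ecom",   "email": "bo@example.com",   "loyalty_id": None},
]

# Unified profiles: records sharing a match key collapse into one identity.
profiles = defaultdict(list)
for rec in records:
    profiles[match_key(rec)].append(rec["source"])

print(dict(profiles))
# -> {'ana@example.com': ['ecom', 'retail'], 'bo@example.com': ['ecom']}
```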


REQUIREMENTS:


  • Computer Science, Data Engineering, or related degree or equivalent experience.
  • 12+ years of experience architecting enterprise data platforms in cloud environments.
  • 9+ years of experience in data engineering with a focus on consumer data.
  • 6+ years experience working with Salesforce platforms, including data models and enterprise integrations.
  • Strong experience with Data 360 and identity resolution architectures.
  • Proven expertise in SQL performance tuning and large-scale data modeling.
  • Hands-on experience implementing ML pipelines and recommender systems in production environments.
  • Experience with cloud technologies (AWS, GCP, or Azure).
  • Experience with integration patterns (API, ETL, event streaming).
  • Experience providing technical leadership and guidance across multiple projects and development teams.
  • Experience translating business requirements into detailed technical specifications and working with development teams through implementation, including issue resolution and stakeholder communication.
  • Strong project management skills including scope assessment, estimation, and clear technical communication with both business users and technical teams.
  • Must hold at least one of the following Salesforce Certifications (Platform App Builder, Platform Developer 1, JavaScript Developer 1).
  • Experience with Databricks or similar distributed data/ML platforms preferred.
Not Specified
Data QA Engineer
✦ New
Salary not disclosed
Dallas, TX 1 day ago

Title: Data QA Engineer

Location: Minneapolis, Dallas, Atlanta (Onsite)

Job Type: Contract

Exp: 8-15 Years


Key Responsibilities:

  • Design, build, and maintain automated data quality frameworks to validate accuracy, completeness, consistency, and timeliness of data.
  • Develop automation scripts using Python/SQL to test data pipelines, ETL/ELT processes, and analytics workflows.
  • Implement data quality checks and monitoring within Azure-based data platforms.
  • Work extensively with Azure services (ADF, ADLS, Synapse) and Databricks for large-scale data processing.
  • Integrate data quality validations into CI/CD pipelines and support proactive issue detection.
  • Perform root cause analysis for data issues and collaborate with data engineering, analytics, and business teams to resolve them.
  • Define and enforce data quality standards, metrics, and SLAs.
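A minimal sketch of the kind of automated quality checks described above: rules applied over rows, returning a failure count per rule that can be tracked against an SLA. Real frameworks would run inside Databricks or ADF against full tables; the rows and rule names here are illustrative:

```python
# Hypothetical sample rows with deliberate quality problems.
rows = [
    {"order_id": 1, "amount": 25.0, "currency": "USD"},
    {"order_id": 2, "amount": None, "currency": "USD"},  # completeness failure
    {"order_id": 3, "amount": 10.0, "currency": "usd"},  # consistency failure
]

# Each check is a predicate that a healthy row must satisfy.
checks = {
    "amount_complete": lambda r: r["amount"] is not None,
    "currency_upper":  lambda r: r["currency"] == r["currency"].upper(),
}

def run_checks(rows, checks):
    """Count failures per rule so they can be reported and trended."""
    return {name: sum(not rule(r) for r in rows) for name, rule in checks.items()}

print(run_checks(rows, checks))  # -> {'amount_complete': 1, 'currency_upper': 1}
```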

Required Skills & Qualifications:

  • Strong experience (8–15 years) in data engineering, data quality, or data automation roles.
  • Hands-on expertise with Azure data ecosystem and Databricks.
  • Strong programming skills in Python and SQL.
  • Experience building automated data validation and reconciliation frameworks.
  • Solid understanding of data warehousing, data lakes, and distributed data processing.
  • Familiarity with DevOps/CI-CD practices for data platforms.

Preferred Skills:

  • Experience with data observability or data quality tools.
  • Exposure to cloud-scale analytics and performance optimization.
  • Strong communication and stakeholder management skills.
Not Specified
Data Quality Specialist
Salary not disclosed
Kennett Square, PA 3 days ago

Job Description:

Overview:

We don't simply hire employees. We invest in them. When you work at Chatham, we empower you - offering professional development opportunities to help you grow in your career, no matter if you've been here for five months or 15 years. Chatham has worked hard to create a distinct work environment that values people, teamwork, integrity, and client service. You will have immediate opportunities to partner with talented subject matter experts, work on complex projects, and contribute to the value Chatham delivers every day.

We seek to enhance our Controls and Data Integrity team with a role specializing in data quality for interest rate, currency, and commodity transactions. The role is part of our global central operations group charged with ensuring the accuracy and reliability of Chatham's transaction, market, and valuation data.

In this role you will:

The purpose of the role is to ensure all transaction details are in Chatham's systems accurately and as agreed upon at execution. Data entry errors can have significant consequences to the economics of the transaction or to their accounting treatment, and it is therefore critical that team members understand transaction-related market conventions, payments, and valuations. This role will provide support for transactions executed by Chatham's real estate, private equity, corporate, and financial institutions sectors. We expect primary responsibilities to include:

  • Transaction and data review
    • Work as part of the larger team to check the data entry on transactions as they are executed
    • Verify calculation amounts and build payment schedules
    • Develop an understanding of the underlying transactions in order to identify loading errors
    • Check daily control reports to monitor unusual movements in transaction valuations and market data
    • Assist with data clean-up related to transaction data and Client Relationship Management (CRM) software
  • Communicate and coordinate across other internal teams and with clients
    • Interact with sector team members to verify/clarify data, as needed
    • Work with internal models, analytics, and technology teams to resolve issues
    • Play an active role in liaising between the business and technical teams
    • Check and send out monthly valuation reports to clients
  • Develop and share subject matter expertise
    • Take part in the training of new Chatham employees on sector teams
    • Serve as an integral member of ad hoc project teams to improve processes, solve problems, and provide insight from a data quality perspective
    • Develop SQL skills and help create database queries
  • The role may also include opportunities to contribute to the team in other capacities as interests and team needs align.
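To illustrate the payment-verification work above, here is a simplified fixed-leg calculation using the Actual/360 day-count convention. Conventions vary by transaction and this is a sketch for intuition, not Chatham's actual methodology:

```python
from datetime import date

def fixed_payment(notional, annual_rate, start, end):
    """Fixed-leg payment under the Actual/360 convention:
    notional * rate * (actual days in period / 360)."""
    days = (end - start).days
    return notional * annual_rate * days / 360

# $10M notional at 3.5% for the quarterly period Jan 15 - Apr 15, 2024 (91 days).
pay = fixed_payment(10_000_000, 0.035, date(2024, 1, 15), date(2024, 4, 15))
print(round(pay, 2))  # -> 88472.22
```

Recomputing amounts like this independently is one way a reviewer catches a mistyped rate or notional before it affects a payment.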

Your impact:

Our team works in partnership with Chatham's sector advisory teams and clients to help them efficiently navigate the data quality, operational, and regulatory compliance aspects of a transaction. We strive to continually improve the workflows we are responsible for and have the chance to do so by implementing process changes and/or leveraging supporting technology. Team members play a crucial role in these process improvements and serve as subject matter experts, providing regular training and resources for all Chatham teams.

Contributors to your success:

  • 2 years of experience working in operations or data quality may be beneficial but is not required
  • An interest in data quality, data management, and process improvement
  • Comfort with basic math skills and use of Microsoft Excel
  • High level of attention to detail, accuracy, and organization
  • Ability to multitask and independently prioritize workload
  • Strong verbal and written communication skills
  • Ability to work extra/non-standard hours around month- and quarter-ends (and other special cases) to support critical business processes
  • Experience with VBA and SQL are beneficial, but not necessary

We seek individuals that will thrive in our culture and can make a significant impact over the long term. Most of our team members do not come to Chatham with a deep understanding of derivatives; therefore, we conduct classroom and apprentice-style training. We look for people who have consistently demonstrated drive, determination, and academic/professional accomplishment throughout their lives. We invest a great deal of time and training with our employees and we are looking for individuals who want to make a long-term commitment to the company.

About Chatham Financial:

Chatham Financial is the largest independent financial risk management advisory and technology firm. A leader in debt and derivative solutions, Chatham provides clients with access to in-depth knowledge, innovative tools, and an incomparable team of over 700 employees to help mitigate risks associated with interest rate, foreign currency, and commodity exposures. Founded in 1991, Chatham serves more than 3,500 companies across a wide range of industries - handling over $1 trillion in transaction volume annually and helping businesses maximize their value in the capital markets, every day. To learn more, .

Chatham Financial is an equal opportunity employer.

Not Specified
Strategic Data (RWD) Acquisition Manager
Salary not disclosed
Minneapolis, MN 3 days ago

Surescripts serves the nation through simpler, trusted health intelligence sharing, in order to increase patient safety, lower costs and ensure quality care. We deliver insights at critical points of care for better decisions - from streamlining prior authorizations to delivering comprehensive medication histories to facilitating messages between providers.

Job Summary:

The Strategic Data (RWD) Acquisition Manager will be an integral part of Surescripts' data ecosystem by executing negotiations with Surescripts Network Alliance partners to secure data usage rights, while also identifying and acquiring new, strategic data sources. This person will play a critical role in maintaining access to high quality data necessary for the development of solutions that will deliver value and improve the experience for stakeholders across the healthcare ecosystem. This position requires a deep understanding of healthcare data, the regulatory landscape and business development experience to successfully negotiate and secure data agreements that will enhance our product portfolio.

Responsibilities:
  • Identify and evaluate potential data sources of interest that expand Surescripts' data portfolio. Create comprehensive value propositions for how the data could be used within Surescripts' solutions, and valuation of the data to make offers to data sources for data acquisition.
  • Drive business development efforts to secure agreements that enhance Surescripts' data portfolio. With guidance from leadership, execute strategies to identify and approach potential data partners, and successfully negotiate terms.
  • Collaborate with sales and product teams to develop strategies to align customer incentives with broader data-dependent initiatives. Interface with Surescripts Network Alliance partners to negotiate data usage rights, ensuring alignment with business goals and regulatory requirements.
  • Interface with data providers, industry partners, and other stakeholders.
  • Manage day-to-day data procurement-related inquiries and negotiations with data providers and customers.
  • Maintain a thorough understanding of privacy laws, including HIPAA permitted purposes. Collaborate with compliance, privacy, security, and data governance teams to ensure all data procurement activities comply with all state and federal regulations, internal policies, and customer contracts.
  • Monitor and report on data procurement activities. Track progress of data procurement efforts, report on key metrics, and provide regular updates to senior management. Proactively identify and address any challenges or obstacles in the procurement process. Monitor and evaluate the ROI of data acquisition initiatives to prioritize high-impact opportunities.
  • Keep up-to-date with the latest developments in data rights, privacy regulations, and the healthcare industry. Apply and share this knowledge to improve data procurement strategies and ensure the company remains compliant and competitive.

Qualifications:

Basic Requirements:

  • Bachelor's degree in Business, Economics, Data Science, or related field.
  • 8+ years of experience in business development and/or related experience in the procurement/acquisition of healthcare data.
  • Strong understanding of regulations around healthcare data, including Health Insurance Portability and Accountability Act (HIPAA) and Trusted Exchange Framework and Common Agreement (TEFCA).
  • Ability to evaluate the value and quality of data assets and their applicability to business needs.
  • Proven experience in negotiating contracts and managing vendor relationships.
  • Demonstrated success in business development and deal negotiation.
  • Excellent written and verbal communication and interpersonal skills.
  • Ability to work independently and as part of a team.
  • Ability to travel for team, customer and vendor meetings as needed.
  • Strategic thinker with strong analytical and problem-solving abilities and results-driven mindset.

Preferred Qualifications:

  • MBA or advanced degree preferred in a related field.
  • Strong understanding of healthcare interoperability standards, such as Fast Healthcare Interoperability Resource (FHIR).
  • Strong understanding of electronic health records (EHR), pharmacy and claims data, health information exchanges (HIE), and TEFCA qualified health information networks (QHINs)
  • Familiarity with data governance tools (e.g., data mapping, lineage).

#LI-remote

Surescripts embraces flexibility through its Flexible Hybrid Work model for most positions. This model allows employees to work virtually while still utilizing our offices as collaboration centers. With alignment and agreement from your leadership, you can come and go from the office as needed.

To be considered for employment, applicants must have valid U.S. work authorization allowing them to work without restrictions for Surescripts in the U.S. At this time, we are unable to provide support or sponsorship for immigration benefits such as work visas. Additionally, we do not participate in academic training programs or work-study programs through an academic institution that require employer endorsement of F-1/CPT or F-1/STEM.

Why Wait? Apply Now


We're a midsize company. This means you're not just another employee ID number. Here, you can build real relationships and feel supported by truly awesome people with diverse backgrounds and talents in an innovative and collaborative work culture. We strive to create an environment where you can be yourself, share your ideas and work your way. We offer opportunities for employee development, as well as competitive compensation packages and extensive benefits.

At Surescripts, base pay is one part of our Total Rewards Package (which may also include bonus, benefits etc.) and is determined within a range. The base pay range for this position is $138,100 - $168,700 per year. Your base pay may vary within or outside of this range depending on a number of factors, including (but not limited to) your qualifications, skills, experience, and location.


Benefits include, but are not limited to, comprehensive healthcare (including infertility coverage), generous paid time off including paid childbirth and parental leave and mental health days, pet insurance, and 401(k) with company match and immediate vesting. To learn more, review the Keep You and Yours Healthy, Balancing Work and Life, and Where Talent Takes Shape links under the Better Benefits. Better Work. Better Life section of our careers site.

Physical and Mental Requirements

While performing duties of this job, an employee may be required to perform any, or all of the following: attend meetings in and out of the office, travel, communicate effectively (both orally and in writing), and be able to effectively use computers and other electronic and standard office equipment with, or without, a reasonable accommodation. Additionally, this job requires certain mental demands, including the ability to use judgement, withstand moderate amounts of stress and maintain attention to detail with, or without, a reasonable accommodation.

Surescripts is proud to be an Equal Employment Opportunity and Affirmative Action employer. We do not discriminate on the basis of race, color, religion, age, national origin, ancestry, disability, medical condition, marital status, pregnancy, genetic information, gender, sexual orientation, parental status, gender identity, gender expression, veteran status, or any other status protected under federal, state, or local law.


Not Specified
Staff Data Engineer, tvScientific
Salary not disclosed
San Francisco, CA 3 days ago

About Pinterest:


Millions of people around the world come to our platform to find creative ideas, dream about new possibilities and plan for memories that will last a lifetime. At Pinterest, we're on a mission to bring everyone the inspiration to create a life they love, and that starts with the people behind the product.


Discover a career where you ignite innovation for millions, transform passion into growth opportunities, celebrate each other's unique experiences and embrace the flexibility to do your best work. Creating a career you love? It's Possible.


At Pinterest, AI isn't just a feature, it's a powerful partner that augments our creativity and amplifies our impact, and we're looking for candidates who are excited to be a part of that. To get a complete picture of your experience and abilities, we'll explore your foundational skills and how you collaborate with AI.


Through our interview process, what matters most is that you can always explain your approach, showing us not just what you know, but how you think. You can read more about our AI interview philosophy and how we use AI in our recruiting process here.

About tvScientific


tvScientific is the first and only CTV advertising platform purpose-built for performance marketers. We leverage massive data and cutting-edge science to automate and optimize TV advertising to drive business outcomes. Our solution combines media buying, optimization, measurement, and attribution in one, efficient platform. Our platform is built by industry leaders with a long history in programmatic advertising, digital media, and ad verification who have now purpose-built a CTV performance platform advertisers can trust to grow their business.



We are seeking a Staff Data Engineer to lead the design, implementation, and evolution of our identity services and data governance platform. This role is critical to ensuring trusted, privacy-safe, and well-governed data across the organization. You will work at the intersection of data engineering, identity resolution, privacy, and platform reliability. This is an individual contributor role, where you will work to define and implement a strategic vision for data engineering within the organization.


What you'll do:



  • Identity Services:

    • Design and maintain a scalable identity resolution platform
    • Build pipelines and services to ingest, normalize, link, and version identity data across multiple sources
    • Ensure deterministic and probabilistic matching logic that is transparent, auditable, and measurable
    • Partner with product and analytics teams to expose identity data through reliable, well-documented APIs and datasets
    • Build and operate batch and streaming pipelines using modern data stack tools
    • Create clear documentation, standards, and runbooks for identity and governance systems


  • Data Governance & Trust

    • Own data governance foundations including data lineage, quality checks, schema enforcement, and access controls
    • Implement privacy-by-design principles (PII handling, consent enforcement, retention policies)
    • Collaborate with legal, privacy, and security teams to operationalize regulatory requirements (e.g., GDPR, CCPA)
    • Establish monitoring and alerting for data quality, freshness, and integrity
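
The matching logic called out above (deterministic plus probabilistic, with transparent and auditable decisions) can be sketched as follows. This is a minimal illustration, not tvScientific's actual implementation: the record fields, the name-similarity scoring function, and the 0.85 threshold are all assumptions.

```python
from difflib import SequenceMatcher


def normalize_email(email):
    """Lowercase and strip whitespace so equivalent emails compare equal."""
    return email.strip().lower()


def deterministic_match(a, b):
    """Exact match on a normalized stable key (here: email)."""
    return normalize_email(a["email"]) == normalize_email(b["email"])


def probabilistic_score(a, b):
    """Fuzzy name similarity in [0, 1]; auditable because the score
    is recorded alongside every linking decision."""
    return SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio()


def link_records(a, b, threshold=0.85):
    """Return (matched, reason) so every decision is transparent."""
    if deterministic_match(a, b):
        return True, "deterministic:email"
    score = probabilistic_score(a, b)
    if score >= threshold:
        return True, f"probabilistic:name score={score:.2f}"
    return False, f"no-match score={score:.2f}"
```

Returning the reason string with each decision is what makes the logic auditable: a steward can see whether a link came from an exact key or a fuzzy score, and at what confidence.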



What we're looking for:



  • 5+ years of data engineering experience, with a proven track record of building data infrastructure using Spark with Scala
  • Experience in delivering significant technical initiatives and building reliable, large scale services
  • Experience in delivering APIs backed by relationship-heavy datasets
  • Experience implementing data governance practices, including data quality, metadata management, and access controls
  • Strong understanding of privacy-by-design principles and handling of sensitive or regulated data
  • Familiarity with data lakes, cloud warehouses, and storage formats
  • Strong proficiency in AWS services
  • Successful design and implementation of scalable and efficient data infrastructure
  • High attention to detail in implementation of automated data quality checks
  • Effective collaboration with cross-functional teams
  • Excellent written and verbal communication skills
  • Bachelor's degree in Computer Science or a related field


In-Office Requirement Statement:



  • We recognize that the ideal environment for work is situational and may differ across departments. What this looks like day-to-day can vary based on the needs of each organization or role.


Relocation Statement:



  • This position is not eligible for relocation assistance. Visit our PinFlex page to learn more about our working model.


#LI-SM4


#LI-REMOTE

At Pinterest we believe the workplace should be equitable, inclusive, and inspiring for every employee. In an effort to provide greater transparency, we are sharing the base salary range for this position. The position is also eligible for equity. Final salary is based on a number of factors including location, travel, relevant prior experience, or particular skills and expertise.


Information regarding the culture at Pinterest and benefits available for this position can be found here.

US-based applicants only: $155,584 - $320,320 USD

Our Commitment to Inclusion:


Pinterest is an equal opportunity employer and makes employment decisions on the basis of merit. We want to have the best qualified people in every job. All qualified applicants will receive consideration for employment without regard to race, color, ancestry, national origin, religion or religious creed, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, age, marital status, status as a protected veteran, physical or mental disability, medical condition, genetic information or characteristics (or those of a family member) or any other consideration made unlawful by applicable federal, state or local laws. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. If you require a medical or religious accommodation during the job application process, please complete this form for support.

Not Specified
System Administrator - Microsoft Purview (Data Catalog & Governance)
Salary not disclosed
Raleigh, NC 2 days ago
Role: System Administrator - Microsoft Purview (Data Catalog & Governance)

Location: 100% Remote

Duration: 12+ Months

Overview:

An experienced Administrator to operate and support the enterprise implementation of Microsoft Purview Data Catalog across a complex, multi-platform data environment. The administrator will be responsible for the day-to-day configuration, monitoring, and maintenance of Purview capabilities, ensuring reliable metadata ingestion, catalog quality, lineage visibility, and compliance alignment across governed data domains.

This role focuses on platform operations and governance execution, working within established architecture and enterprise governance standards.

Key Responsibilities

Platform Administration & Operations:


  • Administer and operate Microsoft Purview Data Map and Data Catalog environments.
  • Monitor platform health, scan execution, metadata ingestion, and lineage availability.
  • Troubleshoot and resolve catalog, scan, and connectivity issues.
  • Perform routine maintenance, configuration updates, and service optimizations.
  • Coordinate incident resolution with internal engineering teams and Microsoft support as required.

Data Source Management & Scanning:


  • Register, configure, and maintain data sources across Azure, M365, on-prem, and approved third-party platforms.
  • Configure and schedule metadata scans for supported sources.
  • Manage authentication for scans using managed identities, service principals, and Key Vault secrets.
  • Monitor scan performance, failures, and coverage; take corrective action as needed.
  • Optimize scan frequency and scope to balance cost, performance, and governance coverage.
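
The failure-monitoring step above can be sketched as a small status rollup. The record keys (`scanName`, `status`) are assumptions that mirror the general shape of scan-history records; a real implementation would fetch these from the Microsoft Purview scanning REST API (check the current API reference for exact endpoints and payloads) before summarizing them:

```python
def summarize_scan_runs(runs):
    """Group scan-run records by status and flag failed scans.

    `runs` is a list of dicts with hypothetical keys "scanName" and
    "status" (e.g. "Succeeded", "Failed", "Cancelled").
    Returns (status_counts, failed_scan_names) so an operator can
    take corrective action on the failures.
    """
    summary = {"Succeeded": 0, "Failed": 0, "Cancelled": 0}
    failed = []
    for run in runs:
        status = run.get("status", "Unknown")
        summary[status] = summary.get(status, 0) + 1
        if status == "Failed":
            failed.append(run["scanName"])
    return summary, failed
```

Feeding each day's scan history through a rollup like this is one simple way to spot coverage gaps and recurring failures before they affect catalog freshness.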

Catalog Configuration & Metadata Management:


  • Maintain and enforce enterprise metadata standards within the Purview Catalog.
  • Manage business metadata, classifications, glossary terms, and custom attributes.
  • Ensure metadata accuracy, completeness, and consistency across data assets.
  • Support curation activities including asset certification and publishing.
  • Resolve duplicate, incomplete, or stale catalog entries.

Lineage & Discovery Enablement:


  • Enable and validate data lineage ingestion from supported data platforms.
  • Monitor lineage completeness and visibility for critical data assets.
  • Assist data consumers and stewards with lineage-based impact analysis.
  • Escalate lineage gaps or tool limitations requiring architectural or engineering remediation.

Security, Access & Governance Controls:


  • Configure and manage Purview role-based access control (RBAC) within collections.
  • Provision and maintain access for administrators, data curators, and data stewards.
  • Enforce domain-based access controls and separation of duties.
  • Integrate Purview access with Microsoft Entra ID.
  • Support sensitivity labels and classification alignment with Microsoft Information Protection.

Compliance & Risk Support:


  • Support automated discovery of sensitive data (PII, PCI, PHI).
  • Assist risk, audit, and compliance teams with catalog evidence and reporting.
  • Validate scan coverage for regulated data domains.
  • Support regulatory and audit initiatives (SOX, GLBA, NYDFS, GDPR, etc.).

User Support & Enablement:


  • Provide operational support to data producers, consumers, and data stewards.
  • Respond to access requests, catalog issues, and usage questions.
  • Maintain operational documentation, runbooks, and standard operating procedures.
  • Support onboarding of new data domains following established governance patterns.
  • Assist with training and adoption initiatives led by governance or architecture teams.


Required Qualifications:


  • 5+ years of experience supporting enterprise data platforms or governance tools, and 4+ years of hands-on Microsoft Purview experience at enterprise scale.
  • Hands-on experience administering Microsoft Purview Data Catalog.
  • Strong understanding of metadata management, data classification, and lineage concepts.
  • Working knowledge of Azure data services and enterprise data ecosystems.
  • Experience managing access controls and identities using Microsoft Entra ID.
  • Familiarity with regulated data environments and compliance requirements.
  • Strong troubleshooting, operational support, and documentation skills.


Preferred Qualifications:


  • Experience supporting Purview integrations with Synapse, Fabric, Databricks, Snowflake, or SQL Server.
  • Exposure to financial services or other regulated industries.
  • Experience with PowerShell, REST APIs, or basic automation for operational tasks.
  • Prior experience supporting enterprise data governance or stewardship programs.
Not Specified
Business Data Analyst - Blue Bell, PA
Salary not disclosed
Blue Bell, PA 2 days ago
Business Data Analyst #4735 - Blue Bell, Pennsylvania, United States

Job Description:

The Business Data Analyst will play a critical role in supporting data-driven decision-making for core PMA business functions. This position is focused on extracting valuable insights from complex datasets, creating operational reports, and developing intuitive BI dashboards tailored to business needs. Working within an enterprise reporting structure, the analyst will perform on-demand data discovery, conduct trend analysis, and develop analytics tools that empower stakeholders with meaningful insights. By ensuring data accuracy, quality and relevance, this role will support data governance activities and continuous process improvements that align with strategic objectives.




Responsibilities:




Data Analysis & Business Insights
* Conduct in-depth data analysis to support strategic business initiatives.
* Perform trend analysis and develop predictive insights to help business teams identify patterns, risks, and opportunities.
* Respond to data discovery requests and develop operational reports to support key business metrics and decision-making.
* Apply reporting best practices and make recommendations that improve stakeholders' understanding of the data.
* Translate complex data findings into actionable recommendations, presenting insights in a clear and meaningful way for non-technical stakeholders.
Enterprise Reporting & BI Dashboard Development
* Work closely with business stakeholders to understand their reporting needs, providing insights that drive data-informed decisions.
* Design, develop, and maintain interactive BI dashboards tailored to answering critical business questions, providing real-time access to critical metrics and performance insights.
* Utilize enterprise BI tools to create data visualizations that enable easy exploration of data and insights.
* Partner with stakeholders to test and refine dashboards, ensuring they align with business requirements and enhance decision-making capabilities.
* Facilitate training and support for business users on BI dashboards and reporting tools, enabling self-service access to data insights.
Data Quality Support & Validation
* Collaborate with data governance and data engineering teams to ensure high data quality and integrity in enterprise reports and dashboards.
* Perform data validation and verification as part of report development to ensure data accuracy, consistency, and relevance for business users.
* Monitor data accuracy metrics and support data issue resolution, maintaining a high standard of data quality across reporting tools.
* Demonstrate commitment to the Company's Code of Business Conduct and Ethics, and apply knowledge of compliance policies and procedures, standards, and laws applicable to job responsibilities in the performance of work.
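
The data validation and verification step described above can be sketched as a rule-based row check. The field names (`policy_id`, `premium`) and the specific rules are illustrative assumptions, not taken from any real PMA schema:

```python
def validate_rows(rows, key="policy_id", required=("policy_id", "premium")):
    """Basic report-level validation over a list of row dicts:
    required fields present, no duplicate keys, premiums non-negative.
    Returns a list of human-readable issue strings for triage."""
    issues = []
    seen = set()
    for i, row in enumerate(rows):
        # Required-field check: catch missing or empty values.
        for field in required:
            if row.get(field) in (None, ""):
                issues.append(f"row {i}: missing {field}")
        # Uniqueness check on the business key.
        k = row.get(key)
        if k in seen:
            issues.append(f"row {i}: duplicate {key} {k}")
        seen.add(k)
        # Range check: premiums should not be negative.
        if isinstance(row.get("premium"), (int, float)) and row["premium"] < 0:
            issues.append(f"row {i}: negative premium")
    return issues
```

Running checks like these before a report is published turns data-quality monitoring into a concrete, repeatable gate rather than an ad hoc review.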



Requirements:

* 3+ years of experience in data, analytics, or business intelligence.
* Bachelor's degree in Information Management, Data Science, Computer Science, Mathematics, Statistics, Economics, Psychology or a related field.
* Proficient in SQL for data extraction and manipulation across various data sources.
* Strong analytical skills to interpret complex datasets and draw actionable insights.
* Experience with BI platforms like QlikSense or Power BI for data visualization and dashboard development.
* Familiar with advanced Excel functions for data manipulation and reporting.
* Understanding of statistical methods and trend analysis for identifying patterns and creating projections.
* Familiarity with predictive modeling or basic machine learning concepts is a plus.
* Proficiency with scripting languages or tools (such as Python, R, or VBA) for process automation is a plus.
* Basic understanding of data integration, ETL processes, and data warehousing concepts.
* Skilled in presenting data in a way that tells a compelling story and drives informed decision-making.
* Strong interpersonal skills to work effectively with cross-functional teams in underwriting, finance, and IT.
* High level of precision in data analysis, ensuring reports and insights are accurate and free of errors.
* Analytical mindset to investigate data challenges, identify root causes, and develop efficient solutions.
* Ability to adapt to evolving data requirements and troubleshoot issues with minimal supervision.
* Strong organizational skills to balance multiple projects and meet reporting deadlines.
* Effective time management to handle ad hoc requests and prioritize tasks in a fast-paced environment.
* Open and motivated to learn new tools, methods, and data practices.
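
The trend-analysis and projection skills listed above can be illustrated with a minimal ordinary-least-squares fit over an evenly spaced series. A production analysis would use a statistical library; this is only a sketch of the underlying idea:

```python
def linear_trend(values):
    """Ordinary least-squares slope and intercept over an evenly
    spaced series (x = 0, 1, 2, ...), the simplest projection basis."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept


def project(values, steps_ahead):
    """Extend the fitted trend line steps_ahead points past the series."""
    slope, intercept = linear_trend(values)
    return intercept + slope * (len(values) - 1 + steps_ahead)
```

For a monthly metric series like [10, 12, 14, 16], the fitted slope is 2 per period, so projecting one step ahead gives 18.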



Not Specified
jobs by JobLookup