CEIC Data Jobs in USA
11,328 positions found — Page 2
Job Title – Lead Data Engineer
Please note: this role is not able to offer visa transfer or sponsorship, now or in the future.
About the role
As a Lead Data Engineer, you will make an impact by designing, building, and operating scalable, cloud‑native data platforms supporting batch and streaming use cases, with a strong focus on governance, performance, and reliability. You will be a valued member of the Data Engineering team and work collaboratively with cross‑functional engineering, cloud, and architecture stakeholders.
In this role, you will:
- Design, build, and operate scalable cloud‑native data platforms supporting batch and streaming workloads with strong governance, performance, and reliability.
- Develop and operate data systems on AWS, Azure, and GCP, designing cloud‑native, scalable, and cost‑efficient data solutions.
- Build modern data architectures including data lakes, data lakehouses, and data hubs, with strong understanding of ingestion patterns, data governance, data modeling, observability, and platform best practices.
- Develop data ingestion and collection pipelines using Kafka and AWS Glue; work with modern storage formats such as Apache Iceberg and Parquet.
- Design and develop real‑time streaming pipelines using Kafka, Flink, or similar streaming frameworks, with understanding of event‑driven architectures and low‑latency data processing.
- Perform data transformation and modeling using SQL‑based frameworks and orchestration tools such as dbt, AWS Glue, and Airflow, including Slowly Changing Dimensions (SCD) and schema evolution.
- Use Apache Spark extensively for large‑scale data transformations across batch and streaming workloads (a minimal sketch follows this list).
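For illustration, here is a minimal PySpark sketch of the kind of batch transformation this role describes, assuming an Iceberg runtime jar and catalog are already configured for the cluster; the catalog name, paths, and columns below are hypothetical:

```python
# A minimal sketch, not a production job: read raw Parquet, transform, and
# append into an Iceberg table. All names and configs are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("orders-batch-transform")
    # Hypothetical Iceberg catalog configuration; adjust to your environment.
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.type", "glue")
    .getOrCreate()
)

# Read raw Parquet landed by an ingestion pipeline (path is hypothetical).
raw = spark.read.parquet("s3://example-bucket/raw/orders/")

# Typical batch transformation: dedupe on a business key, derive columns.
orders = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("net_amount", F.col("gross_amount") - F.col("discount"))
)

# Append into an existing Iceberg table so schema evolution and time travel apply.
orders.writeTo("lake.sales.orders").append()
```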
Work model
We believe hybrid work is the way forward as we strive to provide flexibility wherever possible. Based on this role’s business requirements, this is a hybrid position requiring 4 days a week in a client or Cognizant office in Atlanta, GA. Regardless of your working arrangement, we are here to support a healthy work-life balance through our various wellbeing programs.
The working arrangements for this role are accurate as of the date of posting. This may change based on the project you’re engaged in, as well as business and client requirements. Rest assured, we will always be clear about role expectations.
What you need to have to be considered
- Hands‑on experience developing and operating data systems on AWS, Azure, and GCP.
- Proven ability to design cloud‑native, scalable, and cost‑efficient data solutions.
- Experience building data lakes, data lakehouses, and data hubs with strong understanding of ingestion patterns, governance, modeling, observability, and platform best practices.
- Expertise in data ingestion and collection using Kafka and AWS Glue, with experience in Apache Iceberg and Parquet.
- Strong experience designing and developing real‑time streaming pipelines using Kafka, Flink, or similar streaming frameworks.
- Deep expertise in data transformation and modeling using SQL‑based frameworks and orchestration tools including dbt, AWS Glue, and Airflow, with knowledge of SCD and schema evolution.
- Extensive experience using Apache Spark for large‑scale batch and streaming data transformations.
These will help you stand out
- Experience with event‑driven architectures and low‑latency data processing.
- Strong understanding of schema evolution, SCD modeling, and modern data modeling concepts.
- Experience with Apache Iceberg, Parquet, and modern ingestion/storage patterns.
- Strong knowledge of observability, governance, and platform best practices.
- Ability to partner effectively with cloud, architecture, and engineering teams.
Salary and Other Compensation:
Applications will be accepted until March 17, 2025.
The annual salary for this position is between $81,000 and $135,000, depending on experience and other qualifications of the successful candidate.
This position is also eligible for Cognizant’s discretionary annual incentive program, based on performance and subject to the terms of Cognizant’s applicable plans.
Benefits: Cognizant offers the following benefits for this position, subject to applicable eligibility requirements:
- Medical/Dental/Vision/Life Insurance
- Paid holidays plus Paid Time Off
- 401(k) plan and contributions
- Long‑term/Short‑term Disability
- Paid Parental Leave
- Employee Stock Purchase Plan
Disclaimer: The salary, other compensation, and benefits information is accurate as of the date of this posting. Cognizant reserves the right to modify this information at any time, subject to applicable law.
OZ – Databricks Architect / Senior Data Engineer
Note: Only applications from U.S. citizens or lawful permanent residents (Green Card Holders) will be considered.
We believe work should be innately rewarding and a team-building venture. Working with our teammates and clients should be an enjoyable journey where we can learn, grow as professionals, and achieve amazing results. Our core values revolve around this philosophy. We are relentlessly committed to helping our clients achieve their business goals, leapfrog the competition, and become leaders in their industry. What drives us forward is the culture of creativity combined with a disciplined approach, passion for learning & innovation, and a ‘can-do’ attitude!
What We're Looking For:
We are seeking a highly experienced Databricks professional with deep expertise in data engineering, distributed computing, and cloud-based data platforms. The ideal candidate is both an architect and a hands-on engineer who can design scalable data solutions while actively contributing to development, optimization, and deployment.
This role requires strong technical leadership, a deep understanding of modern data architectures, and the ability to implement best practices in DataOps, performance optimization, and data governance.
Experience with modern AI/GenAI-enabled data platforms and real-time data processing environments is highly desirable.
Position Overview:
The Databricks Senior Data Engineer will play a critical role in designing, implementing, and optimizing enterprise-scale data platforms using the Databricks Lakehouse architecture. This role combines architecture leadership with hands-on engineering, focusing on building scalable, secure, and high-performance data pipelines and platforms. The ideal candidate will establish coding standards, define data architecture frameworks such as the Medallion Architecture, and guide the end-to-end development lifecycle of modern data solutions.
This individual will collaborate with cross-functional stakeholders, including data engineers, BI developers, analysts, and business leaders, to deliver robust data platforms that enable advanced analytics, reporting, and AI-driven decision-making.
Key Responsibilities:
- Architecture & Design: Architect and design scalable, reliable data platforms and complex ETL/ELT and streaming workflows for the Databricks Lakehouse Platform (Delta Lake, Spark).
- Hands-On Development: Write, test, and optimize code in Python, PySpark, and SQL for data ingestion, transformation, and processing.
- DataOps & Automation: Implement CI/CD, monitoring, and automation (e.g., with Azure DevOps, DBX) for data pipelines.
- Stakeholder Collaboration: Work with BI developers, analysts, and business users to define requirements and deliver data-driven solutions.
- Performance Optimization: Tune Delta tables, Spark jobs, and SQL queries for maximum efficiency and scalability (a tuning sketch follows this list).
- GenAI Application Development: Experience in GenAI application development is a big plus.
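To make the tuning work above concrete, here is a minimal Databricks notebook sketch; the table name and columns are hypothetical, and OPTIMIZE/ZORDER/VACUUM assume a Delta table on Databricks:

```python
# A minimal tuning sketch for a Databricks notebook; names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

# Compact small files and co-locate rows that are frequently filtered together.
spark.sql("OPTIMIZE sales.orders ZORDER BY (customer_id, order_date)")

# Keep table statistics fresh so the optimizer picks efficient plans.
spark.sql("ANALYZE TABLE sales.orders COMPUTE STATISTICS")

# Remove files no longer referenced by the Delta log (default retention applies).
spark.sql("VACUUM sales.orders")
```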
Requirements:
- 8+ years of experience in data engineering, with strong hands-on expertise in Databricks and Apache Spark.
- Proven experience designing and implementing scalable ETL/ELT pipelines in cloud environments.
- Strong programming skills in Python and SQL; experience with PySpark required.
- Hands-on experience with Databricks Lakehouse, Delta Lake, and distributed data processing.
- Experience working with cloud platforms such as Microsoft Azure, AWS, or GCP (Azure preferred).
- Experience with CI/CD pipelines, Git, and DevOps practices for data engineering.
- Strong understanding of data architecture, data modeling, and performance optimization.
- Experience working with cross-functional teams to deliver enterprise data solutions.
- Ability to tackle complex data challenges while ensuring data quality and reliable delivery.
Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related field.
- Experience designing enterprise-scale data platforms and modern data architectures.
- Experience with data integration tools such as Azure Data Factory or similar platforms.
- Familiarity with cloud data warehouses such as Databricks, Snowflake, or Microsoft Fabric.
- Experience supporting analytics, reporting, or AI/ML workloads is highly desirable.
- Databricks, Azure, or cloud certifications are preferred.
- Strong problem-solving, communication, and technical leadership skills.
Technical Proficiency in:
- Databricks, Apache Spark, PySpark, Delta Lake
- Python, SQL, Scala (preferred)
- Cloud platforms: Azure (preferred), AWS, or GCP
- Azure Data Factory, Kafka, and modern data integration tools
- Data warehousing: Databricks, Snowflake, or Microsoft Fabric
- DevOps tools: Git, Azure DevOps, CI/CD pipelines
- Data architecture, ETL/ELT design, and performance optimization
What You’re Looking For:
Join a fast-growing organization that thrives on innovation and collaboration. You’ll work alongside talented, motivated colleagues in a global environment, helping clients solve their most critical business challenges. At OZ, your contributions matter – you’ll have the chance to be a key player in our growth and success. If you’re driven, bold, and eager to push boundaries, we invite you to join a company where you can truly make a difference.
About Us:
OZ is a 28-year-old global technology consulting, services, and solutions leader specializing in creating business-focused solutions for our clients by leveraging disruptive digital technologies and innovation.
OZ is committed to creating a continuum between work and life by allowing people to work remotely. We offer competitive compensation and a comprehensive benefits package. You’ll enjoy our work style within an incredible culture. We’ll give you the tools you need to succeed so you can grow and develop with us and become part of a team that lives by its core values.
Job Summary:
Our client is seeking a Data Steward to join their team! This position is located Hybrid in Creve Coeur, Missouri.
Duties:
- Understand business capability needs and processes as they relate to IT solutions through partnering with Product Managers and business and functional IT stakeholders
- Participate in data scraping, data curation and data compilation efforts
- Ensure high quality of the data delivered to end users
- Ensure high quality of the in-house data via data stewardship
- Implement and utilize data solutions for data analysis and profiling using a variety of tools such as SQL, Postman, R, or Python, following the team’s established processes and methodologies (a profiling sketch follows this list)
- Collaborate with other data stewards and engineers within the team and across teams on aligning delivery dates and integration efforts
- Define data quality rules and implement automated monitoring, reporting, and remediation solutions
- Coordinate intake and resolution of data support tickets
- Support data migration from legacy systems, data inserts and updates not supported by applications
- Partner with the Data Governance organization to ensure data is secured and access is being managed appropriately
- Identify gaps within existing processes and create new documentation templates to improve existing processes and procedures
- Create mapping documents and templates to improve existing manual processes
- Perform data discoveries to understand data formats, source systems, etc. and engage with business partners in this discovery process
- Help answer questions from the end-users and coordinate with technical resources as needed
- Build prototype SQL and continuously engage with end consumers with enhancements
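As a flavor of the profiling work above, here is a minimal Python/pandas sketch; the file and column names are hypothetical stand-ins for whatever source the steward is validating:

```python
# A minimal profiling sketch; "material_master_extract.csv" and "material_id"
# are hypothetical examples of a source extract and its business key.
import pandas as pd

df = pd.read_csv("material_master_extract.csv")

# Per-column profile: type, null percentage, and distinct-value count.
profile = pd.DataFrame({
    "dtype": df.dtypes.astype(str),
    "null_pct": (df.isna().mean() * 100).round(2),
    "distinct": df.nunique(),
})
print(profile)

# A simple quality rule of the kind a steward would automate:
dupes = df[df.duplicated(subset=["material_id"], keep=False)]
print(f"{len(dupes)} rows share a material_id that should be unique")
```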
Desired Skills/Experience:
- Bachelor's Degree in Computer Science, Engineering, Science, or other related field
- Applied experience with modern engineering technologies and data principles (e.g., big data cloud compute, NoSQL, etc.)
- Applied experience querying SQL and/or NoSQL databases
- Experience in designing data catalogs, including data design, metadata structures, object relations, catalog population, etc.
- Data Warehousing experience
- Strong written and verbal communication skills
- Comfortable balancing demands across multiple projects / initiatives
- Ability to identify gaps in requirements based on business subject matter domain expertise
- Ability to deliver detailed technical documentation
- Expert level experience in relevant business domain
- Experience managing data within SAP
- Experience managing data using APIs
- BigQuery experience
Benefits:
- Medical, Dental, & Vision Insurance Plans
- Employee-Owned Profit Sharing (ESOP)
- 401K offered
The approximate pay range for this position is $104,000 - $115,000+. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
At KellyMitchell, our culture is world class. We’re movers and shakers! We don’t mind a bit of friendly competition, and we reward hard work with unlimited potential for growth. This is an exciting opportunity to join a company known for innovative solutions and unsurpassed customer service. We're passionate about helping companies solve their biggest IT staffing & project solutions challenges. As an employee-owned, women-led organization serving Fortune 500 companies nationwide, we deliver expert service at a moment's notice.
By applying for this job, you agree to receive calls, AI-generated calls, text messages, or emails from KellyMitchell and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy at
Loloi Rugs is a leading textile brand that designs and crafts rugs, pillows, and throws for the thoughtfully layered home. Family-owned and led since 2004, Loloi is growing more quickly than ever. To date, we’ve expanded our diverse team to hundreds of employees, invested in multiple distribution facilities, introduced thousands of products, and earned the respect and business of retailers and designers worldwide. A testament to our products and our team, Loloi has earned the ARTS Award for “Best Rug Manufacturer” in 2010, 2011, 2015, 2016, 2018, 2023, and 2025.
Security Advisory: Beware of Frauds
Protect yourself from potential fraud and verify the authenticity of any job offer you receive from Loloi. Rest assured that we never request payment or demand any sensitive personal information, such as bank details or social security numbers, at any stage of the recruiting process. To ensure genuine communication, our recruiters will solely reach out to applicants using an @ email address. Your security is of paramount importance to us at Loloi, and we are committed to maintaining a safe and trustworthy hiring experience for all candidates.
We are building a Business Operations Center of Excellence, and we need a Product Data Analyst to serve as the "Guardian of the Golden Record." In this role, you are the absolute owner of product data integrity as it relates to the digital customer experience. You ensure that every item we sell is accurately represented across every touchpoint—from our ERP and PIM to our website storefront and marketing feeds. This is not a data entry role; it is a high-impact technical logic and investigation role. You will work directly with our Data Platform and Software Engineering teams to define business rules, audit data health via complex SQL, and troubleshoot data transmission errors before they impact the customer.
Responsibilities
- Storefront Governance: Serve as the absolute owner of product data integrity within the PIM. Ensure that all storefront-critical attributes (pricing, dimensions, weights, image links) are accurate and standardized for a seamless customer experience.
- Technical Data Auditing: Write and run complex SQL queries against our centralized database to identify anomalies, "orphan" records, and data hygiene issues that need resolution. You will be expected to query across multiple schemas to validate data consistency between systems (see the SQL sketch after this list).
- Feed Logic & Mapping: You will manage the logic of how data translates from our PIM to external endpoints. You will ensure that our products appear correctly on Google Shopping, Meta, Amazon, and other marketplaces by managing feed rules and mapping definitions.
- API Payload Analysis: You will act as the first line of defense for data transmission errors. If a product isn't showing up on the site, you will review the JSON/XML response bodies to determine if it is a data payload error or a software code bug.
- Cross-Functional Impact Analysis: You will act as the gatekeeper for data changes, predicting downstream impacts (e.g., "If Merchandising changes this Category Name, it will break the Finance reporting filter").
- Hygiene Logic Definition: You will partner with our IT/Database team to define automated health checks. You identify the "rot" (bad data patterns), and they implement the database constraints to stop it.
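To make the auditing concrete, here is a minimal sketch of the "orphan record" anti-join described above, run against an in-memory SQLite database for self-containedness; the table and column names are hypothetical:

```python
# A minimal orphan-record audit: storefront rows whose SKU has no matching
# golden record in the PIM. Tables, columns, and data are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE pim_products (sku TEXT PRIMARY KEY, name TEXT);
    CREATE TABLE storefront_items (sku TEXT, price REAL);
    INSERT INTO pim_products VALUES ('RUG-001', 'Layla Rug');
    INSERT INTO storefront_items VALUES ('RUG-001', 129.00), ('RUG-999', 89.00);
""")

# Anti-join via LEFT JOIN ... IS NULL to surface unmatched storefront rows.
orphans = conn.execute("""
    SELECT s.sku, s.price
    FROM storefront_items s
    LEFT JOIN pim_products p ON p.sku = s.sku
    WHERE p.sku IS NULL
""").fetchall()

print(orphans)  # [('RUG-999', 89.0)] -- an orphan to investigate
```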
What You Will NOT Do (The Boundaries)
- No Web Development: You are not a Front-End Developer. You do not write HTML, CSS, or React code. You ensure the data powering those components is 100% accurate.
- No Manual Data Entry: Your job is not to copy-paste descriptions. You build the systems, bulk processes, and logic that ensure data quality at scale.
- No Database Administration: You do not manage server uptime or schema changes (IT owns this). You own the quality of the records inside the database.
Intersection with Technical Teams
- With IT (Database Mgmt): IT owns the infrastructure and schema; you own the quality of the data within it. When you identify a systemic issue (e.g., "5,000 orphan records"), you partner with IT to implement the technical fix (scripts/constraints).
- With Software Engineering (Commerce): If a product is missing from the site, you check the data payload. If the data is correct, you hand off to Engineering, confirming it is a code/caching bug rather than a data error.
Experience, Skills, & Ability Requirements
- 5-8 years of experience in Data Management, PIM Administration, or technical eCommerce Operations.
- SQL Proficiency: You are comfortable writing queries beyond simple SELECT *. You should be proficient with CTEs (Common Table Expressions), Window Functions (e.g., Rank, Lead/Lag), Subqueries, and complex Joins to act as a forensic data investigator.
- API Fluency: You can read and understand JSON and XML. You know what a valid payload looks like and can spot formatting errors or missing keys (see the payload-check sketch after this list).
- Data Manipulation: You are an expert at handling large datasets (CSVs, Excel) and understand data types, formatting standards, and normalization concepts.
- You love hunting down the root cause of an error. You don't just fix the wrong price; you find out why the price was wrong and build a rule to stop it from happening again.
- You have high standards for accuracy. You understand that a wrong weight in the system means a financial loss on shipping for the business.
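Here is a minimal sketch of the payload spot-check implied by the API-fluency requirement; the payload and required keys are hypothetical examples:

```python
# A minimal JSON payload check: parse, then verify required keys are present.
# The payload and REQUIRED_KEYS are hypothetical.
import json

REQUIRED_KEYS = {"sku", "price", "weight_lbs", "image_url"}

payload = '{"sku": "RUG-001", "price": 129.0, "image_url": "https://example.com/r.jpg"}'

record = json.loads(payload)            # raises ValueError on malformed JSON
missing = REQUIRED_KEYS - record.keys()
if missing:
    print(f"payload missing keys: {sorted(missing)}")  # -> ['weight_lbs']
```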
Bonus Points (Nice-to-Haves)
- Familiarity with Visio/Lucidchart to visualize data flows.
- Ability to build simple dashboards in Tableau to track data health scores.
- Basic familiarity with Python or R for data manipulation.
What We Offer
- Health, dental, and vision benefits
- Paid parental leave
- 401(k) with employer match
- A culture of meritocracy that fosters ongoing growth opportunities
- A stable, growing family-owned company that looks after its employees
Loloi Rugs does not discriminate on the basis of race, sex, color, religion, age, national origin, marital status, disability, veteran status, genetic information, sexual orientation, gender identity or any other reason prohibited by law in provision of employment opportunities and benefits. We seek a diverse pool of applicants and consider all qualified candidates regardless of race, ancestry, color, gender identity or expression, sexual orientation, religion, national origin, citizenship, disability, Veteran status, marital status, or any other protected status. If you have a special need or disability that requires accommodation, please let us know.
About Wakefern
Wakefern Food Corp. is the largest retailer-owned cooperative in the United States and supports its co-operative members' retail operations, trading under the ShopRite®, Price Rite®, The Fresh Grocer®, Dearborn Markets®, and Gourmet Garage® banners.
Employing an innovative approach to wholesale business services, Wakefern focuses on helping the independent retailer compete in a big business world. Providing the tools entrepreneurs need to stay a step ahead of the competition, Wakefern’s co-operative members benefit from the company’s extensive portfolio of services, including innovative technology, private label development, and best-in-class procurement practices.
The ideal candidate will have a strong background in designing, developing, and implementing complex projects, with a focus on automating data processes and driving efficiency within the organization. This role requires close collaboration with application developers, data engineers, data analysts, and data scientists to ensure seamless data integration and automation across various platforms. The Data Integration & AI Engineer is responsible for identifying opportunities to automate repetitive data processes, reduce manual intervention, and improve overall data accessibility.
Essential Functions
- Participate in the development life cycle (requirements definition, project approval, design, development, and implementation) and maintenance of the systems.
- Implement and enforce data quality and governance standards to ensure the accuracy and consistency of data.
- Provide input for project plans and timelines to align with business objectives.
- Monitor project progress, identify risks, and implement mitigation strategies.
- Work with cross-functional teams and ensure effective communication and collaboration.
- Provide regular updates to the management team.
- Follow the standards and procedures according to Architecture Review Board best practices, revising standards and procedures as requirements change and technological advancements are incorporated into the technology structure.
- Communicates and promotes the code of ethics and business conduct.
- Ensures completion of required company compliance training programs.
- Is trained – either through formal education or through experience – in software / hardware technologies and development methodologies.
- Stays current through personal development and professional and industry organizations.
Responsibilities
- Design, build, and maintain automated data pipelines and ETL processes to ensure scalability, efficiency, and reliability across data operations (a loading sketch follows this list).
- Develop and implement robust data integration solutions to streamline data flow between diverse systems and databases.
- Continuously optimize data workflows and automation processes to enhance performance, scalability, and maintainability.
- Design and develop end-to-end data solutions utilizing modern technologies, including scripting languages, databases, APIs, and cloud platforms.
- Ensure data solutions and data sources meet quality, security, and compliance standards.
- Monitor and troubleshoot automation workflows, proactively identifying and resolving issues to minimize downtime.
- Provide technical training, documentation, and ongoing support to end users of data automation systems.
- Prepare and maintain comprehensive technical documentation, including solution designs, specifications, and operational procedures.
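For illustration of the pipeline work above, here is a minimal load-job sketch using the google-cloud-bigquery client; the project, dataset, and bucket names are hypothetical, and credentials are assumed to come from the environment:

```python
# A minimal ingestion sketch: load a daily CSV extract from Cloud Storage into
# a BigQuery staging table. All resource names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()  # credentials resolved from the environment

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)

load_job = client.load_table_from_uri(
    "gs://example-bucket/extracts/vendors_2024-01-01.csv",
    "example_project.staging.vendors",
    job_config=job_config,
)
load_job.result()  # block until the load completes or raises
```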
Qualifications
- A bachelor's degree or higher in computer science, information systems, or a related field.
- Hands-on experience with cloud data platforms (e.g., GCP, Azure)
- Strong knowledge and skills in data automation technologies, such as Python, SQL, ETL/ELT tools, Kafka, APIs, cloud data pipelines, etc.
- Experience in GCP BigQuery, Dataflow, Pub/Sub, and Cloud Storage.
- Experience with workflow orchestration tools such as Cloud Composer or Airflow
- Proficiency in iPaaS (Integration Platform as a Service) platforms, such as Boomi, SAP BTP, etc.
- Develop and manage data integrations for AI agents, connecting them to internal and external APIs, databases, and knowledge sources to expand their capabilities.
- Build and maintain scalable Retrieval-Augmented Generation (RAG) pipelines, including the curation and indexing of knowledge bases in vector databases (e.g., Pinecone, Vertex AI Vector Search); a retrieval sketch follows this list.
- Leverage cloud-based AI/ML platforms (e.g., Vertex AI, Azure ML) to build, train, and deploy machine learning models at scale.
- Establish and enforce data quality and governance standards for AI/ML datasets, ensuring the accuracy, completeness, and integrity of data used for model training and validation.
- Collaborate closely with data scientists and machine learning engineers to understand data requirements and deliver optimized data solutions that support the entire machine learning lifecycle.
- Hands-on experience with IBM DataStage and Alteryx is a plus.
- Strong understanding of database design principles, including normalization, indexing, partitioning, and query optimization.
- Ability to design and maintain efficient, scalable, and well-structured database schemas to support both analytical and transactional workloads.
- Familiarity with BI visualization tools such as MicroStrategy, Power BI, Looker, or similar.
- Familiarity with data modeling tools.
- Familiarity with DevOps practices for data (CI/CD pipelines)
- Proficiency in project management software (e.g., JIRA, Clarizen, etc.)
- Strong knowledge and skills in data management, data quality, and data governance.
- Strong communication, collaboration, and problem-solving skills.
- Ability to work on multiple projects and prioritize tasks effectively.
- Ability to work independently and in a team environment.
- Ability to learn new technologies and tools quickly.
- Ability to handle stressful situations.
- Highly developed business acumen.
- Strong critical thinking and decision-making skills.
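To make the RAG responsibility above concrete, here is a minimal sketch of the retrieval step at the heart of such a pipeline; in production the vectors would live in a vector database (e.g., Pinecone or Vertex AI Vector Search), and the embed() helper here is a hypothetical placeholder for a real embedding model:

```python
# A minimal retrieval sketch: index a small knowledge base as vectors, then
# rank chunks by cosine similarity to a query. numpy stands in for a vector DB.
import zlib
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder embedding: a deterministic pseudo-random unit vector per
    # string, for illustration only; swap in a real embedding model.
    rng = np.random.default_rng(zlib.crc32(text.encode()))
    v = rng.standard_normal(16)
    return v / np.linalg.norm(v)

# "Index" a small curated knowledge base.
docs = ["Store hours policy", "Private label onboarding guide", "Freight claims FAQ"]
matrix = np.stack([embed(d) for d in docs])

# Retrieve: cosine similarity between the query and every indexed chunk.
query = embed("How do I onboard a private label product?")
scores = matrix @ query
print(docs[int(np.argmax(scores))])  # best-matching chunk to feed the LLM
```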
Working Conditions & Physical Demands
This position requires in-person office presence at least 4x a week.
Compensation and Benefits
The salary range for this position is $75,868 - $150,644. Placement in the range depends on several factors, including experience, skills, education, geography, and budget considerations.
Wakefern is proud to offer a comprehensive benefits package designed to support the health, well-being, and professional development of our Associates. Benefits include medical, dental, and vision coverage, life and disability insurance, a 401(k) retirement plan with company match & annual company contribution, paid time off, holidays, and parental leave.
Associates also enjoy access to wellness and family support programs, fitness reimbursement, educational and training opportunities through our corporate university, and a collaborative, team-oriented work environment. Many of these benefits are fully or partially funded by the company, with some subject to eligibility requirements.
The pay range for this role is $150,000 - $200,000/yr USD.
WHO WE ARE:
Headquartered in Southern California, Skechers—the Comfort Technology Company®—has spent over 30 years helping men, women, and kids everywhere look and feel good. Comfort innovation is at the core of everything we do, driving the development of stylish, high-quality products at a great value. From our diverse footwear collections to our expanding range of apparel and accessories, Skechers is a complete lifestyle brand.
ABOUT THE ROLE:
Skechers Digital Team is seeking a Digital Data Architect reporting to the Director, Digital Architecture, Consumer Domain. This role is responsible for designing and governing Skechers’ Consumer Data 360 ecosystem, enabling identity resolution, high-quality data foundations, personalization, loyalty intelligence, and machine learning capabilities across digital and retail channels.
The ideal candidate will be a strong technical leader, have hands-on full-stack technical knowledge in enterprise technologies related to Skechers’ consumer domain, and have the ability to work in a fast-paced agile environment. You should have knowledge of consumer programs from an architecture/industry perspective, and you should have strong hands-on experience designing solutions on the Salesforce Core Platform (including configuration, integration, and data model best practices).
You will work cross-functionally with Digital Engineering, Data Engineering, Data Science, Loyalty, and Marketing teams to architect scalable, secure, and high-performance data platforms that support advanced personalization and recommender systems.
WHAT YOU’LL DO:
- Responsible for the full technical life cycle of consumer platform capabilities, which includes:
  - Capability roadmap and technical architecture in alignment with consumer experience
  - Technical planning, design, and execution
  - Operations, analytics/reporting, and adoption
- Define and evolve Skechers’ Consumer Data 360 architecture, including identity resolution (deterministic and probabilistic matching) and unified customer profiles (a matching sketch follows this list).
- Architect scalable data models and pipelines across CDP, CRM, e-commerce, marketing automation, data lake, and warehouse platforms.
- Establish enterprise data quality frameworks including validation, deduplication, anomaly detection, and observability.
- Optimize SQL workloads and large-scale distributed queries through performance tuning, partitioning, indexing, and workload management strategies.
- Design and oversee ML pipelines supporting personalization, churn modeling, and recommender systems.
- Partner with Data Science teams to productionize models using distributed platforms such as Databricks (Spark, Delta Lake, MLflow preferred).
- Ensure secure data governance, access control (RBAC/ABAC), and compliance with GDPR, CCPA, and related privacy regulations.
- Provide architectural oversight ensuring performance, scalability, resilience, and maintainability.
- Collaborate with stakeholders to translate business objectives (LTV growth, personalization lift, engagement) into scalable data solutions.
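For illustration of the identity-resolution work above, here is a minimal deterministic-matching sketch; the field names and records are hypothetical, and real systems layer probabilistic scoring (fuzzy name/address similarity) on top:

```python
# A minimal deterministic identity-resolution sketch: link records that share
# a normalized match key. All fields and records are hypothetical.
from collections import defaultdict

def match_key(record: dict) -> str:
    # Normalize the strongest identifier available: lowercase, trimmed email.
    return record.get("email", "").strip().lower()

records = [
    {"source": "ecom",   "email": "Jane.Doe@Example.com", "loyalty_id": None},
    {"source": "retail", "email": "jane.doe@example.com", "loyalty_id": "L-42"},
    {"source": "ecom",   "email": "sam@example.com",      "loyalty_id": None},
]

profiles = defaultdict(list)
for r in records:
    profiles[match_key(r)].append(r)   # deterministic merge on the exact key

for key, linked in profiles.items():
    print(key, "->", [r["source"] for r in linked])
# jane.doe@example.com links the ecom and retail records into one profile
```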
REQUIREMENTS:
- Computer Science, Data Engineering, or related degree or equivalent experience.
- 12+ years of experience architecting enterprise data platforms in cloud environments.
- 9+ years of experience in data engineering, with a focus on consumer data.
- 6+ years of experience working with Salesforce platforms, including data models and enterprise integrations.
- Strong experience with Data 360 and identity resolution architectures.
- Proven expertise in SQL performance tuning and large-scale data modeling.
- Hands-on experience implementing ML pipelines and recommender systems in production environments.
- Experience with cloud technologies (AWS, GCP, or Azure).
- Experience with integration patterns (API, ETL, event streaming).
- Experience providing technical leadership and guidance across multiple projects and development teams.
- Experience translating business requirements into detailed technical specifications and working with development teams through implementation, including issue resolution and stakeholder communication.
- Strong project management skills including scope assessment, estimation, and clear technical communication with both business users and technical teams.
- Must hold at least one of the following Salesforce certifications: Platform App Builder, Platform Developer I, or JavaScript Developer I.
- Experience with Databricks or similar distributed data/ML platforms preferred.
Title: Data QA Engineer
Location: Minneapolis, Dallas, Atlanta (Onsite)
Job Type: Contract
Experience: 8-15 Years
Key Responsibilities:
- Design, build, and maintain automated data quality frameworks to validate accuracy, completeness, consistency, and timeliness of data (a validation sketch follows this list).
- Develop automation scripts using Python/SQL to test data pipelines, ETL/ELT processes, and analytics workflows.
- Implement data quality checks and monitoring within Azure-based data platforms.
- Work extensively with Azure services (ADF, ADLS, Synapse) and Databricks for large-scale data processing.
- Integrate data quality validations into CI/CD pipelines and support proactive issue detection.
- Perform root cause analysis for data issues and collaborate with data engineering, analytics, and business teams to resolve them.
- Define and enforce data quality standards, metrics, and SLAs.
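Here is a minimal PySpark sketch of the automated quality checks described above; the table, columns, and rules are hypothetical:

```python
# A minimal data quality gate: evaluate a few rules against a table and fail
# hard if any rule breaks, so a CI/CD stage can block the pipeline.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.table("silver.orders")  # hypothetical table under test

checks = {
    # completeness: the business key must never be null
    "no_null_keys": df.filter(F.col("order_id").isNull()).count() == 0,
    # uniqueness: the business key must be unique
    "unique_keys": df.groupBy("order_id").count().filter(F.col("count") > 1).count() == 0,
    # validity: order totals must be non-negative
    "no_negative_totals": df.filter(F.col("order_total") < 0).count() == 0,
}

failed = [name for name, ok in checks.items() if not ok]
if failed:
    raise AssertionError(f"data quality checks failed: {failed}")
```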
Required Skills & Qualifications:
- Strong experience (8–15 years) in data engineering, data quality, or data automation roles.
- Hands-on expertise with Azure data ecosystem and Databricks.
- Strong programming skills in Python and SQL.
- Experience building automated data validation and reconciliation frameworks.
- Solid understanding of data warehousing, data lakes, and distributed data processing.
- Familiarity with DevOps/CI-CD practices for data platforms.
Preferred Skills:
- Experience with data observability or data quality tools.
- Exposure to cloud-scale analytics and performance optimization.
- Strong communication and stakeholder management skills.
Surescripts serves the nation through simpler, trusted health intelligence sharing, in order to increase patient safety, lower costs and ensure quality care. We deliver insights at critical points of care for better decisions - from streamlining prior authorizations to delivering comprehensive medication histories to facilitating messages between providers.
The Strategic Data(RWD) Acquisition Manager will be an integral part of Surescripts' data ecosystem by executing negotiations with Surescripts Network Alliance partners to secure data usage rights, while also identifying and acquiring new, strategic data sources. This person will play a critical role in maintaining access to high quality data necessary for the development of solutions that will deliver value and improve the experience for stakeholders across the healthcare ecosystem. This position requires a deep understanding of healthcare data, the regulatory landscape and business development experience to successfully negotiate and secure data agreements that will enhance our product portfolio.
Responsibilities:
- Identify and evaluate potential data sources of interest that expand Surescripts' data portfolio. Create comprehensive value propositions for how the data could be used within Surescripts' solutions, and develop valuations of the data to support acquisition offers to data sources.
- Drive business development efforts to secure agreements that enhance Surescripts' data portfolio. With guidance from leadership, execute strategies to identify and approach potential data partners, and successfully negotiate terms.
- Collaborate with sales and product teams to develop strategies to align customer incentives with broader data-dependent initiatives. Interface with Surescripts Network Alliance partners to negotiate data usage rights, ensuring alignment with business goals and regulatory requirements.
- Interface with data providers, industry partners, and other stakeholders.
- Manage day-to-day data procurement-related inquiries and negotiations with data providers and customers.
- Maintain a thorough understanding of privacy laws, including HIPAA permitted purposes. Collaborate with compliance, privacy, security, and data governance teams to ensure all data procurement activities comply with all state and federal regulations, internal policies, and customer contracts.
- Monitor and report on data procurement activities. Track progress of data procurement efforts, report on key metrics, and provide regular updates to senior management. Proactively identify and address any challenges or obstacles in the procurement process. Monitor and evaluate the ROI of data acquisition initiatives to prioritize high-impact opportunities.
- Keep up-to-date with the latest developments in data rights, privacy regulations, and the healthcare industry. Apply and share this knowledge to improve data procurement strategies and ensure the company remains compliant and competitive.
Qualifications:
Basic Requirements:
- Bachelor's degree in Business, Economics, Data Science, or related field.
- 8+ years of experience in business development and/or related experience in the procurement/acquisition of healthcare data.
- Strong understanding of regulations around healthcare data, including Health Insurance Portability and Accountability Act (HIPAA) and Trusted Exchange Framework and Common Agreement (TEFCA).
- Ability to evaluate the value and quality of data assets and their applicability to business needs.
- Proven experience in negotiating contracts and managing vendor relationships.
- Demonstrated success in business development and deal negotiation.
- Excellent written and verbal communication and interpersonal skills.
- Ability to work independently and as part of a team.
- Ability to travel for team, customer and vendor meetings as needed.
- Strategic thinker with strong analytical and problem-solving abilities and results-driven mindset.
Preferred Qualifications:
- MBA or other advanced degree in a related field preferred.
- Strong understanding of healthcare interoperability standards, such as Fast Healthcare Interoperability Resources (FHIR).
- Strong understanding of electronic health records (EHR), pharmacy and claims data, health information exchanges (HIE), and TEFCA qualified health information networks (QHINs).
- Familiarity with data governance tools (e.g., data mapping, lineage).
#LI-remote
Surescripts embraces flexibility through its Flexible Hybrid Work model for most positions. This model allows employees to work virtually while still utilizing our offices as collaboration centers. With alignment and agreement from your leadership, you can come and go from the office as needed.
To be considered for employment, applicants must have a valid U.S. work authorization allowing work without restrictions with Surescripts in the U.S. At this time, we are unable to provide support or provide sponsorship for immigration benefits such as work visas. Additionally, we do not participate in academic training programs or work-study programs through an academic institution that require employer endorsement of F-1/CPT or F-1/STEM.
Why Wait? Apply Now
We're a midsize company. This means you're not just another employee ID number. Here, you can build real relationships and feel supported by truly awesome people with diverse backgrounds and talents in an innovative and collaborative work culture. We strive to create an environment where you can be yourself, share your ideas and work your way. We offer opportunities for employee development, as well as competitive compensation packages and extensive benefits.
Benefits include, but are not limited to, comprehensive healthcare (including infertility coverage), generous paid time off including paid childbirth and parental leave and mental health days, pet insurance, and 401(k) with company match and immediate vesting. To learn more, review the Keep You and Yours Healthy, Balancing Work and Life, and Where Talent Takes Shape links under the Better Benefits. Better Work. Better Life section of our careers site.
While performing duties of this job, an employee may be required to perform any, or all of the following: attend meetings in and out of the office, travel, communicate effectively (both orally and in writing), and be able to effectively use computers and other electronic and standard office equipment with, or without, a reasonable accommodation. Additionally, this job requires certain mental demands, including the ability to use judgement, withstand moderate amounts of stress and maintain attention to detail with, or without, a reasonable accommodation.
Surescripts is proud to be an Equal Employment Opportunity and Affirmative Action employer. We do not discriminate on the basis of race, color, religion, age, national origin, ancestry, disability, medical condition, marital status, pregnancy, genetic information, gender, sexual orientation, parental status, gender identity, gender expression, veteran status, or any other status protected under federal, state, or local law.
About Pinterest:
Millions of people around the world come to our platform to find creative ideas, dream about new possibilities and plan for memories that will last a lifetime. At Pinterest, we're on a mission to bring everyone the inspiration to create a life they love, and that starts with the people behind the product.
Discover a career where you ignite innovation for millions, transform passion into growth opportunities, celebrate each other's unique experiences and embrace the flexibility to do your best work. Creating a career you love? It's Possible.
At Pinterest, AI isn't just a feature, it's a powerful partner that augments our creativity and amplifies our impact, and we're looking for candidates who are excited to be a part of that. To get a complete picture of your experience and abilities, we'll explore your foundational skills and how you collaborate with AI.
Through our interview process, what matters most is that you can always explain your approach, showing us not just what you know, but how you think. You can read more about our AI interview philosophy and how we use AI in our recruiting process here.
About tvScientific
tvScientific is the first and only CTV advertising platform purpose-built for performance marketers. We leverage massive data and cutting-edge science to automate and optimize TV advertising to drive business outcomes. Our solution combines media buying, optimization, measurement, and attribution in one, efficient platform. Our platform is built by industry leaders with a long history in programmatic advertising, digital media, and ad verification who have now purpose-built a CTV performance platform advertisers can trust to grow their business.
We are seeking a Staff Data Engineer to lead the design, implementation, and evolution of our identity services and data governance platform. This role is critical to ensuring trusted, privacy-safe, and well-governed data across the organization. You will work at the intersection of data engineering, identity resolution, privacy, and platform reliability. This is an individual contributor role, where you will work to define and implement a strategic vision for data engineering within the organization.
What you'll do:
- Identity Services:
- Design and maintain a scalable identity resolution platform
- Build pipelines and services to ingest, normalize, link, and version identity data across multiple sources
- Ensure deterministic and probabilistic matching logic that is transparent, auditable, and measurable
- Partner with product and analytics teams to expose identity data through reliable, well-documented APIs and datasets
- Build and operate batch and streaming pipelines using modern data stack tools
- Create clear documentation, standards, and runbooks for identity and governance systems
- Data Governance & Trust:
- Own data governance foundations including data lineage, quality checks, schema enforcement, and access controls
- Implement privacy-by-design principles (PII handling, consent enforcement, retention policies); a pseudonymization sketch follows this list
- Collaborate with legal, privacy, and security teams to operationalize regulatory requirements (e.g., GDPR, CCPA)
- Establish monitoring and alerting for data quality, freshness, and integrity
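As one concrete privacy-by-design pattern, here is a minimal pseudonymization sketch; the salt handling is illustrative only, and in practice the secret would live in a KMS/secrets manager under governed rotation:

```python
# A minimal pseudonymization sketch: replace PII with a keyed, non-reversible
# token before it enters downstream pipelines. The salt is a hypothetical
# placeholder; fetch the real secret from a secrets manager.
import hashlib
import hmac

SALT = b"example-secret-salt"  # hypothetical; never hard-code in production

def pseudonymize(value: str) -> str:
    # HMAC rather than a bare hash so the mapping cannot be rebuilt without the key.
    return hmac.new(SALT, value.strip().lower().encode(), hashlib.sha256).hexdigest()

event = {"user_email": "jane.doe@example.com", "campaign": "spring_ctv"}
event["user_email"] = pseudonymize(event["user_email"])
print(event)  # email replaced by a stable, non-reversible token
```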
What we're looking for:
- 5+ years of data engineering experience with a proven track record of building data infrastructure using Spark with Scala
- Experience in delivering significant technical initiatives and building reliable, large scale services
- Experience in delivering APIs backed by relationship-heavy datasets
- Experience implementing data governance practices, including data quality, metadata management, and access controls
- Strong understanding of privacy-by-design principles and handling of sensitive or regulated data
- Familiarity with data lakes, cloud warehouses, and storage formats
- Strong proficiency in AWS services
- Successful design and implementation of scalable and efficient data infrastructure
- High attention to detail in implementation of automated data quality checks
- Effective collaboration with cross-functional teams
- Excellent written and verbal communication skills
- Bachelor's degree in Computer Science or a related field
In-Office Requirement Statement:
- We recognize that the ideal environment for work is situational and may differ across departments. What this looks like day-to-day can vary based on the needs of each organization or role.
Relocation Statement:
- This position is not eligible for relocation assistance. Visit our PinFlex page to learn more about our working model.
#LI-SM4
#LI-REMOTE
At Pinterest we believe the workplace should be equitable, inclusive, and inspiring for every employee. In an effort to provide greater transparency, we are sharing the base salary range for this position. The position is also eligible for equity. Final salary is based on a number of factors including location, travel, relevant prior experience, or particular skills and expertise.
Information regarding the culture at Pinterest and benefits available for this position can be found here.
US-based applicants only: $155,584—$320,320 USD
Our Commitment to Inclusion:
Pinterest is an equal opportunity employer and makes employment decisions on the basis of merit. We want to have the best qualified people in every job. All qualified applicants will receive consideration for employment without regard to race, color, ancestry, national origin, religion or religious creed, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, age, marital status, status as a protected veteran, physical or mental disability, medical condition, genetic information or characteristics (or those of a family member) or any other consideration made unlawful by applicable federal, state or local laws. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. If you require a medical or religious accommodation during the job application process, please complete this form for support.
Location: 100% Remote
Duration: 12+ Months
Overview:
We are seeking an experienced Administrator to operate and support the enterprise implementation of Microsoft Purview Data Catalog across a complex, multi-platform data environment. The Administrator will be responsible for the day-to-day configuration, monitoring, and maintenance of Purview capabilities, ensuring reliable metadata ingestion, catalog quality, lineage visibility, and compliance alignment across governed data domains.
This role focuses on platform operations and governance execution, working within established architecture and enterprise governance standards.
Key Responsibilities
Platform Administration & Operations:
- Administer and operate Microsoft Purview Data Map and Data Catalog environments.
- Monitor platform health, scan execution, metadata ingestion, and lineage availability.
- Troubleshoot and resolve catalog, scan, and connectivity issues.
- Perform routine maintenance, configuration updates, and service optimizations.
- Coordinate incident resolution with internal engineering teams and Microsoft support as required.
Data Source Management & Scanning:
- Register, configure, and maintain data sources across Azure, M365, on-prem, and approved third-party platforms.
- Configure and schedule metadata scans for supported sources.
- Manage authentication for scans using managed identities, service principals, and Key Vault secrets.
- Monitor scan performance, failures, and coverage; take corrective action as needed.
- Optimize scan frequency and scope to balance cost, performance, and governance coverage.
Catalog Configuration & Metadata Management:
- Maintain and enforce enterprise metadata standards within the Purview Catalog.
- Manage business metadata, classifications, glossary terms, and custom attributes.
- Ensure metadata accuracy, completeness, and consistency across data assets.
- Support curation activities including asset certification and publishing.
- Resolve duplicate, incomplete, or stale catalog entries.
Lineage & Discovery Enablement:
- Enable and validate data lineage ingestion from supported data platforms.
- Monitor lineage completeness and visibility for critical data assets.
- Assist data consumers and stewards with lineage-based impact analysis.
- Escalate lineage gaps or tool limitations requiring architectural or engineering remediation.
Security, Access & Governance Controls:
- Configure and manage Purview role-based access control (RBAC) within collections.
- Provision and maintain access for administrators, data curators, and data stewards.
- Enforce domain-based access controls and separation of duties.
- Integrate Purview access with Microsoft Entra ID.
- Support sensitivity labels and classification alignment with Microsoft Information Protection.
Compliance & Risk Support:
- Support automated discovery of sensitive data (PII, PCI, PHI).
- Assist risk, audit, and compliance teams with catalog evidence and reporting.
- Validate scan coverage for regulated data domains.
- Support regulatory and audit initiatives (SOX, GLBA, NYDFS, GDPR, etc.).
User Support & Enablement:
- Provide operational support to data producers, consumers, and data stewards.
- Respond to access requests, catalog issues, and usage questions.
- Maintain operational documentation, runbooks, and standard operating procedures.
- Support onboarding of new data domains following established governance patterns.
- Assist with training and adoption initiatives led by governance or architecture teams.
Required Qualifications:
- 5+ years of experience supporting enterprise data platforms or governance tools, and 4+ years of hands-on MS Purview experience at enterprise scale.
- Hands-on experience administering Microsoft Purview Data Catalog.
- Strong understanding of metadata management, data classification, and lineage concepts.
- Working knowledge of Azure data services and enterprise data ecosystems.
- Experience managing access controls and identities using Microsoft Entra ID.
- Familiarity with regulated data environments and compliance requirements.
- Strong troubleshooting, operational support, and documentation skills.
Preferred Qualifications:
- Experience supporting Purview integrations with Synapse, Fabric, Databricks, Snowflake, or SQL Server.
- Exposure to financial services or other regulated industries.
- Experience with PowerShell, REST APIs, or basic automation for operational tasks (see the sketch after this list).
- Prior experience supporting enterprise data governance or stewardship programs.
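For a taste of the automation mentioned above, here is a minimal sketch calling the Purview REST API with azure-identity; the account name is hypothetical, and the endpoint path and api-version are assumptions to verify against the Purview REST API reference for your tenant:

```python
# A minimal automation sketch: authenticate with azure-identity, then search
# the catalog over REST. Account name, endpoint path, and api-version are
# assumptions, not a confirmed contract; check the Purview REST reference.
import requests
from azure.identity import DefaultAzureCredential

ACCOUNT = "contoso-purview"  # hypothetical Purview account name
token = DefaultAzureCredential().get_token("https://purview.azure.net/.default")

resp = requests.post(
    f"https://{ACCOUNT}.purview.azure.com/datamap/api/search/query",  # assumed path
    params={"api-version": "2023-09-01"},                             # assumed version
    headers={"Authorization": f"Bearer {token.token}"},
    json={"keywords": "customer", "limit": 10},  # find assets to review
)
resp.raise_for_status()
for asset in resp.json().get("value", []):
    print(asset.get("qualifiedName"))
```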