Cloudera Data Platform (CDP) Jobs in USA
Must be local to TX
Skills:
Delivery management:
- Own the 2026 roadmap: deliver it by interacting with the business, explaining the value proposition, and understanding their rules and standards.
- Manage timelines and partner with segments.
- Track before-and-after Data Quality scores.
Technical:
- Articulate technical designs and solutions.
- Know the capabilities of Collibra and Soda and how to use those tools.
- Proactive communication skills.
- 12+ years in a Technical Project Manager-type role with solutioning and problem-solving skills.
Role Summary
The Data Governance Lead will design, build, and scale an enterprise data governance program from the ground up, using Collibra as the core platform for a large real estate enterprise. This senior role combines strategic leadership, hands‑on Collibra configuration, stakeholder management, and deep domain knowledge of real estate data. The incumbent will own the governance vision, operating model, and tooling, and will partner with business, IT, data engineering, analytics, legal, and compliance teams.
Key Responsibilities
1. Data Governance Strategy and Operating Model
- Define and implement the enterprise data governance strategy, roadmap, and operating model aligned to business objectives.
- Define governance KPIs, maturity metrics, and success measures.
- Drive adoption through change management, communications, and training.
2. Collibra Implementation from Scratch
- Lead end‑to‑end Collibra implementation: platform setup, environment planning (Dev/Test/Prod), domain modeling, and taxonomy design.
- Customize asset models for real estate use cases.
- Configure and manage Business Glossary, Data Dictionary, Data Catalog, and Reference Data & Code Sets.
- Design and implement Collibra workflows for glossary lifecycle, owner/steward assignment, issue management, and escalation.
- Implement Collibra operating model with defined roles (Data Owner, Data Steward, Custodian, Consumer) and RACI mappings.
- Integrate Collibra with data warehouses/lakes (Snowflake, BigQuery, Azure), BI tools (Power BI, Tableau), and ETL/ELT tools (Informatica, dbt, ADF).
- Lead metadata ingestion across technical, operational, and business metadata.
3. Data Ownership, Stewardship, and Accountability
- Define and institutionalize data ownership and stewardship across business units.
- Partner with business leaders to assign Data Owners and Stewards.
- Drive accountability for data definitions, data quality, and metadata completeness.
- Establish Data Governance Councils and working groups.
4. Data Quality and Issue Management
- Collaborate with data quality teams to define Critical Data Elements (CDEs) and align rules and thresholds.
- Configure Collibra issue management workflows and ensure traceability from issues to root causes and remediation actions.
- Provide governance oversight for remediation and continuous improvement.
5. Compliance, Risk, and Security Governance
- Define governance controls for regulatory compliance, contractual data, and financial reporting.
- Partner with Legal, Risk, and Security to classify sensitive data and apply access and usage policies.
- Implement data classification and privacy metadata within Collibra.
6. Stakeholder and Program Leadership
- Serve as the single point of accountability for the data governance program.
- Present progress, metrics, and risks to senior leadership.
- Mentor governance analysts, stewards, and platform administrators.
- Coordinate with system integrators and vendors as required.
Required Skills and Qualifications
Mandatory
- 12–18+ years in data management, data governance, or analytics leadership.
- Deep hands‑on experience implementing Collibra from scratch at enterprise scale.
- Strong expertise in business glossary and metadata management, stewardship models, and workflow automation in Collibra.
- Proven track record driving enterprise adoption of governance platforms.
- Excellent stakeholder management and communication skills.
Preferred
- Experience in real estate, property management, construction, facilities, or capital projects.
- Familiarity with DAMA‑DMBOK, DCAM, or similar governance frameworks.
- Exposure to data quality tools such as SODA, Great Expectations, or Informatica DQ.
- Experience integrating Collibra with cloud data platforms.
- Prior experience leading governance programs in large, federated organizations.
- Collibra certification is a plus.
Behavioral and Leadership Attributes
- Strategic thinker with strong execution capability.
- Balances business pragmatism with governance rigor.
- Influences without formal authority and drives change.
- Excellent storytelling and change management skills.
- Hands‑on leader who can configure Collibra and mentor teams.
Success Measures (First 12 Months)
- Collibra platform live with core real estate domains onboarded.
- Business glossary adopted across key business units.
- Formal data ownership established for critical datasets.
- Measurable improvement in metadata completeness and data quality visibility.
- Governance operating model embedded into daily business processes.
Location: 100% Remote
Duration: 12+ Months
Overview:
We are seeking an experienced Administrator to operate and support the enterprise implementation of Microsoft Purview Data Catalog across a complex, multi-platform data environment. The administrator will be responsible for the day-to-day configuration, monitoring, and maintenance of Purview capabilities, ensuring reliable metadata ingestion, catalog quality, lineage visibility, and compliance alignment across governed data domains.
This role focuses on platform operations and governance execution, working within established architecture and enterprise governance standards.
Key Responsibilities
Platform Administration & Operations:
- Administer and operate Microsoft Purview Data Map and Data Catalog environments.
- Monitor platform health, scan execution, metadata ingestion, and lineage availability.
- Troubleshoot and resolve catalog, scan, and connectivity issues.
- Perform routine maintenance, configuration updates, and service optimizations.
- Coordinate incident resolution with internal engineering teams and Microsoft support as required.
Data Source Management & Scanning:
- Register, configure, and maintain data sources across Azure, M365, on-prem, and approved third-party platforms.
- Configure and schedule metadata scans for supported sources.
- Manage authentication for scans using managed identities, service principals, and Key Vault secrets.
- Monitor scan performance, failures, and coverage; take corrective action as needed.
- Optimize scan frequency and scope to balance cost, performance, and governance coverage.
Catalog Configuration & Metadata Management:
- Maintain and enforce enterprise metadata standards within the Purview Catalog.
- Manage business metadata, classifications, glossary terms, and custom attributes.
- Ensure metadata accuracy, completeness, and consistency across data assets.
- Support curation activities including asset certification and publishing.
- Resolve duplicate, incomplete, or stale catalog entries.
Lineage & Discovery Enablement:
- Enable and validate data lineage ingestion from supported data platforms.
- Monitor lineage completeness and visibility for critical data assets.
- Assist data consumers and stewards with lineage-based impact analysis.
- Escalate lineage gaps or tool limitations requiring architectural or engineering remediation.
Security, Access & Governance Controls:
- Configure and manage Purview role-based access control (RBAC) within collections.
- Provision and maintain access for administrators, data curators, and data stewards.
- Enforce domain-based access controls and separation of duties.
- Integrate Purview access with Microsoft Entra ID.
- Support sensitivity labels and classification alignment with Microsoft Information Protection.
Compliance & Risk Support:
- Support automated discovery of sensitive data (PII, PCI, PHI).
- Assist risk, audit, and compliance teams with catalog evidence and reporting.
- Validate scan coverage for regulated data domains.
- Support regulatory and audit initiatives (SOX, GLBA, NYDFS, GDPR, etc.).
User Support & Enablement:
- Provide operational support to data producers, consumers, and data stewards.
- Respond to access requests, catalog issues, and usage questions.
- Maintain operational documentation, runbooks, and standard operating procedures.
- Support onboarding of new data domains following established governance patterns.
- Assist with training and adoption initiatives led by governance or architecture teams.
Required Qualifications:
- 5+ years of experience supporting enterprise data platforms or governance tools, including 4+ years of hands-on Microsoft Purview experience at enterprise scale.
- Hands-on experience administering Microsoft Purview Data Catalog.
- Strong understanding of metadata management, data classification, and lineage concepts.
- Working knowledge of Azure data services and enterprise data ecosystems.
- Experience managing access controls and identities using Microsoft Entra ID.
- Familiarity with regulated data environments and compliance requirements.
- Strong troubleshooting, operational support, and documentation skills.
Preferred Qualifications:
- Experience supporting Purview integrations with Synapse, Fabric, Databricks, Snowflake, or SQL Server.
- Exposure to financial services or other regulated industries.
- Experience with PowerShell, REST APIs, or basic automation for operational tasks.
- Prior experience supporting enterprise data governance or stewardship programs.
Duration: 6+ months
Location: 100% Remote
Job Overview
The Marketplace Data Product Engineer serves as the primary technical facilitator and adoption champion for the Marketplace platform. This role bridges engineering, product, and business domains, leading workshops, demos, onboarding sessions, and cross-domain engagements to accelerate Marketplace adoption. You will configure demo environments, support development, translate complex technical concepts for business audiences, gather product feedback, and partner closely with product and engineering teams to shape the Marketplace roadmap. This role will guide domains through the process of understanding, showcasing, and maturing their data products within the ecosystem.
Key Responsibilities
- Facilitate workshops, demos, onboarding sessions, and cross-domain engagements to drive Marketplace adoption.
- Serve as the primary technical presenter of the Marketplace for domain teams and stakeholders.
- Engage with domain owners to understand their data products, help refine their articulation, and showcase how they integrate into the Marketplace ecosystem.
- Configure and maintain demo environments for Marketplace capabilities, data products, and new features.
- Support light development, proof-of-concept configurations, and sample integrations to demonstrate platform capabilities.
- Translate technical Marketplace concepts into clear, business-friendly language for non-technical audiences.
- Collect structured feedback from domain teams, synthesize insights, and partner with product and engineering to influence the roadmap.
- Develop and refine training materials, demos, playbooks, and onboarding assets to support continuous adoption.
- Act as an advocate for domains, ensuring their data product needs and challenges are well represented in Marketplace planning.
- Support ongoing adoption initiatives, including community sessions, office hours, and cross-domain knowledge sharing.
Required Skills & Qualifications
- 4-7+ years of experience in data engineering, platform engineering, solution engineering, technical consulting, or similar roles.
- Strong understanding of data products, data modeling concepts, data APIs, enterprise integrations and metadata?driven architectures.
- Ability to configure and demonstrate platform features, build light proofs-of-concept, and support technical onboarding.
- Excellent communication and presentation skills, with experience translating technical concepts for business partners.
- Experience facilitating workshops, leading demos, or driving customer/product adoption initiatives.
- Ability to engage domain teams, understand their data product needs, and help articulate value within a larger ecosystem.
- Strong collaboration and stakeholder management skills across engineering, product, and business teams.
- Comfortable working in fast-moving environments and driving clarity through ambiguity.
Preferred Qualifications
- Experience with data product and governance frameworks, data marketplaces, data mesh concepts, or platform adoption roles.
- Hands-on experience with cloud data platforms (Azure, AWS, or GCP), data pipelines, or integration tooling.
- Familiarity with REST/GraphQL APIs, event-driven patterns, and data ingestion workflows.
- Background in solution architecture, customer engineering, or sales engineering.
- Experience developing demo environments, sample apps, or repeatable platform enablement assets.
- Strong storytelling ability when explaining data product value, domain capabilities, and Marketplace patterns.
Location: Remote
Duration: 8+ months
Marketplace Platform Lead
Job Overview
The Marketplace Platform Lead is responsible for driving the end-to-end technical architecture and implementation of the enterprise Data Marketplace platform. This role spans stakeholder engagement, architectural definition, integration design, and hands-on leadership throughout implementation. The ideal candidate is a seasoned technical leader with deep experience designing integration patterns, building scalable platforms, and guiding engineering teams through complex cross-system solutions.
Key Responsibilities
Lead stakeholder meetings to gather business requirements, align on platform objectives, and clarify workflows and user journeys.
Conduct tool evaluations, build scoring frameworks, and make recommendations on platforms, vendors, and integration technologies.
Define end-to-end Marketplace architecture, including data flows, APIs, domain models, integration strategies, and platform components.
Design and lead the implementation of integration patterns, including API-based integrations, event-driven patterns, workflow orchestration, and cross-system interoperability.
Develop technical designs, architectural documents, and standards for Marketplace workflows, user flows, and extensibility patterns.
Provide hands-on architectural guidance to engineering teams throughout solution design, development, and delivery.
Oversee technical quality, scalability, performance, and security across Marketplace components and integrations.
Collaborate with product, engineering, data, and security teams to ensure compliance with enterprise data governance, privacy, and reliability standards.
Lead technical reviews, drive design decisions, and ensure alignment across cross-functional stakeholders.
Required Skills & Qualifications
8+ years of experience in software engineering, platform development, or technical architecture roles.
Strong expertise in designing and implementing integration architectures, including REST/GraphQL APIs, event-driven patterns, synchronous/asynchronous messaging, and workflow engines.
Deep understanding of distributed systems, microservices, and cloud-native solutions (Azure, AWS, or GCP).
Proficiency with API design, messaging systems, and enterprise integration frameworks.
Experience defining technical architecture, data flows, and workflow designs for complex platforms.
Ability to translate business requirements into technical designs, user flows, and actionable engineering plans.
Demonstrated leadership in guiding engineering teams through architectural decisions and implementation.
Strong communication skills with the ability to influence technical and non-technical partners.
Experience evaluating and scoring platforms, tools, or vendor solutions.
Solid knowledge of DevOps practices, CI/CD, infrastructure-as-code, observability, and security best practices.
Preferred Qualifications
Experience building or leading a Data Marketplace platform.
Familiarity with workflow orchestration platforms, rules engines, BPM tools, or catalog management systems.
Experience with enterprise identity systems (OAuth, SAML, SSO), access governance, and data privacy frameworks.
Background working with enterprise data platforms, data governance, or cross-domain integration patterns.
Prior experience leading architectural governance or serving as a platform architect in an enterprise environment.
About Cygnus Professionals, Inc.
Cygnus is a Princeton, NJ-headquartered global business IT consulting and software services firm with offices in the USA and Asia. Cygnus enables innovation and helps clients accelerate time to market and grow their business. For over 15 years, we have taken great pride in our deep relationships with our clients.
For further information about CYGNUS, please visit our website.
Title: Data Architect
Location: Princeton, New Jersey – Onsite
W2 Contract
Job Summary
We are seeking an experienced Data Architect to design, build, and maintain scalable data architecture solutions supporting enterprise analytics, data integration, and digital transformation initiatives. The ideal candidate will work closely with business stakeholders, data engineers, and application teams to design robust data models, data pipelines, and enterprise data platforms that support advanced analytics and reporting.
Key Responsibilities
- Design and implement enterprise data architecture frameworks and best practices.
- Develop logical and physical data models for enterprise data platforms.
- Architect data lakes, data warehouses, and data integration solutions across cloud and on-prem environments.
- Collaborate with data engineers and application teams to build scalable data pipelines and ETL/ELT processes.
- Ensure data governance, data quality, security, and compliance standards are implemented across the data ecosystem.
- Evaluate and recommend data technologies, tools, and frameworks aligned with enterprise strategy.
- Provide architectural guidance for cloud-based data platforms (AWS/Azure/GCP).
- Optimize performance for large-scale data processing and analytics workloads.
- Support business intelligence, reporting, and advanced analytics initiatives.
Required Qualifications
- 10+ years of experience in data architecture, data engineering, or enterprise data management.
- Strong experience with data modeling (conceptual, logical, physical).
- Expertise with data warehouse and data lake architectures.
- Hands-on experience with ETL/ELT tools and data integration platforms.
- Experience with SQL and large-scale data platforms (Snowflake, Redshift, BigQuery, etc.).
- Experience working with cloud data platforms (AWS, Azure, or GCP).
- Strong understanding of data governance, data quality, and metadata management.
- Experience with big data technologies (Spark, Hadoop, Kafka) is a plus.
Preferred Skills
- Experience in Healthcare, Pharmaceutical, or Life Sciences domain.
- Knowledge of Master Data Management (MDM) and data catalog tools.
- Familiarity with BI tools such as Tableau, Power BI, or Looker.
- Strong communication skills to interact with business and technical teams.
Education
- Bachelor’s or Master’s degree in Computer Science, Information Systems, Data Science, or related field.
Cygnus Belief
We believe in our commitment to diversity & inclusion.
Equal Employment Opportunity Statement
Cygnus is an Equal Opportunity Employer. We ensure that no one is discriminated against because of differences such as age, disability, ethnicity, gender, gender identity and expression, religion, or sexual orientation.
All our employment decisions are made without regard to age, race, creed, color, religion, sex, nationality, disability status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status, or any other aspect of employment protected by federal, state, or local law. Applicants for employment in the US must have work authorization.
Job Title – Lead Data Engineer
Please note: this role is unable to offer visa transfer or sponsorship now or in the future.
About the role
As a Lead Data Engineer, you will make an impact by designing, building, and operating scalable, cloud‑native data platforms supporting batch and streaming use cases, with strong focus on governance, performance, and reliability. You will be a valued member of the Data Engineering team and work collaboratively with cross‑functional engineering, cloud, and architecture stakeholders.
In this role, you will:
- Design, build, and operate scalable cloud‑native data platforms supporting batch and streaming workloads with strong governance, performance, and reliability.
- Develop and operate data systems on AWS, Azure, and GCP, designing cloud‑native, scalable, and cost‑efficient data solutions.
- Build modern data architectures including data lakes, data lakehouses, and data hubs, with strong understanding of ingestion patterns, data governance, data modeling, observability, and platform best practices.
- Develop data ingestion and collection pipelines using Kafka and AWS Glue; work with modern storage formats such as Apache Iceberg and Parquet.
- Design and develop real‑time streaming pipelines using Kafka, Flink, or similar streaming frameworks, with understanding of event‑driven architectures and low‑latency data processing.
- Perform data transformation and modeling using SQL‑based frameworks and orchestration tools such as dbt, AWS Glue, and Airflow, including Slowly Changing Dimensions (SCD) and schema evolution.
- Use Apache Spark extensively for large‑scale data transformations across batch and streaming workloads.
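The Slowly Changing Dimension handling mentioned above can be sketched as a minimal SCD Type 2 merge. This is a pure-Python illustration, not the dbt or Glue implementation the role would actually use, and the field names (`id`, `attr`, `valid_from`, `valid_to`) are invented for the example:

```python
def scd2_merge(current, incoming, today):
    """Minimal SCD Type 2 merge (illustrative only).

    current:  dimension history rows with id, attr, valid_from, valid_to (None = open)
    incoming: latest source rows with id, attr
    Returns the updated dimension history.
    """
    result = []
    incoming_by_id = {row["id"]: row for row in incoming}
    seen = set()
    for row in current:
        new = incoming_by_id.get(row["id"])
        if row["valid_to"] is None and new and new["attr"] != row["attr"]:
            # Attribute changed: close the open version and start a new one
            result.append({**row, "valid_to": today})
            result.append({"id": row["id"], "attr": new["attr"],
                           "valid_from": today, "valid_to": None})
        else:
            result.append(row)
        seen.add(row["id"])
    for row in incoming:
        if row["id"] not in seen:  # brand-new keys get an open record
            result.append({"id": row["id"], "attr": row["attr"],
                           "valid_from": today, "valid_to": None})
    return result
```

In a warehouse this same close-and-reopen pattern is typically expressed as a `MERGE` statement or a dbt snapshot rather than row-by-row Python.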
Work model
We believe hybrid work is the way forward as we strive to provide flexibility wherever possible. Based on this role's business requirements, this is a hybrid position requiring 4 days a week in a client or Cognizant office in Atlanta, GA. Regardless of your working arrangement, we are here to support a healthy work-life balance through our various wellbeing programs.
The working arrangements for this role are accurate as of the date of posting. This may change based on the project you're engaged in, as well as business and client requirements. Rest assured, we will always be clear about role expectations.
What you need to have to be considered
- Hands‑on experience developing and operating data systems on AWS, Azure, and GCP.
- Proven ability to design cloud‑native, scalable, and cost‑efficient data solutions.
- Experience building data lakes, data lakehouses, and data hubs with strong understanding of ingestion patterns, governance, modeling, observability, and platform best practices.
- Expertise in data ingestion and collection using Kafka and AWS Glue, with experience in Apache Iceberg and Parquet.
- Strong experience designing and developing real‑time streaming pipelines using Kafka, Flink, or similar streaming frameworks.
- Deep expertise in data transformation and modeling using SQL‑based frameworks and orchestration tools including dbt, AWS Glue, and Airflow, with knowledge of SCD and schema evolution.
- Extensive experience using Apache Spark for large‑scale batch and streaming data transformations.
These will help you stand out
- Experience with event‑driven architectures and low‑latency data processing.
- Strong understanding of schema evolution, SCD modeling, and modern data modeling concepts.
- Experience with Apache Iceberg, Parquet, and modern ingestion/storage patterns.
- Strong knowledge of observability, governance, and platform best practices.
- Ability to partner effectively with cloud, architecture, and engineering teams.
Salary and Other Compensation:
Applications will be accepted until March 17, 2025.
The annual salary for this position is between $81,000 and $135,000, depending on experience and other qualifications of the successful candidate.
This position is also eligible for Cognizant’s discretionary annual incentive program, based on performance and subject to the terms of Cognizant’s applicable plans.
Benefits: Cognizant offers the following benefits for this position, subject to applicable eligibility requirements:
- Medical/Dental/Vision/Life Insurance
- Paid holidays plus Paid Time Off
- 401(k) plan and contributions
- Long‑term/Short‑term Disability
- Paid Parental Leave
- Employee Stock Purchase Plan
Disclaimer: The salary, other compensation, and benefits information is accurate as of the date of this posting. Cognizant reserves the right to modify this information at any time, subject to applicable law.
Job Title: Senior Data Engineer
Location: Chicago, IL (Hybrid)
Department: Data & Analytics
Reports To: Head of Data Engineering / Data Platform Lead
Role Overview
We are seeking a highly skilled Senior Data Engineer with strong Python development expertise and deep experience in Snowflake to design, build, and optimize scalable enterprise data solutions. This role is based in Chicago, IL and will support regulatory and risk data initiatives in a highly governed environment.
The ideal candidate has hands-on experience building modern cloud data platforms and is familiar with risk management frameworks, BCBS 239 principles, and Governance, Risk & Compliance (GRC) requirements within financial services.
Key Responsibilities
Data Engineering & Architecture
Design, develop, and maintain scalable data pipelines using Python.
Build and optimize data models, transformations, and data marts within Snowflake.
Develop robust ELT/ETL frameworks for structured and semi-structured data.
Optimize Snowflake performance, cost efficiency, clustering, and workload management.
Implement automation, monitoring, and CI/CD for data pipelines.
Risk & Regulatory Data Management
Support regulatory reporting aligned with BCBS 239 (risk data aggregation and reporting).
Ensure data traceability, lineage, reconciliation, and auditability.
Implement controls aligned with Governance, Risk & Compliance (GRC) frameworks.
Partner with Risk, Finance, Compliance, and Audit teams to deliver accurate and governed data assets.
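The reconciliation and auditability duties above amount to comparing control totals and key sets between source and target. A minimal pure-Python sketch (field names and tolerance are illustrative, not the client's framework):

```python
def reconcile(source_rows, target_rows, key="id", measure="amount", tolerance=0.01):
    """Compare row counts, a control total, and key coverage between source and target.

    Returns a findings dict suitable for an audit log (illustrative only).
    """
    src_total = sum(r[measure] for r in source_rows)
    tgt_total = sum(r[measure] for r in target_rows)
    src_keys = {r[key] for r in source_rows}
    tgt_keys = {r[key] for r in target_rows}
    return {
        "row_count_match": len(source_rows) == len(target_rows),
        "total_within_tolerance": abs(src_total - tgt_total) <= tolerance,
        "missing_in_target": sorted(src_keys - tgt_keys),
        "unexpected_in_target": sorted(tgt_keys - src_keys),
    }
```

In practice these findings would be persisted with timestamps and pipeline run IDs so that auditors can trace every load back to its source.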
Data Governance & Quality
Develop and enforce data quality validation frameworks.
Maintain metadata, lineage documentation, and data catalog integration.
Implement data access controls and security best practices.
Technical Leadership
Provide mentorship and code reviews for data engineering team members.
Promote engineering best practices and documentation standards.
Collaborate cross-functionally with architects, analysts, and business stakeholders.
Required Qualifications
7+ years of experience in Data Engineering or Data Platform development.
Strong Python programming expertise (Pandas, PySpark, Airflow, etc.).
Hands-on experience with Snowflake (data modeling, Snowpipe, Streams & Tasks, performance tuning).
Advanced SQL skills and deep understanding of data warehousing concepts.
Experience supporting BCBS 239 compliance or similar regulatory reporting frameworks.
Experience working within Governance, Risk & Compliance (GRC) structures.
Experience in cloud environments (AWS, Azure, or GCP).
Strong understanding of data lineage, controls, reconciliation, and audit requirements.
Preferred Qualifications
Experience in banking, capital markets, or financial services.
Knowledge of credit risk, market risk, liquidity risk, or regulatory reporting domains.
Experience with data governance tools (Collibra, Alation, etc.).
Familiarity with DevOps practices, Docker, Kubernetes.
Experience building enterprise data platforms in highly regulated environments.
Key Competencies
Strong problem-solving and analytical thinking.
Ability to operate in a regulated, audit-driven environment.
Excellent communication and stakeholder management skills.
Detail-oriented with a focus on data accuracy and integrity.
Leadership mindset with hands-on technical capability.
Job Summary
We are seeking a skilled Data Engineer with 5+ years of hands-on experience designing, building, and maintaining scalable data pipelines and data platforms. The ideal candidate has strong experience working with DAG-based orchestration, cloud technologies (preferably Google Cloud Platform), SQL-driven data processing, Apache Spark, and Python-based API development using Fast API. You will play a key role in enabling reliable data ingestion, transformation, and quality assurance across enterprise systems.
Key Responsibilities
- Design, develop, and maintain DAG-based data pipelines (Airflow or similar orchestration tools).
- Build and optimize SQL-based data transformations for analytics and reporting.
- Develop and manage batch and streaming data pipelines using Apache Spark.
- Implement Python-based REST APIs using FastAPI for data services and integrations.
- Perform data quality checks, validation, reconciliation, and anomaly detection.
- Work with cloud platforms (preferably Google Cloud Platform) for storage, compute, and orchestration.
- Architect and implement cloud-native data platforms on GCP, leveraging services such as BigQuery, BigTable, Dataflow, Dataproc, Pub/Sub, and Cloud Storage.
- Monitor pipeline performance, troubleshoot failures, and optimize processing efficiency.
- Collaborate with analytics, application, and business teams to understand data requirements.
- Ensure best practices around security, scalability, and maintainability.
- Ensure data quality, reliability, security, governance, and compliance with enterprise standards.
Required Skills & Experience
- 5+ years of experience as a Data Engineer.
- Strong experience with DAG orchestration (e.g., Apache Airflow).
- Solid understanding of cloud technologies, preferably Google Cloud Platform (GCP).
- Advanced proficiency in SQL for data processing and transformations.
- Hands-on experience running and tuning Apache Spark jobs.
- Experience developing APIs using Python and FastAPI.
- Strong understanding of data quality frameworks, checks, and validation techniques.
- Proficiency in Python, Java, Scala, or PySpark, with strong SQL expertise.
- Hands-on experience with GCP data services, including BigQuery, BigTable, Dataproc, Dataflow, and cloud-native ETL patterns.
- Experience with software delivery methodologies such as Agile, Scrum, and CI/CD practices.
- Strong analytical and problem-solving skills.
- Ability to work independently and in cross-functional teams.
- Good communication and documentation skills.
Join the team leading the next evolution of virtual care.
At Teladoc Health, you are empowered to bring your true self to work while helping millions of people live their healthiest lives.
Here you will be part of a high-performance culture where colleagues embrace challenges, drive transformative solutions, and create opportunities for growth. Together, we're transforming how better health happens.
Summary of Position
As a Staff Software Engineer, you are a senior individual contributor who leads the design and delivery of significant platform features and raises the bar for engineering quality across the team. You'll work hands-on in code: designing APIs and data flows, building services in Python/FastAPI and React frontends, and guiding solutions from idea to production. You'll mentor engineers, influence architecture and standards within and adjacent to your team, and partner closely with product and design to achieve clear, measurable outcomes. This role blends deep implementation work with pragmatic technical leadership by example.
Essential Duties and Responsibilities
Lead technical design for platform features and services, breaking ambiguous requirements into clear, incremental designs and stories for your team and adjacent partners.
Implement backend services in Python/FastAPI and React frontends end-to-end, owning a continuous stream of stories from idea to production.
Define and use clear API contracts and data flows between services and UIs, creating patterns and templates others can follow.
Champion high-quality engineering practices, including code reviews, documentation, and maintainable, testable designs.
Develop and improve automated testing (unit, integration, end-to-end) and integrate these tests into everyday development and CI.
Improve CI/CD pipelines and release workflows for your team so the team can ship small, safe changes frequently and confidently.
Own the operational lifecycle of the features and services you build, including monitoring, observability, on-call participation, and incident follow-up.
Design and implement secure-by-default solutions, including robust authentication/authorization, input validation, and safe handling of sensitive data.
Identify and address reliability and performance risks early, proposing concrete technical improvements and sequencing them into the roadmap.
Mentor and unblock engineers through pairing, design discussions, and clear feedback; influence without formal authority.
Partner with product and design to shape requirements into incremental deliverables; escalate tradeoff decisions; propose sequencing that optimizes value and risk.
The time spent on each responsibility reflects an estimate and is subject to change dependent on business needs.
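The responsibilities above call for defining clear API contracts between services and UIs. In FastAPI that contract is usually expressed as Pydantic models on a path operation; below is a stdlib-only sketch of the same idea, with a hypothetical visit-scheduling request and response (the field names are illustrative, not an actual Teladoc API):

```python
from dataclasses import dataclass, asdict

# Hypothetical request/response contract. In FastAPI the equivalent would be
# Pydantic models on a @app.post(...) endpoint, which the framework uses to
# validate payloads and publish the schema automatically.
@dataclass(frozen=True)
class CreateVisitRequest:
    patient_id: str
    reason: str

@dataclass(frozen=True)
class CreateVisitResponse:
    visit_id: str
    status: str

def create_visit(payload: dict) -> dict:
    """Validate the inbound payload against the contract, then respond."""
    missing = [f for f in ("patient_id", "reason") if f not in payload]
    if missing:
        raise ValueError(f"missing fields: {missing}")
    req = CreateVisitRequest(**payload)
    # Business logic would go here; a deterministic id keeps the demo simple.
    resp = CreateVisitResponse(visit_id=f"visit-{req.patient_id}", status="scheduled")
    return asdict(resp)

result = create_visit({"patient_id": "p42", "reason": "follow-up"})
```

The value of the pattern is that both the service and the React frontend code against the same explicit shapes, so contract changes surface at the boundary rather than deep in UI code.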
Supervisory Responsibilities
No
Required Qualifications
Bachelor's degree in Computer Science, Engineering, or related field; equivalent work experience is acceptable.
7+ years of experience in software engineering.
Strong proficiency with Python and modern web backends (FastAPI, Flask, Django, or similar) and solid understanding of HTTP, API design, and data modeling.
Significant experience with React (or a comparable SPA framework) and building production frontends that talk to backend APIs.
Demonstrated ability to own features end-to-end in a small team: from shaping requirements through design, implementation, testing, deployment, and support.
Experience designing and working with distributed systems or multi-service architectures (e.g., service boundaries, async jobs, integration patterns).
Solid understanding of observability and operations for production systems (metrics, logs, traces, dashboards, alerting, incident response).
Strong understanding of security fundamentals (authentication, authorization, secure data handling) and how they apply to web services and UIs.
Deep familiarity with automated testing and CI/CD, and a track record of improving engineering workflows and quality.
Excellent communication and collaboration skills; comfortable working closely with product, design, and other stakeholders.
Proven ability to provide technical leadership in a hands-on way: unblocking others, making clear decisions, and raising the bar through code and reviews.
Bonus Qualifications
Experience in early-stage or small platform teams where engineers wear multiple hats and balance shipping with building foundations.
Experience with Azure and containerized deployments (or similar cloud-native environments).
Experience building platforms (developer platforms, data platforms, or similar) that serve multiple product teams.
Exposure to AI/ML or data-intensive applications (e.g., integrating with model inference APIs, data pipelines, or analytical data stores).
The base salary range for this position is $180,000 - $200,000. In addition to a base salary, this position is eligible for a performance bonus and benefits (subject to eligibility requirements) listed here: Teladoc Health Benefits 2026. Total compensation is based on several factors including, but not limited to, type of position, location, education level, work experience, and certifications. This information is applicable for all full-time positions.
#LI-SS2 #LI-Remote
We follow a Flexible Vacation Policy, intended for rest, relaxation, and personal time. All time off must be approved by your manager prior to use. You will also receive 80 hours of Paid Sick, Safe, and Caregiver Leave annually. This applies to full-time positions only. If you are applying for a part-time role, your recruiter can provide additional details.
As part of our hiring process, we verify identity and credentials, conduct interviews (live or video), and screen for fraud or misrepresentation. Applicants who falsify information will be disqualified.
Teladoc Health will not sponsor or transfer employment work visas for this position. Applicants must be currently authorized to work in the United States without the need for visa sponsorship now or in the future.
Why join Teladoc Health?
Teladoc Health is transforming how better health happens. Learn how when you join us in pursuit of our impactful mission.
Chart your career path with meaningful opportunities that empower you to grow, lead, and make a difference.
Join a multi-faceted community that celebrates each colleague's unique perspective and is focused on continually improving, each and every day.
Contribute to an innovative culture where fresh ideas are valued as we increase access to care in new ways.
Enjoy an inclusive benefits program centered around you and your family, with tailored programs that address your unique needs.
Explore candidate resources with tips and tricks from Teladoc Health recruiters and learn more about our company culture by exploring #TeamTeladocHealth on LinkedIn.
As an Equal Opportunity Employer, we never have and never will discriminate against any job candidate or employee due to age, race, religion, color, ethnicity, national origin, gender, gender identity/expression, sexual orientation, membership in an employee organization, medical condition, family history, genetic information, veteran status, marital status, parental status, or pregnancy. In our innovative and inclusive workplace, we prohibit discrimination and harassment of any kind.
Teladoc Health respects your privacy and is committed to maintaining the confidentiality and security of your personal information. In furtherance of your employment relationship with Teladoc Health, we collect personal information responsibly and in accordance with applicable data privacy laws, including but not limited to, the California Consumer Privacy Act (CCPA). Personal information is defined as: Any information or set of information relating to you, including (a) all information that identifies you or could reasonably be used to identify you, and (b) all information that any applicable law treats as personal information. Teladoc Health's Notice of Privacy Practices for U.S. Employees' Personal information is available at this link.
Company Description
PG Forsta is the leading experience measurement, data analytics, and insights provider for complex industries, a status we earned over decades of deep partnership with clients to help them understand and meet the needs of their key stakeholders. Our earliest roots are in U.S. healthcare, perhaps the most complex of all industries. Today we serve clients around the globe in every industry to help them improve the Human Experiences at the heart of their business. We serve our clients through an unparalleled offering that combines technology, data, and expertise to enable them to pinpoint and prioritize opportunities, accelerate improvement efforts, and build lifetime loyalty among their customers and employees.
Like all great companies, our success is a function of our people and our culture. Our employees have world-class talent, a collaborative work ethic, and a passion for the work, which have earned us trusted advisor status among the world's most recognized brands. As a member of the team, you will help us create value for our clients and make us better through your contributions to the work and your voice in the process. Ours is a path of learning and continuous improvement; team efforts chart the course for corporate success.
Our Mission:
We empower organizations to deliver the best experiences. With industry expertise and technology, we turn data into insights that drive innovation and action.
Our Values:
To put Human Experience at the heart of organizations so every person can be seen and understood.
- Energize the customer relationship: Our clients are our partners. We make their goals our own, working side by side to turn challenges into solutions.
- Success starts with me: Personal ownership fuels collective success. We each play our part and empower our teammates to do the same.
- Commit to learning: Every win is a springboard. Every hurdle is a lesson. We use each experience as an opportunity to grow.
- Dare to innovate: We challenge the status quo with creativity and innovation as our true north.
- Better together: We check our egos at the door. We work together, so we win together.
Duties & Responsibilities
Design and implement processes, systems and automation to streamline the development and deployment of AI solutions.
Architect robust, reliable solutions for specific AI applications using appropriate cloud-based and open source technologies.
Design and automate data pipelines to deliver complex data products to power training and online inference of AI systems.
Deploy ML models, LLMs and GenAI systems into production, ensuring reliability, efficiency, and scalability across cloud or hybrid environments.
Build and maintain robust CI/CD pipelines tailored to ML model lifecycle management, ensuring a streamlined and agile deployment process.
Monitor model performance, identify potential improvements, and integrate feedback loops for continuous learning and adaptation.
Integrate models with chat interfaces and conversational platforms to create responsive, user-centric applications.
Investigate and implement agent-based architectures that support conversational intelligence and interaction modeling.
Collaborate with cross-functional teams to design AI-driven features that enhance user experience and interaction within chat interfaces.
Work closely with data scientists, product managers, and engineers to ensure alignment on project goals, data requirements, and system constraints.
Mentor junior engineers and provide guidance on best practices in ML model development, deployment, and maintenance.
Create and maintain comprehensive documentation for model architectures, code implementations, data workflows, and deployment procedures to ensure reproducibility, transparency, and ease of collaboration.
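The duties above include monitoring model performance and integrating feedback loops. As a minimal sketch of one such loop (an assumed design, not this employer's actual stack): track recent prediction accuracy in a sliding window and flag degradation, which in production would trigger an alert or a retraining job.

```python
from collections import deque

class AccuracyMonitor:
    """Sliding-window accuracy monitor; window size and threshold are illustrative."""

    def __init__(self, window: int = 100, threshold: float = 0.9):
        self.outcomes = deque(maxlen=window)  # True/False per prediction
        self.threshold = threshold

    def record(self, prediction, actual) -> None:
        self.outcomes.append(prediction == actual)

    @property
    def accuracy(self) -> float:
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 1.0

    def degraded(self) -> bool:
        # Only alert once the window is full, so a few early misses don't page anyone.
        return len(self.outcomes) == self.outcomes.maxlen and self.accuracy < self.threshold

monitor = AccuracyMonitor(window=10, threshold=0.8)
for pred, actual in [(1, 1)] * 7 + [(1, 0)] * 3:  # 7 hits, 3 misses over 10 samples
    monitor.record(pred, actual)
```

In a real deployment the ground-truth labels usually arrive with a delay, so the loop would join predictions to outcomes in a pipeline rather than inline as shown here.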
Technical Skills
Experience with large-scale deployment tools and environments, including Docker, Kubernetes, and cloud platforms like AWS, Azure, or GCP.
Experience deploying and managing a variety of database technologies.
Experience deploying ML models at scale and optimizing models for low-latency, high-availability environments.
Strong programming skills in Python and proficiency in libraries such as NumPy, Pandas, and Scikit-learn.
Experience with data pipelines, ETL processes, and distributed data frameworks such as Apache Spark or Dask.
Familiarity with machine learning frameworks such as TensorFlow, PyTorch, and Hugging Face Transformers.
Knowledge of conversational AI, agent-based systems, and chat interface development.
Proven track record in deploying and maintaining ML and AI solutions in a production setting.
Experience with version control (e.g., Git) and CI/CD tools tailored to ML workflows.
Experience with MLOps.
Experience with Databricks is a plus.
Qualifications
Minimum Qualifications
5+ years of experience in platform engineering with a focus on data and ML systems.
Bachelor's degree in Computer Science, Engineering, Data Science, or a related field.
Don't meet every single requirement? Studies have shown that women and people of color are less likely to apply to jobs unless they meet every single qualification. At Press Ganey we are dedicated to building a diverse, inclusive, and authentic workplace, so if you're excited about this role but your past experience doesn't align perfectly with every qualification in the job description, we encourage you to apply anyway. You may be just the right candidate for this or other roles.
Additional Information for US based jobs:
Press Ganey Associates LLC is an Equal Employment Opportunity/Affirmative Action employer committed to a diverse workforce. We do not discriminate against any employee or applicant for employment because of race, color, sex, age, national origin, religion, sexual orientation, gender identity, veteran status, disability, or any other federal, state, or local protected class.
Pay Transparency Non-Discrimination Notice - Press Ganey will not discharge or in any other manner discriminate against employees or applicants because they have inquired about, discussed, or disclosed their own pay or the pay of another employee or applicant. However, employees who have access to the compensation information of other employees or applicants as a part of their essential job functions cannot disclose the pay of other employees or applicants to individuals who do not otherwise have access to compensation information, unless the disclosure is (a) in response to a formal complaint or charge, (b) in furtherance of an investigation, proceeding, hearing, or action, including an investigation conducted by the employer, or (c) consistent with the contractor's legal duty to furnish information.
The expected base salary for this position ranges from $100,000 to $140,000. It is not typical for offers to be made at or near the top of the range. Salary offers are based on a wide range of factors including relevant skills, training, experience, education, and, where applicable, licensure or certifications obtained. Market and organizational factors are also considered. In addition to base salary and a competitive benefits package, successful candidates are eligible to receive a discretionary bonus or commission tied to achieved results.
All your information will be kept confidential according to EEO guidelines.
Our privacy policy can be found here: legal-privacy/