Hi,
This is Vamshi from Software Technology. We have a job opening with our client for the position of DBA/Data Architect. If you are available and looking for new opportunities, please send me your updated resume for the position below ASAP.
Job Title: DBA/DATA Architect
Location: Denver, CO (Hybrid, 3 days work from office)
Duration: Full Time / Long-Term Contract
Must-have skills: Architect-level expertise, but should be hands-on with a proactive, self-driven mindset (individual contributor; no team-management responsibility)
Skills To Be Evaluated On
Azure Data Factory, MS SQL Server DBA, Performance Tuning, Data Pipeline, Data Management
Technical Skills
- Azure Data Factory, Azure Data Lake Storage (ADLS), Azure Databricks, Azure SQL Database.
- Strong SQL, Python, Scala, and PySpark.
- Experience in building data models and schema design.
- Experience with SSIS.
- Oracle DB experience would be a plus.
Roles & Responsibilities
- Design, build, and optimize data pipelines and ETL processes using Azure Data Factory and Databricks.
- Manage Azure SQL Database, including performance tuning, indexing, and query optimization.
- Design scalable data solutions, data lakes (ADLS), and data warehousing solutions.
- Ensure data security, quality, and integrity throughout the lifecycle.
- Monitor data pipelines and resolve performance issues or failures.
- Work with data scientists and analysts to support data-driven decision-making.
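The performance tuning, indexing, and query optimization mentioned above typically follow a simple loop: inspect the query plan, add or adjust an index, and confirm the plan improves. A minimal sketch of that loop (using SQLite rather than Azure SQL purely so the example is self-contained; the table and index names are invented):

```python
import sqlite3

# Illustrative only: the posting concerns Azure SQL Database, but the same
# plan-inspect-then-index workflow is shown here with SQLite so the example
# stays self-contained. Table and index names are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                 [(i % 100, i * 1.5) for i in range(1000)])

# Without an index, filtering on customer_id forces a full table scan.
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 7").fetchall()

# A targeted index lets the optimizer seek directly to the matching rows.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 7").fetchall()

print(plan_before[0][-1])  # a SCAN step over the whole table
print(plan_after[0][-1])   # a SEARCH step using the new index
```

The same before/after plan comparison applies in Azure SQL via its execution plans, just with different tooling.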
Thanks,
Vamshi Thangadpalli
Technical Recruiter
Overlook Center, Suite 200
Princeton, NJ 08540.
Visa Status: US Citizen or Green Card Only
Location: Irving, TX (Local Candidates Only)
Employment Type: Full-time / Direct Hire
Work Environment: Hybrid (Monday thru Thursday - in office / Friday - at home)
***MUST HAVE 10+ YEARS EXPERIENCE AS A DATA ENGINEER***
***US Citizen or Green Card Only***
The AWS Senior Data Engineer will own the planning, design, and implementation of data structures for this leading Hospitality Corporation in their AWS environment. This role will be responsible for incorporating all internal and external data sources into a robust, scalable, and comprehensive data model within AWS to support business intelligence and analytics needs throughout the company.
Responsibilities:
- Collaborate with cross-functional teams to understand and define business intelligence needs and translate them into data modeling solutions
- Develop, build, and maintain scalable data pipelines, data schema design, and dimensional data modeling in Databricks and AWS for all system data sources, API integrations, and bespoke data ingestion files from external sources. Includes batch and real-time pipelines.
- Responsible for data cleansing, standardization, and quality control
- Create data models that will support comprehensive data insights, business intelligence tools, and other data science initiatives
- Create data models and ETL procedures with traceability, data lineage and source control
- Design and implement data integration and data quality framework
- Implement data monitoring best practices with trigger based alerts for data processing KPIs and anomalies
- Investigate and remediate data problems, performing and documenting thorough and complete root cause analyses. Make recommendations for mitigation and prevention of future issues.
- Work with Business and IT to assess efficacy of all legacy data sources, making recommendations for migration, anonymization, archival and/or destruction.
- Continually seek to optimize performance through database indexing, query optimization, stored procedures, etc.
- Ensure compliance with data governance and data security requirements, including data life cycle management, purge and traceability.
- Create and manage documentation and change control mechanisms for all technical design, implementations and systems maintenance.
Target Skills and Experience
- Bachelor's or graduate degree in computer science, information systems or related field preferred, or similar combination of education and experience
- At least 10 years’ experience designing and managing data pipelines, schema modeling, and data processing systems.
- Experience with Databricks a plus (or similar tools like Microsoft Fabric, Snowflake, etc.) to drive scalable data solutions.
- Experience with SAP a plus
- Proficient in Python, with a track record of solving real-world data challenges.
- Advanced SQL skills, including experience with database design, query optimization, and stored procedures.
- Experience with Terraform or other infrastructure-as-code tools is a plus.
About Pinterest:
Millions of people around the world come to our platform to find creative ideas, dream about new possibilities and plan for memories that will last a lifetime. At Pinterest, we're on a mission to bring everyone the inspiration to create a life they love, and that starts with the people behind the product.
Discover a career where you ignite innovation for millions, transform passion into growth opportunities, celebrate each other's unique experiences and embrace the flexibility to do your best work. Creating a career you love? It's Possible.
At Pinterest, AI isn't just a feature, it's a powerful partner that augments our creativity and amplifies our impact, and we're looking for candidates who are excited to be a part of that. To get a complete picture of your experience and abilities, we'll explore your foundational skills and how you collaborate with AI.
Through our interview process, what matters most is that you can always explain your approach, showing us not just what you know, but how you think. You can read more about our AI interview philosophy and how we use AI in our recruiting process here.
About tvScientific
tvScientific is the first and only CTV advertising platform purpose-built for performance marketers. We leverage massive data and cutting-edge science to automate and optimize TV advertising to drive business outcomes. Our solution combines media buying, optimization, measurement, and attribution in one efficient platform. Our platform is built by industry leaders with a long history in programmatic advertising, digital media, and ad verification who have now purpose-built a CTV performance platform advertisers can trust to grow their business.
As a Senior Data Engineer at tvScientific, you will be a key player in implementing the robust data infrastructure to power our data-heavy company. You will collaborate with our cross-functional teams to evolve our core data pipelines, design for efficiency as we scale, and store data in optimal engines and formats. This is an individual contributor role, where you will work to define and implement a strategic vision for data engineering within the organization.
What you'll do:
- Implement robust data infrastructure in AWS, using Spark with Scala
- Evolve our core data pipelines to efficiently scale for our massive growth
- Store data in optimal engines and formats
- Collaborate with our cross-functional teams to design data solutions that meet business needs
- Build out fault-tolerant batch and streaming pipelines
- Leverage and optimize AWS resources while designing for scale
- Collaborate closely with our Data Science and Product teams
How we'll define success:
- Successful implementation of scalable and efficient data infrastructure
- Timely delivery and optimization of data assets and APIs
- High attention to detail in implementation of automated data quality checks
- Effective collaboration with cross-functional teams
What we're looking for:
- Production data engineering experience
- Proficiency in Spark and Scala, with proven experience building data infrastructure in Spark using Scala
- Familiarity with data lakes, cloud warehouses, and storage formats
- Strong proficiency in AWS services
- Expertise in SQL for data manipulation and extraction
- Excellent written and verbal communication skills
- Bachelor's degree in Computer Science or a related field
Nice-to-haves:
- Experience in adtech
- Experience implementing data governance practices, including data quality, metadata management, and access controls
- Strong understanding of privacy-by-design principles and handling of sensitive or regulated data
- Familiarity with data table formats like Apache Iceberg, Delta
In-Office Requirement Statement:
- We recognize that the ideal environment for work is situational and may differ across departments. What this looks like day-to-day can vary based on the needs of each organization or role.
Relocation Statement:
- This position is not eligible for relocation assistance. Visit our PinFlex page to learn more about our working model.
#LI-SM4
#LI-REMOTE
At Pinterest we believe the workplace should be equitable, inclusive, and inspiring for every employee. In an effort to provide greater transparency, we are sharing the base salary range for this position. The position is also eligible for equity. Final salary is based on a number of factors including location, travel, relevant prior experience, or particular skills and expertise.
Information regarding the culture at Pinterest and benefits available for this position can be found here.
US based applicants only: $123,696—$254,667 USD
Our Commitment to Inclusion:
Pinterest is an equal opportunity employer and makes employment decisions on the basis of merit. We want to have the best qualified people in every job. All qualified applicants will receive consideration for employment without regard to race, color, ancestry, national origin, religion or religious creed, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, age, marital status, status as a protected veteran, physical or mental disability, medical condition, genetic information or characteristics (or those of a family member) or any other consideration made unlawful by applicable federal, state or local laws. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. If you require a medical or religious accommodation during the job application process, please complete this form for support.
About tvScientific
tvScientific is the first and only CTV advertising platform purpose-built for performance marketers. We leverage massive data and cutting-edge science to automate and optimize TV advertising to drive business outcomes. Our solution combines media buying, optimization, measurement, and attribution in one efficient platform. Our platform is built by industry leaders with a long history in programmatic advertising, digital media, and ad verification who have now purpose-built a CTV performance platform advertisers can trust to grow their business.
As a Staff Data Engineer at tvScientific, you will be a key player in implementing the robust data infrastructure to power our data-heavy company. You will collaborate with our cross-functional teams to evolve our core data pipelines, design for efficiency as we scale, and store data in optimal engines and formats. This is an individual contributor role, where you will work to define and implement a strategic vision for data engineering within the organization.
What you'll do:
- Design and implement robust data infrastructure in AWS, using Spark with Scala
- Evolve our core data pipelines to efficiently scale for our massive growth
- Store data in optimal engines and formats, matching your designs to our performance needs and cost factors
- Collaborate with our cross-functional teams to design data solutions that meet business needs
- Design and implement knowledge graphs, exposing their functionality both via Batch Processing and APIs
- Leverage and optimize AWS resources while designing for scale
- Collaborate closely with our Data Science and Product teams
How we'll define success:
- Successful design and implementation of scalable and efficient data infrastructure
- Timely delivery and optimization of data assets and APIs
- High attention to detail in implementation of automated data quality checks
- Effective collaboration with cross-functional teams
What we're looking for:
- Production data engineering experience
- Proficiency in Spark and Scala, with proven experience building data infrastructure in Spark using Scala
- Experience in delivering significant technical initiatives and building reliable, large scale services
- Experience in delivering APIs backed by relationship-heavy datasets
- Familiarity with data lakes, cloud warehouses, and storage formats
- Strong proficiency in AWS services
- Expertise in SQL for data manipulation and extraction
- Excellent written and verbal communication skills
- Bachelor's degree in Computer Science or a related field
Nice-to-haves:
- Experience in adtech
- Experience implementing data governance practices, including data quality, metadata management, and access controls
- Strong understanding of privacy-by-design principles and handling of sensitive or regulated data
- Familiarity with data table formats like Apache Iceberg, Delta
- Previous experience building out a Data Engineering function
- Proven experience working closely with Data Science teams on machine learning pipelines
In-Office Requirement Statement:
- We recognize that the ideal environment for work is situational and may differ across departments. What this looks like day-to-day can vary based on the needs of each organization or role.
Relocation Statement:
- This position is not eligible for relocation assistance. Visit our PinFlex page to learn more about our working model.
#LI-SM4
#LI-REMOTE
At Pinterest we believe the workplace should be equitable, inclusive, and inspiring for every employee. In an effort to provide greater transparency, we are sharing the base salary range for this position. The position is also eligible for equity. Final salary is based on a number of factors including location, travel, relevant prior experience, or particular skills and expertise.
Information regarding the culture at Pinterest and benefits available for this position can be found here.
US based applicants only: $155,584—$320,320 USD
Our Commitment to Inclusion:
Pinterest is an equal opportunity employer and makes employment decisions on the basis of merit. We want to have the best qualified people in every job. All qualified applicants will receive consideration for employment without regard to race, color, ancestry, national origin, religion or religious creed, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, age, marital status, status as a protected veteran, physical or mental disability, medical condition, genetic information or characteristics (or those of a family member) or any other consideration made unlawful by applicable federal, state or local laws. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. If you require a medical or religious accommodation during the job application process, please complete this form for support.
Overall Responsibility:
This role supports the design, development, and optimization of Arora’s enterprise data and ERP systems. Reporting directly to the Data Analytics Manager, it works to improve financial reporting, support platform integrations, and build scalable data architecture that enables informed decision-making across the organization.
The position combines technical execution (SQL, automation, system configuration) with financial reporting support and cross-platform integration work to ensure accuracy, efficiency, and long-term system sustainability.
Essential Functions:
- Execute reporting and system requests in alignment with established data governance standards and reporting frameworks under the direction of the Data Analytics Manager.
- Contribute to the design of data models and system workflows that reduce manual processes and improve cross-functional data visibility.
- Support internal dashboards by creating backend data solutions and integrating with Vision.
- Provide system-level troubleshooting and ensure data consistency and reliability across platforms.
- Collaborate with teams to streamline processes through automation and data tools.
- Maintain documentation of data procedures, workflows, and system modifications.
- Support financial reporting and analysis by developing standardized, scalable reporting solutions aligned with company-wide data architecture.
- Assist in translating financial and operational requirements into structured reporting outputs and automation workflows.
- Assist in platform integrations (ERP, CRM, BI tools, and other enterprise systems) to support long-term architectural alignment and scalability.
Needed Skills:
- Ability to program in SQL at an expert level to support data processes. Knowledge of other programming languages (Java, Python, etc.) may also be needed.
- Ability to create and maintain productive relationships with employees, clients, and vendors.
Education/Experience Minimum:
- 3-5 years of experience
- Strong programming skills, with the ability to write complex queries.
- Preferred familiarity with all Microsoft platforms, including but not limited to Excel, Power BI, SharePoint, and SQL Server.
- Preferred experience with Deltek Vision v7.6 and VantagePoint
- Experience in building automated processes and data workflows.
- Strong problem-solving and attention to detail.
Our client, a leading organization in the real estate investment and property management sector, is seeking a Data Analyst to join their IT team. This role will focus on building and maintaining data solutions, developing dashboards, and delivering actionable insights to support business decision-making across the organization.
Responsibilities
- Build and maintain data warehouses and reporting solutions
- Develop dashboards and ad hoc reports using Power BI
- Write and optimize SQL queries for data extraction and analysis
- Partner with stakeholders to gather and translate reporting requirements
- Troubleshoot and resolve data and reporting issues
- Support internal users with reporting tools and best practices
- Contribute to data governance and process improvements
- Stay current with Microsoft Fabric, Azure, and related technologies
Qualifications
- Bachelor’s degree in Computer Science, Statistics, Mathematics, or related field
- 3+ years of experience in data analysis and visualization
- Strong experience with Power BI
- Advanced SQL skills
- Experience with SQL Server, SSRS, SSIS, Excel, Power Query, and Power Pivot
- Experience with Microsoft Fabric preferred
- Familiarity with Azure environments preferred
- Experience with Azure DevOps Git is a plus
- Strong analytical, problem-solving, and communication skills
Job Title: Senior Data Engineer
Location: Chicago, IL (Hybrid)
Department: Data & Analytics
Reports To: Head of Data Engineering / Data Platform Lead
Role Overview
We are seeking a highly skilled Senior Data Engineer with strong Python development expertise and deep experience in Snowflake to design, build, and optimize scalable enterprise data solutions. This role is based in Chicago, IL and will support regulatory and risk data initiatives in a highly governed environment.
The ideal candidate has hands-on experience building modern cloud data platforms and is familiar with risk management frameworks, BCBS 239 principles, and Governance, Risk & Compliance (GRC) requirements within financial services.
Key Responsibilities
Data Engineering & Architecture
Design, develop, and maintain scalable data pipelines using Python.
Build and optimize data models, transformations, and data marts within Snowflake.
Develop robust ELT/ETL frameworks for structured and semi-structured data.
Optimize Snowflake performance, cost efficiency, clustering, and workload management.
Implement automation, monitoring, and CI/CD for data pipelines.
Risk & Regulatory Data Management
Support regulatory reporting aligned with BCBS 239 (risk data aggregation and reporting).
Ensure data traceability, lineage, reconciliation, and auditability.
Implement controls aligned with Governance, Risk & Compliance (GRC) frameworks.
Partner with Risk, Finance, Compliance, and Audit teams to deliver accurate and governed data assets.
Data Governance & Quality
Develop and enforce data quality validation frameworks.
Maintain metadata, lineage documentation, and data catalog integration.
Implement data access controls and security best practices.
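A data quality validation framework of the kind described above typically expresses checks as declarative rules and reports which rule each failing record broke, rather than silently dropping data. A minimal, hypothetical Python sketch (the rule names and fields are invented; a real implementation would run against Snowflake tables):

```python
# Hypothetical mini-framework: declarative data-quality rules applied per
# record, with failures collected alongside the rules they broke. All rule
# names and fields below are invented for illustration.
RULES = {
    "trade_id_present": lambda r: r.get("trade_id") is not None,
    "notional_positive": lambda r: isinstance(r.get("notional"), (int, float)) and r["notional"] > 0,
    "currency_known": lambda r: r.get("currency") in {"USD", "EUR", "GBP"},
}

def validate(records):
    """Return (passed, failures); each failure pairs the record with the rules it broke."""
    passed, failures = [], []
    for rec in records:
        broken = [name for name, check in RULES.items() if not check(rec)]
        if broken:
            failures.append((rec, broken))   # keep the evidence for audit/lineage
        else:
            passed.append(rec)
    return passed, failures

records = [
    {"trade_id": "T1", "notional": 1_000_000, "currency": "USD"},
    {"trade_id": None, "notional": -5, "currency": "JPY"},
]
passed, failures = validate(records)
print(len(passed), failures[0][1])
```

Retaining the per-rule failure evidence, rather than just a pass/fail flag, is what makes such checks auditable in a BCBS 239-style environment.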
Technical Leadership
Provide mentorship and code reviews for data engineering team members.
Promote engineering best practices and documentation standards.
Collaborate cross-functionally with architects, analysts, and business stakeholders.
Required Qualifications
7+ years of experience in Data Engineering or Data Platform development.
Strong Python programming expertise (Pandas, PySpark, Airflow, etc.).
Hands-on experience with Snowflake (data modeling, Snowpipe, Streams & Tasks, performance tuning).
Advanced SQL skills and deep understanding of data warehousing concepts.
Experience supporting BCBS 239 compliance or similar regulatory reporting frameworks.
Experience working within Governance, Risk & Compliance (GRC) structures.
Experience in cloud environments (AWS, Azure, or GCP).
Strong understanding of data lineage, controls, reconciliation, and audit requirements.
Preferred Qualifications
Experience in banking, capital markets, or financial services.
Knowledge of credit risk, market risk, liquidity risk, or regulatory reporting domains.
Experience with data governance tools (Collibra, Alation, etc.).
Familiarity with DevOps practices, Docker, Kubernetes.
Experience building enterprise data platforms in highly regulated environments.
Key Competencies
Strong problem-solving and analytical thinking.
Ability to operate in a regulated, audit-driven environment.
Excellent communication and stakeholder management skills.
Detail-oriented with a focus on data accuracy and integrity.
Leadership mindset with hands-on technical capability.
We Are Hiring: Databricks Lead Data Engineer – Director Equivalent Role
Location: Atlanta, USA
Work Model: Hybrid – 3 to 4 days in office per week (mandatory)
Eligibility: US Citizens and Green Card (GC) holders only
How to Apply
If you are interested in this position and have the required skills, please send your resume to:
Paves Technologies is seeking a highly experienced Databricks Lead Data Engineer – Lead Level (Director Equivalent Role) to drive enterprise-scale data architecture, governance, and advanced analytics initiatives on Azure Cloud. This is a senior leadership role requiring deep Databricks expertise, strong data modeling capabilities, and hands-on architectural ownership across PySpark-based distributed systems.
Role Overview
The ideal candidate will bring 10–12+ years of overall data engineering experience, including strong hands-on expertise with Azure Databricks, PySpark, Python, and Azure Cloud data services. You will define architecture standards, lead modernization initiatives, and implement scalable Medallion Architecture (Bronze, Silver, Gold layers) to support enterprise analytics and business intelligence.
Key Responsibilities
- Lead end-to-end architecture and implementation of enterprise-scale data platforms using Azure Databricks on Azure Cloud.
- Design and implement Medallion Architecture (Bronze, Silver, Gold layers) using Delta Lake best practices.
- Build scalable PySpark-based ETL/ELT pipelines across ingestion (Bronze), transformation (Silver), and curated analytics (Gold) layers.
- Develop advanced data transformations using Python, PySpark, Spark SQL, and advanced SQL constructs.
- Architect robust data models (dimensional, star schema, normalized models) aligned to analytics and reporting needs.
- Drive adoption of advanced Databricks capabilities including Unity Catalog, Declarative Pipelines, Delta Lake optimization, and governance frameworks.
- Establish best practices for partitioning strategies, file compaction, Z-ordering, caching, broadcast joins, and query optimization.
- Define and standardize reusable Azure Cloud data platform tools, templates, CI/CD frameworks, and infrastructure automation.
- Work across Azure ecosystem components such as Azure Data Factory (ADF), Azure Data Lake Storage (ADLS), Azure DevOps, networking, and security services.
- Ensure high standards for data quality, RBAC, lineage tracking, governance, and production stability.
- Provide architectural leadership and mentorship to data engineering teams.
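For readers unfamiliar with the Medallion Architecture named above: data moves through three layers — Bronze (raw, as landed), Silver (cleansed and conformed), and Gold (curated, analytics-ready). A deliberately simplified sketch in plain Python (a real implementation would use PySpark and Delta Lake on Databricks; every name here is invented for illustration):

```python
# Simplified, hypothetical Medallion (Bronze/Silver/Gold) flow. Plain Python
# stands in for PySpark/Delta Lake so the layering itself stays visible.

def bronze_ingest(raw_records):
    """Bronze: land raw records as-is, tagging each with its source."""
    return [dict(rec, _source="orders_api") for rec in raw_records]

def silver_transform(bronze):
    """Silver: cleanse and conform (drop malformed rows, normalize types)."""
    silver = []
    for rec in bronze:
        if rec.get("amount") is None:     # basic data-quality filter
            continue
        silver.append({"order_id": int(rec["order_id"]),
                       "amount": float(rec["amount"])})
    return silver

def gold_aggregate(silver):
    """Gold: curated, analytics-ready aggregate for BI consumers."""
    return {"order_count": len(silver),
            "total_amount": sum(r["amount"] for r in silver)}

raw = [{"order_id": "1", "amount": "20.0"},
       {"order_id": "2", "amount": None},   # malformed: filtered out in Silver
       {"order_id": "3", "amount": "5.5"}]
gold = gold_aggregate(silver_transform(bronze_ingest(raw)))
print(gold)
```

The value of the layering is that each stage has one job, so quality rules, lineage, and optimizations (partitioning, Z-ordering) can be applied per layer rather than tangled together.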
Required Experience & Skills
- 10–12+ years of overall experience in Data Engineering.
- Minimum 3+ years of strong hands-on Databricks experience.
- Mandatory Certifications:
- Databricks Certified Data Engineer Associate
- Databricks Certified Data Engineer Professional
- Deep hands-on expertise in PySpark, Python programming, and distributed Spark processing.
- Strong experience designing and implementing Medallion Architecture (Bronze/Silver/Gold layers).
- Advanced knowledge of Data Modeling, Data Analysis, and complex SQL (window functions, CTEs, execution plan tuning).
- Strong understanding of Delta Lake architecture, schema evolution, partition strategies, performance optimization, and data governance.
- Well-versed in enterprise Azure Cloud data platforms, reusable accelerators, CI/CD templates, and governance standards.
- Proven experience architecting scalable, secure, cloud-native data solutions.
- Strong leadership, stakeholder management, and executive communication skills.
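The "complex SQL" called out above (window functions, CTEs) can be illustrated briefly. A self-contained sketch using SQLite through Python's standard library (the table and columns are invented for illustration; the same SQL runs largely unchanged on Spark SQL or Databricks):

```python
import sqlite3

# Illustrative CTE + window function. SQLite (bundled with Python) is used
# here only so the example is runnable; table/column names are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("east", 100.0), ("east", 300.0), ("west", 200.0), ("west", 50.0)])

query = """
WITH regional AS (                                 -- CTE: pre-aggregate per region
    SELECT region, SUM(amount) AS total
    FROM sales
    GROUP BY region
)
SELECT region,
       total,
       RANK() OVER (ORDER BY total DESC) AS rnk    -- window function over the CTE
FROM regional
ORDER BY rnk
"""
rows = conn.execute(query).fetchall()
print(rows)  # [('east', 400.0, 1), ('west', 250.0, 2)]
```

CTEs keep multi-step transformations readable, and window functions compute rankings or running totals without collapsing rows — both everyday tools in the Gold-layer modeling this role describes.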
Sr. Data Engineer (Hybrid)
Chicago, IL
The American Medical Association (AMA) is the nation's largest professional Association of physicians and a non-profit organization. We are a unifying voice and powerful ally for America's physicians, the patients they care for, and the promise of a healthier nation. To be part of the AMA is to be part of our Mission to promote the art and science of medicine and the betterment of public health.
At AMA, our mission to improve the health of the nation starts with our people. We foster an inclusive, people-first culture where every employee is empowered to perform at their best. Together, we advance meaningful change in health care and the communities we serve.
We encourage and support professional development for our employees, and we are dedicated to social responsibility. We invite you to learn more about us and we look forward to getting to know you.
We have an opportunity at our corporate offices in Chicago for a Sr. Data Engineer (Hybrid) on our Information Technology team. This is a hybrid position reporting into our Chicago, IL office, requiring 3 days a week in the office.
As a Sr. Data Engineer, you will play a key role in implementing and maintaining AMA's enterprise data platform to support analytics, interoperability, and responsible AI adoption. This role partners closely with platform engineering, data governance, data science, IT security, and business stakeholders to deliver high-quality, reliable, and secure data products. It contributes to AMA's modern lakehouse architecture, optimizes data operations, and embeds governance and quality standards into engineering workflows. This role serves as a senior technical contributor within the team, providing mentorship to junior engineers and implementing engineering best practices within the data platform function, in alignment with architectural direction set by leadership.
RESPONSIBILITIES:
Data Engineering & AI Enablement
- Build and maintain scalable data pipelines and ETL/ELT workflows supporting analytics, operational reporting, and AI/ML use cases.
- Implement best-practice patterns for ingestion, transformation, modeling, and orchestration within a modern lakehouse environment (e.g., Databricks, Delta Lake, Azure Data Lake).
- Develop high-performance data models and curated datasets with strong attention to quality, usability, and interoperability; create reusable engineering components and automation.
- Collaborate with the Architecture Team, the Data Platform Lead, and federated IT teams to optimize storage, compute, and architectural patterns for performance and cost-efficiency.
- Build model-ready data sets and feature pipelines to support AI/ML use cases; serve as a technical coordination point supporting business units' AI-related infrastructure needs.
- Collaborate with data scientists and the AI Working Group to operationalize models responsibly and maintain ongoing monitoring signals.
Governance, Quality & Compliance
- Embed data governance, metadata standards, lineage tracking, and quality controls directly into engineering workflows, ensuring technical implementation and alignment.
- Work with the Data Governance Lead and business stakeholders to operationalize stewardship, classification, validation, retention, and access standards.
- Implement privacy-by-design and security-by-design principles, ensuring compliance with internal policies and regulatory obligations.
- Maintain documentation for pipelines, datasets, and transformations to support transparency and audit requirements.
Platform Reliability, Observability & Optimization
- Monitor and troubleshoot pipeline failures, performance bottlenecks, data anomalies, and platform-level issues.
- Implement observability tooling, alerts, logging, and dashboards to ensure end-to-end reliability.
- Support cost governance by optimizing compute resources, refining job schedules, and advising on efficient architecture.
- Collaborate with the Data Platform Lead on scaling, configuration management, CI/CD pipelines, and environment management.
- Collaborate with business units to understand data needs, translate them into engineering requirements, and deliver fit-for-purpose data solutions; share and apply best practices and emerging technologies within assigned initiatives.
- Work with IT Security and Legal/Compliance to ensure platform and datasets meet risk and regulatory standards.
Staff Management
- Lead, mentor, and provide management oversight for staff.
- Responsible for setting objectives, evaluating employee performance, and fostering a collaborative team environment.
- Responsible for developing staff knowledge and skills to support career development.
May include other responsibilities as assigned
REQUIREMENTS:
- Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field preferred, or equivalent work experience; HS diploma or equivalent education required.
- 5+ years of experience in data engineering within cloud environments
- Experience in people management preferred.
- Demonstrated hands-on experience with modern data platforms (Databricks preferred).
- Proficiency in Python, SQL, and data transformation frameworks.
- Experience designing and operationalizing ETL/ELT pipelines, orchestration workflows (Airflow, Databricks Workflows), and CI/CD processes.
- Solid understanding of data modeling, structured/unstructured data patterns, and schema design.
- Experience implementing governance and quality controls: metadata, lineage, validation, stewardship workflows.
- Working knowledge of cloud architecture, IAM, networking, and security best practices.
- Demonstrated ability to collaborate across technical and business teams.
- Exposure to AI/ML engineering concepts, feature stores, model monitoring, or MLOps patterns.
- Experience with infrastructure-as-code (Terraform, CloudFormation) or DevOps tooling.
The American Medical Association is located at 330 N. Wabash Avenue, Chicago, IL 60611 and is convenient to all public transportation in Chicago.
This role is an exempt position, and the salary range for this position is $115,523.42-$150,972.44. This is the lowest to highest salary we believe we would pay for this role at the time of this posting. An employee's pay within the salary range will be determined by a variety of factors including but not limited to business consideration and geographical location, as well as candidate qualifications, such as skills, education, and experience. Employees are also eligible to participate in an incentive plan. To learn more about the American Medical Association's benefits offerings, please click here.
We are an equal opportunity employer, committed to diversity in our workforce. All qualified applicants will receive consideration for employment. As an EOE/AA employer, the American Medical Association will not discriminate in its employment practices due to an applicant's race, color, religion, sex, age, national origin, sexual orientation, gender identity and veteran or disability status.
THE AMA IS COMMITTED TO IMPROVING THE HEALTH OF THE NATION
Remote working/work at home options are available for this role.
At MVP Health Care, we're on a mission to create a healthier future for everyone. That means embracing innovation, championing equity, and continuously improving how we serve our communities. Our team is powered by people who are curious, humble, and committed to making a difference-every interaction, every day. We've been putting people first for over 40 years, offering high-quality health plans across New York and Vermont and partnering with forward-thinking organizations to deliver more personalized, equitable, and accessible care. As a not-for-profit, we invest in what matters most: our customers, our communities, and our team.
What's in it for you:
- Growth opportunities to uplevel your career
- A people-centric culture embracing and celebrating diverse perspectives, backgrounds, and experiences within our team
- Competitive compensation and comprehensive benefits focused on well-being
- An opportunity to shape the future of health care by joining a team recognized as a Best Place to Work For in the NY Capital District, one of the Best Companies to Work For in New York, and an Inclusive Workplace.
You'll contribute to our humble pursuit of excellence by bringing curiosity to spark innovation, humility to collaborate as a team, and a deep commitment to being the difference for our customers. Your role will reflect our shared goal of enhancing health care delivery and building healthier, more vibrant communities.
The Sr. Quality Data Analyst will be responsible for leading and overseeing operational workflows within the Health Care Quality Analytics team, and will be accountable for ensuring the team delivers routine and ad hoc analyses and data visualizations to support MVP's health care quality functional area. The ideal candidate will have experience working with NCQA and CMS quality measures and HEDIS data to support improved health care outcomes and member satisfaction. They will also participate in automation efforts that create efficiencies and help build a data-driven organization. The Sr. Quality Data Analyst will work with cross-functional teams, including business, technical, and Data Governance teams, to ensure the availability, accuracy, and reliability of data.
In alignment with MVP's core values, the Sr. Quality Data Analyst will be expected to demonstrate strong interpersonal and communication skills, promoting cooperation across organizational boundaries and encouraging groups to work together cooperatively. They will have strong analytical thinking skills, and a focus on continuously improving processes and reducing technical debt. Additionally, they will be self-motivated, with a sense of accountability and urgency in completing assignments.
Key Responsibilities:
- Lead and oversee the successful execution of operational workflows and health care quality data deliverables.
- Apply experience working with HEDIS, Medicare Stars, and NYSDOH QARR measures data and a strong understanding of health care quality measurement.
- Conduct analysis of large data sets to support health care quality improvement initiatives, including gap analysis, process optimization, and patient engagement.
- Collaborate with cross-functional teams to design, implement, and maintain data solutions that meet the needs of stakeholders and business partners.
- Ensure the accuracy and integrity of data through the development and implementation of data quality control processes and procedures.
- Provide training and mentorship to team members to promote growth and development.
- Participate in the development of data governance policies, standards, and procedures, and ensure compliance with regulatory requirements and industry best practices.
- Present data insights and recommendations to leadership, effectively communicating complex technical information to non-technical stakeholders.
- Continuously monitor and evaluate the effectiveness of operational workflows, making recommendations for improvements and leading implementation efforts as necessary.
Position Qualifications
Minimum Education
Bachelor's degree in a related field (e.g. Mathematics, Statistics, Computer Science, Epidemiology, or Healthcare) required; Master's degree preferred.
Minimum Experience
5+ years of experience in healthcare data analysis, with a strong focus on health care quality analytics and operations.
Experience leading teams and executing on operational workflows.
Required Skills
- Strong analytical skills, with the ability to turn data into actionable insights.
- Proficiency in SQL, Azure Databricks, data visualization tools (e.g. Tableau, Power BI), and data manipulation tools (e.g. Alteryx, R, Python).
- Excellent verbal and written communication skills, with the ability to effectively communicate technical information to both technical and non-technical stakeholders.
- Ability to work independently and as part of a team, with strong project management skills and the ability to prioritize tasks effectively.
- Keen attention to detail.
- Subject matter expertise of healthcare industry quality metrics, Medicare Stars and HEDIS standards.
Pay Transparency
MVP Health Care is committed to providing competitive employee compensation and benefits packages. The base pay range provided for this role reflects our good faith compensation estimate at the time of posting. MVP adheres to pay transparency nondiscrimination principles. Specific employment offers and associated compensation will be extended individually based on several factors, including but not limited to geographic location; relevant experience, education, and training; and the nature of and demand for the role.
We do not request current or historical salary information from candidates.
$93,667.00-$124,576.75
MVP's Inclusion Statement
At MVP Health Care, we believe creating healthier communities begins with nurturing a healthy workplace. As an organization, we strive to create space for individuals from diverse backgrounds and all walks of life to have a voice and thrive. Our shared curiosity and connectedness make us stronger, and our unique perspectives are catalysts for creativity and collaboration.
MVP is an equal opportunity employer and recruits, employs, trains, compensates, and promotes without discrimination based on race, color, creed, national origin, citizenship, ethnicity, ancestry, sex, gender identity, gender expression, religion, age, marital status, personal appearance, sexual orientation, family responsibilities, familial status, physical or mental disability, handicapping condition, medical condition, pregnancy status, predisposing genetic characteristics or information, domestic violence victim status, political affiliation, military or veteran status, Vietnam-era or special disabled Veteran or other legally protected classifications.
To support a safe, drug-free workplace, pre-employment criminal background checks and drug testing are part of our hiring process. If you require accommodations during the application process due to a disability, please contact our Talent team at .