About Wakefern
Wakefern Food Corp. is the largest retailer-owned cooperative in the United States and supports its co-operative members' retail operations, trading under the ShopRite®, Price Rite®, The Fresh Grocer®, Dearborn Markets®, and Gourmet Garage® banners.
Employing an innovative approach to wholesale business services, Wakefern focuses on helping the independent retailer compete in a big business world. Providing the tools entrepreneurs need to stay a step ahead of the competition, Wakefern’s co-operative members benefit from the company’s extensive portfolio of services, including innovative technology, private label development, and best-in-class procurement practices.
The ideal candidate will have a strong background in designing, developing, and implementing complex projects, with a focus on automating data processes and driving efficiency within the organization. This role requires close collaboration with application developers, data engineers, data analysts, and data scientists to ensure seamless data integration and automation across various platforms. The Data Integration & AI Engineer is responsible for identifying opportunities to automate repetitive data processes, reduce manual intervention, and improve overall data accessibility.
Essential Functions
- Participate in the development life cycle (requirements definition, project approval, design, development, and implementation) and maintenance of the systems.
- Implement and enforce data quality and governance standards to ensure the accuracy and consistency of data.
- Provide input for project plans and timelines to align with business objectives.
- Monitor project progress, identify risks, and implement mitigation strategies.
- Work with cross-functional teams and ensure effective communication and collaboration.
- Provide regular updates to the management team.
- Follow the standards and procedures according to Architecture Review Board best practices, revising standards and procedures as requirements change and technological advancements are incorporated into the technology structure.
- Communicate and promote the code of ethics and business conduct.
- Ensure completion of required company compliance training programs.
- Be trained, through formal education or experience, in software/hardware technologies and development methodologies.
- Stay current through personal development and professional and industry organizations.
Responsibilities
- Design, build, and maintain automated data pipelines and ETL processes to ensure scalability, efficiency, and reliability across data operations.
- Develop and implement robust data integration solutions to streamline data flow between diverse systems and databases.
- Continuously optimize data workflows and automation processes to enhance performance, scalability, and maintainability.
- Design and develop end-to-end data solutions utilizing modern technologies, including scripting languages, databases, APIs, and cloud platforms.
- Ensure data solutions and data sources meet quality, security, and compliance standards.
- Monitor and troubleshoot automation workflows, proactively identifying and resolving issues to minimize downtime.
- Provide technical training, documentation, and ongoing support to end users of data automation systems.
- Prepare and maintain comprehensive technical documentation, including solution designs, specifications, and operational procedures.
Qualifications
- A bachelor's degree or higher in computer science, information systems, or a related field.
- Hands-on experience with cloud data platforms (e.g., GCP, Azure, etc.)
- Strong knowledge and skills in data automation technologies, such as Python, SQL, ETL/ELT tools, Kafka, APIs, cloud data pipelines, etc.
- Experience in GCP BigQuery, Dataflow, Pub/Sub, and Cloud Storage.
- Experience with workflow orchestration tools such as Cloud Composer or Airflow
- Proficiency in iPaaS (Integration Platform as a Service) platforms, such as Boomi, SAP BTP, etc.
- Experience developing and managing data integrations for AI agents, connecting them to internal and external APIs, databases, and knowledge sources to expand their capabilities.
- Experience building and maintaining scalable Retrieval-Augmented Generation (RAG) pipelines, including the curation and indexing of knowledge bases in vector databases (e.g., Pinecone, Vertex AI Vector Search); a minimal sketch of this pattern follows this list.
- Experience leveraging cloud-based AI/ML platforms (e.g., Vertex AI, Azure ML) to build, train, and deploy machine learning models at scale.
- Experience establishing and enforcing data quality and governance standards for AI/ML datasets, ensuring the accuracy, completeness, and integrity of data used for model training and validation.
- Ability to collaborate closely with data scientists and machine learning engineers to understand data requirements and deliver optimized data solutions that support the entire machine learning lifecycle.
- Hands-on experience with IBM DataStage and Alteryx is a plus.
- Strong understanding of database design principles, including normalization, indexing, partitioning, and query optimization.
- Ability to design and maintain efficient, scalable, and well-structured database schemas to support both analytical and transactional workloads.
- Familiarity with BI visualization tools such as MicroStrategy, Power BI, Looker, or similar.
- Familiarity with data modeling tools.
- Familiarity with DevOps practices for data (CI/CD pipelines)
- Proficiency in project management software (e.g., JIRA, Clarizen, etc.)
- Strong knowledge and skills in data management, data quality, and data governance.
- Strong communication, collaboration, and problem-solving skills.
- Ability to work on multiple projects and prioritize tasks effectively.
- Ability to work independently and in a team environment.
- Ability to learn new technologies and tools quickly.
- Ability to handle stressful situations.
- Highly developed business acumen.
- Strong critical thinking and decision-making skills.
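For illustration, here is a minimal sketch of the RAG indexing-and-retrieval pattern referenced in the qualifications above. It uses a small in-memory index rather than a managed vector store; the embedding model name, sample documents, and prompt shape are all placeholder assumptions.

```python
# Minimal RAG sketch: embed a document set, retrieve top-k passages by
# cosine similarity, and assemble a grounded prompt. In production the
# index would live in a vector store (e.g., Pinecone, Vertex AI Vector
# Search); everything here is an illustrative placeholder.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model

docs = [
    "Store hours are 8am to 9pm on weekdays.",
    "Private-label products are developed with co-operative members.",
    "Returns require a receipt and must be made within 30 days.",
]
doc_vecs = model.encode(docs, normalize_embeddings=True)  # shape (n_docs, dim)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k passages most similar to the query."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q            # normalized vectors -> cosine similarity
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

question = "When is the store open?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
# `prompt` would then be passed to the LLM of choice.
```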
Working Conditions & Physical Demands
This position requires in-person office presence at least 4x a week.
Compensation and Benefits
The salary range for this position is $75,868 - $150,644. Placement in the range depends on several factors, including experience, skills, education, geography, and budget considerations.
Wakefern is proud to offer a comprehensive benefits package designed to support the health, well-being, and professional development of our Associates. Benefits include medical, dental, and vision coverage, life and disability insurance, a 401(k) retirement plan with company match & annual company contribution, paid time off, holidays, and parental leave.
Associates also enjoy access to wellness and family support programs, fitness reimbursement, educational and training opportunities through our corporate university, and a collaborative, team-oriented work environment. Many of these benefits are fully or partially funded by the company, with some subject to eligibility requirements.
The pay range for this role is $150,000 - $200,000/yr USD.
WHO WE ARE:
Headquartered in Southern California, Skechers—the Comfort Technology Company®—has spent over 30 years helping men, women, and kids everywhere look and feel good. Comfort innovation is at the core of everything we do, driving the development of stylish, high-quality products at a great value. From our diverse footwear collections to our expanding range of apparel and accessories, Skechers is a complete lifestyle brand.
ABOUT THE ROLE:
Skechers Digital Team is seeking a Digital Data Architect reporting to the Director, Digital Architecture, Consumer Domain. This role is responsible for designing and governing Skechers’ Consumer Data 360 ecosystem, enabling identity resolution, high-quality data foundations, personalization, loyalty intelligence, and machine learning capabilities across digital and retail channels.
The ideal candidate will be a strong technical leader, have hands-on full-stack technical knowledge in enterprise technologies related to Skechers' consumer domain, and have the ability to work in a fast-paced agile environment. You should have knowledge of consumer programs from an architecture/industry perspective, and you should have strong hands-on experience designing solutions on the Salesforce Core Platform (including configuration, integration, and data model best practices).
You will work cross-functionally with Digital Engineering, Data Engineering, Data Science, Loyalty, and Marketing teams to architect scalable, secure, and high-performance data platforms that support advanced personalization and recommender systems.
WHAT YOU’LL DO:
- Responsible for the full technical life cycle of consumer platform capabilities which includes:
- Capability roadmap and technical architecture in alignment to consumer experience
- Technical planning, design, and execution
- Operations, analytics/reporting, and adoption
- Define and evolve Skechers’ Consumer Data 360 architecture, including identity resolution (deterministic and probabilistic matching) and unified customer profiles; a toy matching sketch follows this list.
- Architect scalable data models and pipelines across CDP, CRM, e-commerce, marketing automation, data lake, and warehouse platforms.
- Establish enterprise data quality frameworks including validation, deduplication, anomaly detection, and observability.
- Optimize SQL workloads and large-scale distributed queries through performance tuning, partitioning, indexing, and workload management strategies.
- Design and oversee ML pipelines supporting personalization, churn modeling, and recommender systems.
- Partner with Data Science teams to productionize models using distributed platforms such as Databricks (Spark, Delta Lake, MLflow preferred).
- Ensure secure data governance, access control (RBAC/ABAC), and compliance with GDPR, CCPA, and related privacy regulations.
- Provide architectural oversight ensuring performance, scalability, resilience, and maintainability.
- Collaborate with stakeholders to translate business objectives (LTV growth, personalization lift, engagement) into scalable data solutions.
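As a toy illustration of the identity-resolution bullet above, the sketch below runs a deterministic pass on normalized email, then a probabilistic fallback scoring fuzzy name similarity plus ZIP agreement. The record shape, weights, and threshold are hypothetical; production matching would use a far richer feature set.

```python
# Toy identity-resolution pass: deterministic match on normalized email,
# probabilistic fallback on name similarity + ZIP agreement.
# Record shape and thresholds are hypothetical, for illustration only.
from difflib import SequenceMatcher

profiles = [
    {"id": 1, "email": "ana.p@example.com", "name": "Ana Perez", "zip": "90266"},
]

def resolve(record: dict, threshold: float = 0.85) -> int | None:
    """Return the id of the matching profile, or None to create a new one."""
    email = record["email"].strip().lower()
    for p in profiles:
        if p["email"] == email:            # deterministic: exact email match
            return p["id"]
    for p in profiles:                     # probabilistic: fuzzy name + ZIP
        name_sim = SequenceMatcher(
            None, record["name"].lower(), p["name"].lower()
        ).ratio()
        score = 0.8 * name_sim + 0.2 * (record["zip"] == p["zip"])
        if score >= threshold:
            return p["id"]
    return None

print(resolve({"email": "ANA.P@example.com ", "name": "Ana Perez", "zip": "90266"}))    # -> 1
print(resolve({"email": "a.perez@example.com", "name": "Anna Perez", "zip": "90266"}))  # fuzzy -> 1
```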
REQUIREMENTS:
- Computer Science, Data Engineering, or related degree or equivalent experience.
- 12+ years experience architecting enterprise data platforms in cloud environments.
- 9+ years experience with data engineering with a focus on consumer data.
- 6+ years experience working with Salesforce platforms, including data models and enterprise integrations.
- Strong experience with Data 360 and identity resolution architectures.
- Proven expertise in SQL performance tuning and large-scale data modeling.
- Hands-on experience implementing ML pipelines and recommender systems in production environments.
- Experience with cloud technologies (AWS, GCP, or Azure).
- Experience with integration patterns (API, ETL, event streaming).
- Experience providing technical leadership and guidance across multiple projects and development teams.
- Experience translating business requirements into detailed technical specifications and working with development teams through implementation, including issue resolution and stakeholder communication.
- Strong project management skills including scope assessment, estimation, and clear technical communication with both business users and technical teams.
- Must hold at least one of the following Salesforce Certifications (Platform App Builder, Platform Developer 1, JavaScript Developer 1).
- Experience with Databricks or similar distributed data/ML platforms preferred.
AI Innovation Architect
Location: Hybrid; Ashburn, VA; Springfield, VA; Washington, D.C.
Clearance: U.S. Citizen; Must have an active Top-Secret Clearance or DHS Public Trust Clearance.
InDev is seeking a senior strategic and technical AI Architect responsible for designing, building, and deploying artificial intelligence solutions that support mission outcomes across the homeland security market. In this role you will bridge advanced AI capabilities - including machine learning, natural language processing, and data engineering - with operational requirements, ensuring solutions are secure, scalable, and aligned with the homeland security mission.
YOUR FUTURE DUTIES AND RESPONSIBILITIES
- Define overall system architecture, selecting and governing Artificial Intelligence / Machine Learning (AI/ML) and platform technologies, and ensuring solutions are scalable, secure, and production-ready
- Lead end-to-end technical design, development, and implementation of an agentic AI system to orchestrate user queries across enterprise data sources
- Partner closely with development, DevOps and data engineering teams to translate project requirements into an extensible AI architecture
- Create and promote AI strategies that align with business objectives
- Develop and coordinate POCs to test new technologies
- Evaluate and select appropriate AI tools, frameworks, and platforms (e.g., AWS, Azure, Google Cloud) to drive innovation
QUALIFICATIONS
- U.S. Citizen; Active Top-Secret Clearance or DHS Public Trust Clearance
- 8+ years of experience delivering AI solutions across federal agencies
- Bachelor’s degree in Computer Science, Engineering, or Data Science
- Deep understanding of machine learning (ML), deep learning, Natural Language Processing (NLP), and neural networks
- Experience with cloud platforms (AWS, Google Cloud, Azure) and container orchestration tools like Kubernetes and Docker
- Ability to identify high-impact AI use cases and translate them into technical requirements
- Experience designing, building, and deploying advanced AI systems including Generative AI, AI Agents, LLMs, Reinforcement Learning, and computer vision models
- Ability to apply cloud and engineering expertise across AWS, GCP, Kubernetes, Docker, Terraform, Helm, Linux, and AI services, such as SageMaker, Vertex AI, Bedrock, or Gemini
- Experience with Python, agent frameworks, data engineering, APIs/microservices, vector databases, SQL engines, distributed systems, cloud services, RAG
- Experience developing and maintaining AI/ML roadmaps, performing Analysis of Alternatives, and making defensible technical tradeoff decisions
- Experience leading multidisciplinary teams, including data scientists, engineers, and business stakeholders
- Excellent written and oral communication skills
- Ability to tailor and present information across multiple stakeholders
NICE TO HAVES
- Experience integrating AI solutions with SaaS/PaaS platforms (e.g., ServiceNow, Salesforce, etc.)
- Experience implementing virtual agents within SaaS/PaaS platforms (e.g., ServiceNow Virtual Agent, Salesforce Agentforce, etc.)
- Experience with Google Gemini
ABOUT US
At InDev, we’re not just a company; we’re a trailblazing force transforming the way data shapes the future. As a dynamic player in the federal government sector, we’re on a mission to empower agencies with cutting-edge data solutions that drive innovation, efficiency, and progress. Our team thrives on collaboration, innovation, and embracing challenges head-on to create a meaningful impact on the world around us.
WHY INDEV
- Innovative Environment: Join a team that thrives on creativity and innovation, where your ideas are not only heard but encouraged.
- Meaningful Impact: Contribute to projects that directly impact federal agencies, driving positive change on a national scale.
- Dynamic Collaboration: Work alongside diverse experts who are passionate about pushing boundaries and making a difference.
- Agile Mindset: Embrace Agile methodologies that encourage flexibility, adaptability, and rapid growth.
- Learning Culture: Enjoy ongoing learning opportunities and professional development to expand your skill set.
- Cutting-edge Tech: Engage with the latest technologies and tools in the data integration landscape.
If you’re ready to embark on a journey of innovation, collaboration, and impact, InDev welcomes you to join our team. Let’s shape the future together.
About the Company
InHouse transforms how renters furnish their homes. We build a photorealistic digital twin of your space from a floor plan and photos. Inside, you can visualize complete rooms, swap products instantly, and shop exclusive pricing from hundreds of premium brands—all guaranteed to fit.
Our platform combines a live multi-brand catalog, spatial-placement engine, and ML-driven tooling to deliver professional-quality interiors ~98% faster and ~95% less costly than traditional design. We're making professional furnishing accessible on any budget, timeline, or skill level.
InHouse is backed by a diverse group of venture, angel, and strategic investors. The founding team brings 40 years of experience in e-commerce, AI, and design across 10+ venture-backed startups.
About the Role
You will help lead the evolution of the core platform that powers InHouse’s AI-driven visualization and commerce experience. This includes setting technical direction across our architecture, backend services, API layers, data models, and front-end applications. You’ll own complex, multi-system initiatives, guide engineering tradeoffs, and ensure the platform remains reliable, scalable, and extensible as we transition development fully in-house. This is a high-ownership role where you’ll partner closely with founders, design, and product to shape the technical roadmap and mentor engineers as the team grows.
Responsibilities
- Lead the technical vision and architecture across many parts of InHouse’s platform, defining the roadmap that enables efficient orchestration of services across the organization
- Drive technical decision-making across complex, multi-faceted problems from visualization to product ingestion to order fulfillment
- Own product initiatives end-to-end from initial brainstorm to delivery, working with Design, Marketing, and Ops to ensure the evolution of the digital product
- Implement and maintain standards for the engineering team for delivery velocity, platform maintenance, and system reliability
- Mentor and guide engineers on the team, fostering technical growth and establishing a culture of engineering excellence
Qualifications
- 5+ years of professional software engineering experience
- Proven ability to own and deliver complex features in fast-paced environments
- Deep knowledge of React + TypeScript (Next.js) and backend development (Node/Python)
- Strong database knowledge (PostgreSQL / Supabase)
- Solid foundation in system design and API architecture
- Experience with generative AI systems (LLMs, embeddings, etc.)
Required Skills
- Experience integrating ML or model outputs into production workflows
- Familiarity with e-commerce/marketplace systems
- Exposure to 3D, graphics, or visualization tooling
Pay range and compensation package
Base salary + bonus, Equity potential, Health benefits, Prime SoHo office
Equal Opportunity Statement
InHouse is an equal opportunity employer. We celebrate diversity and do not discriminate on any protected basis.
Why Join InHouse?
InHouse is redefining how people design and shop for their homes by merging photorealistic visualization, ML-assisted workflows, and seamless commerce into one cohesive platform. You’ll join at a pivotal moment as we bring engineering fully in-house—owning architecture, extending core systems, and shaping the long-term platform. This role offers unusually high ownership: deep technical challenges, direct collaboration with founders, and the opportunity to influence both product direction and engineering culture as we grow.
We are seeking a Senior Robotic Simulation Engineer with deep, hands-on expertise in building high-fidelity simulation environments and synthetic data pipelines using NVIDIA Isaac Sim or similar.
In this role, you will directly contribute to the development of scalable, realistic virtual worlds that power the training and validation of robotic foundation models for perception, planning, and manipulation.
As part of GM’s Manufacturing Technology Development (MTD) team, you’ll work closely with robotics and AI engineers to create simulation scenes that replicate complex manufacturing environments, generate diverse synthetic datasets, and build robust data workflows that accelerate model development and deployment.
Key Responsibilities
- Build and customize simulation scenes in Isaac Sim that accurately reflect real-world robotic tasks and factory layouts
- Develop synthetic data generation pipelines, including randomized object placement, sensor simulation, and multi-modal annotations (RGB, depth, segmentation, point clouds); a toy domain-randomization sketch follows this section
- Implement and maintain data processing workflows, ensuring data quality, traceability, and compatibility with ML training pipelines
- Optimize simulation performance and realism, tuning physics parameters, asset fidelity, and rendering configurations for scalable experimentation
- Collaborate with robotics engineers and ML scientists to align simulation outputs with model requirements and support sim2real transfer
- Conduct hands-on testing and debugging, iterating on simulation setups and synthetic data strategies to improve model robustness
- Document workflows and contribute to best practices, enabling reproducibility and knowledge sharing across teams
Required Qualifications
- MSc or PhD in Robotics, Computer Graphics, Computer Vision, or a related field
- 3+ years of hands-on experience in robotics simulation, synthetic data generation, or virtual environment development
- Proficiency with Isaac Sim, Omniverse, or similar simulation platforms
- Strong programming skills in Python and C++, with experience in simulation APIs and data annotation tools
- Familiarity with 3D vision, sensor modeling, and domain randomization techniques
- Experience integrating simulation outputs into ML pipelines for training and evaluation
Preferred Qualifications
- Experience with robotics frameworks (e.g., ROS/ROS2, MoveIt, Nav2)
- Experience with robotics simulation platforms (e.g., Isaac Sim, MuJoCo, Gazebo) or game engines (e.g., Unreal, Unity)
- Background in industrial automation, autonomous vehicles, or robotic manipulation
- Publications or contributions in simulation, synthetic data, or robotics venues (e.g., ICRA, RSS, CVPR)
- Familiarity with CI/CD pipelines and modern software development tooling such as Bash, GitHub, Bazel, Docker
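As a rough sketch of the synthetic-data work described above, here is a domain-randomization loop: sample randomized scene parameters, render (stubbed out here), and keep the parameters with each frame for traceability. All simulator calls are stand-ins; a real pipeline would go through Isaac Sim / Omniverse Replicator APIs, and the parameter names and ranges are invented for illustration.

```python
# Sketch of a domain-randomization loop for synthetic data generation.
# Simulator calls are stubbed; parameter names/ranges are illustrative.
import numpy as np

rng = np.random.default_rng(seed=42)

def randomize_scene() -> dict:
    """Sample one randomized scene configuration (pose, lighting, materials)."""
    return {
        "object_xy": rng.uniform(-0.5, 0.5, size=2),    # placement on table, meters
        "object_yaw": rng.uniform(0, 2 * np.pi),        # rotation, radians
        "light_intensity": rng.uniform(300, 1500),      # lux-like scalar
        "texture_id": int(rng.integers(0, 50)),         # random material swap
        "camera_jitter": rng.normal(0, 0.02, size=3),   # small pose noise
    }

def render_with_annotations(scene: dict) -> dict:
    """Stub for the simulator step: returns multi-modal outputs per frame."""
    return {
        "rgb": np.zeros((480, 640, 3), dtype=np.uint8),   # placeholder image
        "depth": np.zeros((480, 640), dtype=np.float32),
        "segmentation": np.zeros((480, 640), dtype=np.int32),
        "scene_params": scene,                            # keep for traceability
    }

dataset = [render_with_annotations(randomize_scene()) for _ in range(10)]
print(f"generated {len(dataset)} annotated frames")
```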
AI Research Scientist – NLP/Computational Linguistics
A well-funded applied AI startup is building advanced systems to distinguish human-created content from machine-generated content. The team combines frontier research with real-world deployment, turning cutting-edge ideas into production systems used at scale.
They’re hiring an AI Research Scientist to improve and scale core detection models. This is a hands-on role where you’ll train large neural networks, design experiments, and push model performance forward, while also shipping code into production.
What You’ll Do
- Train and fine-tune large-scale deep learning models
- Design experiments to improve robustness and generalization
- Create and leverage synthetic data at scale
- Contribute new ideas that materially improve model accuracy
- Collaborate with engineering to deploy research into production systems
- Share insights through technical writing or publications where appropriate
This is not a purely academic role; you’ll be expected to code, ship, and iterate.
What They’re Looking For
- PhD in ML/AI/NLP (or equivalent research experience at a leading AI lab)
- Publications at top-tier venues (e.g., NeurIPS, ICML, ICLR, ACL, EMNLP) or meaningful contributions to deployed AI systems
- Strong understanding of generative models and modern LLMs
- Experience training models in PyTorch at scale
- Evidence your research has led to real-world impact
Bonus Points
- Work in robustness, interpretability, synthetic data, or model evaluation
- Background in NLP, computational linguistics, vision, audio, or other adjacent ML domains
- Experience with distributed training or ML infrastructure
Ideal Profile
- Early-career researcher (including strong new PhDs) who wants applied impact
- Industry researcher who wants more ownership and product exposure
- Someone motivated by AI integrity, trust, and responsible deployment
Small team. High ownership. Deep technical problems. Real-world impact.
Location: Detroit, MI (3 days in office, 2 days remote; first 3 months are fully in office)
Note: This employer does not provide visa sponsorship. Applicants must be authorized to work in the United States now and in the future without sponsorship.
Overview
We are seeking a hands-on Application Security Engineer to help integrate security practices across modern development environments, cloud platforms, and emerging AI/ML systems. This role focuses on embedding security into engineering workflows, improving application resilience, and enabling teams to deliver secure solutions without slowing development.
You will work closely with development, infrastructure, and product teams to identify risks early, implement automated security controls, and support secure architecture across applications and data-driven systems.
Key Responsibilities
Secure Development & DevSecOps
- Integrate security best practices throughout the software development lifecycle, from design through production deployment
- Perform code reviews to identify vulnerabilities and promote secure coding standards
- Implement and manage application security tools such as SAST, DAST, SCA, and related technologies
- Embed automated security checks within CI/CD pipelines and DevSecOps workflows
Risk Assessment & Vulnerability Management
- Conduct threat modeling and security assessments for applications and system architectures
- Identify, prioritize, and track vulnerabilities, partnering with engineering teams on remediation efforts
- Monitor third-party libraries, APIs, and open-source components for security risks
Cloud, Container & Platform Security
- Support security efforts across cloud environments, including containerized and serverless applications
- Assist in securing Kubernetes-based workloads and distributed systems
- Contribute to infrastructure hardening and platform security improvements
AI/ML & Emerging Technology Security
- Evaluate risks associated with machine learning and generative AI systems across the full lifecycle
- Implement safeguards such as input validation, prompt protection, and data leakage prevention (a toy guardrail sketch follows this list)
- Support governance and security controls for AI-enabled applications
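To make those safeguards concrete, here is a toy guardrail layer: a naive prompt-injection screen on the way in and a regex-based PII scrub on the way out. The patterns and behavior are illustrative placeholders, not a vetted security control.

```python
# Toy guardrail layer for an LLM-backed app: naive prompt-injection screen
# on input, simple PII scrub on output. Patterns are illustrative only.
import re

INJECTION_PATTERNS = [
    r"ignore (all|previous|prior) instructions",
    r"system prompt",
    r"you are now",
]
PII_PATTERNS = {
    "email": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "ssn": r"\b\d{3}-\d{2}-\d{4}\b",
}

def screen_input(user_text: str) -> str:
    """Reject inputs that look like prompt-injection attempts."""
    for pat in INJECTION_PATTERNS:
        if re.search(pat, user_text, re.IGNORECASE):
            raise ValueError("possible prompt injection; routing to review")
    return user_text

def scrub_output(model_text: str) -> str:
    """Mask common PII shapes before the response leaves the trust boundary."""
    for label, pat in PII_PATTERNS.items():
        model_text = re.sub(pat, f"[{label} redacted]", model_text)
    return model_text

print(scrub_output("Contact jane@example.com or 123-45-6789."))
```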
Incident Response & Compliance
- Investigate application-related security events and support incident response activities
- Track security metrics, risk posture, and remediation progress
- Assist with audit readiness and compliance initiatives
Collaboration & Enablement
- Partner with developers, architects, and product teams to promote secure design principles
- Provide guidance on secure coding practices aligned with industry frameworks (e.g., OWASP)
- Stay current on emerging threats, vulnerabilities, and attack techniques
Qualifications
- Bachelor’s degree in Computer Science, Cybersecurity, Software Engineering, or a related field (or equivalent experience)
- 3+ years of experience in application security, cloud security, or DevSecOps environments
- Hands-on experience with security testing tools (SAST, DAST, SCA, etc.)
- Strong understanding of secure coding practices and modern application architectures
- Experience with cloud platforms and containerized environments (e.g., Kubernetes)
- Familiarity with CI/CD pipelines and automation tools
- Excellent communication skills with the ability to collaborate across technical and non-technical teams
- Strong organizational skills and ability to manage multiple priorities
Preferred Qualifications
- Experience with AI/ML security concepts or securing data-driven applications
- Relevant certifications such as DevSecOps, cloud security, or AI security credentials
- Background in highly regulated or security-sensitive environments
Sr. Data Engineer (Hybrid)
Chicago, IL
The American Medical Association (AMA) is the nation's largest professional association of physicians and a non-profit organization. We are a unifying voice and powerful ally for America's physicians, the patients they care for, and the promise of a healthier nation. To be part of the AMA is to be part of our mission to promote the art and science of medicine and the betterment of public health.
At AMA, our mission to improve the health of the nation starts with our people. We foster an inclusive, people-first culture where every employee is empowered to perform at their best. Together, we advance meaningful change in health care and the communities we serve.
We encourage and support professional development for our employees, and we are dedicated to social responsibility. We invite you to learn more about us and we look forward to getting to know you.
We have an opportunity at our corporate offices in Chicago for a Sr. Data Engineer (Hybrid) on our Information Technology team. This is a hybrid position reporting into our Chicago, IL office, requiring 3 days a week in the office.
As a Sr. Data Engineer, you will play a key role in implementing and maintaining AMA's enterprise data platform to support analytics, interoperability, and responsible AI adoption. This role partners closely with platform engineering, data governance, data science, IT security, and business stakeholders to deliver high-quality, reliable, and secure data products. This role contributes to AMA's modern lakehouse architecture, optimizing data operations and embedding governance and quality standards into engineering workflows. It serves as a senior technical contributor within the team, providing mentorship to junior engineers and implementing engineering best practices within the data platform function, in alignment with architectural direction set by leadership.
RESPONSIBILITIES:
Data Engineering & AI Enablement
- Build and maintain scalable data pipelines and ETL/ELT workflows supporting analytics, operational reporting, and AI/ML use cases.
- Implement best-practice patterns for ingestion, transformation, modeling, and orchestration within a modern lakehouse environment (e.g., Databricks, Delta Lake, Azure Data Lake); a minimal sketch of this pattern follows this list.
- Develop high-performance data models and curated datasets with strong attention to quality, usability, and interoperability; create reusable engineering components and automation.
- Collaborate with the Architecture Team, the Data Platform Lead, and federated IT teams to optimize storage, compute, and architectural patterns for performance and cost-efficiency.
- Build model-ready datasets and feature pipelines to support AI/ML use cases; serve as a technical coordination point supporting business units' AI-related infrastructure needs.
- Collaborate with data scientists and the AI Working Group to operationalize models responsibly and maintain ongoing monitoring signals.
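As one concrete instance of the ingestion pattern above, the sketch below lands raw JSON, applies light quality checks, and appends to a partitioned Delta table. Paths and column names are hypothetical, and it assumes a Spark session with the Delta Lake package available.

```python
# Minimal ingestion-to-lakehouse sketch: land raw JSON, apply a light
# transformation, and append to a Delta table. Paths/columns hypothetical;
# assumes Spark with the Delta Lake package installed.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze_ingest").getOrCreate()

raw = spark.read.format("json").load("/lake/landing/events/")   # bronze input

curated = (
    raw.dropDuplicates(["event_id"])                 # basic quality control
       .withColumn("ingested_at", F.current_timestamp())
       .filter(F.col("event_type").isNotNull())      # reject malformed rows
)

(curated.write
    .format("delta")
    .mode("append")
    .partitionBy("event_type")                       # partition for query pruning
    .save("/lake/silver/events/"))
```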
Governance, Quality & Compliance
- Embed data governance, metadata standards, lineage tracking, and quality controls directly into engineering workflows, ensuring consistent technical implementation and alignment.
- Work with the Data Governance Lead and business stakeholders to operationalize stewardship, classification, validation, retention, and access standards.
- Implement privacy-by-design and security-by-design principles, ensuring compliance with internal policies and regulatory obligations.
- Maintain documentation for pipelines, datasets, and transformations to support transparency and audit requirements.
Platform Reliability, Observability & Optimization
- Monitor and troubleshoot pipeline failures, performance bottlenecks, data anomalies, and platform-level issues.
- Implement observability tooling, alerts, logging, and dashboards to ensure end-to-end reliability.
- Support cost governance by optimizing compute resources, refining job schedules, and advising on efficient architecture.
- Collaborate with the Data Platform Lead on scaling, configuration management, CI/CD pipelines, and environment management.
- Collaborate with business units to understand data needs, translate them into engineering requirements, and deliver fit-for-purpose data solutions; share and apply best practices and emerging technologies within assigned initiatives.
- Work with IT Security and Legal/Compliance to ensure platform and datasets meet risk and regulatory standards.
Staff Management
- Lead, mentor, and provide management oversight for staff.
- Responsible for setting objectives, evaluating employee performance, and fostering a collaborative team environment.
- Responsible for developing staff knowledge and skills to support career development.
May include other responsibilities as assigned.
REQUIREMENTS:
- Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field preferred; equivalent work experience plus an HS diploma/equivalent education required otherwise.
- 5+ years of experience in data engineering within cloud environments
- Experience in people management preferred.
- Demonstrated hands-on experience with modern data platforms (Databricks preferred).
- Proficiency in Python, SQL, and data transformation frameworks.
- Experience designing and operationalizing ETL/ELT pipelines, orchestration workflows (Airflow, Databricks Workflows), and CI/CD processes.
- Solid understanding of data modeling, structured/unstructured data patterns, and schema design.
- Experience implementing governance and quality controls: metadata, lineage, validation, stewardship workflows.
- Working knowledge of cloud architecture, IAM, networking, and security best practices.
- Demonstrated ability to collaborate across technical and business teams.
- Exposure to AI/ML engineering concepts, feature stores, model monitoring, or MLOps patterns.
- Experience with infrastructure-as-code (Terraform, CloudFormation) or DevOps tooling.
The American Medical Association is located at 330 N. Wabash Avenue, Chicago, IL 60611 and is convenient to all public transportation in Chicago.
This role is an exempt position, and the salary range for this position is $115,523.42-$150,972.44. This is the lowest to highest salary we believe we would pay for this role at the time of this posting. An employee's pay within the salary range will be determined by a variety of factors including but not limited to business consideration and geographical location, as well as candidate qualifications, such as skills, education, and experience. Employees are also eligible to participate in an incentive plan. To learn more about the American Medical Association's benefits offerings, please click here.
We are an equal opportunity employer, committed to diversity in our workforce. All qualified applicants will receive consideration for employment. As an EOE/AA employer, the American Medical Association will not discriminate in its employment practices due to an applicant's race, color, religion, sex, age, national origin, sexual orientation, gender identity and veteran or disability status.
THE AMA IS COMMITTED TO IMPROVING THE HEALTH OF THE NATION
Remote working/work at home options are available for this role.
About Pinterest:
Millions of people around the world come to our platform to find creative ideas, dream about new possibilities and plan for memories that will last a lifetime. At Pinterest, we're on a mission to bring everyone the inspiration to create a life they love, and that starts with the people behind the product.
Discover a career where you ignite innovation for millions, transform passion into growth opportunities, celebrate each other's unique experiences and embrace the flexibility to do your best work. Creating a career you love? It's Possible.
At Pinterest, AI isn't just a feature, it's a powerful partner that augments our creativity and amplifies our impact, and we're looking for candidates who are excited to be a part of that. To get a complete picture of your experience and abilities, we'll explore your foundational skills and how you collaborate with AI.
Through our interview process, what matters most is that you can always explain your approach, showing us not just what you know, but how you think. You can read more about our AI interview philosophy and how we use AI in our recruiting process here.
About tvScientific
tvScientific is the first and only CTV advertising platform purpose-built for performance marketers. We leverage massive data and cutting-edge science to automate and optimize TV advertising to drive business outcomes. Our solution combines media buying, optimization, measurement, and attribution in one, efficient platform. Our platform is built by industry leaders with a long history in programmatic advertising, digital media, and ad verification who have now purpose-built a CTV performance platform advertisers can trust to grow their business.
We are seeking a Machine Learning Engineer to build out our simulation and AI capabilities. You'll design and implement systems that model the CTV advertising ecosystem - auction dynamics, bidding strategies, campaign outcomes, and counterfactual scenarios - and develop AI-driven tools that accelerate how we build, test, and deploy ML systems.
What you'll do:
- Design and build simulation environments that model CTV auction mechanics, inventory supply, and advertiser competition
- Develop counterfactual and what-if frameworks for evaluating bidding strategies, budget allocation, and pacing algorithms offline (a toy Monte Carlo example follows this list)
- Build AI agents that explore strategy spaces, generate hypotheses, and automate experimentation within simulated environments
- Use LLMs and generative AI to accelerate internal ML workflows - synthetic data generation, code generation, automated analysis, and rapid prototyping
- Use simulation to de-risk ML model deployments - validate new bidding and optimization strategies before they touch live traffic
- Define the technical direction for simulation and AI infrastructure and mentor engineers on the team
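To ground the what-if framing above, here is a toy Monte Carlo comparison of bid-shading strategies in a second-price auction. The value and competitor distributions are invented for illustration and are not a model of any real market.

```python
# Monte Carlo sketch of an offline "what-if" for a bid strategy in a
# second-price auction. Distributions are toy assumptions.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

value = rng.uniform(2.0, 10.0, N)          # advertiser's value per impression
competitor = rng.lognormal(1.0, 0.5, N)    # highest competing bid

def profit(bid_fraction: float) -> float:
    """Expected profit when bidding bid_fraction * value (second-price)."""
    bid = bid_fraction * value
    win = bid > competitor
    # winner pays the second price (the competing bid), earns value - price
    return float(np.mean(np.where(win, value - competitor, 0.0)))

for frac in (0.6, 0.8, 1.0, 1.2):
    print(f"bid {frac:.1f}x value -> expected profit {profit(frac):.3f}")
```

Under second-price rules, bidding near full value should come out best; that is exactly the kind of sanity check an offline simulator makes cheap before a strategy touches live traffic.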
What we're looking for:
- Strong production Python skills and experience building simulation or modeling systems
- Deep understanding of probabilistic modeling, stochastic processes, or agent-based simulation
- Hands-on experience with modern AI tools: LLMs, code generation, agentic workflows - and good judgment about when they help vs. when they don't
- Adtech experience: you understand auction theory, RTB mechanics, and the dynamics of programmatic advertising
- Ability to translate business questions ("what happens if we change our bid strategy?") into rigorous simulation frameworks
- Clear written communication: you'll be defining new technical directions and need to bring others along
- Ownership: you scope, design, and ship systems end-to-end with minimal direction
Nice-to-Haves:
- Causal inference - uplift modeling, synthetic controls, difference-in-differences, or incrementality testing
- Experience with discrete event simulation, Monte Carlo methods, or digital twins
- Reinforcement learning - using simulated environments for policy learning and evaluation
- Experience building agentic AI systems or multi-agent simulations
- Big data experience with Scala and Spark
- Systems programming experience in Zig or similar (C, C++, Rust)
- MLOps experience - model deployment, monitoring, and pipeline orchestration on AWS
In-Office Requirement Statement:
- We recognize that the ideal environment for work is situational and may differ across departments. What this looks like day-to-day can vary based on the needs of each organization or role.
Relocation Statement:
- This position is not eligible for relocation assistance. Visit our PinFlex page to learn more about our working model.
At Pinterest we believe the workplace should be equitable, inclusive, and inspiring for every employee. In an effort to provide greater transparency, we are sharing the base salary range for this position. The position is also eligible for equity. Final salary is based on a number of factors including location, travel, relevant prior experience, or particular skills and expertise.
Information regarding the culture at Pinterest and benefits available for this position can be found here.
US based applicants only: $123,696 - $254,667 USD
Our Commitment to Inclusion:
Pinterest is an equal opportunity employer and makes employment decisions on the basis of merit. We want to have the best qualified people in every job. All qualified applicants will receive consideration for employment without regard to race, color, ancestry, national origin, religion or religious creed, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, age, marital status, status as a protected veteran, physical or mental disability, medical condition, genetic information or characteristics (or those of a family member) or any other consideration made unlawful by applicable federal, state or local laws. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. If you require a medical or religious accommodation during the job application process, please complete this form for support.
Duration: 11 Months (Contract to hire)
Location: Columbia, SC
Onsite Requirements: Partially onsite 3 days per week (Tue, Wed, Thurs) and as needed.
Standard work hours: 8:00 AM - 5:00 PM
**Credit check will be required**
Job Summary:
Day to Day:
- A typical day will involve a mix of hands-on coding, architectural design, and research.
- The engineer will spend a significant portion of their time in Python, building and optimizing agentic AI systems using frameworks like LangChain (a stripped-down sketch of the underlying agent loop follows this list).
- This includes integrating these agents with our backend services and deploying them using CI/CD pipelines into our cloud environment.
- They will also be responsible for researching and testing new agentic models and frameworks, monitoring agent behavior in production, and collaborating with data scientists and business stakeholders to refine requirements and ensure the ethical deployment of AI solutions.
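For orientation, the sketch below strips the agent pattern down to its core loop: the model either requests a tool call or returns a final answer. `llm_call` and the claim-lookup tool are stand-ins; frameworks like LangChain or LangGraph manage this loop, its state, and tool schemas for you.

```python
# Framework-free sketch of the agent loop that LangChain/LangGraph manage.
# `llm_call` is a stand-in for a real model API (e.g., Claude or OpenAI
# tool use); the tool name and schema are hypothetical.

def lookup_claim_status(claim_id: str) -> str:
    return f"Claim {claim_id}: approved on 2024-01-15"   # stubbed backend call

TOOLS = {"lookup_claim_status": lookup_claim_status}

def llm_call(messages: list[dict]) -> dict:
    """Stand-in for the model: request a tool, or finish once a result is back."""
    if messages[-1]["role"] == "tool":                   # tool result came back
        return {"final": messages[-1]["content"]}
    return {"tool": "lookup_claim_status", "args": {"claim_id": "A-123"}}

def run_agent(user_query: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": user_query}]
    for _ in range(max_steps):
        decision = llm_call(messages)
        if "final" in decision:                          # model produced an answer
            return decision["final"]
        result = TOOLS[decision["tool"]](**decision["args"])  # execute the tool
        messages.append({"role": "tool", "content": result})
    return "step limit reached"

print(run_agent("What is the status of claim A-123?"))
```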
Team: The team is an innovative, collaborative, and empowering environment. We are building the next generation of AI solutions for the enterprise in a fast-paced, project-oriented setting. This is a multi-platformed environment that values creativity, continuous learning, and a customer-focused mindset. The new engineer will play a crucial role in shaping our AI strategy and building foundational tools and accelerators that will drive innovation across the company.
Job Requirements:
**This is a new role to establish a core competency in agentic AI systems. This engineer will be pivotal in designing and deploying advanced AI agents and will build the foundational frameworks for future AI use cases across the organization.**
Required Experience:
Required Software and Tools (Hands on experience required):
- Python
- JavaScript/TypeScript
- AI Tools and Libraries (e.g. LangGraph, LangChain, Deep Agents, Claude Skills, etc.)
- AI Models (e.g. Claude, OpenAI, etc.)
- AI Concepts (e.g. Prompt Engineering, RAG, Agentic AI, etc.)
- Distributed SDLC/DevOps (e.g. GitHub, pipelines, VS Code, testing frameworks, etc.)
- Platforms (Container Platforms, Cloud Platforms, Document Databases, AWS)
- API Design
Python & AI/ML Libraries:
- Deep hands-on experience in Python for AI/ML development.
- Generative AI Development: Proven experience developing Gen AI or AI/ML solutions, from use case conceptualization to production deployment.
- Infrastructure & DevOps: Strong understanding of cloud environments (AWS preferred), LLM hosting, CI/CD pipelines, Docker, and Kubernetes.
- Agentic AI Concepts: Knowledge of agentic/autonomous systems (e.g., reasoning, planning, tool use).
Minimum Required Education: Bachelor's degree in Computer Science, Information Technology, or another job-related field; or 4 years of relevant experience; or an Associate's degree plus 2 years of relevant experience.
Minimum Required Work Experience: 6 years of application development, systems testing, or other job-related experience.
Required Technologies: 3-6 years of hands-on experience in Artificial Intelligence, Machine Learning, or related fields.
Nice to have/Preferred skills:
- Proficiency in Python development and FastAPI/Flask frameworks, along with SQL.
- Familiarity with agentic AI frameworks and concepts such as LangChain, LangGraph, AutoGen, Model Context Protocol (MCP), Chain of Thought prompting, knowledge stores, and embeddings.
- Experience developing autonomous agents using cloud-based AI services.
- Experience with prompt engineering techniques and model fine-tuning.
- Strong understanding of reinforcement learning, planning algorithms, and multi-agent systems.
- Experience working across cloud platforms (AWS, Azure, GCP) and deploying AI solutions at scale.