Haystack GitHub Jobs: Hiring Now Jobs in USA
185 positions found — Page 16
Senior Data Scientist
🏢 Jobot
Salary not disclosed
Urgently hiring a Senior Data Scientist! This Jobot job is hosted by Kendall Kaing. Are you a fit? Easy Apply now by clicking the "Apply Now" button and sending us your resume.
Salary: $80,000 - $150,000 per year
A bit about us: Are you interested in working for the world's largest cannabis market, with a footprint that covers the entire breadth of the state of California? Are you someone who wants to be part of a fast-growing industry? At our client company, our mission is clear: to provide the ultimate one-stop-shop cannabis experience by offering exceptional customer service and diversified products.
We strive to build long-term customer loyalty.
We’re building a consumer-centric organization that is focused on sharing the transformational potential of cannabis with the world.
Our product line is one of the best-selling cannabis brands in the market today and has claimed the title of the best-selling vape brand across all BDSA-tracked markets and best-selling brand overall in the California market! We are rooted in California and have expanded our operations across the United States, with even more growth on the horizon! Additionally, we’re building distribution networks to bring our products to over 60 countries worldwide.
We recognize that our employees are at the center of our success, and we take pride in a corporate culture that emphasizes our core values: Influence, Inspire, Innovate, Win, & Grow! Our employees come from a wide range of retail backgrounds, each bringing their own unique skills and talents to the table as we work together to continue our incredible growth.
If you are interested in joining the journey of building a nationally recognized, leading brand, we want to hear from you!
Why join us?
Benefits & Compensation: Additional details about compensation and benefits eligibility for this role will be provided during the hiring process.
All employees are provided competitive compensation, paid training, and employee discounts on our products and services.
We offer a range of benefits packages based on employee eligibility, including: Paid Vacation Time, Paid Sick Leave, Paid Holidays, Parental Leave.
Health, Dental, and Vision Insurance.
Employee Assistance Program.
401k with generous employer match.
Life Insurance.
Job Details
Data Scientist
Why this role matters
You’ll architect, build, and operate the data infrastructure that powers analytics, machine learning, and operational reporting across the organization.
This is a hands-on role for a data scientist who thrives at the intersection of engineering, modeling, and production-grade systems.
What you’ll do
• Design and evolve our lakehouse architecture (Delta Lake on AWS/Azure), including storage layout, compute strategy (Databricks, EMR, Synapse), and performance SLAs
• Build robust batch and streaming data pipelines using Apache Spark (PySpark/Scala), Databricks Workflows, Azure Data Factory, and event-driven integrations (Kinesis, Event Hubs, Kafka)
• Develop and maintain analytical and ML-ready data models, semantic layers, and dimensional schemas for BI and production use
• Implement observability and reliability features, including custom Spark metrics, anomaly detection, logging, and automated data quality checks
• Optimize compute and storage performance through cluster tuning, caching, partitioning, and format selection, with measurable cost savings
• Enforce data governance policies, including RBAC, row-level security, cataloging, lineage tracking, and compliance with GDPR, HIPAA/FHIR, and CCPA/CDPA
• Collaborate with analytics, product, and platform teams to define SLAs, manage incident response, and guide data contract best practices
• Mentor junior engineers and lead design reviews, setting standards for scalable, maintainable data systems
What you bring
• 8+ years of experience in data science or data engineering
• Expert-level Python and SQL; working knowledge of Scala/Java for Spark
• 3–5+ years of experience with Apache Spark and Databricks (including Delta Live Tables and performance tuning)
• Strong cloud experience in AWS (S3, EMR, Glue, Lambda, Kinesis) and/or Azure (ADLS Gen2, ADF, Event Hubs, Synapse)
• Experience with orchestration tools (Airflow, ADF) and transformation frameworks (dbt)
• Familiarity with data warehouses such as Snowflake, Redshift, or Databricks SQL
• Proven track record of cost and performance optimization in cloud data environments
• Solid foundation in data modeling, CI/CD, Docker/Linux, and Git
• Ability to translate business needs into scalable data products and lead projects end-to-end
Nice to have
• Experience with custom Spark listeners, low-latency streaming, or event-driven architectures
• Exposure to healthcare data pipelines (FHIR) and DLT troubleshooting
• Dashboarding experience with Power BI, Tableau, or Looker
• Familiarity with governance frameworks and MLOps (feature stores, model monitoring)
Our stack
Databricks, Spark (PySpark/Scala), Delta Lake, Airflow, ADF, dbt, AWS (S3/EMR/Glue/Kinesis), Azure (ADLS/Synapse/Event Hubs), Terraform, GitHub Actions, Docker, CloudWatch, REST
Success in 6–12 months looks like
Reliable, observable pipelines with
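To illustrate the kind of automated data quality check this role calls for, here is a minimal, generic sketch (the `Rule` class and the example rules are hypothetical, not this team's actual tooling):

```python
# Minimal, generic sketch of a rule-based data-quality check.
# Rule names and example rules are illustrative only.
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Rule:
    name: str
    check: Callable[[dict], bool]  # returns True when a row passes

def validate_rows(rows: Iterable[dict], rules: list[Rule]) -> dict[str, int]:
    """Count failures per rule across a batch of rows."""
    failures = {rule.name: 0 for rule in rules}
    for row in rows:
        for rule in rules:
            if not rule.check(row):
                failures[rule.name] += 1
    return failures

rules = [
    Rule("amount_non_negative", lambda r: r.get("amount", 0) >= 0),
    Rule("sku_present", lambda r: bool(r.get("sku"))),
]

batch = [
    {"sku": "A1", "amount": 10.0},
    {"sku": "", "amount": -2.5},
]
report = validate_rows(batch, rules)
```

In practice a check like this would run as a pipeline step (e.g., after a Spark write) and feed a metric or alert rather than a plain dictionary.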
Sr Data Architect
🏢 V2Soft
Salary not disclosed
V2Soft is a global leader in IT services and business solutions, delivering innovative and cost-effective technology solutions worldwide since 1998.
We are headquartered in Bloomfield Hills, MI, and have 16 offices spread across six countries.
We partner with Fortune 500 companies to address complex business challenges.
Our services span AI, IT staffing, cloud computing, engineering, mobility, testing, and more.
Certified with CMMI Level 3 and ISO standards, V2Soft is committed to quality and security.
Beyond our work, we actively support local communities and non-profits, reflecting our core values.
Join us to be part of a dynamic and impactful global company! Please visit our website to learn more.
Must Have Skills:
• Prior experience in JavaScript frameworks
• Expert in designing/developing J2EE-based web applications
• Prior experience architecting J2EE and Core Java based applications
• Expert in design and development of REST services, including authorization and metering
• Expert in designing/developing Java backends for web applications
• Prior experience in JavaScript/UI frameworks
• Prior experience in the required build and integration tools: Gradle, Jenkins Pipeline, Docker, GitHub
• Deployment experience on the OpenShift platform (Kubernetes); exposure to Helm charts required
• Prior experience in relational databases (Oracle, Teradata, etc.)
Nice to Have Skills:
• Prior experience in designing/developing Java/J2EE-based web applications
• Prior experience in developing Python backends
• Prior experience in developing single-page web apps (SPA)
• Prior experience in NoSQL databases
V2Soft is an Equal Opportunity Employer (EOE).
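The "authorization and metering" expertise listed above can be sketched generically: check a credential, then enforce a per-client request quota. The tokens, quota, and function names below are illustrative, not V2Soft's actual services:

```python
# Generic sketch of REST-style authorization and metering (request quotas).
# Tokens, client ids, and the quota value are hypothetical.
VALID_TOKENS = {"token-abc": "client-1"}   # token -> client id
QUOTA_PER_CLIENT = 3                        # max requests per window
_request_counts: dict[str, int] = {}

def handle_request(auth_token: str) -> tuple[int, str]:
    """Return an (HTTP status, body) pair after auth and metering checks."""
    client = VALID_TOKENS.get(auth_token)
    if client is None:
        return 401, "unauthorized"          # authorization failed
    used = _request_counts.get(client, 0)
    if used >= QUOTA_PER_CLIENT:
        return 429, "quota exceeded"        # metering: over the limit
    _request_counts[client] = used + 1
    return 200, "ok"
```

A production service would back the counter with a shared store and reset it per billing window; the status codes (401 for failed auth, 429 for an exhausted quota) follow standard HTTP semantics.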
We welcome applicants from all backgrounds, including individuals with disabilities and veterans.
Visit our website to view all of our open opportunities and to learn more about our benefits.
Senior DevOps Lead
Salary not disclosed
Role: Sr DevOps Lead
Location: Cleveland, OH
Duration: Full Time
Interview: Online/Video
Job Description
Must Have Technical/Functional Skills
• Thorough knowledge of continuous integration, continuous delivery, continuous testing, and configuration management methodologies.
• Thorough knowledge of common DevOps technologies such as Jenkins and GitHub.
• Good experience with containerization: Docker, Kubernetes, etc.
• Programming experience with Java/Java Spring Boot is preferable.
• Adhere to technical standards and participate in standards evolution.
• Understand the importance of teamwork and coordinated activities.
• Demonstrate effective communications skills at all organizational levels.
• Thorough knowledge of analytical thinking concepts and techniques.
• Mentor and lead junior members in continuous integration and continuous delivery concepts.
• Participate in cross-functional design and troubleshooting activities at an organizational level.
• Learn, design, and code in at least two development languages and integrate multiple tools through their APIs, command line interfaces, etc.
• Design, code, and test systems with a long-term mindset of maintainability.
• Contribute to an atmosphere of cross-functional teamwork within the KeyBank Agile project life cycle, and act within an Agile environment working with user stories, iterative development, continuous integration, continuous delivery, continuous feedback, etc.
• Work with the technical organization at large to understand its CI/CD and release management needs and be able to transform that understanding into solution requirements.
Cloud Native Software Engineer - Automation
Salary not disclosed
Job Summary
Job Description
Job Title: Software Engineer – Cloud Native, Automation
Location: Northbrook, IL
Department: E-Commerce IT Team
Employment Type: Full-Time
About the Role
We are looking for a seasoned Software Engineer – Test Automation (QA) to join our E-commerce B2B team.
In this role, you'll drive the future of test engineering by contributing to core automation initiatives and enabling frameworks, tools, and platforms that accelerate quality across our teams.
This is not a traditional QA role.
We're looking for a full-stack automation expert who is passionate about tooling, platforms, and frameworks, capable of evaluating and implementing cutting-edge solutions—including AI-powered test platforms, low-code/no-code test tools, building PoCs, and influencing architecture-level decisions.
Our application stack includes Java/Spring Boot microservices on the backend and Next.js/React on the frontend, all deployed to Azure Kubernetes Service (AKS).
Key Responsibilities
Design, implement, and maintain reusable automation frameworks for backend (API/microservices) and frontend (UI) testing.
Research, evaluate, and introduce modern test automation tools (e.g., low-code/no-code, model-based testing, AI-powered test platforms).
Lead proofs-of-concept (PoCs) for new tools and frameworks to assess fit for the organization, enable and simplify shift-left practices within teams.
Contribute to shared quality tooling and pipelines to be used across teams (test orchestration, data setup, mocks/stubs, parallelization, etc.).
Collaborate with Digital Framework, platform, DevOps, and other engineering teams to shift quality left and define testability patterns.
Provide leadership in setting up automation standards, coding guidelines, and governance models for cross-team reuse.
Lead teams to maximize automation coverage for functionality, and be accountable for aligning QA practices with the goal of maximum quality.
Be accountable for overall QA deliverables, practices, and standards within teams, and for meeting functional, integration, regression, and performance/load (NFR) testing goals for all systems and projects the teams work on.
Actively mentor junior engineers and guide teams in adopting quality engineering best practices.
What You Bring: Skills and Expertise
8+ years of experience in Quality Engineering, with deep expertise in test automation.
Strong in all aspects of SDLC and Testing Lifecycle.
Strong programming skills in Java (Spring Boot stack) and JavaScript/TypeScript (React/Next.js).
Experience building automation frameworks using JUnit/TestNG, Cypress, Playwright, Selenium, etc.
Proficiency in evaluating and implementing tools for test case management, test execution, and reporting.
Demonstrated success with CI/CD-integrated testing using Azure DevOps, GitHub Actions, or equivalent.
Knowledge of Kubernetes, containers, and testing in cloud-native environments (preferably Azure).
Familiarity with contract testing, mocking services, and management strategies.
Strong background and experience working in Agile teams.
Preferred Skills
Experience with low-code/no-code automation platforms such as Testim, Tricentis Tosca, Mabl, or Katalon.
Exposure to AI/ML-based testing, self-healing tests, or model-based test generation.
Understanding of platform thinking and working in centralized enablement teams.
Hands-on experience with observability and monitoring tools for test impact analysis.
Educational Requirements
Bachelor’s Degree in Computer Science, Engineering, or a related field is required.
Why Join Medline?
Be a key contributor to the transformation of Medline’s e-commerce platform.
Access opportunities for growth, training, and certifications.
Competitive salary, comprehensive benefits, and flexible work arrangements.
Collaborate with a dynamic and innovative team in a supportive culture.
Medline Industries, LP, and its subsidiaries, offer a competitive total rewards package, continuing education & training, and tremendous potential with a growing worldwide organization.
The anticipated salary range for this position: $115,440.00 - $173,160.00 annual.
The actual salary will vary based on applicant’s location, education, experience, skills, and abilities.
This role is bonus and/or incentive eligible.
Medline will not pay less than the applicable minimum wage or salary threshold.
Our benefit package includes health insurance, life and disability, 401(k) contributions, paid time off, etc., for employees working 30 or more hours per week on average.
For a more comprehensive list of our benefits, please click here.
For roles where employees work less than 30 hours per week, benefits include 401(k) contributions as well as access to the Employee Assistance Program, Employee Resource Groups and the Employee Service Corp.
We’re dedicated to creating a Medline where everyone feels they belong and can grow their career.
We strive to do this by seeking diversity in all forms, acting inclusively, and ensuring that people have tools and resources to perform at their best.
Explore our Belonging page here.
Medline Industries, LP is an equal opportunity employer.
Medline evaluates qualified individuals without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, age, disability, neurodivergence, protected veteran status, marital or family status, caregiver responsibilities, genetic information, or any other characteristic protected by applicable federal, state, or local laws.
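The mocks/stubs and reusable-framework responsibilities above can be sketched with a small example: a test helper exercising an in-memory stub of a backend service. The stub service, endpoint, and item data are hypothetical, for illustration only:

```python
# Generic sketch of a reusable API-test helper with a stubbed microservice.
# StubCatalogService, its route, and the item data are hypothetical.
class StubCatalogService:
    """In-memory stand-in for a backend microservice under test."""
    def __init__(self):
        self._items = {"sku-1": {"name": "Gauze", "price": 4.99}}

    def get(self, path: str) -> tuple[int, dict]:
        sku = path.rsplit("/", 1)[-1]
        if sku in self._items:
            return 200, self._items[sku]
        return 404, {"error": "not found"}

def assert_ok(service, path: str) -> dict:
    """Reusable check: the request succeeds and returns a body."""
    status, body = service.get(path)
    assert status == 200, f"expected 200, got {status} for {path}"
    return body

svc = StubCatalogService()
item = assert_ok(svc, "/catalog/items/sku-1")
```

The same pattern underlies contract testing: the stub encodes the agreed response shape, so frontend and backend suites can run against it in parallel without a live environment.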
Solutions Architect - Managed Services (Cloud Data Platforms l Snowflake, AWS, Azure)
🏢 Jobot
Salary not disclosed
Remote first / cloud-native data platforms. This Jobot job is hosted by Katrina McFillin. Are you a fit? Easy Apply now by clicking the "Apply Now" button and sending us your resume.
Salary: $165,000 - $190,000 per year
A bit about us: Founded over a decade ago, we are a leading AI and data services partner helping global brands design, build, and run modern data and AI platforms on Snowflake and major cloud providers.
We specialize in data engineering, analytics, and managed services, combining deep technical expertise with a global delivery model to maximize business value from data.
Why join us?
• Competitive base compensation + bonus
• Remote first
• Comprehensive benefits
• 4 weeks PTO + 10 holidays
• Accelerated learning and professional development with advanced training and certifications (e.g., Snowflake, cloud)
High-Impact Work: Join a specialized Elastic Operations team running mission-critical cloud data platforms for leading enterprises.
Autonomy & Collaboration: Work in a culture that prizes autonomy, creativity, transparency, and cross-functional collaboration.
Job Details
Key Responsibilities and Duties:
Lead the design, architecture, and implementation of large-scale cloud-native data platform solutions on Snowflake, AWS, and Azure.
Drive data migrations, integrations, and performance tuning across data warehouses, data lakes, and distributed systems to optimize reliability and cost.
Own platform security, data governance, and process engineering to ensure robust, scalable, and continuously improving data environments.
Manage multiple client engagements as a trusted advisor, collaborating with cross-functional teams and mentoring junior engineers to maximize platform ROI and customer success.
Qualifications:
Bachelor’s degree in Computer Science, Engineering, or a related field (advanced degrees or relevant certifications are a plus).
At least 10 years of hands-on experience architecting, designing, implementing, and managing cloud-native data platforms (Snowflake, Redshift, Azure Data Warehouse) on AWS and/or Azure.
Proven client-facing consulting experience, including presenting to executive stakeholders and creating detailed solution documentation.
Strong technical depth in SQL, infrastructure-as-code (Terraform or CloudFormation), CI/CD (e.g., GitHub, Bitbucket), and modern data integration tools (e.g., AWS DMS, Azure Data Factory, Matillion, Fivetran, Spark).
Interested in hearing more? Easy Apply now by clicking the "Apply Now" button.
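The data migration and integration work described above commonly relies on watermark-based incremental extraction, the pattern the listed integration tools automate. A minimal sketch, with hypothetical table rows and column names:

```python
# Generic sketch of watermark-based incremental extraction, the pattern
# behind many data-integration tools. Rows and column names are illustrative.
def extract_incremental(rows: list[dict], last_watermark: str) -> tuple[list[dict], str]:
    """Return rows newer than the watermark, plus the advanced watermark."""
    new_rows = [r for r in rows if r["updated_at"] > last_watermark]
    if new_rows:
        last_watermark = max(r["updated_at"] for r in new_rows)
    return new_rows, last_watermark

source = [
    {"id": 1, "updated_at": "2024-01-01"},
    {"id": 2, "updated_at": "2024-01-05"},
    {"id": 3, "updated_at": "2024-01-09"},
]
batch, wm = extract_incremental(source, "2024-01-03")
```

Persisting the returned watermark between runs makes each extraction idempotent and keeps load volumes proportional to change, which is central to the cost and reliability tuning this role owns.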
Jobot is an Equal Opportunity Employer.
We provide an inclusive work environment that celebrates diversity and all qualified candidates receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity, religion, national origin, age (40 and over), disability, military status, genetic information or any other basis protected by applicable federal, state, or local laws.
Jobot also prohibits harassment of applicants or employees based on any of these protected categories.
It is Jobot’s policy to comply with all applicable federal, state and local laws respecting consideration of unemployment status in making hiring decisions.
Sometimes Jobot is required to perform background checks with your authorization.
Jobot will consider qualified candidates with criminal histories in a manner consistent with any applicable federal, state, or local law regarding criminal backgrounds, including but not limited to the Los Angeles Fair Chance Initiative for Hiring and the San Francisco Fair Chance Ordinance.
Information collected and processed as part of your Jobot candidate profile, and any job applications, resumes, or other information you choose to submit is subject to Jobot's Privacy Policy, as well as the Jobot California Worker Privacy Notice and Jobot Notice Regarding Automated Employment Decision Tools which are available at /legal.
By applying for this job, you agree to receive calls, AI-generated calls, text messages, or emails from Jobot, and/or its agents and contracted partners.
Frequency varies for text messages.
Message and data rates may apply.
Carriers are not liable for delayed or undelivered messages.
You can reply STOP to cancel and HELP for help.
You can access our privacy policy here: /privacy-policy
DevOps Manager - Data and Cloud Engineering
🏢 Jobot
Salary not disclosed
DevOps Manager – Lead Databricks, AWS & Azure Cloud Strategy for a High-Growth, Data-Driven Organization (Hybrid, TX). This Jobot Job is hosted by: Charles Simmons. Are you a fit? Easy Apply now by clicking the "Apply Now" button and sending us your resume.
Salary: $130,000 - $140,000 per year
A bit about us: Our client is a fast-growing technology company advancing the way data, cloud, and AI infrastructure power enterprise operations.
With over a decade of innovation and a strong culture rooted in servant leadership, collaboration, and performance, the company is scaling its data and DevOps capabilities to support next-generation analytics and machine learning platforms.
They are seeking a DevOps Manager (Data & Cloud Engineering) to lead the team responsible for modernizing and expanding their cloud and data infrastructure across AWS and Azure.
This is a strategic, hands-on leadership role focused on Databricks, CI/CD automation, infrastructure-as-code, and AI-driven data modeling.
You’ll lead a cross-functional group of DevOps engineers, data engineers, and cloud architects — driving reliability, scalability, and performance across a high-impact environment.
Why join us?
Lead and scale a multidisciplinary team integrating DevOps, Data Engineering, and Machine Learning Operations at enterprise scale.
Drive the evolution of a robust Databricks ecosystem, enabling advanced analytics, AI pipelines, and real-time data workflows.
Architect and automate cloud infrastructure across AWS and Azure to ensure performance, security, and cost efficiency.
Join a culture that prioritizes servant leadership, collaboration, and innovation in a high-growth environment.
Competitive compensation package including base salary, annual bonus, and equity, plus comprehensive family benefits and career advancement opportunities.
Job Details
Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
7+ years of experience in DevOps, Cloud Infrastructure, or Data Engineering roles, with 3+ years in management or team leadership.
Deep hands-on experience with Databricks for large-scale data and ML workflows, including model orchestration, pipeline management, and optimization.
Experience fine-tuning or deploying Large Language Models (LLMs) using Databricks or similar frameworks.
Proven expertise in AWS and Azure cloud environments.
Strong proficiency in CI/CD tools (Jenkins, GitHub Actions, GitLab CI, or Azure DevOps).
Expertise with Infrastructure-as-Code (Terraform, CloudFormation, or ARM templates).
Skilled in Python, Bash, or PowerShell for automation and deployment scripting.
Familiarity with Docker, Kubernetes, and container orchestration in production environments.
Experience managing data-centric environments supporting data lakes, machine learning, and real-time analytics.
Strong understanding of security, compliance, and governance frameworks (SOC 2, HIPAA, GDPR).
Preferred: Certifications such as AWS Certified DevOps Engineer or Azure DevOps Engineer Expert.
Experience with MLOps, streaming data, and cloud cost optimization.
Background in Agile/Scrum methodologies with tools like Jira or Asana.
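Automation and deployment scripting of the kind listed above often reduces to one pattern: retry an operation with exponential backoff until it succeeds or the attempt budget runs out (for example, polling a health check after a rollout). A generic sketch; the attempt count and delays are arbitrary illustrative choices:

```python
import time

def retry_with_backoff(op, attempts=5, base_delay=0.01):
    """Call op() until it succeeds or attempts run out, doubling the delay each time."""
    delay = base_delay
    for attempt in range(1, attempts + 1):
        try:
            return op()
        except Exception:
            if attempt == attempts:
                raise            # out of budget: surface the last failure
            time.sleep(delay)    # a real script would also log each failure
            delay *= 2

# Simulated health check that fails twice before the service comes up
calls = {"n": 0}
def flaky_health_check():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("service not ready")
    return "healthy"

print(retry_with_backoff(flaky_health_check))  # succeeds on the third try
```

The same wrapper works for CI/CD steps, Terraform-driven provisioning waits, or Databricks job polling; only `op` changes.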
Interested in hearing more? Easy Apply now by clicking the "Apply Now" button.
DevOps Engineer - Observability
🏢 Uline
Salary not disclosed
DevOps Engineer - Observability
Corporate Headquarters: 12575 Uline Drive, Pleasant Prairie, WI 53158
At Uline, we count on reliable, resilient systems to keep up with our growth.
As a DevOps Engineer specialized in Observability, you’ll help drive that reliability by implementing modern monitoring, automation and orchestration practices that keep our systems performing at their best.
Careers Packed with Potential.
Backed by 45+ years of success, Uline offers opportunities to grow your career with stability you can count on.
Position Responsibilities
Work with a modern observability stack (Elasticsearch, Prometheus/Thanos, OpenTelemetry, Grafana, etc.) and help shape what comes next.
Build and automate infrastructure provisioning using Terraform, Kubernetes, GitHub Actions, ArgoCD, or similar tools.
Drive end-to-end visibility across distributed systems with real-time metrics, logs, traces, and SLO-based alerting.
Influence architecture decisions and help define best practices for performance, reliability, and incident response.
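The SLO-based alerting mentioned in the responsibilities above is commonly implemented as a multi-window burn-rate check: page only when the error budget is being consumed much faster than the SLO allows, in both a short and a long window. A minimal sketch; the window pairing and the 14.4x threshold are common illustrative defaults, not Uline's actual configuration:

```python
# Toy multi-window burn-rate check for an availability SLO.
# burn rate = observed error rate / error rate the SLO budget allows.

def burn_rate(errors: int, total: int, slo_target: float) -> float:
    """How fast the error budget is being consumed in one window."""
    if total == 0:
        return 0.0
    error_rate = errors / total
    budget = 1.0 - slo_target      # e.g. 0.001 for a 99.9% SLO
    return error_rate / budget

def should_page(short_window, long_window, slo_target=0.999, threshold=14.4):
    """Page only when BOTH windows burn fast, to suppress brief blips."""
    return (burn_rate(*short_window, slo_target) >= threshold
            and burn_rate(*long_window, slo_target) >= threshold)

# 2% errors against a 99.9% SLO burns the budget ~20x faster than allowed:
print(burn_rate(20, 1000, 0.999))                 # ~20.0
print(should_page((20, 1000), (150, 10000)))      # sustained burn -> page
```

In practice the window counts would come from Prometheus range queries, and several threshold/window pairs are layered for paging versus ticketing severity.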
Minimum Requirements
Bachelor's degree in information technology, computer science, or a related field.
3+ years of hands-on experience with Elasticsearch or HashiCorp Vault, with expertise in installation, configuration, support, log analysis, and performance tuning.
3+ years of experience with container orchestration tools such as Docker or Kubernetes, and with IaC automation tools.
Experience with AI / ML in operations (AIOps) a plus.
Uline does not participate in the H-1B lottery.
Benefits
Complete health insurance coverage and 401(k) with a 6% employer match that starts day one! Multiple bonus programs.
Paid holidays and generous paid time off.
Tuition Assistance Program that covers professional continuing education.
Employee Perks
On-site café and first-class fitness center with complimentary personal trainers.
Over four miles of beautifully maintained walking trails.
About Uline
Uline, a family-owned company, is North America’s leading distributor of shipping, industrial, and packaging materials with over 9,800 employees across 14 locations.
Uline is a drug-free workplace.
All new hires must complete a pre-employment hair follicle drug screening.
All positions are on-site.
EEO/AA Employer/Vet/Disabled. Our employees make the difference and we are committed to offering exceptional benefits and perks! Explore Uline.jobs to learn more!
Senior Data Engineer
🏢 Jobot
Salary not disclosed
Join a World Class Team Operating at the Intersection of Technology, Data, and Digital Products. This Jobot Job is hosted by: Amanda Preston. Are you a fit? Easy Apply now by clicking the "Apply Now" button and sending us your resume.
Salary: $155,000 - $175,000 per year
A bit about us:
Competitive salary and comprehensive benefits
Long-term stability with continued investment in technology and engineering
High-visibility work with real-world impact
A collaborative, engineering-driven culture focused on quality and continuous improvement
Why join us?
Work on data and machine learning platforms operating at significant scale
Own and influence core systems that power critical business capabilities
Collaborate with experienced engineers and data scientists in a highly technical environment
Tackle complex engineering challenges with modern cloud and MLOps tooling
Enjoy the stability of a mature organization combined with the opportunity to modernize and innovate
Job Details
Senior Data Engineer – Role Summary: Join a high-performing engineering team at a large, global organization operating at the intersection of technology, data, and digital products.
We are seeking a highly motivated Senior Data Engineer to lead the architecture, deployment, and operation of next-generation, data-driven platforms.
In this role, you will bridge the gap between Data Science and Production Engineering, ensuring that datasets, machine learning models, and core services are deployed reliably, scalably, and securely in the cloud.
This is a high-impact position requiring deep expertise in data architecture, backend engineering, and the full machine learning lifecycle in production environments.
Key Responsibilities
Data Pipeline Design & Orchestration: Design, build, and maintain robust data ingestion and transformation pipelines. Leverage modern orchestration tools to ensure reliable, observable data flows supporting machine learning workloads.
Core Development: Write clean, efficient, and well-tested Python code for automation, infrastructure tooling, and service integration. Develop shared libraries and glue services connecting cloud-native components.
API & Service Deployment: Design, develop, and deploy high-performance Python APIs (FastAPI / Flask) to serve machine learning predictions and core application logic.
MLOps Pipeline Ownership: Own end-to-end pipelines for continuous training, deployment, versioning, and monitoring of production ML models (e.g., recommendation or personalization systems).
Infrastructure Management: Architect and maintain scalable, fault-tolerant infrastructure using Kubernetes (GKE) within Google Cloud Platform. Ensure reliability, performance, and cost efficiency across environments.
Collaboration & Mentorship: Partner closely with data scientists, software engineers, and platform teams. Provide technical leadership and mentorship to junior engineers.
Qualifications
Must-Have (Engineering Excellence):
5+ years of professional experience in Data Engineering, Software Engineering, or Cloud Engineering.
Deep expertise in Python for application development, data processing, and automation.
Proven experience building and deploying production-grade backend services and APIs (FastAPI, Flask, or Django).
Strong SQL skills with experience designing and optimizing schemas for relational and analytical data stores (e.g., BigQuery, Cloud SQL).
Hands-on experience with data orchestration tools such as Dagster or Airflow.
Extensive experience designing and operating services within Google Cloud Platform (BigQuery, Pub/Sub, Vertex AI, Compute Engine).
Expert-level knowledge of Docker and Kubernetes, including Helm-based deployments.
Nice-to-Have (DevOps & MLOps):
Experience with Infrastructure as Code tools such as Terraform or Crossplane.
CI/CD experience using GitHub Actions or similar tooling.
Familiarity with observability stacks (Prometheus, Grafana, Cloud Logging).
Understanding of cloud security principles and enterprise compliance requirements.
Direct experience supporting production MLOps workflows (model monitoring, drift detection, automated retraining).
Interested in hearing more? Easy Apply now by clicking the "Apply Now" button.
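At their core, the orchestration tools named in this posting (Dagster, Airflow) execute a DAG of tasks in dependency order. This stdlib-only toy illustrates that idea; it is not Dagster or Airflow code, and real orchestrators add scheduling, retries, and persisted run state:

```python
from graphlib import TopologicalSorter  # Python 3.9+

def run_pipeline(tasks: dict, deps: dict) -> list:
    """Run task callables in dependency order; returns the execution order."""
    # deps maps task name -> set of tasks that must run first
    order = list(TopologicalSorter(deps).static_order())
    for name in order:
        tasks[name]()   # a real orchestrator would add retries, logging, state
    return order

log = []
tasks = {
    "extract":   lambda: log.append("extract"),
    "transform": lambda: log.append("transform"),
    "load":      lambda: log.append("load"),
}
deps = {"transform": {"extract"}, "load": {"transform"}}
print(run_pipeline(tasks, deps))  # extract, then transform, then load
```

`graphlib` also raises `CycleError` on circular dependencies, which is exactly the validation Airflow performs when it parses a DAG file.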
Sr Developer - Integration (SAP IS)
🏢 Medline Industries - Transportation & Operations
Salary not disclosed
Job Summary
Worksite: Northbrook, IL | Hybrid Schedule: Onsite Tues-Thurs, Remote Mon & Fri
We are seeking a seasoned Senior SAP Integration Developer with deep expertise in SAP Cloud Platform Integration (CPI), SAP Integration Suite, SAP Business Technology Platform (BTP), and Advanced Event Mesh.
In this role, you’ll design, develop, and maintain scalable integration solutions that bridge cloud and on-premise applications across global enterprise environments.
You will also be instrumental in mentoring junior developers and driving best practices for integration design, security, and performance.
Job Description
MAJOR RESPONSIBILITIES
Design, develop, and deploy complex integration flows using SAP BTP Integration Suite, CPI, and Advanced Event Mesh.
Build and manage synchronous and asynchronous interfaces between cloud and on-premise systems.
Configure and administer SAP BTP environments including connectivity (Cloud Connector), security roles, and API provisioning.
Leverage SAP Integration Suite components such as API Management, Open Connectors, CPI-DS, and SAP Event Mesh.
Configure and manage Solace messaging infrastructure – topics, queues, durable/non-durable subscriptions.
Migrate existing integrations from legacy platforms (e.g., SAP PI) to SAP Integration Suite.
Work on API-first integration designs, building secure, scalable APIs using REST/SOAP standards.
Monitor and troubleshoot integration issues using SAP Solution Manager, CPI Monitoring Dashboards, and SAP BTP Admin tools.
Collaborate with cross-functional teams to gather business requirements and translate them into technical designs.
Guide junior developers and lead technical knowledge transfer sessions.
Optimize existing integrations for performance, maintainability, and error handling.
Participate in Agile/Scrum ceremonies and lead initiatives for continuous improvement.
Interface with tools such as Elasticsearch and Splunk for monitoring and logging integration behavior.
Support production instances and on-call issue resolution. MINIMUM JOB REQUIREMENTS Education: Bachelor’s degree in computer science, IT, or a related discipline.
Work Experience: 4+ years of hands-on software development experience in SAP BTP, SAP Integration Suite, and SAP CPI.
Strong background in ABAP development (especially with IDOCs, BAPI, RFC, ALE).
Extensive hands-on experience with REST/SOAP web services, XML, JSON, XSLT, and mapping/transformation logic.
Deep understanding of SAP PI/PO architecture and migration best practices.
Hands-on experience with Advanced Event Mesh.
Proficiency in at least one of the following: Java, Groovy, Python.
Experience with SAP Fiori and S/4HANA integration scenarios.
Proficiency in OAuth, SAML, SSL, and other authentication/authorization protocols.
Familiarity with SQL and working knowledge of SAP HANA, Oracle, or other relational databases.
Preferred Qualifications 8+ years of experience in SAP integration development.
SAP certifications in Integration Suite, BTP, or Cloud Platform Integration.
Experience with DevOps tools like Git, Jenkins, Docker, Kubernetes, CI/CD pipelines.
Knowledge of cloud infrastructure platforms (AWS, Azure, GCP).
Experience with event-driven architecture and microservices.
Familiarity with Agile development practices and project management tools (JIRA, Confluence).
Prior exposure to SAP Open Connectors, Graph API, and CAPM (Cloud Application Programming Model).
TECHNICAL SKILLS
Integration Tools: SAP CPI, SAP PI/PO, SAP BTP, SAP API Management, SAP Event Mesh, Cloud Connector
Languages: Java, ABAP, Groovy, Python, JavaScript, XML/XSLT
Protocols: REST, SOAP, IDoc, OData, RFC, BAPI, SFTP, HTTP
Platforms: SAP S/4HANA, SAP ECC, SAP Fiori, SAP HANA
Tools: Eclipse, NetWeaver Developer Studio, Postman, SoapUI, GitHub
Cloud & DevOps: Docker, Kubernetes, Jenkins, Git, CI/CD pipelines
SOFT SKILLS
Excellent verbal and written communication skills.
Strong problem-solving and analytical mindset.
Ability to mentor, coach, and lead junior team members.
Comfortable collaborating across global, cross-functional teams.
Highly organized and detail-oriented.
Passion for innovation and continuous learning.
Medline Industries, LP, and its subsidiaries, offer a competitive total rewards package, continuing education & training, and tremendous potential with a growing worldwide organization.
The anticipated salary range for this position: $101,000.00 - $152,000.00 annually. The actual salary will vary based on the applicant’s location, education, experience, skills, and abilities.
This role is bonus and/or incentive eligible.
Medline will not pay less than the applicable minimum wage or salary threshold.
Our benefit package includes health insurance, life and disability, 401(k) contributions, paid time off, etc., for employees working 30 or more hours per week on average.
For a more comprehensive list of our benefits, please click here.
For roles where employees work less than 30 hours per week, benefits include 401(k) contributions as well as access to the Employee Assistance Program, Employee Resource Groups and the Employee Service Corp.
We’re dedicated to creating a Medline where everyone feels they belong and can grow their career.
We strive to do this by seeking diversity in all forms, acting inclusively, and ensuring that people have tools and resources to perform at their best.
Explore our Belonging page here.
Medline Industries, LP is an equal opportunity employer.
Medline evaluates qualified individuals without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, age, disability, neurodivergence, protected veteran status, marital or family status, caregiver responsibilities, genetic information, or any other characteristic protected by applicable federal, state, or local laws.
Not Specified
J
Startup-Full Stack AI Engineer-Build AI Glasses and More!
🏢 Jobot
Salary not disclosed
NEW Full Stack AI Engineer Opportunity! This Jobot Job is hosted by: Audrey Block Are you a fit? Easy Apply now by clicking the "Apply Now" button and sending us your resume.
Salary: $100,000 - $150,000 per year. A bit about us: We are a startup based out of Los Angeles with a focus on AI for attorneys.
This is an exciting product that will change the way attorneys practice.
Be a part of revolutionary change in AI today! Why join us? HUGE opportunity for growth: base salary + equity, team events, lunch, gym membership, Fun Fridays, and a flexible schedule. Job Details: We are on the hunt for a passionate, innovative Full Stack AI Engineer to join our startup.
This is a permanent, full-time position where you will be given the opportunity to shape the future of our company by working on cutting-edge technology projects.
You will play a crucial role in developing, building, and maintaining AI-driven products and services.
Responsibilities: As a Full Stack AI Engineer, you will be tasked with the following responsibilities:
1. Design and build robust, scalable AI-driven products and services.
2. Collaborate closely with cross-functional teams to understand their needs and translate them into functional and efficient software solutions.
3. Develop high-quality code for the front-end and back-end components of our applications, with a focus on the back-end.
4. Contribute to the entire product lifecycle from concept to deployment.
5. Maintain and improve existing codebases and peer-review code changes.
6. Leverage existing tools and libraries, suggesting new ones that can improve development efficiency.
7. Document development phases, monitor systems, and ensure the alignment of application components with design specifications.
8. Participate in brainstorming sessions and contribute innovative and original ideas to our technology, algorithms, and product.
9. Continuously learn about new programming languages, tools, and technologies that can help increase the performance and functionality of our applications.
Qualifications: To be successful in this role, you should have the following qualifications:
1. A minimum of 1 year of experience as a Full Stack AI Engineer, or in a similar role.
2. Proven experience in building and maintaining AI-driven products and services.
3. Strong knowledge of AI, machine learning algorithms, and software development.
4. Proficiency in Rust and Python.
5. Experience with front-end and back-end development.
6. Familiarity with GitHub and other version control systems.
7. A solid understanding of hardware and product design principles.
8. Excellent problem-solving skills, with a knack for complex challenges.
9. Strong communication skills, with the ability to articulate complex technical concepts clearly and concisely.
10. A degree in Computer Science, Engineering, or a related field is a plus.
This is an exciting opportunity for a Full Stack AI Engineer to join a vibrant team and make a significant impact on our products and services.
If you are passionate about AI, eager to solve challenging problems, and want to work in a dynamic, fast-paced environment, we would love to hear from you.
Interested in hearing more? Easy Apply now by clicking the "Apply Now" button.
Jobot is an Equal Opportunity Employer.
We provide an inclusive work environment that celebrates diversity and all qualified candidates receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity, religion, national origin, age (40 and over), disability, military status, genetic information or any other basis protected by applicable federal, state, or local laws.
Jobot also prohibits harassment of applicants or employees based on any of these protected categories.
It is Jobot’s policy to comply with all applicable federal, state and local laws respecting consideration of unemployment status in making hiring decisions.
Sometimes Jobot is required to perform background checks with your authorization.
Jobot will consider qualified candidates with criminal histories in a manner consistent with any applicable federal, state, or local law regarding criminal backgrounds, including but not limited to the Los Angeles Fair Chance Initiative for Hiring and the San Francisco Fair Chance Ordinance.
permanent