Logicflow GitHub Jobs in USA
177 positions found — Page 15
Job Description:
This position requires office presence of a minimum of 5 days per week and is only located in the location(s) posted. No relocation is offered.
AT&T will not hire any applicants for this position who require employer sponsorship now or in the future.
Join AT&T and reimagine the communications and technologies that connect the world. The Chief Information Office is responsible for advancing information technology performance and delivering solutions with a focus on maximizing ROI, increasing efficiency, and enhancing the experience of end users. Guided by experienced leaders, Corporate Systems seamlessly integrates with advanced Technology and Operations to drive our enterprise forward. Our Systems Reliability and Software Delivery teams are unwavering in their commitment to excellence, ensuring every solution is robust and efficient. When you step into a career with AT&T, you won’t just imagine the future; you’ll create it.
What you’ll do:
The Release Manager oversees planning, scheduling, and deployment of software and infrastructure releases across the enterprise, ensuring seamless integration, minimal disruption, and alignment with organizational objectives. This role champions Agile, DevOps, CI/CD, and AI-enabled strategies to optimize release processes and enhance reliability. The Release Manager builds, owns, and manages a single calendar-of-record for all enterprise releases and changes across Applications, Infrastructure, Load Balancers, LAN/WAN, Mobility Core, DNS, HVA/HVD, Security, Data Center/Cloud, and Network Operations.
Key Responsibilities:
- Release Planning & Coordination: Develop and execute comprehensive release plans for software and infrastructure deployments. Align schedules across dependent systems and integrate Agile and DevOps principles into release trains.
- Risk & Issue Management: Use predictive analytics and AI-driven insights to identify, track, and mitigate risks. Proactively manage release-level jeopardies and determine optimal release paths.
- Enterprise-wide Release Calendar Management: Ensure proactive visibility, collision avoidance, and post-change learning across IT, Infrastructure, Network, and Operations to prevent service conflicts and outages.
- Deployment & Monitoring: Oversee end-to-end deployment activities using CI/CD pipelines and automation tools. Ensure rapid issue resolution to minimize downtime.
- Quality & Compliance: Maintain audit-ready, compliant release processes (e.g., SOX). Leverage metrics and analytics for continuous improvement.
- Stakeholder Communication: Provide transparent updates on release status, dependencies, and impacts. Segment communications for stakeholders and manage approvals and CAB agendas.
- AI-Enabled Release Management Strategy: Transition from fragmented, app-driven release practices to a mature enterprise model by embedding AI-powered capabilities such as predictive risk analysis, automated change validation, and intelligent scheduling across RM/CM solution areas.
- Mentoring: Drive a culture of continuous improvement.
What you’ll need:
- Expert-level knowledge of the SDLC for SAFe Agile and DevOps environments; best-in-class Release and Change Management frameworks and IT Service Management. Hands-on experience with Jira Align, Jira Cloud, JSM, GitHub, and ServiceNow. Strong understanding of the release/change lifecycle and outage root-cause analysis.
- Data and AI Skillset: Advanced data analytics, KPI metrics, and prompt-engineering expertise; guiding development of agentic AI workflows and Gen AI use cases; Power BI; Python
- Governance and Communication: Establishing process frameworks, implementing solutions and tools, building standardized playbooks, and leading governance boards for ATS-wide implementation
What you’ll bring:
Required
- 7+ years in release management, software engineering, or related disciplines.
- ServiceNow certification is required (CAD, CSA, CIS).
- Strong experience in Agile, DevOps, CI/CD; certifications preferred.
- Familiarity with AI-driven analytics and automation; Python; PowerBI.
- Hands-on experience with Jira Align, Jira Cloud, JSM, GitHub, ServiceNow.
- Strong understanding of the release/change lifecycle and outage root-cause analysis
- Experience with Agile SAFe, Waterfall, and hybrid delivery models.
- Modern Enterprise Release Management and ITSM
- Advanced expertise in Excel, PowerPoint, PowerBI
Preferred
- BS/BA in Computer Science or related field.
- Modern Release Management processes for Agile and DevOps environments
- Jira Align, JSM, Jira Cloud, Git for enterprise RM/CM
- ServiceNow for ITSM and RM/CM automation
- Modern Enterprise Release Management and ITSM
Role: Oracle ERP Test Manager
Skills: Oracle Cloud ERP Finance (GL, AP, AR, FA, CM, COA, P2P, O2C) and SCM (Inventory, Procurement, Order Management, Manufacturing) Data Migration, Releases
Experience: 12+ Years
Location: Houston, TX
We are seeking an Oracle ERP Test Manager to lead end-to-end testing for Oracle Cloud ERP programs (Finance and/or SCM), acting as the primary client-facing and vendor-facing test leader. The role is responsible for shaping the test strategy, governing SIT/UAT/OAT, orchestrating multi-vendor delivery, ensuring traceability and compliance (e.g., SOX where applicable), and delivering high-quality releases through risk-based testing and measurable KPIs. This is an onsite leadership role working closely with client stakeholders and coordinating offshore QE teams.
Key Responsibilities
Test Leadership & Governance
- Own the Test Strategy, Test Plan, and Quality Governance across ERP tracks (Finance, SCM, Integrations, Reporting).
- Establish and run test governance forums: daily stand-ups, defect triage, go/no-go gates, release readiness meetings.
- Define entry/exit criteria, risk-based test scope, and traceability from requirements → test cases → defects → release notes.
- Set up RACI, test calendar, and QA checkpoints across SIT, UAT, OAT, regression, and cutover validations.
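The traceability chain in the bullets above (requirements → test cases → defects) amounts to a simple roll-up from test artifacts back to requirements. A minimal illustrative sketch, in which every ID, field name, and record is hypothetical:

```python
from collections import defaultdict

# Hypothetical SIT records: each test case traces to one requirement,
# and each defect traces to the test case that surfaced it.
test_cases = {
    "TC-101": {"requirement": "REQ-GL-01", "status": "failed"},
    "TC-102": {"requirement": "REQ-GL-01", "status": "passed"},
    "TC-201": {"requirement": "REQ-AP-07", "status": "passed"},
}
defects = {"DEF-9001": {"test_case": "TC-101", "severity": "S2"}}

def traceability_matrix(test_cases, defects):
    """Roll test cases and defects up to their originating requirements."""
    matrix = defaultdict(lambda: {"test_cases": [], "defects": []})
    for tc_id, tc in test_cases.items():
        matrix[tc["requirement"]]["test_cases"].append(tc_id)
    for def_id, d in defects.items():
        req = test_cases[d["test_case"]]["requirement"]
        matrix[req]["defects"].append(def_id)
    return dict(matrix)

matrix = traceability_matrix(test_cases, defects)
print(matrix["REQ-GL-01"])  # both test cases and the open defect trace back here
```

In practice this mapping lives in a tool such as Jira/Xray or ALM rather than code; the point is only that every defect must be reachable from a requirement.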
Client & Vendor Management
- Serve as the single point of contact for client QA leadership; ensure transparent communication and expectation management.
- Coordinate with system integrators (SI), third-party vendors, and internal product teams on dependencies, environments, and fixes.
- Drive defect SLAs, cross-team ownership, and escalation management; present progress/KPIs to executive stakeholders.
- Align test scope with contractual obligations, SOW, and project milestones.
Functional & Integration Testing
- Oversee testing across Finance (GL, AP, AR, FA, CM, COA, P2P, O2C) and SCM (Inventory, Procurement, Order Management, Manufacturing).
- Plan and lead E2E business process validations spanning Oracle ERP and boundary systems (e.g., CRM, WMS/TMS/OTM, Banking, Tax, Reporting).
- Govern API/OIC integration testing, data migration/reconciliation, reporting/BI verification, and cutover readiness.
Non-Functional & Automation
- Define regression automation strategy (e.g., Tosca, Selenium/WebdriverIO, TestComplete) and integrate into CI/CD (e.g., Jenkins, GitHub Actions).
- Oversee performance testing scope (critical flows, SLA validation) and environment/instrumentation readiness.
- Ensure security, audit, and compliance checks (SOX/ITGC where applicable) are embedded in the test process.
Planning, Environments & Data
- Build detailed test schedules, resource plans, and environment usage plans (SIT/UAT/Pre-Prod).
- Establish test data strategy (masking/subsetting/synthetic), test data refresh cycles, and data governance.
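A test data strategy of this kind typically relies on deterministic masking, so that a masked value still joins and reconciles across tables and refresh cycles. A minimal sketch of the idea; the field names, salt, and record are hypothetical:

```python
import hashlib

def mask_value(value: str, salt: str = "uat-refresh-2024") -> str:
    """Deterministically mask a sensitive value: the same input always maps
    to the same token, so masked data still joins across tables."""
    digest = hashlib.sha256((salt + value).encode()).hexdigest()[:10]
    return f"MASKED-{digest}"

# Hypothetical supplier record copied from production into a SIT environment.
supplier = {"supplier_id": "S-1001", "name": "Acme Corp", "tax_id": "12-3456789"}
masked = {**supplier,
          "name": mask_value(supplier["name"]),
          "tax_id": mask_value(supplier["tax_id"])}

# Deterministic: re-masking the same source yields the same token.
assert mask_value("Acme Corp") == masked["name"]
```

Real masking platforms add format preservation and key management; this only illustrates why determinism matters for reconciliation testing.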
Reporting & Metrics
- Publish daily/weekly dashboards: coverage, pass rate, defect density/leakage/severity aging, DRE, trend analysis, and risk register updates.
- Produce go-live quality sign-offs, test summary reports, and post-implementation validation plans.
Required Skills & Qualifications
Must-Have
- Strong Oracle Cloud ERP testing leadership across Finance and/or SCM with integration-heavy landscapes.
- Proven client-facing and vendor management experience in multi-vendor delivery environments.
- Deep understanding of E2E business processes: P2P/PTP, O2C/OTC, R2R, PTM, Inventory & Costing; exposure to COA design.
- Hands-on oversight of SIT, UAT, OAT, cutover testing, and hypercare.
- Experience with integration middleware (e.g., Oracle Integration Cloud/OIC, REST/SOAP APIs, flat-file/EDIs).
- Tools: Jira/Azure DevOps/ALM, Zephyr/Xray, Confluence, Jenkins/GitHub, SQL for validation, Tosca/Selenium (governance level).
- Strong defect triage, risk management, and executive reporting skills.
- Excellent communication, stakeholder management, and documentation skills.
Nice-to-Have
- Exposure to OTM, WMS, CRM, tax engines (e.g., Vertex), Banking integrations, Reporting/BI (OTBI, BIP).
- SOX/ITGC testing experience; understanding of change management and segregation of duties.
- Certifications: Oracle Cloud ERP, PMP/PRINCE2, PSM/CSM, ISTQB Test Manager.
- Experience with service virtualization, data masking, test environment management.
Key Deliverables
- Master Test Strategy & Test Plan with risk-based scope.
- Traceability Matrix (requirements → test cases → defects → releases).
- SIT/UAT/Test Completion Reports, Go/No-Go recommendations, and hypercare validation plan.
- Automation coverage map, regression suite, and CI execution reports.
- Quality dashboards and executive steering readouts.
KPIs & Success Metrics
- Defect Leakage (to UAT/Production) within agreed thresholds.
- Test Coverage (Requirements, E2E business processes, Integrations, Compliance).
- Defect SLA adherence & aging; % of critical defects resolved by gate.
- Release Readiness: entry/exit compliance, risk burndown trend.
- Automation ROI: % regression automated, cycle time savings, stability.
- Stakeholder Satisfaction (CSAT) and audit/compliance pass rates.
Job Description
Architect and design a robust, scalable, reliable, and secure Infrastructure Automation Platform to support our cloud migration, software modernization, and business objectives.
Deliver Automation Platform solutions to serve customer, product, developer, and operations needs throughout the entire product life cycle, to enable software engineering teams to increase the velocity of code and application releases, and to enable infrastructure teams to manage our cloud infrastructure through code and automated deployments.
Foster and evangelize a team culture where serving platform customers is the primary mission, continuously monitoring and analyzing customer feedback for platform-related pain points and empathizing with customer demands and requirements.
Create detailed architectural specifications to document the architecture decisions.
Communicate the architectural specifications to technical teams and business sponsors in a directly actionable, clear, and succinct manner.
Drive adoption of the automation platform and services through advocacy and education to the broader engineering and operations organizations.
Research market trends and conduct competitive analysis for infrastructure and software delivery automation products and services to ensure our automation platform becomes best in class.
Troubleshoot platform issues and work with the engineering, infrastructure, and operations teams to resolve them.
Collaborate with Enterprise Architecture, Software Engineering, and Development teams to deliver self-service platform capabilities to improve the developer experience.
Collaborate with IT Operations and Network Operations Center to enable management and monitoring of cloud infrastructure and applications and deliver stable and fault tolerant solutions to achieve application availability targets.
Collaborate with Quality Assurance Automation team to incorporate automated testing for infrastructure and application deployment pipelines.
Partner with Compliance and Security teams to ensure infrastructure and applications meet compliance standards and are safe and secure against cybersecurity threats.
Participate in Agile ceremonies, including daily stand-ups, sprint planning, sprint reviews, and retrospectives, to provide technical leadership and guidance.
Participate in ITIL-based change, incident, and problem management processes for automation platform solutions.
Manage and optimize the platform expense and budget in the form of product show/charge backs, in partnership with the IT Finance Division.
Telecommuting is permitted, but applicant must work from the worksite location at least 3 days per week.
No additional national or international travel is anticipated.
Job Requirements
PRIMARY REQUIREMENTS: Bachelor’s degree in Computer Engineering or related fields, or its foreign equivalent, and 8 years of relevant work experience.
In addition, experience with the following skills is required: (1) Experience using DevOps tools including Azure DevOps, GitLab, GitHub Actions, and DevOps Consulting & Architecture to orchestrate and optimize the software delivery lifecycle by integrating code versioning, automated builds, testing, and deployment.
(2) Experience using Cloud Automation tools including Azure Automation, ARM Templates, Azure Logic Apps, and Cloud Architecture to provision, configure, and monitor cloud resources programmatically while minimizing human error.
(3) Experience using Infrastructure as Code tools including Terraform, Ansible, and Chef to automate infrastructure provisioning and configuration, enabling version control and consistency across environments.
(4) Experience using Scripting language including Python, PowerShell, and Bash for automation, integration scripting, and administrative task execution across Windows/Linux/cloud environments.
(5) Experience using ML/AI Tools including Azure ML, AWS SageMaker, and TensorFlow to build, train, and operationalize machine learning models that power intelligent business applications.
(6) Experience using supporting Infrastructure Automation solutions including BMC Server Automation and vRealize Orchestrator Design to automate infrastructure lifecycle tasks including provisioning, compliance, patching, and audit readiness.
(7) Experience using Process Orchestrator / Low-Code tools including MS Power Platform, BMC Atrium Orchestrator, MS Orchestrator, and UiPath to digitize, automate, and optimize IT and business processes, reducing manual intervention.
(8) Experience using ideation and innovation to research market trends on technologies including GitLab Duo and Power Automate in the automation space to solve problems, increase adoption, productivity and efficiency.
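Item (4) above covers scripting for automation and administrative tasks across Windows/Linux/cloud environments. A minimal, hypothetical Python sketch of that kind of task; the path and threshold are illustrative, not from the posting:

```python
import shutil

def check_disk(path: str = "/", min_free_gb: float = 5.0) -> dict:
    """Report disk usage for a mount point and flag it if free space is low.
    Uses only the stdlib, so the same script runs on Windows, Linux, or a
    cloud VM without modification."""
    usage = shutil.disk_usage(path)
    free_gb = usage.free / 1024**3
    return {"path": path, "free_gb": round(free_gb, 1), "low": free_gb < min_free_gb}

report = check_disk("/")
print(report)
```

In an automation platform, a check like this would typically be wrapped in a scheduled runbook or pipeline step rather than run by hand.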
JOB SITE: 2375 Waterview Drive, Northbrook, IL 60062
WORK HOURS: Full Time (8am to 5pm, Monday to Friday)
PAY RANGE: $134,000.00 to $201,000.00
Our benefit package includes health insurance, life and disability, 401(k) contributions, paid time off, etc., for employees working 30 or more hours per week on average.
For a more comprehensive list of our benefits please click here.
For roles where employees work less than 30 hours per week, benefits include 401(k) contributions as well as access to the Employee Assistance Program, Employee Resource Groups and the Employee Service Corp.
We’re dedicated to creating a Medline where everyone feels they belong and can grow their career.
We strive to do this by seeking diversity in all forms, acting inclusively, and ensuring that people have tools and resources to perform at their best.
Explore our Belonging page here .
Medline Industries, LP is an equal opportunity employer.
Medline evaluates qualified individuals without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, age, disability, neurodivergence, protected veteran status, marital or family status, caregiver responsibilities, genetic information, or any other characteristic protected by applicable federal, state, or local laws.
We are headquartered in Bloomfield Hills, MI, and have 16 offices spread across six countries.
We partner with Fortune 500 companies to address complex business challenges.
Our services span AI, IT staffing, cloud computing, engineering, mobility, testing, and more.
Certified with CMMI Level 3 and ISO standards, V2Soft is committed to quality and security.
Beyond our work, we actively support local communities and non-profits, reflecting our core values.
Join us to be part of a dynamic and impactful global company! Please visit our website to learn more.
Must Have Skills:
- Prior experience in JavaScript frameworks.
- Expert in designing/developing J2EE-based web applications.
- Prior experience architecting J2EE and Core Java based applications.
- Expert in design and development of REST services, including authorization and metering.
- Expert in designing/developing Java backends for web applications.
- Prior experience in JavaScript/UI frameworks.
- Prior experience in build and integration tools: Gradle, Jenkins Pipeline, Docker, GitHub.
- Deployment experience to the OpenShift Platform (Kubernetes); exposure to Helm Charts required.
- Prior experience in relational databases (Oracle, Teradata, etc.).
Nice to Have Skills:
- Prior experience in designing/developing Java/J2EE-based web applications.
- Prior experience in developing Python backends.
- Prior experience in developing Single Page Web Apps (SPA).
- Prior experience in NoSQL databases.
V2Soft is an Equal Opportunity Employer (EOE).
We welcome applicants from all backgrounds, including individuals with disabilities and veterans.
Visit our careers page to view all of our open opportunities and to learn more about our benefits.
• Thorough knowledge of common technologies used in DevOps such as Jenkins and GitHub.
• Good experience with containerization: Docker, Kubernetes, etc.
• Programming experience in Java/Java Spring Boot is preferable.
• Adhere to technical standards and participate in standards evolution.
• Understand the importance of teamwork and coordinated activities.
• Demonstrate effective communications skills at all organizational levels.
• Thorough knowledge of analytical thinking concepts and techniques.
• Mentor and lead junior members in continuous integration and continuous delivery concepts.
• Participate in cross-functional design and troubleshooting activities at an organizational level.
• Learn, design, and code in at least two development languages and integrate multiple tools through their APIs, command line interfaces, etc.
• Design, code, and test systems with a long-term mindset of maintainability.
• Contribute to an atmosphere of cross-functional teamwork within the KeyBank Agile project life-cycle and ability to act within an Agile environment working with user stories, iterative development, continuous integration, continuous delivery, continuous feedback, etc.
• Work with the technical organization at large to understand its CI/CD and release management needs and be able to transform that understanding into solution requirements.
In this role, you'll drive the future of test engineering by contributing to core automation initiatives and enabling frameworks, tools, and platforms that accelerate quality across our teams.
This is not a traditional QA role.
We're looking for a full-stack automation expert who is passionate about tooling, platforms, and frameworks, capable of evaluating and implementing cutting-edge solutions (including AI-powered test platforms and low-code/no-code test tools), building PoCs, and influencing architecture-level decisions.
Our application stack includes Java/Spring Boot microservices on the backend and Next.js/React on the frontend, all deployed to Azure Kubernetes Service (AKS).
Key Responsibilities
Design, implement, and maintain reusable automation frameworks for backend (API/microservices) and frontend (UI) testing.
Research, evaluate, and introduce modern test automation tools (e.g., low-code/no-code, model-based testing, AI-powered test platforms).
Lead proofs-of-concept (PoCs) for new tools and frameworks to assess fit for the organization, and to enable and simplify shift-left practices within teams.
Contribute to shared quality tooling and pipelines to be used across teams (test orchestration, data setup, mocks/stubs, parallelization, etc.).
Collaborate with Digital Framework, platform, DevOps, and other engineering teams to shift quality left and define testability patterns.
Provide leadership in setting up automation standards, coding guidelines, and governance models for cross-team reuse.
Lead teams to ensure maximum automation coverage for functionality and be accountable for ensuring QA practices are aligned to the goal of maximum quality.
Accountable for overall QA deliverables, practices, and standards within teams and for compliance with functional, integration, regression and performance/load testing (NFR testing) goals for all systems and projects worked on by the teams.
Actively mentor junior engineers and guide teams in adopting quality engineering best practices.
What You Bring: Skills and Expertise
8+ years of experience in Quality Engineering, with deep expertise in test automation.
Strong in all aspects of SDLC and Testing Lifecycle.
Strong programming skills in Java (Spring Boot stack) and JavaScript/TypeScript (React/Next.js).
Experience building automation frameworks using JUnit/TestNG, Cypress, Playwright, Selenium, etc.
Proficiency in evaluating and implementing tools for test case management, test execution, and reporting.
Demonstrated success with CI/CD-integrated testing using Azure DevOps, GitHub Actions, or equivalent.
Knowledge of Kubernetes, containers, and testing in cloud-native environments (preferably Azure).
Familiarity with contract testing, mocking services, and management strategies.
Strong background and experience working in Agile teams.
Preferred Skills
Experience with low-code/no-code automation platforms such as Testim, Tricentis Tosca, Mabl, or Katalon.
Exposure to AI/ML-based testing, self-healing tests, or model-based test generation.
Understanding of platform thinking and working in centralized enablement teams.
Hands-on experience with observability and monitoring tools for test impact analysis.
Educational Requirements
Bachelor’s Degree in Computer Science, Engineering, or a related field is required.
Why Join Medline?
Be a key contributor to the transformation of Medline’s e-commerce platform.
Access opportunities for growth, training, and certifications.
Competitive salary, comprehensive benefits, and flexible work arrangements.
Collaborate with a dynamic and innovative team in a supportive culture.
Medline Industries, LP, and its subsidiaries, offer a competitive total rewards package, continuing education & training, and tremendous potential with a growing worldwide organization.
The anticipated salary range for this position: $115,440.00 - $173,160.00 annually.
The actual salary will vary based on applicant’s location, education, experience, skills, and abilities.
This role is bonus and/or incentive eligible.
Medline will not pay less than the applicable minimum wage or salary threshold.
Salary: $165,000 - $190,000 per year
A bit about us: Founded over a decade ago, we are a leading AI and data services partner helping global brands design, build, and run modern data and AI platforms on Snowflake and major cloud providers.
We specialize in data engineering, analytics, and managed services, combining deep technical expertise with a global delivery model to maximize business value from data.
Why join us?
- Competitive base compensation + bonus
- Remote-first
- Comprehensive benefits
- 4 weeks PTO + 10 holidays
- Accelerated learning and professional development with advanced training and certifications (e.g., Snowflake, cloud)
- High-Impact Work: Join a specialized Elastic Operations team running mission-critical cloud data platforms for leading enterprises.
- Autonomy & Collaboration: Work in a culture that prizes autonomy, creativity, transparency, and cross-functional collaboration.
Job Details
Key Responsibilities and Duties:
- Lead the design, architecture, and implementation of large-scale cloud-native data platform solutions on Snowflake, AWS, and Azure.
- Drive data migrations, integrations, and performance tuning across data warehouses, data lakes, and distributed systems to optimize reliability and cost.
- Own platform security, data governance, and process engineering to ensure robust, scalable, and continuously improving data environments.
- Manage multiple client engagements as a trusted advisor, collaborating with cross-functional teams and mentoring junior engineers to maximize platform ROI and customer success.
Qualifications:
- Bachelor’s degree in Computer Science, Engineering, or related field (advanced degrees or relevant certifications are a plus).
- At least 10 years of hands-on experience architecting, designing, implementing, and managing cloud-native data platforms (Snowflake, Redshift, Azure Data Warehouse) on AWS and/or Azure.
- Proven client-facing consulting experience, including presenting to executive stakeholders and creating detailed solution documentation.
- Strong technical depth in SQL, infrastructure-as-code (Terraform or CloudFormation), CI/CD (e.g., GitHub, Bitbucket), and modern data integration tools (e.g., AWS DMS, Azure Data Factory, Matillion, Fivetran, Spark).
Interested in hearing more? Easy Apply now by clicking the "Apply Now" button.
Jobot is an Equal Opportunity Employer.
We provide an inclusive work environment that celebrates diversity and all qualified candidates receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity, religion, national origin, age (40 and over), disability, military status, genetic information or any other basis protected by applicable federal, state, or local laws.
Jobot also prohibits harassment of applicants or employees based on any of these protected categories.
It is Jobot’s policy to comply with all applicable federal, state and local laws respecting consideration of unemployment status in making hiring decisions.
Sometimes Jobot is required to perform background checks with your authorization.
Jobot will consider qualified candidates with criminal histories in a manner consistent with any applicable federal, state, or local law regarding criminal backgrounds, including but not limited to the Los Angeles Fair Chance Initiative for Hiring and the San Francisco Fair Chance Ordinance.
Information collected and processed as part of your Jobot candidate profile, and any job applications, resumes, or other information you choose to submit is subject to Jobot's Privacy Policy, as well as the Jobot California Worker Privacy Notice and Jobot Notice Regarding Automated Employment Decision Tools which are available at /legal.
By applying for this job, you agree to receive calls, AI-generated calls, text messages, or emails from Jobot, and/or its agents and contracted partners.
Frequency varies for text messages.
Message and data rates may apply.
Carriers are not liable for delayed or undelivered messages.
You can reply STOP to cancel and HELP for help.
You can access our privacy policy here: /privacy-policy
Salary: $130,000 - $140,000 per year
A bit about us: Our client is a fast-growing technology company advancing the way data, cloud, and AI infrastructure power enterprise operations.
With over a decade of innovation and a strong culture rooted in servant leadership, collaboration, and performance, the company is scaling its data and DevOps capabilities to support next-generation analytics and machine learning platforms.
They are seeking a DevOps Manager (Data & Cloud Engineering) to lead the team responsible for modernizing and expanding their cloud and data infrastructure across AWS and Azure.
This is a strategic, hands-on leadership role focused on Databricks, CI/CD automation, infrastructure-as-code, and AI-driven data modeling.
You’ll lead a cross-functional group of DevOps engineers, data engineers, and cloud architects — driving reliability, scalability, and performance across a high-impact environment.
Why join us?
- Lead and scale a multidisciplinary team integrating DevOps, Data Engineering, and Machine Learning Operations at enterprise scale.
- Drive the evolution of a robust Databricks ecosystem, enabling advanced analytics, AI pipelines, and real-time data workflows.
- Architect and automate cloud infrastructure across AWS and Azure to ensure performance, security, and cost efficiency.
- Join a culture that prioritizes servant leadership, collaboration, and innovation in a high-growth environment.
- Competitive compensation package including base salary, annual bonus, and equity, plus comprehensive family benefits and career advancement opportunities.
Job Details
Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
7+ years of experience in DevOps, Cloud Infrastructure, or Data Engineering roles, with 3+ years in management or team leadership.
Deep hands-on experience with Databricks for large-scale data and ML workflows, including model orchestration, pipeline management, and optimization.
Experience fine-tuning or deploying Large Language Models (LLMs) using Databricks or similar frameworks.
Proven expertise in AWS and Azure cloud environments.
Strong proficiency in CI/CD tools (Jenkins, GitHub Actions, GitLab CI, or Azure DevOps).
Expertise with Infrastructure-as-Code (Terraform, CloudFormation, or ARM templates).
Skilled in Python, Bash, or PowerShell for automation and deployment scripting.
Familiarity with Docker, Kubernetes, and container orchestration in production environments.
Experience managing data-centric environments supporting data lakes, machine learning, and real-time analytics.
Strong understanding of security, compliance, and governance frameworks (SOC 2, HIPAA, GDPR).
Preferred: Certifications such as AWS Certified DevOps Engineer or Azure DevOps Engineer Expert.
Experience with MLOps, streaming data, and cloud cost optimization.
Background in Agile/Scrum methodologies with tools like Jira or Asana.
Interested in hearing more? Easy Apply now by clicking the "Apply Now" button.
DevOps Engineer - Observability
Corporate Headquarters, 12575 Uline Drive, Pleasant Prairie, WI 53158
At Uline, we count on reliable, resilient systems to keep up with our growth.
As a DevOps Engineer specialized in Observability, you’ll help drive that reliability by implementing modern monitoring, automation and orchestration practices that keep our systems performing at their best.
Careers Packed with Potential.
Backed by 45+ years of success, Uline offers opportunities to grow your career with stability you can count on.
Position Responsibilities:
- Work with a modern observability stack (Elasticsearch, Prometheus/Thanos, OpenTelemetry, Grafana, etc.) and help shape what comes next.
- Build and automate infrastructure provisioning using Terraform, Kubernetes, GitHub Actions, ArgoCD, or similar tools.
- Drive end-to-end visibility across distributed systems with real-time metrics, logs, traces, and SLO-based alerting.
- Influence architecture decisions and help define best practices for performance, reliability, and incident response.
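The SLO-based alerting mentioned above is commonly implemented as error-budget burn-rate checks. A minimal sketch (the 99.9% target and 14.4x fast-burn threshold are common illustrative defaults, not the employer's actual policy):

```python
def burn_rate(error_ratio: float, slo_target: float) -> float:
    """How fast the error budget is being consumed.
    A burn rate of 1.0 exhausts the budget exactly over the SLO window."""
    budget = 1.0 - slo_target          # e.g. 0.001 for a 99.9% SLO
    return error_ratio / budget

def should_page(error_ratio: float, slo_target: float = 0.999,
                threshold: float = 14.4) -> bool:
    """Page when the budget burns ~14x faster than sustainable,
    a typical fast-burn threshold for a short lookback window."""
    return burn_rate(error_ratio, slo_target) >= threshold

if __name__ == "__main__":
    # 2% of requests failing against a 99.9% SLO burns budget ~20x -> page.
    print(should_page(0.02))
    # 0.05% failing burns at ~0.5x, within budget -> no page.
    print(should_page(0.0005))
```

In practice the error ratio comes from metric queries (e.g., Prometheus recording rules over multiple windows) rather than a single number, but the thresholding logic is the same.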
Minimum Requirements:
- Bachelor's degree in information technology, computer science, or a related field.
- 3+ years of hands-on experience with Elasticsearch or HashiCorp Vault, with expertise in installation, configuration, support, log analysis, and performance tuning.
- 3+ years of experience with Docker or Kubernetes container orchestration tools and IaC automation tools.
- Experience with AI/ML in operations (AIOps) a plus.
- Uline does not participate in the H-1B lottery.
Benefits:
- Complete health insurance coverage and 401(k) with 6% employer match that starts day one!
- Multiple bonus programs.
- Paid holidays and generous paid time off.
- Tuition Assistance Program that covers professional continuing education.
Employee Perks:
- On-site café and first-class fitness center with complimentary personal trainers.
- Over four miles of beautifully maintained walking trails.
About Uline:
Uline, a family-owned company, is North America’s leading distributor of shipping, industrial, and packaging materials with over 9,800 employees across 14 locations.
Uline is a drug-free workplace.
All new hires must complete a pre-employment hair follicle drug screening.
All positions are on-site.
EEO/AA Employer/Vet/Disabled. Our employees make the difference and we are committed to offering exceptional benefits and perks! Explore Uline.jobs to learn more!
Salary: $155,000 - $175,000 per year

A bit about us:
- Competitive salary and comprehensive benefits
- Long-term stability with continued investment in technology and engineering
- High-visibility work with real-world impact
- A collaborative, engineering-driven culture focused on quality and continuous improvement

Why join us?
- Work on data and machine learning platforms operating at significant scale
- Own and influence core systems that power critical business capabilities
- Collaborate with experienced engineers and data scientists in a highly technical environment
- Tackle complex engineering challenges with modern cloud and MLOps tooling
- Enjoy the stability of a mature organization combined with the opportunity to modernize and innovate

Job Details:
Senior Data Engineer - Role Summary
Join a high-performing engineering team at a large, global organization operating at the intersection of technology, data, and digital products.
We are seeking a highly motivated Senior Data Engineer to lead the architecture, deployment, and operation of next-generation, data-driven platforms.
In this role, you will bridge the gap between Data Science and Production Engineering, ensuring that datasets, machine learning models, and core services are deployed reliably, scalably, and securely in the cloud.
This is a high-impact position requiring deep expertise in data architecture, backend engineering, and the full machine learning lifecycle in production environments.
Key Responsibilities:
- Data Pipeline Design & Orchestration: Design, build, and maintain robust data ingestion and transformation pipelines. Leverage modern orchestration tools to ensure reliable, observable data flows supporting machine learning workloads.
- Core Development: Write clean, efficient, and well-tested Python code for automation, infrastructure tooling, and service integration. Develop shared libraries and glue services connecting cloud-native components.
- API & Service Deployment: Design, develop, and deploy high-performance Python APIs (FastAPI / Flask) to serve machine learning predictions and core application logic.
- MLOps Pipeline Ownership: Own end-to-end pipelines for continuous training, deployment, versioning, and monitoring of production ML models (e.g., recommendation or personalization systems).
- Infrastructure Management: Architect and maintain scalable, fault-tolerant infrastructure using Kubernetes (GKE) within Google Cloud Platform. Ensure reliability, performance, and cost efficiency across environments.
- Collaboration & Mentorship: Partner closely with data scientists, software engineers, and platform teams. Provide technical leadership and mentorship to junior engineers.

Qualifications
Must-Have (Engineering Excellence):
- 5+ years of professional experience in Data Engineering, Software Engineering, or Cloud Engineering
- Deep expertise in Python for application development, data processing, and automation
- Proven experience building and deploying production-grade backend services and APIs (FastAPI, Flask, or Django)
- Strong SQL skills with experience designing and optimizing schemas for relational and analytical data stores (e.g., BigQuery, Cloud SQL)
- Hands-on experience with data orchestration tools such as Dagster or Airflow
- Extensive experience designing and operating services within Google Cloud Platform (BigQuery, Pub/Sub, Vertex AI, Compute Engine)
- Expert-level knowledge of Docker and Kubernetes, including Helm-based deployments

Nice-to-Have (DevOps & MLOps):
- Experience with Infrastructure as Code tools such as Terraform or Crossplane
- CI/CD experience using GitHub Actions or similar tooling
- Familiarity with observability stacks (Prometheus, Grafana, Cloud Logging)
- Understanding of cloud security principles and enterprise compliance requirements
- Direct experience supporting production MLOps workflows (model monitoring, drift detection, automated retraining)

Interested in hearing more? Easy Apply now by clicking the "Apply Now" button.
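Stripped of any particular tool like Dagster or Airflow, the orchestration work this role describes boils down to running tasks in dependency order. A minimal, tool-agnostic sketch using Python's standard library (task names and payloads are hypothetical):

```python
from graphlib import TopologicalSorter

def run_pipeline(tasks: dict, deps: dict) -> list:
    """Execute task callables in dependency order, as an orchestrator would.
    `deps` maps each task name to the set of task names it depends on."""
    order = list(TopologicalSorter(deps).static_order())
    results = {}   # each task receives upstream outputs by name
    executed = []
    for name in order:
        results[name] = tasks[name](results)
        executed.append(name)
    return executed

if __name__ == "__main__":
    tasks = {
        "ingest":    lambda r: "raw rows",
        "transform": lambda r: r["ingest"].upper(),
        "train":     lambda r: f"model trained on {r['transform']}",
    }
    deps = {"ingest": set(), "transform": {"ingest"}, "train": {"transform"}}
    print(run_pipeline(tasks, deps))  # ['ingest', 'transform', 'train']
```

Real orchestrators layer scheduling, retries, and observability (the "reliable, observable data flows" in the posting) on top of exactly this dependency-resolution core.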