Terraform GitHub Remote Jobs in the USA

40 positions found — Page 3

Senior Cloud Platform Engineer
Salary not disclosed

Our Ideal Candidate

We are seeking an experienced cloud and DevOps engineer with over 5 years of experience designing, automating, and maintaining scalable AWS infrastructure, CI/CD pipelines, and secure cloud environments. In the role of Senior Cloud Platform Engineer, you should demonstrate expertise in Infrastructure as Code, scripting, containerization, and modern monitoring or alerting platforms, as well as strong skills working across teams. Success in this position requires a talent for optimizing cloud resources, ensuring security and compliance, and facilitating fast, reliable software deployments. Having experience with HIPAA-compliant systems, .NET platforms, or serverless computing is considered a significant advantage.


Responsibilities

  • Design, implement, and maintain CI/CD pipelines using tools like AWS CDK, AWS CodePipeline, or GitHub Actions.
  • Manage infrastructure as code (IaC) using Terraform, CloudFormation, or similar tools.
  • Monitor system performance and availability using tools like CloudWatch, Prometheus, Grafana, or Datadog.
  • Automate repetitive tasks and deployment processes to improve team efficiency.
  • Collaborate with software engineers, QA, and product teams to ensure smooth deployments and rapid iteration.
  • Implement and enforce security best practices and compliance across infrastructure and deployment pipelines.
  • Identify optimizations to reduce cloud resource usage across AWS accounts.
  • Maintain documentation for infrastructure, processes, and compliance requirements.
  • Work with multiple teams to implement their deployments using common practices.
  • Manage builds and the corresponding documentation.
  • Monitor package versions, track EOL dates, and upgrade to keep infrastructure current.
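The last responsibility above, tracking package EOL dates and upgrading before support lapses, is commonly automated. A minimal sketch in Python; the package names and dates in the registry are hypothetical placeholders, and a real job would pull this data from a source such as endoflife.date:

```python
from datetime import date

# Hypothetical EOL registry: package -> end-of-life date.
# In practice this would be fetched from an external data source.
EOL_DATES = {
    "python3.8": date(2024, 10, 7),
    "node18": date(2025, 4, 30),
    "terraform1.5": date(2026, 1, 1),
}

def packages_needing_upgrade(installed, today=None):
    """Return installed packages whose EOL date has passed or is within 90 days."""
    today = today or date.today()
    flagged = []
    for pkg in installed:
        eol = EOL_DATES.get(pkg)
        if eol is not None and (eol - today).days <= 90:
            flagged.append((pkg, eol.isoformat()))
    return flagged
```

A check like this would typically run on a schedule in CI and open a ticket or alert for each flagged package.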


Qualifications

  • B.S. Computer Science degree or equivalent experience.
  • 5+ years of experience in DevOps, Site Reliability Engineering, or related roles.
  • 2+ years of hands-on AWS experience.
  • Strong experience with cloud platforms (AWS, Azure, or GCP).
  • Proficiency in scripting languages such as Bash, Python, or PowerShell.
  • Experience with containerization and orchestration (Docker, Kubernetes).
  • Familiarity with monitoring, logging, and alerting tools.
  • Solid understanding of networking, security, and system administration.
  • Strong communication skills and ability to work cross-functionally.
Not Specified
Senior Data Scientist
🏢 Jobot
Salary not disclosed
Los Angeles 2 weeks ago
Urgently hiring Senior Data Scientist! This Jobot Job is hosted by: Kendall Kaing. Are you a fit? Easy Apply now by clicking the "Apply Now" button and sending us your resume.

Salary: $80,000 - $150,000 per year

A bit about us: Are you interested in working for the world's largest cannabis market with a footprint that covers the entire breadth of the state of California? Are you someone who wants to be part of the growth of a fast-growing industry? At client company, our mission is clear: to provide the ultimate one-stop-shop cannabis experience by offering exceptional customer service and diversified products.

We strive to build long-term customer loyalty.

We’re building a consumer-centric organization that is focused on sharing the transformational potential of cannabis with the world.

Our product line is one of the best-selling cannabis brands in the market today and has claimed the title of the best-selling vape brand across all BDSA-tracked markets and best-selling brand overall in the California market! We are rooted in California and have expanded our operations across the United States, with even more growth on the horizon! Additionally, we’re building distribution networks to bring our products to over 60 countries worldwide.

We recognize that our employees are at the center of our success, and we take pride in a corporate culture that emphasizes our core values: Influence, Inspire, Innovate, Win, & Grow! Our employees come from a wide range of retail backgrounds, each bringing their own unique skills and talents to the table as we work together to continue our incredible growth.

If you are interested in partaking in the journey of building a nationally recognized and leading brand, we want to hear from you!

Why join us?

Benefits & Compensation: Additional details about compensation and benefits eligibility for this role will be provided during the hiring process.

All employees are provided competitive compensation, paid training, and employee discounts on our products and services.

We offer a range of benefits packages based on employee eligibility, including: Paid Vacation Time, Paid Sick Leave, Paid Holidays, Parental Leave.

Health, Dental, and Vision Insurance.

Employee Assistance Program.

401k with generous employer match.

Life Insurance.

Job Details

Data Scientist

Why this role matters: You’ll architect, build, and operate the data infrastructure that powers analytics, machine learning, and operational reporting across the organization.

This is a hands-on role for a data scientist who thrives at the intersection of engineering, modeling, and production-grade systems.

What you’ll do:

  • Design and evolve our lakehouse architecture (Delta Lake on AWS/Azure), including storage layout, compute strategy (Databricks, EMR, Synapse), and performance SLAs
  • Build robust batch and streaming data pipelines using Apache Spark (PySpark/Scala), Databricks Workflows, Azure Data Factory, and event-driven integrations (Kinesis, Event Hubs, Kafka)
  • Develop and maintain analytical and ML-ready data models, semantic layers, and dimensional schemas for BI and production use
  • Implement observability and reliability features, including custom Spark metrics, anomaly detection, logging, and automated data quality checks
  • Optimize compute and storage performance through cluster tuning, caching, partitioning, and format selection, with measurable cost savings
  • Enforce data governance policies, including RBAC, row-level security, cataloging, lineage tracking, and compliance with GDPR, HIPAA/FHIR, and CCPA/CDPA
  • Collaborate with analytics, product, and platform teams to define SLAs, manage incident response, and guide data contract best practices
  • Mentor junior engineers and lead design reviews, setting standards for scalable, maintainable data systems

What you bring:

  • 8+ years of experience in data science or data engineering
  • Expert-level Python and SQL; working knowledge of Scala/Java for Spark
  • 3–5+ years of experience with Apache Spark and Databricks (including Delta Live Tables and performance tuning)
  • Strong cloud experience in AWS (S3, EMR, Glue, Lambda, Kinesis) and/or Azure (ADLS Gen2, ADF, Event Hubs, Synapse)
  • Experience with orchestration tools (Airflow, ADF) and transformation frameworks (dbt)
  • Familiarity with data warehouses such as Snowflake, Redshift, or Databricks SQL
  • Proven track record of cost and performance optimization in cloud data environments
  • Solid foundation in data modeling, CI/CD, Docker/Linux, and Git
  • Ability to translate business needs into scalable data products and lead projects end-to-end

Nice to have:

  • Experience with custom Spark listeners, low-latency streaming, or event-driven architectures
  • Exposure to healthcare data pipelines (FHIR) and DLT troubleshooting
  • Dashboarding experience with Power BI, Tableau, or Looker
  • Familiarity with governance frameworks and MLOps (feature stores, model monitoring)

Our stack: Databricks, Spark (PySpark/Scala), Delta Lake, Airflow, ADF, dbt, AWS (S3/EMR/Glue/Kinesis), Azure (ADLS/Synapse/Event Hubs), Terraform, GitHub Actions, Docker, CloudWatch, REST

Success in 6–12 months looks like: Reliable, observable pipelines with
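One of the observability duties listed for this role, automated data quality checks with anomaly detection, can be sketched without any Spark dependency. A production version would run as a Spark job or a Delta Live Tables expectation; the z-score approach and the threshold here are illustrative assumptions:

```python
import statistics

def row_count_anomalies(daily_counts, z_threshold=3.0):
    """Flag days whose pipeline row count deviates more than z_threshold
    population standard deviations from the mean -- a crude data-quality alarm."""
    mean = statistics.fmean(daily_counts)
    stdev = statistics.pstdev(daily_counts)
    if stdev == 0:
        return []  # perfectly flat series: nothing to flag
    return [
        (day_index, count)
        for day_index, count in enumerate(daily_counts)
        if abs(count - mean) / stdev > z_threshold
    ]
```

For example, ten days of roughly 100 rows followed by a day of 5 rows would flag only the final day, which is the kind of silent pipeline failure these checks exist to catch.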
Not Specified
Information Security Consultant
Salary not disclosed
Boston, Massachusetts 2 weeks ago

Information Security Consultant, Security Platforms Engineering, Enterprise Information Systems (EIS)

Full-Time, Hybrid – Springfield, MA

The Opportunity

Working closely with MassMutual business partners, you'll help design and implement solutions that address unique security challenges, streamline incident response, and automate critical workflows. The role is centered on building and maintaining resilient logging, data pipeline, and SOAR platforms in collaboration with the Security Operations Center and Security Intelligence teams, directly supporting our mission to enhance detection, response, and overall security posture.

The Team

We are looking for fast-thinking, eager-to-learn team players to join the Security Platforms team. When you join the Security Platforms team, you'll be working with a group of people who are passionate about our security, innovation, and the success of our business partners.

The Impact:

This position offers the opportunity to play a key role in advancing our enterprise's centralized logging and SOAR capabilities. You will be responsible for deploying, managing, and optimizing centralized logging tools, data pipeline, and SOAR platforms at scale, ensuring robust visibility and rapid response across the organization. In addition, you will be working with internal customers and key stakeholders such as the Security Operations Center to onboard logs, manipulate data, create playbooks, write scripts, and configure integrations.

The Minimum Qualifications

  • Bachelor's degree or equivalent professional experience
  • 5+ years of experience in the information technology field
  • 1+ years of experience with centralized logging, SIEM, and data pipelines
  • Weekly on-call duties assigned based on team rotation (currently once every 10 weeks)

The Ideal Qualifications

  • Experience maintaining SIEM and data pipeline platform stability; diagnosing and resolving platform issues; creating, responding to, and resolving alerts; and onboarding and parsing data
  • Experience with SOAR platforms
  • Working knowledge of Terraform, GitHub, Jenkins, Amazon Machine Image (AMI) updates
  • Experience with data manipulation and data science
  • Expertise with regular expressions
  • Hands on experience with or in support of any of the following vendors: Kafka, Sumo Logic, Splunk, Cribl, Crowdstrike, AWS, XSOAR, Torq, Palo Alto, Fortinet, Netskope, Google, Apple, Microsoft, Atlassian, and other applicable products preferred
  • Experience with Linux system administration
  • Experience with scripting languages such as JavaScript, Python, and Bash, and with JSON data structures
  • Experience with UNIX, Windows Servers, Java
  • Knowledge of information security systems such as firewalls, intrusion detection, antivirus, data loss protection, vulnerability scanning, Active Directory, and LDAP
  • Preferred knowledge of Database Management (MySQL, Sybase, Oracle, DB2, MS-SQL), building queries and developing stored procedures
  • Information security solutions development either from an architect or engineering perspective
  • Experience troubleshooting inside of a corporate network
  • Experience with secure data communications and applications
  • Extensive knowledge of current and upcoming IT security technologies
  • Excellent written and oral communication skills and customer service skills
  • Exceptional troubleshooting skills and ability to problem solve with little to no supervision
  • Working knowledge of ServiceNow and JIRA tickets for customer assistance
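Several of the qualifications above, regular expressions, scripting, and onboarding and parsing data, come together in log parsing. A minimal sketch with a made-up syslog-like format, not the format of any specific vendor listed above:

```python
import re

# Hypothetical log line shape: "<date> <time> <host> <process>[<pid>]: <message>"
LOG_PATTERN = re.compile(
    r"^(?P<timestamp>\S+ \S+)\s+"
    r"(?P<host>\S+)\s+"
    r"(?P<process>[\w.-]+)\[(?P<pid>\d+)\]:\s+"
    r"(?P<message>.*)$"
)

def parse_log_line(line):
    """Parse one log line into a dict of named fields, or None on no match."""
    match = LOG_PATTERN.match(line)
    return match.groupdict() if match else None
```

Named groups keep the parsed record self-describing, which matters when the same fields are later mapped into SIEM or data-pipeline schemas.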

What to Expect as Part of MassMutual and the Team

  • Regular collaboration within the Security Platforms Engineering team
  • Focused one-on-one time with your manager
  • Networking opportunities including access to Asian, Hispanic/Latinx, African American, women, LGBTQIA+, veteran and disability-focused Business Resource Groups
  • Access to learning content on Degreed and other informational platforms
  • Your ethics and integrity will be valued by a company with a strong and stable ethical business and industry-leading pay and benefits

#LI-RK1

MassMutual is an equal employment opportunity employer. We welcome all persons to apply.
If you need an accommodation to complete the application process, please contact us and share the specifics of the assistance you need.
Not Specified
Senior DevOps Engineer (AWS)
Salary not disclosed
Seattle 2 weeks ago
We are rebuilding Tennis Channel’s digital experience from the ground up — mobile apps, TV apps, content management, analytics, monitoring, and eventually live and VOD video delivery.

AWS will be our core cloud platform.

We’re looking for an experienced Senior AWS Platform / DevOps Engineer to design, deploy, and operate the backbone of our new streaming platform.

You will own our AWS environment end-to-end: infrastructure, automation, observability, and security.

As our first DevOps hire, you’ll work directly with the Head of Engineering (ex-Amazon, ex-Prime Video) and a small, senior development team.

This is a hands-on role where you’ll set up environments, implement automation, enforce security best practices, and establish standards that future engineers will follow.

Along the way, you’ll help build a greenfield streaming platform with no legacy constraints, while making infrastructure decisions that will power experiences for millions of fans worldwide.

This is a unique opportunity to collaborate with proven leaders, shape the foundation of a new digital ecosystem, and play a pivotal role in the relaunch of Tennis Channel’s direct-to-consumer business.

This role is based in Seattle.

Key Responsibilities:

Infrastructure & Automation

Design, deploy, and manage AWS infrastructure: API Gateway, Lambda, ECS Fargate, Amazon RDS (PostgreSQL & MySQL/Aurora), DynamoDB, ElastiCache, S3, CloudFront, WAF, and Kinesis.

Implement Infrastructure as Code using AWS CDK (preferred) or Terraform.

Build and maintain CI/CD pipelines (GitHub Actions or AWS CodePipeline).

Security & Compliance

Configure IAM roles, policies, and least-privilege access.

Implement and monitor CloudTrail, GuardDuty, Security Hub, WAF.

Enforce tagging, cost controls, and guardrails across environments.

Observability & Reliability

Set up CloudWatch metrics, dashboards, and alarms.

Implement distributed tracing (AWS X-Ray) and log analytics (OpenSearch or equivalent).

Create synthetic canaries to validate key user flows (login, playback, API health).

Establish backup, recovery, and disaster recovery strategies.

Analytics & Data

Implement data ingestion pipelines (e.g., Kinesis Firehose → S3 → Athena).

Deliver product dashboards with QuickSight or equivalent.
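The ingestion pattern mentioned above (Kinesis Firehose → S3 → Athena) usually relies on date-partitioned S3 prefixes so Athena can prune scans. A minimal sketch of the key layout only; the stream name, file name, and prefix scheme are hypothetical:

```python
from datetime import datetime, timezone

def partitioned_key(stream, event_time, filename):
    """Build a Hive-style partitioned S3 key (year=/month=/day=/hour=)
    so Athena can prune partitions when queries filter by date."""
    t = event_time.astimezone(timezone.utc)
    return (
        f"{stream}/"
        f"year={t.year:04d}/month={t.month:02d}/"
        f"day={t.day:02d}/hour={t.hour:02d}/{filename}"
    )
```

Firehose can emit this layout natively via dynamic partitioning; registering the partitions with Athena (for example through partition projection) is what lets queries skip unrelated prefixes.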

Video Delivery

Extend infrastructure into AWS Media Services (MediaLive, MediaPackage, MediaConvert, MediaTailor) for live and VOD streaming.

Support secure, global video delivery through CloudFront and related services.

Required Qualifications:

  • 5+ years of professional experience in AWS cloud infrastructure and DevOps
  • Hands-on experience with Amazon RDS (PostgreSQL & MySQL) in production: deployment, performance tuning, backups, upgrades, and security
  • Strong background with core AWS services: API Gateway, Lambda, ECS Fargate, DynamoDB, S3, CloudFront, WAF, ElastiCache
  • Proficient with Infrastructure as Code (AWS CDK strongly preferred; Terraform/CloudFormation acceptable with willingness to switch)
  • Skilled in CI/CD pipelines (GitHub Actions, CodePipeline, or Jenkins)
  • Strong grasp of IAM, VPC networking, and AWS security best practices
  • Experience with observability: CloudWatch, X-Ray, logging pipelines, and alerting
  • Proficiency in scripting languages (Python, Bash)
  • Strong communication and documentation skills; able to work independently in a small, fast-moving team

Nice to have:

  • Experience deploying and scaling a headless CMS (Strapi, Contentful, Sanity, etc.)
  • Familiarity with streaming video architectures
  • Knowledge of analytics pipelines (Kinesis, Athena, QuickSight)
  • AWS certifications (Solutions Architect, SysOps, DevOps Engineer)

Tennis Channel is proud to be an equal opportunity employer and a drug-free workplace.

Employment practices will not be influenced or affected by virtue of an applicant's or employee's race, color, religion, sex (including pregnancy, gender identity, and sexual orientation), national origin, age, disability, genetic information, military or veteran status or any other characteristic protected by law.

About PickleballTV

Pickleballtv (PBTV) is the 24-hour television home of America’s fastest growing sport.

With coverage of tournaments throughout the year, the network offers 1,000+ hours of live matches from the game’s top professionals and biggest stars.

PBTV also provides viewers with first-class instruction, exclusive lifestyle programming and studio news content and more.

About Tennis Channel

Tennis Channel is the media home to two twenty-four-hour television networks, a subscription streaming service, an online magazine, and podcasts dedicated to the sport and its unique lifestyle.

The tennis-media hub is home to every aspect of the wide-ranging, worldwide tennis community.

Tennis Channel is carried nationwide by every one of the top ten pay-TV service providers.

About Sinclair

Sinclair, Inc. (Nasdaq: SBGI) is a diversified media company and a leading provider of local news and sports.

The Company owns, operates and/or provides services to 178 television stations in 81 markets affiliated with all major broadcast networks; owns Tennis Channel, the premium destination for tennis enthusiasts; multicast networks CHARGE, Comet, ROAR and The Nest.

Sinclair’s AMP Media produces a growing portfolio of digital content and original podcasts.

Additional information about Sinclair can be found at .

About the Team

The lifeblood of our organization is our people.

We have a compelling story, a goal-oriented culture, and we take really good care of people.

How good? Here is a glimpse: great benefits, open-door policy, upward mobility and a strong desire to see you succeed.

Ready to be part of a winning team? Let’s talk.

The base salary compensation range for this role is $140,000 to $170,000.

Final compensation for this role will be determined by various factors such as a candidate's relevant work experience, skills, certifications, and geographic location.

Full time positions are eligible for benefits that include participation in a retirement plan, life and disability insurance, health, dental and vision plans, flexible spending accounts, sick leave, vacation time, personal time, parental leave and employee stock purchase plan.

#tennis
Not Specified
Java Solution Architect
🏢 Coforge
Salary not disclosed
Chicago, IL 2 weeks ago

Role: Java Solution Architect

Skills: Core Java, Azure (Public Cloud), AI-assisted development tools (GitHub Copilot, etc.)

Experience: 15 + Years

Location: Chicago


We are seeking a highly skilled and experienced Java Solution Architect who brings more than traditional development capability—someone with a core engineering mindset, strong independent problem‑solving skills, and the ability to influence and uplevel an existing team. This role is central to our transformation journey to modernize legacy systems and move toward an autopilot engineering model. The ideal candidate is not an “order-taker” but a Better Engineer—someone who can diagnose issues, architect solutions, and drive modernization without needing step-by-step direction.


Job Description:

Tech Stack: Core Java, Azure (Public Cloud), AI-assisted development tools (GitHub Copilot, etc.)

Transformation & Coaching

  • Assessment & Roadmap Creation: Evaluate the current technology landscape, identify gaps, and shape the transformation roadmap for modernization.
  • Culture Change Agent: Break entrenched “comfort zone” patterns across a team with 10+ years of legacy experience. Mentor team members and interns on modern engineering practices, automation-first thinking, and cloud-native principles.
  • AI Adoption Leader: Promote and operationalize the use of GitHub Copilot and other AI tools for unit test generation, code reviews & optimization, issue impact diagnosis, and development efficiency.

Technical Leadership & Delivery

  • Self-Sufficient Execution: Build systems independently without needing detailed “how-to” instructions. Should naturally handle engineering essentials such as timeouts, retries, dead-letter queues, error handling, and resiliency patterns.
  • Hands-on Development: Write high‑quality Java code while enhancing CI/CD pipelines, improving automated testing, and increasing code coverage.
  • Cloud & Infrastructure Ownership: Drive Azure-related engineering tasks including IAM, horizontal/vertical scaling, monitoring, and Infrastructure as Code (Terraform). Operate with a philosophy of application teams owning their infrastructure.
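The resiliency essentials named above, timeouts, retries, and dead-letter handling, are language-agnostic patterns. A sketch in Python (in the Java stack this posting describes, Resilience4j or Spring Retry would be the idiomatic choice); the operation and callback names are illustrative:

```python
import time

def call_with_retry(operation, max_attempts=3, base_delay=0.01, dead_letter=None):
    """Retry a flaky operation with exponential backoff; after max_attempts
    failures, hand the final error to a dead-letter callback instead of raising."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception as exc:
            if attempt == max_attempts:
                if dead_letter is not None:
                    dead_letter(exc)  # park the failure for later inspection
                    return None
                raise
            # back off: base_delay, 2*base_delay, 4*base_delay, ...
            time.sleep(base_delay * (2 ** (attempt - 1)))
```

The dead-letter callback stands in for publishing to a real dead-letter queue (for example Azure Service Bus DLQ), so transient faults retry while persistent ones are parked rather than lost.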

Operational Excellence

  • Documentation & Knowledge Sharing: Transition knowledge from individuals to scalable, documented, and automated processes; establish best practices and engineering playbooks.
  • Resiliency & Scalability Engineering: Architect broadly stable systems that minimize production issues and increase platform reliability.
Not Specified
Java Full Stack Developer
Salary not disclosed
Smithfield 2 weeks ago
Position Description: The Application Developer, Advanced Technology, shall translate application requirements into web-based solutions using available technology.

This individual shall apply new and emerging technologies to the software development process.

Required Skills & Qualifications: B.S. in Computer Science, Engineering, Mathematics, or equivalent experience.

Advanced full-stack software development experience, building enterprise web and middle-tier applications using Angular and core Java with Spring/Spring Boot.

Angular 16+.

Leadership to guide, encourage, and motivate your fellow engineers.

Experience working in an Agile Scrum development environment.

Experience in REST API development via a gateway.

Experience with Docker, Kubernetes, Terraform, and AWS cloud deployment/application management.

Experience with unit testing and test automation libraries/strategies (Cypress/Karate/Cucumber).

Experience building and deploying applications using continuous integration pipelines and automated deployment tools such as Jenkins.

Experience using source control and pull requests for collaborative development in code repository tools such as GitHub.

Strong communications and problem-solving skills.

Preferred Skills: Experience with PDF generation and an understanding of PDF reporting; Docker; Kubernetes; Terraform; Jenkins.

Duties and Responsibilities: Developing and deploying software in a fast-paced environment.

Collaborating with colleagues on technical implementation and process improvement.

Working closely with technology and business partners to design new features.

Passion for learning the latest technologies and frameworks.

Building positive relationships within and across teams.

Mentor and be mentored by your team members and partners.
permanent
Machine Learning Engineer
🏢 Jobot
Salary not disclosed
San Francisco 2 weeks ago
Machine Learning Engineer - Full-time/Permanent REMOTE position

This Jobot Job is hosted by: Dallas Gillespie. Are you a fit? Easy Apply now by clicking the "Apply Now" button and sending us your resume.

Salary: $150,000 - $180,000 per year

A bit about us: Full-time, fully remote position with excellent benefits. Candidates must live in the PST or MDT time zones.

CST zone candidates will be considered.

EST Candidates are not eligible.

Why join us? We value you! Our impressive benefits package, bonus potential, and salary reflect that! Work somewhere you are recognized for your contributions! Apply Today!

Job Details

Minimum Education: Bachelor’s degree in computer science, artificial intelligence, informatics, or a closely related field.

Master’s degree in computer science, engineering or closely related field preferred.

Minimum Experience: 3 or more years of relevant Machine Learning Engineer Experience.

Proven experience with: Artificial intelligence and machine learning platforms (e.g., AWS, Azure or GCP).

Containerization technologies (e.g., Docker) or container orchestration platforms (e.g., Kubernetes).

CI/CD tools (e.g., Github Actions).

Programming languages and frameworks (e.g., Python, R, SQL).

MLOps engineering principles, agile methodologies, and DevOps life-cycle management.

Technical writing and documentation for AI/ML models and processes.

Healthcare data and machine learning use cases.

Healthcare Expertise: Understanding of healthcare regulations and standards, and familiarity with Electronic Health Records (EHR) systems, including integrating machine learning models with these systems.

REQUIRED qualifications: Experience in managing end-to-end ML lifecycle.

Experience in managing automation with Terraform.

Containerization technologies (e.g., Docker) or container orchestration platforms (e.g., Kubernetes).

CI/CD tools (e.g., Github Actions).

Programming languages and frameworks (e.g., Python, R, SQL).

Deep understanding of coding, architecture, and deployment processes. Strong understanding of critical performance metrics.

Extensive experience in predictive modeling, LLMs, and NLP. Ability to effectively articulate the advantages and applications of the RAG framework with LLMs.

Accountabilities:

Production Deployment and Model Engineering: Proven experience in deploying and maintaining production-grade machine learning models, with real-time inference, scalability, and reliability.
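To make the RAG requirement above concrete: the framework retrieves relevant documents and prepends them to the LLM prompt so answers are grounded in that context. A toy retrieval step, using word-overlap scoring as a stand-in for the embedding-similarity search a real system would use; the documents and scoring are illustrative only:

```python
def retrieve(query, documents, k=2):
    """Rank documents by word overlap with the query and return the top k --
    a crude stand-in for embedding-based similarity search."""
    q_words = set(query.lower().split())
    return sorted(
        documents,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )[:k]

def build_prompt(query, documents, k=2):
    """Assemble the augmented prompt: retrieved context first, then the question."""
    context = "\n".join(retrieve(query, documents, k))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

The augmented prompt is then sent to the LLM; the advantage the posting asks candidates to articulate is that answers can cite current, domain-specific sources instead of relying solely on the model's training data.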

Scalable ML Infrastructures: Proficiency in developing end-to-end scalable ML infrastructures using on-premise and cloud platforms such as Amazon Web Services (AWS), Google Cloud Platform (GCP), or Azure.

Engineering Leadership: Ability to lead engineering efforts in creating and implementing methods and workflows for ML/GenAI model engineering, LLM advancements, and optimizing deployment frameworks while aligning with business strategic directions.

AI Pipeline Development: Experience in developing AI pipelines for various data processing needs, including data ingestion, preprocessing, and search and retrieval, ensuring solutions meet all technical and business requirements.

Collaboration: Demonstrated ability to collaborate with data scientists, data engineers, analytics teams, and DevOps teams to design and implement robust deployment pipelines for continuous improvement of machine learning models.

Continuous Integration/Continuous Deployment (CI/CD) Pipelines: Expertise in implementing and optimizing CI/CD pipelines for machine learning models, automating testing and deployment processes.

Monitoring and Logging: Competence in setting up monitoring and logging solutions to track model performance, system health, and anomalies, allowing for timely intervention and proactive maintenance.

Version Control: Experience implementing version control systems for machine learning models and associated code to track changes and facilitate collaboration.

Security and Compliance: Knowledge of ensuring machine learning systems meet security and compliance standards, including data protection and privacy regulations.

Documentation: Skill in maintaining clear and comprehensive documentation of ML Ops processes and configurations.

Interested in hearing more? Easy Apply now by clicking the "Apply Now" button.

Jobot is an Equal Opportunity Employer.

We provide an inclusive work environment that celebrates diversity and all qualified candidates receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity, religion, national origin, age (40 and over), disability, military status, genetic information or any other basis protected by applicable federal, state, or local laws.

Jobot also prohibits harassment of applicants or employees based on any of these protected categories.

It is Jobot’s policy to comply with all applicable federal, state and local laws respecting consideration of unemployment status in making hiring decisions.

Sometimes Jobot is required to perform background checks with your authorization.

Jobot will consider qualified candidates with criminal histories in a manner consistent with any applicable federal, state, or local law regarding criminal backgrounds, including but not limited to the Los Angeles Fair Chance Initiative for Hiring and the San Francisco Fair Chance Ordinance.

Information collected and processed as part of your Jobot candidate profile, and any job applications, resumes, or other information you choose to submit is subject to Jobot's Privacy Policy, as well as the Jobot California Worker Privacy Notice and Jobot Notice Regarding Automated Employment Decision Tools which are available at /legal.

By applying for this job, you agree to receive calls, AI-generated calls, text messages, or emails from Jobot, and/or its agents and contracted partners.

Frequency varies for text messages.

Message and data rates may apply.

Carriers are not liable for delayed or undelivered messages.

You can reply STOP to cancel and HELP for help.

You can access our privacy policy here: /privacy-policy
Not Specified
DevSecOps Engineer
Salary not disclosed
Berkeley Heights, NJ 2 weeks ago

LTIMindtree is an equal opportunity employer that is committed to diversity in the workplace. Our employment decisions are made without regard to race, color, creed, religion, sex (including pregnancy, childbirth or related medical conditions), gender identity or expression, national origin, ancestry, age, family-care status, veteran status, marital status, civil union status, domestic partnership status, military service, handicap or disability or history of handicap or disability, genetic information, atypical hereditary cellular or blood trait, union affiliation, affectional or sexual orientation or preference, or any other characteristic protected by applicable federal, state, or local law, except where such considerations are bona fide occupational qualifications permitted by law.

A little about us...

Role: Azure DevOps Engineer

Location: Berkeley Heights, NJ


Job Description:

1. Extensive hands-on experience with GitHub Actions, writing workflows in YAML using reusable templates

2. Extensive hands-on experience with application CI/CD pipelines both for Azure and on-prem for different frameworks

3. Hands-on experience with Azure DevOps and with CI/CD pipeline migration programs, preferably from Azure DevOps to GitHub Actions

4. Proficiency in integrating and consuming REST APIs to achieve automation through scripting

5. Hands-on experience with at least one scripting language, and has built out-of-the-box automations for platforms like PeopleSoft, SharePoint, MDM, etc.

6. Hands on experience with CI/CD of databases

7. Good to have: experience with infrastructure as code, including ARM templates, Terraform, Azure CLI, and Azure PowerShell modules

8. Exposure to monitoring tools like ELK, Prometheus, and Grafana
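Items 1 and 4 above intersect in practice: triggering a GitHub Actions workflow through GitHub's REST `workflow_dispatch` endpoint from a script. A sketch that only constructs the request rather than sending it; the owner, repo, and workflow file names are placeholders:

```python
import json
import urllib.request

def dispatch_request(owner, repo, workflow_file, ref, token, inputs=None):
    """Build (but do not send) the GitHub REST call that triggers a
    workflow_dispatch event for the given workflow file."""
    url = (
        f"https://api.github.com/repos/{owner}/{repo}"
        f"/actions/workflows/{workflow_file}/dispatches"
    )
    body = json.dumps({"ref": ref, "inputs": inputs or {}}).encode()
    return urllib.request.Request(
        url,
        data=body,
        method="POST",
        headers={
            "Accept": "application/vnd.github+json",
            "Authorization": f"Bearer {token}",
        },
    )
```

Sending the request with `urllib.request.urlopen` returns HTTP 204 on success; the target workflow must declare a `workflow_dispatch` trigger for the call to be accepted.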



Not Specified
Domain Architects
Salary not disclosed
Northbrook 2 weeks ago
Job Summary Job Description Medline Industries, LP is seeking Domain Architects to join our team in Northbrook, IL.

Job Description
Architect and design a robust, scalable, reliable, and secure Infrastructure Automation Platform to support our cloud migration, software modernization, and business objectives.

Deliver Automation Platform solutions to serve customer, product, developer, and operations needs throughout the entire product life cycle, to enable software engineering teams to increase the velocity of code and application releases, and to enable infrastructure teams to manage our cloud infrastructure through code and automated deployments.

Foster and evangelize a team culture where serving platform customers is the primary mission, continuously monitoring and analyzing customer feedback for platform-related pain points and empathizing with customer demands and requirements.

Create detailed architectural specifications to document the architecture decisions.

Communicate the architectural specifications to technical teams and business sponsors in a directly actionable, clear, and succinct manner.

Drive adoption of the automation platform and services through advocacy and education to the broader engineering and operations organizations.

Research market trends and conduct competitive analysis for infrastructure and software delivery automation products and services to ensure our automation platform becomes best in class.

Troubleshoot platform issues and work with the engineering, infrastructure, and operations teams to resolve them.

Collaborate with Enterprise Architecture, Software Engineering, and Development teams to deliver self-service platform capabilities to improve the developer experience.

Collaborate with IT Operations and Network Operations Center to enable management and monitoring of cloud infrastructure and applications and deliver stable and fault tolerant solutions to achieve application availability targets.

Collaborate with Quality Assurance Automation team to incorporate automated testing for infrastructure and application deployment pipelines.

Partner with Compliance and Security teams to ensure infrastructure and applications meet compliance standards and are safe and secure against cybersecurity threats.

Participate in Agile ceremonies, including daily stand-ups, sprint planning, sprint reviews, and retrospectives, to provide technical leadership and guidance.

Participate in ITIL-based change, incident, and problem management processes for automation platform solutions.

Manage and optimize the platform expense and budget in the form of product show/charge backs, in partnership with the IT Finance Division.

Telecommuting is permitted, but applicant must work from the worksite location at least 3 days per week.

No additional national or international travel is anticipated.

Job Requirements
PRIMARY REQUIREMENTS: Bachelor's degree in Computer Engineering or a related field, or its foreign equivalent, and 8 years of relevant work experience.

In addition, experience with the following skills is required: (1) Experience using DevOps tools including Azure DevOps, GitLab, GitHub Actions, and DevOps Consulting & Architecture to orchestrate and optimize the software delivery lifecycle by integrating code versioning, automated builds, testing, and deployment.

(2) Experience using Cloud Automation tools including Azure Automation, ARM Templates, Azure LogicApps, and Cloud Architecture to provision, configure, and monitor cloud resources programmatically while minimizing human error.

(3) Experience using Infrastructure as Code tools including Terraform, Ansible, and Chef to automate infrastructure provisioning and configuration, enabling version control and consistency across environments.

(4) Experience using scripting languages including Python, PowerShell, and Bash for automation, integration scripting, and administrative task execution across Windows/Linux/cloud environments.

(5) Experience using ML/AI Tools including Azure ML, AWS SageMaker, and TensorFlow to build, train, and operationalize machine learning models that power intelligent business applications.

(6) Experience using supporting Infrastructure Automation solutions including BMC Server Automation and vRealize Orchestrator Design to automate infrastructure lifecycle tasks including provisioning, compliance, patching, and audit readiness.

(7) Experience using Process Orchestrator / Low-Code tools including MS Power Platform, BMC Atrium Orchestrator, MS Orchestrator, and UiPath to digitize, automate, and optimize IT and business processes, reducing manual intervention.

(8) Experience using ideation and innovation to research market trends on technologies including GitLab Duo and Power Automate in the automation space to solve problems, increase adoption, productivity and efficiency.
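Requirement (4) above asks for scripting in Python, PowerShell, or Bash for automation across environments. As an illustrative sketch only (the endpoint path, token, and payload shape are hypothetical placeholders, not Medline's API), a small Python helper using only the standard library might prepare a REST call that triggers an automated deployment:

```python
# Minimal sketch of REST-driven automation scripting (stdlib only).
# The base URL, "/api/deployments" path, and bearer token are hypothetical.
import json
import urllib.request


def build_deployment_request(base_url: str, token: str,
                             environment: str) -> urllib.request.Request:
    """Build (but do not send) a POST request that triggers a deployment."""
    payload = json.dumps({"environment": environment}).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/api/deployments",
        data=payload,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


# Sending is a separate step, so the request can be inspected or logged first:
#     with urllib.request.urlopen(build_deployment_request(...)) as resp:
#         result = json.loads(resp.read().decode("utf-8"))
```

Separating request construction from sending keeps the script testable without network access, which is a common pattern in administrative automation.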

JOB SITE: 2375 Waterview Drive, Northbrook, IL 60062
WORK HOURS: Full Time (8am to 5pm, Monday to Friday)
PAY RANGE: $134,000.00 to $201,000.00
Our benefit package includes health insurance, life and disability, 401(k) contributions, paid time off, etc., for employees working 30 or more hours per week on average.

For a more comprehensive list of our benefits please click here.

For roles where employees work less than 30 hours per week, benefits include 401(k) contributions as well as access to the Employee Assistance Program, Employee Resource Groups and the Employee Service Corp.

We’re dedicated to creating a Medline where everyone feels they belong and can grow their career.

We strive to do this by seeking diversity in all forms, acting inclusively, and ensuring that people have tools and resources to perform at their best.

Explore our Belonging page here.

Medline Industries, LP is an equal opportunity employer.

Medline evaluates qualified individuals without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, age, disability, neurodivergence, protected veteran status, marital or family status, caregiver responsibilities, genetic information, or any other characteristic protected by applicable federal, state, or local laws.
Solutions Architect - Managed Services (Cloud Data Platforms l Snowflake, AWS, Azure)
🏢 Jobot
Salary not disclosed
Philadelphia 2 weeks ago
Remote first / cloud-native data platforms
This Jobot Job is hosted by: Katrina McFillin
Are you a fit? Easy Apply now by clicking the "Apply Now" button and sending us your resume.

Salary: $165,000 - $190,000 per year

A bit about us: Founded over a decade ago, we are a leading AI and data services partner helping global brands design, build, and run modern data and AI platforms on Snowflake and major cloud providers.

We specialize in data engineering, analytics, and managed services, combining deep technical expertise with a global delivery model to maximize business value from data.

Why join us?
Competitive base compensation + bonus
Remote first
Comprehensive benefits
4 weeks PTO + 10 holidays
Accelerated learning and professional development with advanced training and certifications (e.g., Snowflake, cloud).

High-Impact Work: Join a specialized Elastic Operations team running mission-critical cloud data platforms for leading enterprises.

Autonomy & Collaboration: Work in a culture that prizes autonomy, creativity, transparency, and cross-functional collaboration.

Job Details
Key Responsibilities and Duties: Lead the design, architecture, and implementation of large-scale cloud-native data platform solutions on Snowflake, AWS, and Azure.

Drive data migrations, integrations, and performance tuning across data warehouses, data lakes, and distributed systems to optimize reliability and cost.

Own platform security, data governance, and process engineering to ensure robust, scalable, and continuously improving data environments.

Manage multiple client engagements as a trusted advisor, collaborating with cross-functional teams and mentoring junior engineers to maximize platform ROI and customer success.

Qualifications: Bachelor’s degree in Computer Science, Engineering, or related field (advanced degrees or relevant certifications are a plus).

At least 10 years of hands-on experience architecting, designing, implementing, and managing cloud-native data platforms (Snowflake, Redshift, Azure Data Warehouse) on AWS and/or Azure.

Proven client-facing consulting experience, including presenting to executive stakeholders and creating detailed solution documentation.

Strong technical depth in SQL, infrastructure as code (Terraform or CloudFormation), CI/CD (e.g., GitHub, Bitbucket), and modern data integration tools (e.g., AWS DMS, Azure Data Factory, Matillion, Fivetran, Spark).

Interested in hearing more? Easy Apply now by clicking the "Apply Now" button.

Jobot is an Equal Opportunity Employer.

We provide an inclusive work environment that celebrates diversity and all qualified candidates receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity, religion, national origin, age (40 and over), disability, military status, genetic information or any other basis protected by applicable federal, state, or local laws.

Jobot also prohibits harassment of applicants or employees based on any of these protected categories.

It is Jobot’s policy to comply with all applicable federal, state and local laws respecting consideration of unemployment status in making hiring decisions.

Sometimes Jobot is required to perform background checks with your authorization.

Jobot will consider qualified candidates with criminal histories in a manner consistent with any applicable federal, state, or local law regarding criminal backgrounds, including but not limited to the Los Angeles Fair Chance Initiative for Hiring and the San Francisco Fair Chance Ordinance.

Information collected and processed as part of your Jobot candidate profile, and any job applications, resumes, or other information you choose to submit is subject to Jobot's Privacy Policy, as well as the Jobot California Worker Privacy Notice and Jobot Notice Regarding Automated Employment Decision Tools which are available at /legal.

By applying for this job, you agree to receive calls, AI-generated calls, text messages, or emails from Jobot, and/or its agents and contracted partners.

Frequency varies for text messages.

Message and data rates may apply.

Carriers are not liable for delayed or undelivered messages.

You can reply STOP to cancel and HELP for help.

You can access our privacy policy here: /privacy-policy