Terraform GitHub Repository Jobs in USA
149 positions found
We're building safety-enhancing technology for aviation that will save lives. Automated aviation systems will enable a future where air transportation is safer, more convenient and fundamentally transformative to the way goods - and eventually people - move around the planet. We are a team of mission-driven engineers with experience across aerospace, robotics and self-driving cars working to make this future a reality.
As a Senior Software Engineer - Engineering Productivity at Reliable Robotics, you will design and implement software to support the development, analysis, and certification of automated aircraft systems. You will work closely with product owners and end users to develop solutions that enable and optimize engineering development workflows. The software you produce will be critical to the development and certification of the first fully autonomous aircraft.
Responsibilities
In your role as an internal tool developer, you will develop applications, infrastructure, and tools used by engineering to capture product requirements and interface definitions, model the product architecture and design, and reduce and analyze flight and lab test data. You will supercharge the engineering organization's efficiency and effectiveness by streamlining tools and processes. You will work with other teams and stakeholders to establish technical and UX design requirements for these projects and own the "plan, code, build, test, release, deploy" lifecycle of these applications and services.
Basic Success Criteria
Bachelor's degree in Computer Science, Computer Engineering, or equivalent experience
5+ years of experience with professional full-stack web development in a team setting
Professional experience with core browser technologies (JavaScript, HTML, CSS) and TypeScript
Experience structuring dynamic, model-driven data and determining data relationships
Experience working with SQL, NoSQL, and time series databases
Experience designing software architecture for both new and existing projects
Preferred Criteria
Experience using Python and libraries such as pandas, matplotlib, and Django
Experience integrating with cloud platforms and infrastructure tools such as AWS, Terraform, and Docker
Experience designing and implementing ingestion pipelines for high-throughput streams of real-time telemetry
Experience integrating business intelligence and data visualization tools such as Tableau, Power BI, Superset, Metabase
Experience developing React components and reusable libraries/tools for developers
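The flight- and lab-test data reduction mentioned in the responsibilities can be illustrated with a minimal sketch: averaging raw telemetry samples into one point per second. All field layouts and sample values here are hypothetical, not drawn from Reliable Robotics' actual systems.

```python
from itertools import groupby

def reduce_telemetry(samples):
    """Average raw (timestamp_seconds, value) samples into one point per second.

    `samples` must already be sorted by timestamp; the (timestamp, value)
    layout is a hypothetical stand-in for real telemetry records.
    """
    reduced = []
    for second, group in groupby(samples, key=lambda s: int(s[0])):
        values = [v for _, v in group]
        reduced.append((second, sum(values) / len(values)))
    return reduced

# Three samples fall in second 0, two in second 1
raw = [(0.1, 10.0), (0.5, 12.0), (0.9, 14.0), (1.2, 20.0), (1.8, 22.0)]
print(reduce_telemetry(raw))  # [(0, 12.0), (1, 21.0)]
```

In practice this kind of reduction would likely run over pandas DataFrames or a time-series database rather than plain tuples, but the grouping logic is the same.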
At Reliable Robotics, we believe that our internal tools are key ingredients to our success. Aircraft design, integration, and certification are highly complex processes requiring diligent management of data and their relationships. Traditionally a paper process, our tools enable our system designers to move faster, conduct more thorough and comprehensive analyses, and design safer aircraft systems. Come be a part of taking our products to the next level.
This position requires access to information that is subject to U.S. export controls. An offer of employment will be contingent upon the applicant's capacity to perform in compliance with U.S. export control laws.
All applicants are asked to provide documentation that legally establishes status as a U.S. person or non-U.S. person (and nationalities in the case of a non-U.S. person). Where the applicant is not a U.S. person, meaning not a (i) U.S. citizen or national, (ii) U.S. lawful permanent resident, (iii) refugee under 8 U.S.C. * 1157, or (iv) asylee under 8 U.S.C. * 1158, or not otherwise permitted to access the export-controlled technology without U.S. government authorization, the Company reserves the right not to apply for an export license for such applicants whose access to export-controlled technology or software source code requires authorization and may decline to proceed with the application process and any offer of employment on that basis.
At Reliable Robotics, our goal is to be a diverse and inclusive workforce. As an Equal Opportunity Employer, we do not discriminate on the basis of race, religion, color, creed, ancestry, sex, gender (including pregnancy, childbirth, breastfeeding, or related medical conditions), gender identity, gender expression, sexual orientation, age, non-disqualifying physical or mental disability or medical conditions, national origin, military or veteran status, genetic information, marital status, or any other basis covered by applicable law. All employment and promotion is decided on the basis of qualifications, merit, and business need.
If you require reasonable accommodation in completing an application, interviewing, completing any pre-employment testing, or otherwise participating in the employee selection process, please direct your inquiries to
Compensation Range: $215K - $300K
Sr. Data Engineer (Hybrid)
Chicago, IL
The American Medical Association (AMA) is the nation's largest professional association of physicians and a non-profit organization. We are a unifying voice and powerful ally for America's physicians, the patients they care for, and the promise of a healthier nation. To be part of the AMA is to be part of our mission to promote the art and science of medicine and the betterment of public health.
At AMA, our mission to improve the health of the nation starts with our people. We foster an inclusive, people-first culture where every employee is empowered to perform at their best. Together, we advance meaningful change in health care and the communities we serve.
We encourage and support professional development for our employees, and we are dedicated to social responsibility. We invite you to learn more about us and we look forward to getting to know you.
We have an opportunity at our corporate offices in Chicago for a Sr. Data Engineer (Hybrid) on our Information Technology team. This is a hybrid position reporting into our Chicago, IL office, requiring 3 days a week in the office.
As a Sr. Data Engineer, you will play a key role in implementing and maintaining AMA's enterprise data platform to support analytics, interoperability, and responsible AI adoption. This role partners closely with platform engineering, data governance, data science, IT security, and business stakeholders to deliver high-quality, reliable, and secure data products. This role contributes to AMA's modern lakehouse architecture, optimizing data operations and embedding governance and quality standards into engineering workflows. This role serves as a senior technical contributor within the team, providing mentorship to junior engineers and implementing engineering best practices within the data platform function, in alignment with architectural direction set by leadership.
RESPONSIBILITIES:
Data Engineering & AI Enablement
- Build and maintain scalable data pipelines and ETL/ELT workflows supporting analytics, operational reporting, and AI/ML use cases.
- Implement best-practice patterns for ingestion, transformation, modeling, and orchestration within a modern lakehouse environment (e.g., Databricks, Delta Lake, Azure Data Lake).
- Develop high-performance data models and curated datasets with strong attention to quality, usability, and interoperability; create reusable engineering components and automation.
- Collaborate with the Architecture Team, the Data Platform Lead, and federated IT teams to optimize storage, compute, and architectural patterns for performance and cost-efficiency.
- Build model-ready datasets and feature pipelines to support AI/ML use cases; serve as a technical coordination point supporting business units' AI-related infrastructure needs.
- Collaborate with data scientists and the AI Working Group to operationalize models responsibly and maintain ongoing monitoring signals.
Governance, Quality & Compliance
- Embed data governance, metadata standards, lineage tracking, and quality controls directly into engineering workflows, ensuring sound technical implementation and alignment.
- Work with the Data Governance Lead and business stakeholders to operationalize stewardship, classification, validation, retention, and access standards.
- Implement privacy-by-design and security-by-design principles, ensuring compliance with internal policies and regulatory obligations.
- Maintain documentation for pipelines, datasets, and transformations to support transparency and audit requirements.
Platform Reliability, Observability & Optimization
- Monitor and troubleshoot pipeline failures, performance bottlenecks, data anomalies, and platform-level issues.
- Implement observability tooling, alerts, logging, and dashboards to ensure end-to-end reliability.
- Support cost governance by optimizing compute resources, refining job schedules, and advising on efficient architecture.
- Collaborate with the Data Platform Lead on scaling, configuration management, CI/CD pipelines, and environment management.
- Collaborate with business units to understand data needs, translate them into engineering requirements, and deliver fit-for-purpose data solutions; share and apply best practices and emerging technologies within assigned initiatives.
- Work with IT Security and Legal/Compliance to ensure platform and datasets meet risk and regulatory standards.
Staff Management
- Lead, mentor, and provide management oversight for staff.
- Set objectives, evaluate employee performance, and foster a collaborative team environment.
- Develop staff knowledge and skills to support career development.
May include other responsibilities as assigned
REQUIREMENTS:
- Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field preferred; equivalent work experience with an HS diploma/equivalent education required.
- 5+ years of experience in data engineering within cloud environments
- Experience in people management preferred.
- Demonstrated hands-on experience with modern data platforms (Databricks preferred).
- Proficiency in Python, SQL, and data transformation frameworks.
- Experience designing and operationalizing ETL/ELT pipelines, orchestration workflows (Airflow, Databricks Workflows), and CI/CD processes.
- Solid understanding of data modeling, structured/unstructured data patterns, and schema design.
- Experience implementing governance and quality controls: metadata, lineage, validation, stewardship workflows.
- Working knowledge of cloud architecture, IAM, networking, and security best practices.
- Demonstrated ability to collaborate across technical and business teams.
- Exposure to AI/ML engineering concepts, feature stores, model monitoring, or MLOps patterns.
- Experience with infrastructure-as-code (Terraform, CloudFormation) or DevOps tooling.
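The governance and quality controls this role embeds in pipelines often reduce to validation rules enforced inside an ETL step. A minimal sketch follows; the record layout and field names are hypothetical illustrations, not AMA's actual standards or schemas.

```python
def validate_records(records, required_fields):
    """Split a batch into valid rows and rejects, a common quality gate
    embedded in an ETL step. Field names here are hypothetical."""
    valid, rejected = [], []
    for rec in records:
        # A field counts as missing if absent or empty
        missing = [f for f in required_fields if not rec.get(f)]
        if missing:
            rejected.append({"record": rec, "missing": missing})
        else:
            valid.append(rec)
    return valid, rejected

batch = [
    {"member_id": "A1", "specialty": "cardiology"},
    {"member_id": "", "specialty": "oncology"},  # rejected: empty member_id
]
valid, rejected = validate_records(batch, ["member_id", "specialty"])
print(len(valid), len(rejected))  # 1 1
```

In a lakehouse environment, the same gate would typically be expressed as Delta Lake constraints or a framework such as Great Expectations rather than hand-rolled Python, but the stewardship idea is identical: rejects are captured with a reason, not silently dropped.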
The American Medical Association is located at 330 N. Wabash Avenue, Chicago, IL 60611 and is convenient to all public transportation in Chicago.
This role is an exempt position, and the salary range for this position is $115,523.42-$150,972.44. This is the lowest to highest salary we believe we would pay for this role at the time of this posting. An employee's pay within the salary range will be determined by a variety of factors including but not limited to business consideration and geographical location, as well as candidate qualifications, such as skills, education, and experience. Employees are also eligible to participate in an incentive plan. To learn more about the American Medical Association's benefits offerings, please click here.
We are an equal opportunity employer, committed to diversity in our workforce. All qualified applicants will receive consideration for employment. As an EOE/AA employer, the American Medical Association will not discriminate in its employment practices due to an applicant's race, color, religion, sex, age, national origin, sexual orientation, gender identity and veteran or disability status.
THE AMA IS COMMITTED TO IMPROVING THE HEALTH OF THE NATION
Remote working/work-at-home options are available for this role.
About Pinterest:
Millions of people around the world come to our platform to find creative ideas, dream about new possibilities and plan for memories that will last a lifetime. At Pinterest, we're on a mission to bring everyone the inspiration to create a life they love, and that starts with the people behind the product.
Discover a career where you ignite innovation for millions, transform passion into growth opportunities, celebrate each other's unique experiences, and embrace the flexibility to do your best work. Creating a career you love? It's Possible.
At Pinterest, AI isn't just a feature, it's a powerful partner that augments our creativity and amplifies our impact, and we're looking for candidates who are excited to be a part of that. To get a complete picture of your experience and abilities, we'll explore your foundational skills and how you collaborate with AI.
Through our interview process, what matters most is that you can always explain your approach, showing us not just what you know, but how you think. You can read more about our AI interview philosophy and how we use AI in our recruiting process here.
About tvScientific
tvScientific is the first and only CTV advertising platform purpose-built for performance marketers. We leverage massive data and cutting-edge science to automate and optimize TV advertising to drive business outcomes. Our solution combines media buying, optimization, measurement, and attribution in one, efficient platform. Our platform is built by industry leaders with a long history in programmatic advertising, digital media, and ad verification who have now purpose-built a CTV performance platform advertisers can trust to grow their business.
tvScientific is looking for a Senior MLOps Engineer! You'll be working with a distributed engineering team on our Connected TV ad-buying platform, as we scale our Machine Learning practice. We've cracked the code on optimizing TV ad campaigns. We're scaling massively and we need your help to make that scale sustainable.
An Idealab company, tvScientific was co-founded by executives with deep roots in programmatic advertising and digital media. tvScientific helps our clients buy ads across the CTV universe, from Hulu to PlutoTV to the ad-supported tier of Disney+ and (HBO) Max. Since our acquisition by Pinterest, we're expanding our work on CTV to lift search & social advertising performance.
What you'll do:
- Scale the decision-making process and tooling for the tvScientific AI team, from our workflows to our training infrastructure to our Kubernetes deployments
- Improve the developer experience for the data science team
- Upgrade our observability tooling
- Serve as a technical lead and mentor to the team
- Make every deployment smooth as our infrastructure evolves.
What we're looking for:
- Deep understanding of Linux
- Excellent writing skills
- A systems-oriented mindset
- Experience in high-performance software (RTB, HFT, etc.)
- Software engineering experience + reliability (e.g. CI/CD) expertise
- Strong observability instincts
- Nice-To-Haves:
- Reverse-engineering experience
- Terraform, EKS, or MLOps experience
- Python, Scala, or Zig experience
- NixOS experience
- Adtech or CTV experience
- Experience deploying a distributed system across multiple clouds
- Experience in hard real-time, low-latency systems
Location: Remote
Duration: 12 Months (Possibility of extension)
Job Description:
Your Team & Role: As an Infrastructure Security Engineer for the Global Technology Workforce Identity and Access Management (IAM) Team, you will partner with technology leads and engineers to develop and improve IAM solutions. You will design, test, and engineer new and existing solutions, streamline our day-to-day workflows, and help the Identity engineering team meet project timelines and complete net-new and business-as-usual (BAU) work. The work centers on the security and engineering of our Tier Zero identity platform and Active Directory Domain Services (AD DS) systems.
Here is What You Can Expect on a Typical Day:
- Develop high-quality, well-documented engineering configuration and infrastructure solutions that adhere to all applicable client security standards
- Ensure product and infrastructure security is maintained throughout the system lifecycle, integrating new security features, patches, and updates into existing environments
- Collaborate with tech leads to understand system requirements, define stories, create technical designs, and deploy solutions
- Support engineering and other team members to understand systems end-to-end and deliver robust solutions that support positive business impact
- Write scripts and automation code to support areas such as operational excellence, production validation, and security for our Windows servers and identity infrastructure platform
- Research problems discovered internally or by stakeholders and consumers, ideate and develop solutions to mitigate without negative impact to the business
- Bring an applied understanding of relevant and emerging technologies, begin to identify opportunities to provide input to the team and coach others, and embed learning and innovation in the day-to-day
- Work on complex problems in which analysis of situations or data requires an in-depth evaluation of various factors
- Use programming languages including but not limited to PowerShell, Terraform, ARM templates, etc.
The work requires 5-8 years of experience in the following areas:
- Infrastructure: Hyper-V and Windows Server Operating systems
- Observability: System Center Operations Manager (SCOM) / Azure Monitoring
- Configuration: System Center Configuration Manager (SCCM) / Azure Arc
- Identity: Active Directory Domain Services (AD DS) / Microsoft Entra ID
- Automation: PowerShell, Infrastructure and Configuration as Code (e.g. Chef, Ansible)
- Databases: SQL Server
Experience in:
Triaging and troubleshooting identity and infrastructure issues
Development of Automation and Deployments for Hyper-V and Windows Server Infrastructure
Automated testing and validation to support non-production and production changes
Writing clear engineering and system documentation (e.g. in Confluence)
Granite delivers advanced communications and technology solutions to businesses and government agencies throughout the United States and Canada. We provide exceptional customized service with an emphasis on reliability and outstanding customer support, and our customers include over 85 of the Fortune 100. Granite has over $1.85 Billion in revenue with more than 2,100 employees and is headquartered in Quincy, MA. Our mission is to be the leading telecommunications company wherever we offer services, and to provide an environment where the value of each individual is recognized and where each person has the opportunity to further their growth and achieve success.
Granite has been recognized by the Boston Business Journal as one of the "Healthiest Companies" in Massachusetts for the past 15 consecutive years.
Our offices have on-site, fully equipped, state-of-the-art gyms for employees at zero cost.
Granite's philanthropy is unparalleled with over $300 million in donations to organizations such as Dana Farber Cancer Institute, The ALS Foundation and the Alzheimer's Association to name a few.
We have been consistently rated a "Fastest Growing Company" by Inc. Magazine.
Granite was named to Forbes List of America's Best Employers 2022, 2023 and 2024.
Granite was recently named One of Forbes Best Employers for Diversity.
Our company's insurance package includes health, dental, vision, life, disability coverage, 401K retirement with company match, childcare benefits, tuition assistance, and more.
If you are a highly motivated individual who wants to grow your career with a fast paced and progressive company, Granite has countless opportunities for you.
EOE/M/F/Vets/Disabled
General Summary of Position:
Granite is a leading provider of managed security and networking platforms. The NetOps team supports a wide range of technologies and services including Fortinet SD-WAN, Palo Alto, Cisco, and Juniper. Within this team, the Application Automation practice focuses on improving operational efficiency and scalability through scripting, automation, and tool development. This position plays a key role in supporting large enterprise environments by developing automation frameworks, streamlining complex deployments, and reducing manual intervention in day-to-day operations. Responsibilities include solution design, automation development, support for new product rollouts, Tier 3 escalations, and collaboration with engineering teams to build repeatable, efficient processes.
Duties and Responsibilities:
- Design, develop, and maintain automation scripts and tools to support network deployments, configuration management, and troubleshooting.
- Collaborate with engineering and operations teams to identify repetitive tasks and build automated solutions to reduce manual workload.
- Support new technology rollouts by creating scalable deployment frameworks and configuration templates.
- Respond to Tier 3 technical escalations, particularly those involving process inefficiencies, integration challenges, or repeat issues solvable via automation.
- Research and test emerging network automation technologies to improve toolsets and practices.
- Document automation procedures, scripts, and best practices for operational consistency and knowledge sharing.
- Ensure all automated solutions align with security and compliance standards.
- Participate in audits and reviews to ensure reliability and maintainability of automation systems.
Required Qualifications:
- Bachelor's degree in Computer Science, Network Engineering, or a related field.
- Proficiency in scripting languages such as Python, Bash, or PowerShell.
- Strong understanding of APIs, network configuration models (e.g., NETCONF, RESTCONF), and configuration automation tools (e.g., Ansible, Terraform).
- Experience supporting enterprise-grade networks, preferably in a service provider or managed services environment.
- Demonstrated ability to troubleshoot complex network issues and implement automated solutions.
- Excellent communication and documentation skills.
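The deployment frameworks and configuration templates described in these duties can be sketched with the standard library alone. Everything below is illustrative: the interface fields and device names are hypothetical, and a production team would more likely render vendor configs with Jinja2 inside Ansible than with `string.Template`.

```python
from string import Template

# Hypothetical interface-config template; real SD-WAN/router configs are
# vendor-specific and usually rendered via Jinja2/Ansible instead.
TEMPLATE = Template(
    "interface $name\n"
    " description $desc\n"
    " ip address $ip $mask\n"
)

def render_interfaces(interfaces):
    """Render one config block per interface dict, keeping data and
    presentation separate so the same data can drive many templates."""
    return "".join(TEMPLATE.substitute(i) for i in interfaces)

config = render_interfaces([
    {"name": "ge-0/0/1", "desc": "uplink", "ip": "10.0.0.1",
     "mask": "255.255.255.0"},
])
print(config)
```

The design point is the separation: structured data (an inventory) feeds a template, so a rollout to hundreds of devices becomes a data change rather than hand-edited configs.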
Splunk Engineer/Cloud Logging Engineer (CLS Support)
Job ID: 2026-2158
# of Openings: 1
Overview
Pyramid Systems is seeking a Cloud Logging Engineer (Splunk & AWS) who is responsible for ensuring the availability, performance, and security of the Centralized Logging Solution (CLS).
Responsibilities
- Advise on cost efficiency for future usage and cost optimization for current infrastructure.
- Automate the management and enforcement of policies.
- Create and maintain documentation related to architecture and operational processes for CLS (Centralized Logging Solution).
- Develop a set of best practices and architecture patterns.
- Help maintain regulatory compliance of the CLS (Centralized Logging Solution) infrastructure.
- Help monitor and maintain CLS performance, availability, and capacity.
- Help maintain application container images.
- Offer solutions for ingestion of logs to Splunk via cloud native solutions.
- Maintain all infrastructure as code.
- Provide operations monitoring of CLS platform to enable proactive issue identification, response, and resolution.
- Recommend and execute improvements to the existing CLS architecture and design with growth and scalability in mind to optimize performance, stability, reliability, and agility.
- Responsible for reporting on current infrastructure status, and planning for future usage.
- Responsible for Beats agent deployments and container infrastructure analysis, optimization, and capacity planning.
- Maintain CI/CD pipelines for configuration deployments to applications.
- Support large-scale deployments with data feeds from multiple on premise and cloud data centers.
- Upgrade, install, configure monitoring solution for AWS for Windows and Linux servers.
- Utilize automation tools such as Terraform, Ansible, AWS CloudFormation, Azure Resource Manager, or similar.
- Participate in a rotating on-call schedule and weekly off-hours maintenance.
Qualifications
- Splunk certification required.
- Candidates must be a United States citizen or Permanent Resident who has lived in the United States for at least 3 years, have a clean criminal background, and be able to obtain a Public Trust (High-Risk) position.
Bachelor's degree in Computer Science, electronics engineering, or another engineering or technical discipline; OR an AWS/Azure certification (AWS Professional/Specialty OR Azure Expert/Advanced); OR 4 years of relevant experience in one of the VAECOT suite of tools (ScienceLogic, Dynatrace, Turbot, AppDynamics)
Minimum of three (3) years of experience in leading technical teams to achieve objectives and outcomes.
Minimum of six (6) years setting up, configuring, and using AWS cloud operational tools to ensure service level agreements and performance targets are met, and continued compliance with policies, standards and guidelines.
Minimum of three (3) years specific to monitoring Centralized Logging Solution (CLS)/Splunk
Subject matter expertise with ALL VAEC Cloud Service Providers which currently includes Microsoft Azure and Amazon Web Services (AWS).
Experience programming with Splunk Search Processing Language (SPL) or equivalent (e.g., Python, PowerShell, AWS or Azure CLI).
One or more of these Splunk certifications: Splunk Core Certified Power User, Splunk Core Certified Advanced Power User, Splunk Enterprise Certified Admin, Splunk Enterprise Certified Architect, Splunk Enterprise Security Certified Admin, Splunk IT Service Intelligence Certified Admin.
Knowledge of enterprise logging, with a focus on security event logging.
Solid understanding of cloud concepts, either using Azure or AWS semantics.
Experience in one or more of the VAECOT suite of tools, shown below:
VAEC Operational Tools (VAECOT)
Some experience in one or more of the following tools:
Third party tools
* Application Performance Monitoring: Dynatrace, AppDynamics
* Cloud Security: Nessus, NetSkope, Enterprise Security External Change Council, Identity and Assessment Management, Continuous Monitoring as a Service, McAfee, eMASS, Centrify
* Cloud Governance: Turbot
* DevOps/Configuration Management/Help Desk: Ansible, Service Desk, ScienceLogic, ServiceNow, SPLUNK, Jira ServiceDesk, Cloudockit, GitHub
* Containerization: Red Hat OpenShift
* Migration: CloudKey, Version One
* Reporting: Apptio
Cloud Service Provider (CSP) Operational Tools/Services
* AWS Security: System Manager (Explorer and OpsCenter), CloudWatch, Config, CloudTrail, Elasticsearch (Kinesis DataStreams), GuardDuty, Inspector, Key Management Service (KMS), Security Hub, Directory Service, Identity and Access Management, Resource Access Manager, Cognito, Secrets Manager, Certificate Manager, Artifact
* AWS Monitoring and Logging: QuickSight, EventBridge (AWS Kinesis Data Streams), Simple Notification Service (SNS), Elasticsearch (AWS Kinesis Data Streams), CloudTrail, CloudWatch
* AWS Networking: Virtual Private Cloud (VPC), Route 53, API Gateway, Direct Connect, AppStream 2.0, Transit Gateway, Elastic Load Balancer, Firewall Manager, WAF & Shield
* AWS Storage: Cloud Tiering Services to S3 from On-Prem, Simple Storage Services (S3), S3 Glacier, Storage Gateway, Elastic File System (EFS), Backup
* Azure Security: Monitor (Log Analytics and ASC), Event Hubs, Security Center (ASC), Information Protection (AIP), Key Vault, Power BI, Network Watcher (Performance Monitor)
* Azure Monitoring and Logging: Information Protection (AIP), Advanced Threat Protection, Security Center (ASC), Key Vault, Active Directory, Role-Based Access Control (RBAC), Resource Manager (ARM), Resource Graph (ARG), Active Directory B2C, App Service, Service Trust Portal
* Azure Networking: Virtual Network, Traffic Manager, DNS, Application Gateway, ExpressRoute, Web Apps, Front Door, VPN Gateway, Load Balancer, Firewall
* Azure Storage: NetApp File Service, Storage (Blobs, Disks, Files, Queues, Tables), Storage Archive Access Tier, StorSimple, Files, Backup
Target Pay Range
The below-listed pay range for this position is not a guarantee of compensation or salary. The final offered salary will be influenced by a host of factors including, but not limited to, geographic location, Federal Government contract labor categories and contract wage rates, relevant prior work experience, specific skills and competencies, education, and certifications. Our employees value the flexibility at Pyramid Systems that allows them to balance quality work and their personal lives. We offer competitive compensation and benefits, including our Employee Stock Ownership Program, FlexPTO, and learning and development opportunities.
Pyramid Min
USD $92,168.00/Yr.
Pyramid Max
USD $138,252.00/Yr.
Why Pyramid?
Pyramid Systems, Inc. is an award-winning, technology leader, driving digital transformation across federal agencies. We empower forward-thinking innovations, accelerate production-ready software, and deliver secure solutions so federal agencies can meet their mission goals. Voted a Top Workplace, both regionally (Washington, DC) and Nationally (USA) the past 2 years (2023 and 2024) based on the feedback from our employees, we are headquartered in Fairfax, VA. and have a growing national footprint. We value and promote our Flexible Workplace approach because of the positive impacts it has on work-life integration. We remain committed to ensuring every employee's voice is heard, performance and results are recognized and rewarded, development and advancement is a focus, and diversity, equity and inclusion is a company priority. We offer competitive compensation and benefits (including a recently launched Employee Stock Ownership Plan - ESOP), a robust performance-based rewards program, and we know how to have fun! Our people and culture have endured and delivered for our clients for nearly three decades.
EEO Statement
Pyramid Systems, Inc. is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, or protected veteran status and will not be discriminated against on the basis of disability.
This position is on-site at Liberty, NC, and is a W2 contract.
An international IT service & solutions provider is seeking an IT Systems Engineer – Manufacturing Applications in the Liberty/Greensboro, NC area to ensure stability, reliability, and high performance across our mission-critical applications. This role will focus on supporting Siemens MOM (Opcenter) systems, managing SQL databases, and working with AWS infrastructure to ensure stable and efficient operations at the manufacturing site.
The ideal candidate will have experience supporting enterprise applications, databases, and cloud infrastructure in a manufacturing or operations-driven environment.
Key Responsibilities
• Support and maintain manufacturing applications including Siemens MOM / Opcenter systems
• Deploy, configure, and support applications in an AWS cloud environment
• Install, maintain, and optimize Microsoft SQL Server and PostgreSQL databases
• Troubleshoot application, database, and infrastructure issues affecting production systems
• Monitor system performance and identify opportunities for improvement
• Work with cross-functional teams including IT, engineering, and operations
• Document system configurations, procedures, and troubleshooting steps
• Generate reports related to system performance and operational activities
Qualifications
• Bachelor’s degree in Information Technology, Computer Science, or related field (or equivalent experience)
• Experience supporting enterprise applications or manufacturing systems (MES/MOM preferred)
• Experience with AWS cloud infrastructure (VPC, Security Groups, storage, etc.)
• Strong experience with Microsoft SQL Server and/or PostgreSQL
• Understanding of network fundamentals (TCP/IP, DNS, load balancing)
• Strong troubleshooting and analytical skills
• Ability to work in a fast-paced manufacturing environment
Preferred Skills
• Experience with Siemens Opcenter / MOM / MES systems
• Experience with CI/CD tools or automation tools (Terraform, Jenkins, GitLab CI, etc.)
• Experience working with manufacturing or shop-floor systems
• Experience working in cross-functional or global teams
Location
Liberty, North Carolina (On-site)
Job Description:
Principal Azure Engineer, Platform & Delivery:
The Principal Azure Engineer, Platform & Delivery is a senior technical leader responsible for designing, building, and delivering enterprise-scale Microsoft Azure solutions. This role combines deep hands-on Azure engineering expertise with ownership of delivery outcomes, often serving as the technical lead for initiatives without dedicated project management. The ideal candidate can translate complex or ambiguous business needs into secure, scalable Azure solutions and ensure they are executed predictably and effectively.
Required Qualifications:
- Deep technical experience designing and operating high-availability, scalable infrastructure including networking, storage, virtualization, and identity.
- Developing and maintaining automated deployment modules using tools like Terraform or ARM templates.
- Optimizing delivery pipelines (e.g., Azure DevOps, GitHub Actions) to ensure repeatable, secure platform services.
- Proven experience implementing enterprise Azure networking architectures.
- Experience migrating and modernizing workloads from on-premises environments to Azure.
- Implementing governance frameworks, RBAC, and security baselines using Microsoft Defender for Cloud and Azure Policy.
- Demonstrated ability to lead engineers and influence stakeholders without formal authority.
- Experience defining and implementing monitoring and observability solutions.
Key Responsibilities:
- Lead end-to-end delivery of multiple concurrent Azure initiatives from intake and design through implementation and operational handoff.
- Act as the technical project lead for Azure initiatives where no formal project manager is assigned.
- Maintain visibility into all in-flight Azure work and provide regular status updates, risk reporting, and summaries.
- Coordinate work across infrastructure, security, networking, application, and vendor teams.
- Proactively identify delivery risks and blockers and drive resolution to keep initiatives moving forward.
- Balance speed, cost, risk, and compliance when making technical and delivery tradeoff decisions.
- Mentor and guide engineers, establishing technical standards, patterns, and best practices.
- Produce high-quality technical documentation, architectural artifacts, and operational runbooks.
- Foster strong partnerships with application teams to enable successful Azure adoption.
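The governance and security-baseline work mentioned above (Azure Policy, RBAC, tagging standards) ultimately reduces to auditing resources against a required baseline. The sketch below is a tool-agnostic illustration in plain Python, not Azure Policy itself; the required tag set is an assumption.

```python
# Illustrative governance check: find resources missing required tags.
REQUIRED_TAGS = {"owner", "cost-center", "environment"}  # assumed baseline

def audit_tags(resources):
    """resources: [{"name": str, "tags": {str: str}}, ...]
    Returns {resource_name: [missing_tag, ...]} for non-compliant resources."""
    findings = {}
    for res in resources:
        missing = REQUIRED_TAGS - set(res.get("tags", {}))
        if missing:
            findings[res["name"]] = sorted(missing)
    return findings
```

In practice a policy engine evaluates rules like this continuously; the value of codifying the baseline is that compliance becomes measurable rather than anecdotal.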
Additional Skills and Experience:
- Deep proficiency in Azure compute (VMs, AKS), storage, networking (VNETs, NSGs), and identity (Microsoft Entra ID).
- Experience operating in regulated environments such as healthcare, financial services, or higher education, including frameworks like HIPAA, HITRUST, SOC 2, or GDPR.
- Working knowledge of IT service management concepts.
- Experience with Azure Cost Management and FinOps practices.
- Strong problem investigation, root cause analysis, and decision-making skills.
Education and Experience:
- Bachelor’s degree or equivalent experience.
- Minimum of 10 years of professional IT experience, with at least 5 years in a senior, architect-level, or principal cloud engineering role.
- Demonstrated experience leading enterprise-scale Azure initiatives with multiple parallel workstreams.
This position is hybrid, with a requirement to travel to Sayre, PA at least once a month.
Summary:
The Engineer, Compute System Engineering is responsible for the implementation and support of compute-based infrastructure, including public, private, and hybrid cloud deployment models, to support critical healthcare operations across The Guthrie Clinic (TGC). This role ensures high availability and performance for clinical systems, patient care services, and administrative functions across the network. The Engineer collaborates with IT teams, vendors, and hospital stakeholders to align server infrastructure with organizational goals and regulatory requirements. This position will be a technology advocate throughout the organization for the effective application of technology to meet business needs and to support business changes and growth. Technology functions include cloud computing, database, storage, data protection, virtualization, hyperconverged infrastructure, server automation, monitoring, and application delivery.
Experience:
- Three to five (3 to 5) years of experience implementing and managing Windows and open-systems server infrastructure and hybrid cloud solutions in an enterprise environment preferred; healthcare experience preferred.
- Highly experienced information systems professional with a strong technical background and a proven track record of accomplishments in a large, complex, multi-level organization.
- Strong technical knowledge of VMware ESX and Microsoft Hyper-V.
- Expertise in Microsoft Windows, Linux and AIX operating systems and management.
- Familiar with hyperconverged infrastructures such as VxRail.
- Familiar with Microsoft Azure Arc, System Center, Admin Center and SCCM.
- Familiar with cloud platforms (e.g., AWS, Azure, Google Cloud).
- Experience in infrastructure-as-code (e.g., Terraform, CloudFormation) and containerization (e.g., Docker, Kubernetes).
- Experience in scripting (PowerShell, Python, Bash, etc.)
- Familiar with application delivery solutions such as Citrix.
- Experience with storage and data protection replication methodologies.
- Experience with Epic Infrastructure such as Hyperspace.
- Experience with ITSM functionalities such as change control, CMDB and ticketing systems.
- Strong knowledge of healthcare information systems (e.g., Epic, Cerner), cybersecurity and clinical operations.
- Prior experience delivering high availability systems in a 24/7 environment across geographically dispersed business units.
- Demonstrated ability to facilitate evaluation of technologies and achieve consensus on technical standards and solutions among a diverse group of information technology professionals.
- Demonstrated commitment to customer service: has provided responsive and effective support, developed solid working relationships with customers, and delivered high-quality, value-added services that met or exceeded customer expectations.
- Equally adept at developing technology strategies and the operation of existing technical infrastructures. Significant experience and knowledge of computing architecture and implementation of networked computing structures.
- Polished professional with demonstrated information technology experience and strong communication skills that can rapidly gain and maintain credibility with customers and IT colleagues.
- Bachelor’s degree in Information Technology, Computer Science, Healthcare Administration or related field strongly desired or an equivalent combination of education and experience.
- Preferred certifications include Microsoft Certified: Azure Fundamentals, VCP-DCV, ECSA.
- Responsible for installation and maintenance of server infrastructure along with upgrading/configuration and the life cycle management of hardware.
- Monitors functions of server infrastructure to ensure acceptable performance.
- Creates and maintains documentation related to server configuration and environments.
- Serves as subject matter expert across server operating systems and solutions (Microsoft Windows Server, Linux, AIX, VMware ESX, Microsoft Hyper-V).
- Troubleshoots and resolves server and virtualization incidents.
- Maintains server patching to address security vulnerabilities.
- Collaborates with the cloud compute architect to design and build functional server environments.
- Provides level 2 escalation support and troubleshooting to resolve complex server incidents and tasks.
- Stays current on cloud and systems engineering trends (e.g., serverless computing, containerization, AI-driven automation) and evaluates their potential to enhance TGC operations.
- Ensures systems, applications, and data are highly available and backed up and/or replicated to meet disaster and business recovery requirements.
- Implements and enforces security requirements to protect Azure-based systems and data.
- Anticipates and provides solutions for complex problems and issues, recommends upgrades and enhancements. Rapidly absorbs complex technical and conceptual information to identify issues and implications. Able to present understandable alternatives to both technical and non-technical individuals at all levels of the organization.
- Monitors industry trends, maintains knowledge of developments in cloud computing, database, storage, data protection, virtualization, hyperconverged infrastructure, server automation, monitoring and application delivery.
- Promotes the use of TGC’s PMO methodology and standards to manage IT initiatives.
- Demonstrates commitment to customer service by providing responsive and effective support, developing solid working relationships with customers and IT colleagues, and delivering high quality, value-added services that exceed customer expectations.
- Demonstrates a commitment to excellence in Customer Service with all internal and external customers of TGC.
- Performs other related duties as assigned or requested.
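The patch-maintenance duty listed above is, at its core, a compliance calculation. The sketch below illustrates that idea in plain Python; the 30-day window is an assumed policy, not one stated in the posting.

```python
# Minimal patch-compliance sketch: which servers are past the patch window?
from datetime import date, timedelta

PATCH_WINDOW_DAYS = 30  # assumed organizational policy

def overdue_servers(last_patched, today):
    """last_patched: {server_name: date}. Returns names past the window."""
    cutoff = today - timedelta(days=PATCH_WINDOW_DAYS)
    return sorted(name for name, d in last_patched.items() if d < cutoff)
```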
About Us
Joining the Guthrie team allows you to become a part of a tradition of excellence in health care. In all areas and at all levels of Guthrie, you’ll find staff members who have committed themselves to serving the community.
The Guthrie Clinic is an Equal Opportunity Employer.
The Guthrie Clinic is a non-profit, integrated, practicing physician-led organization in the Twin Tiers of New York and Pennsylvania. Our multi-specialty group practice of more than 500 physicians and 302 advanced practice providers offers 47 specialties through a regional office network providing primary and specialty care in 22 communities. Guthrie Medical Education Programs include General Surgery, Internal Medicine, Emergency Medicine, Family Medicine, Anesthesiology and Orthopedic Surgery Residency, as well as Cardiovascular, Gastroenterology and Pulmonary Critical Care Fellowship programs. Guthrie is also a clinical campus for the Geisinger Commonwealth School of Medicine.
Remote working/work at home options are available for this role.
Role: Cyber Security Architect – Linux, Ansible & Terraform
Location: Silver Spring, MD; DC; Techwood, ATL – Onsite
Job Responsibilities / Typical Day in the Role
• Perform design reviews to evaluate security controls
• Identify and communicate opportunities to enhance the security posture of WBD
• Build and/or manage enterprise security platforms effectively (SaaS, on-premises, or in the cloud)
• Communicate effectively across all levels of management to articulate WBD security goals and vision
• Have a team-player mentality; strive to contribute to team cohesion, but be able to work independently if the need arises
• Plan, design, engineer, and implement security-related technologies
• Understand technical security issues and their implications within WBD business units, and communicate them effectively to management and other business leaders
• Configure, troubleshoot, and maintain security infrastructure – including software and hardware in cloud environments, as well as on-premises.
• Conduct security audits and assessments to regularly determine the effectiveness of security platforms and identify areas of improvement.
• Harden, audit, monitor, and log hosts and operating systems with appropriate security controls, meeting security best practices and business goals
• Research and explore emerging security technologies and determine their appropriate use within the company.
• Prepare, document, and create standard operating procedures and protocols.
• Crosstrain and mentor other team members as needed
Must Have Skills / Requirements
1) Implementing advanced cyber security technology in a complex environment
a. 5+ years of experience; hands-on experience in security engineering and in building, designing, and maintaining enterprise security tools.
2) Scripting experience (using Python, Go, or other equivalent languages)
a. 5+ years of experience.
3) Hands-on Experience with automation technologies
a. 3+ Years of experience; Terraform, Ansible, CloudFormation, etc.
4) Linux Experience.
a. 5+ years of experience; Ability to construct and maintain complex network infrastructures.
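The hardening and auditing responsibilities above often come down to comparing live configuration against an expected baseline. The snippet below is a hypothetical sketch of such an audit for an sshd_config-style file; the expected values are assumptions, not an official benchmark.

```python
# Hypothetical hardening audit: flag settings that differ from a baseline.
EXPECTED = {"PermitRootLogin": "no", "PasswordAuthentication": "no"}  # assumed

def audit_sshd(config_text):
    """Parse key/value lines, skip comments, and report deviations.
    Returns {setting: actual_value_or_None} for each non-compliant setting."""
    actual = {}
    for line in config_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        parts = line.split(None, 1)
        if len(parts) == 2:
            actual[parts[0]] = parts[1]
    return {k: actual.get(k) for k, v in EXPECTED.items() if actual.get(k) != v}
```

Real-world baselines (e.g., CIS benchmarks) cover far more settings, but the reconcile-against-baseline shape is the same.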
Technology requirements:
• Engineer and administer security platforms including SIEM/SOAR systems, endpoint detection and response, vulnerability management, anomaly detection, and cloud analysis.
• Experience in managing the Brinqa vulnerability management platform and experience with Groovy programming language
• Must have 5+ years of scripting experience (using Python or other equivalent languages)
• Hands-on Experience in public cloud infrastructures like AWS (Amazon Web Services)
Nice to Have Skills / Preferred Requirements
1) Security and cloud certifications are a plus (CISSP, Splunk Admin, AWS Solutions Architect).
2) Media/entertainment or distributed global network experience.
Soft Skills
1) Hands-on technical experience with networking and computing system architectures, specifically their security aspects.
2) Thorough understanding of information security principles, techniques, policy frameworks, and best practices
3) Hands-on technical experience with compliance and regulatory frameworks and how they affect architecture designs and review
Cloud Systems Engineer
Location: Pleasanton, CA (3 days onsite)
Duration: 9-Month W2 Contract (Potential Extension)
Pay Rate: $85–$100/hour (DOE)
Employer: Russell Tobin (supporting a leading enterprise retail organization)
Job Overview
Russell Tobin is supporting a leading enterprise retail organization in hiring a Cloud Systems Engineer to join its Compute & Storage team.
This is a project-focused hybrid infrastructure engineering role — not an operations support position. The engineer will design, build, and automate infrastructure platforms across cloud and on-prem environments as the organization continues major data center retirement and cloud migration initiatives.
The ideal candidate is a strong Linux engineer with deep automation expertise and experience building scalable infrastructure solutions in enterprise environments.
Key Responsibilities
- Design and build hybrid IaaS infrastructure across Azure, Oracle Cloud Infrastructure (OCI), Google Cloud Platform (GCP), and on-prem systems
- Support large-scale data center migration initiatives to cloud platforms
- Engineer and modernize infrastructure provisioning pipelines
- Develop automation for cloud orchestration using Terraform and configuration management tools
- Provide Level 3 engineering support for complex Linux and Windows infrastructure (project-based work)
- Build new infrastructure platforms to support product and warehouse systems
- Collaborate cross-functionally with infrastructure and product teams
- Participate in occasional deployment windows (some nights during implementation phases)
Required Qualifications
- 5–6+ years of systems engineering experience
- Strong Linux administration and debugging experience
- Experience building new infrastructure platforms
- Automation expertise (Terraform, Ansible, Python)
- Familiarity with Chef (expertise not required; willingness to work with it)
- Experience with virtualized environments (VMware ESX, Nutanix, or similar)
- Experience supporting Windows Server environments
- Experience working in enterprise IaaS environments
Preferred Qualifications
- Azure infrastructure experience
- Oracle Cloud Infrastructure (OCI) experience
- Google Cloud Platform (GCP) exposure
- Oracle KVM
- Warehouse Management Systems experience
- Experience modernizing provisioning pipelines
Top Skill Priorities
- Automation skillsets (Terraform, Ansible, Python)
- Virtualized platforms
- Infrastructure solutions mindset
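The "infrastructure solutions mindset" behind tools like Terraform is reconciliation: compare desired state against actual state and compute the changes. The sketch below illustrates that idea in plain Python; it is not Terraform's engine, just the core diff concept.

```python
# Illustrative desired-vs-actual reconciliation, the core idea behind
# an IaC "plan" step (simplified to resource names only).
def plan(desired, actual):
    """Compute create/destroy sets from desired vs. actual resource names."""
    desired, actual = set(desired), set(actual)
    return {
        "create": sorted(desired - actual),
        "destroy": sorted(actual - desired),
    }
```

Real tools also diff attributes of resources that exist in both sets (producing in-place updates), but create/destroy is the skeleton.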
Work Environment
- 3 days onsite in Pleasanton, CA
- Project-focused engineering team
- Enterprise-scale hybrid cloud environment
- Contract opportunity with strong potential for extension
Pride Global offers eligible employees comprehensive healthcare coverage (medical, dental, and vision plans), supplemental coverage (accident insurance, critical illness insurance, and hospital indemnity), 401(k) retirement savings, life & disability insurance, an employee assistance program, legal support, auto and home insurance, pet insurance, and employee discounts with preferred vendors.
*Please include your LinkedIn on your resume
Job Summary: Seeking an experienced Cloud Platform Engineer with strong Red Hat OpenShift and Linux systems engineering expertise.
This role focuses on designing, deploying, and operating enterprise-scale OpenShift platforms in on-prem datacenter environments, working closely with SRE teams and program management.
Key Responsibilities:
Platform Engineering
- Design, deploy, and manage enterprise-scale Red Hat OpenShift clusters in on-prem datacenter environments.
- Architect highly available, scalable, and secure OpenShift platforms.
- Implement cluster lifecycle management (installation, upgrades, patching, scaling).
- Configure networking, storage, ingress, and security components for OpenShift.
Infrastructure Build & Automation
- Build and automate infrastructure in datacenter environments (compute, storage, networking).
- Integrate OpenShift with virtualization platforms (VMware/other hypervisors as applicable).
- Develop Infrastructure-as-Code (IaC) solutions using tools such as Terraform, Ansible, or similar.
- Implement CI/CD pipelines for platform deployments and updates.
Linux Systems Engineering
- Provide deep Linux system administration and troubleshooting support.
- Optimize OS-level configurations for performance, reliability, and security.
- Automate system configuration and compliance management.
- Diagnose and resolve complex kernel, networking, and storage issues.
Reliability & Operations
- Partner closely with the SRE team to establish SLOs, SLIs, monitoring, and alerting.
- Drive observability implementation (logging, metrics, tracing).
- Participate in incident management, root cause analysis (RCA), and remediation.
- Ensure platform resiliency, performance tuning, and capacity planning.
Program & Cross-Functional Collaboration
- Work with Program Management to drive large-scale OpenShift implementation milestones.
- Provide technical input into roadmap planning, timelines, and risk mitigation.
- Collaborate with security, networking, storage, and application teams.
- Document architecture, standards, and operational procedures.
Security & Compliance
- Implement RBAC, security policies, and compliance controls within OpenShift.
- Harden clusters according to enterprise security standards.
- Support vulnerability management and patch governance processes.
Required Qualifications:
- 5+ years of Linux systems engineering experience.
- 3+ years of hands-on experience with Red Hat OpenShift (OCP 4.x preferred).
- Experience with on-prem infrastructure and datacenter environments.
- Strong knowledge of Kubernetes, networking, storage, and virtualization.
- Experience with automation tools (Terraform, Ansible, GitOps).
Preferred Qualifications:
- Red Hat certifications (RHCE, OpenShift).
- Experience with enterprise-scale OpenShift deployments.
- Familiarity with SRE practices, DevOps/GitOps, and monitoring tools.
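The SLO/SLI work named in the reliability duties above rests on simple error-budget arithmetic: an availability target implies a fixed amount of allowed downtime per window. The sketch below shows that calculation; the 99.9% target is an assumed example, not a figure from the posting.

```python
# Back-of-the-envelope SLO error-budget arithmetic (illustrative only).
SLO_TARGET = 0.999  # assumed 99.9% availability target

def error_budget_minutes(window_days=30):
    """Minutes of allowed downtime in the window under the SLO target."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1 - SLO_TARGET)
```

For a 30-day window at 99.9%, the budget works out to roughly 43 minutes, which is why alerting and incident response timelines matter so much at this tier.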
At MVP Health Care, we're on a mission to create a healthier future for everyone. That means embracing innovation, championing equity, and continuously improving how we serve our communities. Our team is powered by people who are curious, humble, and committed to making a difference-every interaction, every day. We've been putting people first for over 40 years, offering high-quality health plans across New York and Vermont and partnering with forward-thinking organizations to deliver more personalized, equitable, and accessible care. As a not-for-profit, we invest in what matters most: our customers, our communities, and our team.
What's in it for you:
- Growth opportunities to uplevel your career
- A people-centric culture embracing and celebrating diverse perspectives, backgrounds, and experiences within our team
- Competitive compensation and comprehensive benefits focused on well-being
- An opportunity to shape the future of health care by joining a team recognized as a Best Place to Work For in the NY Capital District, one of the Best Companies to Work For in New York, and an Inclusive Workplace.
You'll contribute to our humble pursuit of excellence by bringing curiosity to spark innovation, humility to collaborate as a team, and a deep commitment to being the difference for our customers. Your role will reflect our shared goal of enhancing health care delivery and building healthier, more vibrant communities.
What You'll Get to Do
- Engineer, implement, and maintain secure, scalable, and highly available cloud infrastructure utilizing Microsoft Azure and other cloud technologies.
- Collaborate closely with application teams, architects, security, networking, and leadership to deliver reliable cloud solutions.
- Automate cloud deployments and configurations using Terraform, Bicep, or ARM templates.
- Build and optimize CI/CD pipelines for infrastructure deployments.
- Drive adoption of cloud engineering standards, architecture patterns, and best practices.
- Implement cloud monitoring, logging, and alerting using Azure Monitor, Log Analytics, and Application Insights.
- Continuously optimize cloud cost, performance, and scalability across workloads
- Participate in incident response and troubleshooting to ensure uptime and performance.
- Evaluate new Azure services, run proofs-of-concept, and drive platform innovation.
- Mentor junior engineers and contribute to a culture of learning and technical excellence.
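IaC work like the Terraform/Bicep automation described above commonly layers a base configuration with per-environment overrides. The helper below is a generic, hypothetical sketch of that merge pattern in plain Python, not any specific tool's variable-precedence rules.

```python
# Illustrative layered-config merge: environment overrides win, nested
# dictionaries merge recursively, and neither input is mutated.
def merge_config(base, override):
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(base.get(key), dict):
            merged[key] = merge_config(base[key], value)
        else:
            merged[key] = value
    return merged
```

Keeping the merge pure (no mutation) makes it easy to test each environment's effective configuration before anything is deployed.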
Skills and Experience
- Education, Licensures, & Certifications
- Bachelor's degree in computer science, information technology, system engineering, system analysis, or equivalent experience.
- Required Job Skills
- Strong technical experience working in Microsoft Azure including compute, storage, networking, security, and automation.
- 5+ years in Information Technology, with at least 3+ years in Azure-based Cloud Engineering or Infrastructure roles.
- Microsoft Azure certifications are a plus.
- Proficiency in Infrastructure as Code (IaC) tools such as Terraform, ARM, or Bicep.
- Familiarity with CI/CD pipelines, containerization (Docker, Kubernetes), and modern DevOps practices.
- Experience with Cloud governance through Policies, Blueprints, RBAC, and landing zone architecture.
- Experience with Cloud Identity Providers (EntraID).
- Scripting experience in PowerShell, Python, or Bash.
- Strong planning, communication, organizational, and problem-solving skills.
- Prior experience in healthcare or regulated industries is a plus.
Pay Transparency
MVP Health Care is committed to providing competitive employee compensation and benefits packages. The base pay range provided for this role reflects our good faith compensation estimate at the time of posting. MVP adheres to pay transparency nondiscrimination principles. Specific employment offers and associated compensation will be extended individually based on several factors, including but not limited to geographic location; relevant experience, education, and training; and the nature of and demand for the role.
We do not request current or historical salary information from candidates.
$93,667.00-$124,576.75
MVP's Inclusion Statement
At MVP Health Care, we believe creating healthier communities begins with nurturing a healthy workplace. As an organization, we strive to create space for individuals from diverse backgrounds and all walks of life to have a voice and thrive. Our shared curiosity and connectedness make us stronger, and our unique perspectives are catalysts for creativity and collaboration.
MVP is an equal opportunity employer and recruits, employs, trains, compensates, and promotes without discrimination based on race, color, creed, national origin, citizenship, ethnicity, ancestry, sex, gender identity, gender expression, religion, age, marital status, personal appearance, sexual orientation, family responsibilities, familial status, physical or mental disability, handicapping condition, medical condition, pregnancy status, predisposing genetic characteristics or information, domestic violence victim status, political affiliation, military or veteran status, Vietnam-era or special disabled Veteran or other legally protected classifications.
To support a safe, drug-free workplace, pre-employment criminal background checks and drug testing are part of our hiring process. If you require accommodations during the application process due to a disability, please contact our Talent team at .
The Linux Systems & Automation Engineer is responsible for designing, deploying, automating, and operating enterprise Linux infrastructure and supporting applications running there. This role focuses on infrastructure-as-code, automation tooling, monitoring, and reliability engineering across on-prem and hybrid environments. The engineer will collaborate with network, platform, and application teams to deliver scalable, secure, and repeatable infrastructure.
Key Responsibilities
Linux Systems Engineering
- Design, deploy, and maintain Linux systems across bare metal and virtual environments.
- Develop and enforce OS baseline standards, hardening, and patching processes.
- Manage system lifecycle: provisioning, configuration, upgrades, and decommissioning.
Automation & Infrastructure as Code
- Build and maintain automation pipelines using Ansible, Terraform, cloud-init, or equivalent tools.
- Develop Ruby/Python/Bash tooling to automate operational workflows.
- Create standardized system images, templates, and deployment frameworks.
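The standardized-image and template work above often involves rendering per-host provisioning data (cloud-init style) from a shared template. The sketch below uses the stdlib `string.Template`; the template fields are assumptions for illustration.

```python
# Minimal template-driven provisioning sketch (cloud-init style user-data).
from string import Template

USER_DATA = Template(
    "#cloud-config\n"
    "hostname: $hostname\n"
    "timezone: $timezone\n"
)

def render_user_data(hostname, timezone="UTC"):
    """Render a per-host user-data document from the shared template."""
    return USER_DATA.substitute(hostname=hostname, timezone=timezone)
```

Keeping the template under version control alongside the pipeline code gives the Git-based workflow the later section describes a single source of truth for image builds.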
Monitoring, Observability, and Reliability
- Design and maintain monitoring and telemetry platforms and data pipelines. (Zabbix, Prometheus, Grafana, OpenSearch, etc.).
- Analyze metrics and logs to improve system reliability and performance.
- Participate in on-call and incident response.
- Conduct root cause analysis and drive corrective actions.
DevOps & Tooling
- Develop and Maintain Git-based workflows and CI/CD pipelines for infrastructure code.
- Develop internal tools to improve provisioning, validation, and operational efficiency.
- Collaborate in architecture reviews and technical design discussions.
Requirements:
- Bachelor’s degree in Computer Science, Engineering, Information Systems, or equivalent experience.
- 5+ years of experience administering Linux systems in enterprise environments.
- Strong Linux fundamentals (processes, networking, storage, security, kernel concepts).
- Proficiency in scripting/programming (Python, Bash, Ruby, etc.).
- Experience with Git and collaborative development workflows.
- Experience with automation/configuration management tools (Ansible, Terraform, Puppet, Chef, etc.).
Preferred Qualifications
- Experience with containers and orchestration (Docker, Kubernetes).
- Virtualization experience (VMware, KVM, OpenStack).
- Cloud platform experience (AWS, Azure, GCP) or hybrid architectures.
- Monitoring/observability tooling experience (Prometheus, Grafana, Zabbix, ELK/OpenSearch).
- Security experience (SSH hardening, PAM, SELinux, CIS benchmarks).
- Experience supporting telecom, financial, healthcare, or other regulated environments.
LTIMindtree is an equal opportunity employer that is committed to diversity in the workplace. Our employment decisions are made without regard to race, color, creed, religion, sex (including pregnancy, childbirth or related medical conditions), gender identity or expression, national origin, ancestry, age, family-care status, veteran status, marital status, civil union status, domestic partnership status, military service, handicap or disability or history of handicap or disability, genetic information, atypical hereditary cellular or blood trait, union affiliation, affectional or sexual orientation or preference, or any other characteristic protected by applicable federal, state, or local law, except where such considerations are bona fide occupational qualifications permitted by law.
Role: AWS DevOps Engineer
Location: Charlotte, NC
Salary: Market Rate
Job Description:
We are seeking a highly skilled Senior DevOps Engineer with strong expertise in AWS, cloud infrastructure automation, databases, and modern containerized environments. The ideal candidate will have experience designing, implementing, and maintaining scalable, secure, and reliable systems while enabling fast and efficient development workflows. You will work closely with development, architecture, and operations teams to build robust CI/CD pipelines, automate infrastructure provisioning, and ensure high availability of business-critical applications.
Key Responsibilities:
- Design, implement, and manage AWS cloud infrastructure (EC2, S3, Lambda, ECS/EKS, etc.) with scalability and security in mind
- Develop and maintain Infrastructure as Code (IaC) using Terraform
- Build, manage, and optimize Docker base images and containerized application stacks
- Orchestrate and maintain Kubernetes (EKS) clusters for production and staging environments
- Set up, manage, and optimize CI/CD pipelines in GitLab to support fast, reliable deployments
- Manage MCP servers and ensure reliable operations for critical services
- Automate operational tasks and workflows using Python and JavaScript
- Support full-stack teams (React, Node.js) by providing containerized environments and deployment strategies
- Manage and optimize databases (SQL, PostgreSQL) for performance, security, and scalability
- Integrate and manage AWS streaming services (Kinesis, MSK/Kafka, or similar) for real-time data pipelines
- Implement container image security scanning, governance, and lifecycle management
- Monitor system performance, availability, and cost, implementing proactive improvements
- Ensure compliance with security and governance standards across cloud infrastructure and database layers
- Collaborate with developers and architects to improve application delivery, scalability, and resilience
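Among the responsibilities above, container image lifecycle management typically means a retention rule such as "keep the newest N images per repository." The sketch below is a hypothetical illustration of that rule in plain Python; the retention count is an assumption.

```python
# Illustrative image-retention rule: keep the newest N images per repo,
# report the rest as candidates for cleanup.
KEEP_PER_REPO = 2  # assumed retention policy

def stale_images(images):
    """images: list of (repo, tag, pushed_ts). Returns stale (repo, tag) pairs."""
    by_repo = {}
    for repo, tag, ts in images:
        by_repo.setdefault(repo, []).append((ts, tag))
    stale = []
    for repo, entries in by_repo.items():
        entries.sort(reverse=True)  # newest first
        stale.extend((repo, tag) for _, tag in entries[KEEP_PER_REPO:])
    return sorted(stale)
```

A registry's own lifecycle policies (e.g., ECR lifecycle rules) express the same logic declaratively; the script form is useful for dry runs and reporting.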
Required Skills / Qualifications:
- 8 years of experience in DevOps / cloud infrastructure
- Strong hands-on experience with AWS services (EC2, S3, ECS/EKS, Lambda, VPC, IAM, CloudWatch, Kinesis, MSK)
- Proficiency in Terraform for infrastructure automation
- Expertise with Docker, including base image creation, and Kubernetes orchestration
- Strong scripting/programming skills in Python and JavaScript
- Experience with GitLab CI/CD for pipelines, automation, and environment management
- Strong database experience with SQL and PostgreSQL (setup, scaling, replication, performance tuning)
- Exposure to streaming architectures (AWS Kinesis, Kafka, MSK, or similar)
- Experience supporting React-based applications from a DevOps perspective
- Familiarity with MCP servers and containerized service deployments
- Knowledge of cloud cost optimization and security best practices
- Strong problem-solving, troubleshooting, and communication skills
Preferred Qualifications:
- AWS certifications (e.g., AWS Certified Solutions Architect, AWS Certified DevOps Engineer – Professional)
- Experience with monitoring/observability tools (Prometheus, Grafana, ELK, Datadog)
- Knowledge of networking, load balancing, and distributed system design
- Familiarity with Agile/Scrum methodologies
Skills
- Mandatory Skills: AWS Lambda, Docker, Python
- Good to Have Skills: Ansible, Git, Kubernetes
LTIMindtree is an equal opportunity employer that is committed to diversity in the workplace. Our employment decisions are made without regard to race, color, creed, religion, sex (including pregnancy, childbirth or related medical conditions), gender identity or expression, national origin, ancestry, age, family-care status, veteran status, marital status, civil union status, domestic partnership status, military service, handicap or disability or history of handicap or disability, genetic information, atypical hereditary cellular or blood trait, union affiliation, affectional or sexual orientation or preference, or any other characteristic protected by applicable federal, state, or local law, except where such considerations are bona fide occupational qualifications permitted by law.
Infosys is seeking an Azure Platform Admin. This position's primary responsibility will be to provide technical expertise and coordinate day-to-day deliverables for the team. The chosen candidate will assist in the technical design of Azure data platforms; build automations; integrate data products through automation; and understand data security, retention, and recovery. The role holder should be able to research technologies independently to recommend appropriate solutions, contribute to technology-specific best practices and standards, and contribute to success criteria from design through deployment, including reliability, cost-effectiveness, performance, data integrity, maintainability, and scalability. They will also contribute expertise on significant application components, programming languages, databases, operating systems, etc., and guide/mentor the team during the build and test phases.
Required Qualifications
- Candidate must be located within commuting distance of Richardson, TX or Raleigh, NC or Tempe, AZ or Hartford, CT or Indianapolis, IN or be willing to relocate to the area.
- Bachelor’s degree or foreign equivalent required from an accredited institution. Will also consider three years of progressive experience in the specialty in lieu of every year of education.
- Candidates authorized to work for any employer in the United States without employer-based visa sponsorship are welcome to apply. Infosys is unable to provide immigration sponsorship for this role at this time.
- At least 4 years of Information Technology experience
- Hands‑on experience specifically in Databricks and Azure administration.
- Hands‑on experience in Azure data/infra roles supporting ADF and Databricks across multi‑environment setups (Dev/QA/Prod).
- Experience supporting and optimizing large‑scale data processing environments and mission‑critical data pipelines.
- Hands-on experience administering Azure Databricks or Databricks on AWS/GCP, including workspace provisioning, cluster configuration, job scheduling and administration, and access control and permissions
- Strong experience with Identity and Access Management (IAM), Unity Catalog, secret scopes and credential management, audit logging and compliance, and network and security configuration for Databricks environments
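Cluster configuration and job administration of the kind described above are usually driven through the Databricks REST API. A minimal sketch that builds a `clusters/create` request body; the field names follow the public Clusters API, but the runtime version, node type, and autoscale bounds here are illustrative assumptions to verify against your workspace:

```python
import json

def cluster_create_payload(name, min_workers=2, max_workers=8):
    """Build a Databricks clusters/create request body.

    Field names follow the public Clusters API; the values are
    illustrative and should be adjusted per workspace/cloud.
    """
    return {
        "cluster_name": name,
        "spark_version": "14.3.x-scala2.12",  # example runtime version
        "node_type_id": "Standard_DS3_v2",    # example Azure VM type
        "autoscale": {"min_workers": min_workers, "max_workers": max_workers},
        "autotermination_minutes": 60,        # avoid idle-cluster cost
    }

body = json.dumps(cluster_create_payload("etl-prod"), indent=2)
```

In practice this payload would be POSTed to the workspace API with a bearer token, or expressed declaratively via the Databricks Terraform provider.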
Preferred Qualifications:
- Significant experience writing HashiCorp Terraform configurations and modules; proficient in translating designs into fully developed Terraform code
- Understanding of Azure data analytics technologies, including Azure Data Factory, Azure Databricks, and Azure OpenAI; Azure data services deployment experience
- Experience in enterprise-scale environments, building highly available IaaS and PaaS solutions
- Understanding of landing zones, cloud-native security, monitoring and logging tools, and Well-Architected Framework principles
- Strong problem-solving, analytical, and interpersonal skills
- Excellent written and verbal communication; ability to multitask, work well under demanding situations, prioritize, and meet deadlines
- Certifications
- Azure Data Engineer, Azure Administrator, Azure Architect
- Databricks Associate/Professional certifications
Along with competitive pay, as a full-time Infosys employee you are also eligible for the following benefits:
- Medical/Dental/Vision/Life Insurance
- Long-term/Short-term Disability
- Health and Dependent Care Reimbursement Accounts
- Insurance (Accident, Critical Illness, Hospital Indemnity, Legal)
- 401(k) plan and contributions dependent on salary level
- Paid holidays plus Paid Time Off
The job entails sitting as well as working at a computer for extended periods of time. Should be able to communicate by telephone, email or face to face. Travel may be required as per the job requirements.
A little about us...
Role: Java Associate Principal Architect
Location: Berkeley Heights, NJ
Job Description:
Project Overview:
The project requires someone with 12+ years of experience, strong AWS cloud architecture knowledge, and the ability to handle cloud network and service design independently.
Strong Spring Batch expertise, with experience building file-processing applications.
Solid experience with microservices patterns and event-driven architectures (e.g., the Outbox pattern to ensure data consistency and reliable message delivery).
Hands-on experience with cloud IaC using Terraform and GitLab.
The candidate will serve as Tech Architect on a new development project, with expertise in the skills below.
Java/Microservices
Java, Spring Boot
Spring Batch (File processing)
REST API Specs, Event Schemas
Transaction Management
Business Rules Engine
Data model and Schema Design
AWS Cloud
Network & Infra Architecture - VPC, Subnet, Security Groups
Services - SQS, S3, Transfer Family
EKS / EC2 / Fargate
PostgreSQL, DynamoDB
Terraform
CI/CD
Gitlab
SonarQube
Fortify
JFrog Artifactory
Deployment strategies: blue/green (BG), canary
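The Outbox pattern mentioned above can be sketched compactly: the business write and the event record share one transaction, and a separate relay later drains unpublished events to the broker. A minimal stdlib illustration using sqlite3, with a callback standing in for the real SQS/Kafka publish; table names and schema are hypothetical:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL);
    CREATE TABLE outbox (id INTEGER PRIMARY KEY AUTOINCREMENT,
                         topic TEXT, payload TEXT, published INTEGER DEFAULT 0);
""")

def place_order(order_id, total):
    # Business write and outbox event commit atomically, so the event
    # exists if and only if the order does.
    with conn:
        conn.execute("INSERT INTO orders VALUES (?, ?)", (order_id, total))
        conn.execute(
            "INSERT INTO outbox (topic, payload) VALUES (?, ?)",
            ("order.created", json.dumps({"id": order_id, "total": total})),
        )

def relay_once(publish):
    # The relay drains unpublished rows; `publish` stands in for the broker.
    rows = conn.execute(
        "SELECT id, topic, payload FROM outbox WHERE published = 0").fetchall()
    for row_id, topic, payload in rows:
        publish(topic, payload)
        with conn:
            conn.execute("UPDATE outbox SET published = 1 WHERE id = ?", (row_id,))
    return len(rows)

place_order(1, 99.5)
sent = []
relay_once(lambda topic, payload: sent.append((topic, payload)))
```

Marking the row published only after the broker call gives at-least-once delivery, so downstream consumers should be idempotent.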
We are seeking an experienced Cloud Platform Engineer with deep expertise in Red Hat OpenShift and a strong Linux systems engineering background. This role will be responsible for designing, building, and operating large-scale OpenShift platforms within on-premises datacenter environments.
The ideal candidate will work closely with SRE teams and Program Management to drive the successful implementation, scaling, and operationalization of enterprise-grade OpenShift infrastructure.
Key Responsibilities
1. Platform Engineering
- Design, deploy, and manage enterprise-scale Red Hat OpenShift clusters in on-prem datacenter environments.
- Architect highly available, scalable, and secure OpenShift platforms.
- Implement cluster lifecycle management (installation, upgrades, patching, scaling).
- Configure networking, storage, ingress, and security components for OpenShift.
2. Infrastructure Build & Automation
- Build and automate infrastructure in datacenter environments (compute, storage, networking).
- Integrate OpenShift with virtualization platforms (VMware/other hypervisors as applicable).
- Develop Infrastructure-as-Code (IaC) solutions using tools such as Terraform, Ansible, or similar.
- Implement CI/CD pipelines for platform deployments and updates.
3. Linux Systems Engineering
- Provide deep Linux system administration and troubleshooting support.
- Optimize OS-level configurations for performance, reliability, and security.
- Automate system configuration and compliance management.
- Diagnose and resolve complex kernel, networking, and storage issues.
4. Reliability & Operations
- Partner closely with the SRE team to establish SLOs, SLIs, monitoring, and alerting.
- Drive observability implementation (logging, metrics, tracing).
- Participate in incident management, root cause analysis (RCA), and remediation.
- Ensure platform resiliency, performance tuning, and capacity planning.
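The SLO/SLI work above usually reduces to tracking how much error budget a rolling window has consumed. A minimal sketch of that arithmetic, assuming a simple request-based availability SLI (real implementations would query Prometheus or a similar backend):

```python
def error_budget(slo_target, total_requests, failed_requests):
    """Fraction of the error budget remaining for an availability SLO
    (e.g. slo_target=0.999) over a rolling window of requests."""
    allowed_failures = (1 - slo_target) * total_requests
    if allowed_failures == 0:
        return 0.0
    return 1 - failed_requests / allowed_failures

# A 99.9% SLO over 1,000,000 requests allows 1,000 failures;
# 250 observed failures leave 75% of the budget.
remaining = error_budget(0.999, 1_000_000, 250)
```

A negative return value means the budget is exhausted, which is the usual trigger for freezing risky releases until reliability recovers.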
5. Program & Cross-Functional Collaboration
- Work with Program Management to drive large-scale OpenShift implementation milestones.
- Provide technical input into roadmap planning, timelines, and risk mitigation.
- Collaborate with security, networking, storage, and application teams.
- Document architecture, standards, and operational procedures.
6. Security & Compliance
- Implement RBAC, security policies, and compliance controls within OpenShift.
- Harden clusters according to enterprise security standards.
- Support vulnerability management and patch governance processes.
Required Qualifications
- 5+ years of experience in Linux systems engineering (RHEL preferred).
- 3+ years of hands-on experience with Red Hat OpenShift (OCP 4.x preferred).
- Proven experience building infrastructure in on-prem datacenter environments.
- Strong understanding of:
- Kubernetes architecture
- Networking (DNS, load balancing, firewalls, SDN)
- Storage (SAN, NAS, CSI drivers)
- Virtualization platforms (VMware, etc.)
- Experience with automation tools (Terraform, Ansible, GitOps).
- Strong troubleshooting and problem-solving skills.
Preferred Qualifications
- Red Hat certifications (RHCE, OpenShift Certification).
- Experience implementing OpenShift at enterprise scale (multi-cluster environments).
- Experience working in SRE-driven environments.
- Knowledge of DevOps/GitOps practices.
- Experience with monitoring tools (Prometheus, Grafana, ELK, etc.).
Advanced Software Engineering Manager – DevOps / CI/CD
Location: Onsite – Cincinnati, Ohio (Blue Ash)
Employment Type: Direct Hire
Compensation: $145,000–$160,000/year
Benefits: Health, Dental, Vision, 401(k) with Match, PTO, Paid Holidays, and additional enterprise benefits
Travel: None
Start: ASAP
Open Role Due To: Technology modernization and enterprise platform expansion
About the Role
Our client, a global leader in enterprise technology and innovation, is seeking an Advanced Software Engineering Manager – DevOps / CI/CD to architect, operate, and continuously enhance a modern CI/CD ecosystem that enables fast, secure, and reliable software delivery across the organization.
This role will drive pipeline scalability, automation maturity, platform modernization, and operational excellence. You will partner closely with development, QA, architecture, IT, security, and product teams to accelerate engineering productivity while ensuring enterprise-grade security, compliance, and governance standards are met.
This is a highly visible leadership role requiring both strategic vision and strong hands-on technical expertise.
What You’ll Do
- Lead the design, development, and delivery of scalable, high-quality DevOps and CI/CD solutions aligned to enterprise architecture standards
- Architect and modernize CI/CD pipelines using Azure DevOps, GitHub Actions, Jenkins, and/or GitLab CI
- Develop Infrastructure as Code solutions using Terraform, Ansible, ARM/Bicep, and related tools
- Translate technology strategy into actionable roadmaps supporting enterprise project portfolios
- Drive modernization initiatives across cloud, infrastructure, automation, and security domains
- Lead migration strategies from current-state to future-state architecture aligned with capital planning cycles
- Present architectural recommendations to executive leadership, including cost/benefit and risk analysis
- Mentor engineers on software engineering best practices, reusable design patterns, and architectural principles
- Promote reusable frameworks, shared libraries, and automation standards
- Author and review architectural artifacts including system diagrams, interface specifications, and technical design documentation
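One recurring problem when architecting CI/CD pipelines like those above is ordering stages by their dependencies. A minimal sketch using the stdlib's topological sorter; the stage graph here is hypothetical, while a real pipeline would derive it from the pipeline configuration:

```python
from graphlib import TopologicalSorter

# Hypothetical stage dependency graph: each stage maps to the set of
# stages that must finish before it can start.
stages = {
    "build": set(),
    "unit-test": {"build"},
    "security-scan": {"build"},
    "deploy-staging": {"unit-test", "security-scan"},
    "deploy-prod": {"deploy-staging"},
}

# static_order() yields a valid execution order; independent stages
# (unit-test, security-scan) could also be fanned out in parallel.
order = list(TopologicalSorter(stages).static_order())
```

Platforms such as GitLab CI and GitHub Actions compute exactly this kind of ordering from `needs:` declarations, which is why cycle detection and parallel fan-out come for free.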
Why This Role Stands Out
- Enterprise-wide impact with high visibility to senior leadership
- Opportunity to modernize and scale CI/CD platforms in a large, complex environment
- Strong influence over cloud architecture, automation strategy, and operational excellence
- Competitive compensation and comprehensive benefits package
What We’re Looking For
- Bachelor’s Degree in Computer Science or STEM-related field
- 8+ years of experience in DevOps, CI/CD, Infrastructure Engineering, or Site Reliability Engineering
- 5+ years leading the design and delivery of complex, large-scale, high-quality software or automation systems
- Strong hands-on expertise in software or infrastructure engineering, including design patterns and code structure
- Expertise with CI/CD platforms (Azure DevOps, GitHub Actions, Jenkins, GitLab CI)
- Strong proficiency in Infrastructure as Code tools (Terraform, Ansible, ARM/Bicep)
- Deep knowledge in at least two of the following: cloud platforms, APIs, application development, infrastructure/network design, middleware, servers & storage, database management, or operations
- Solid understanding of network and security architecture principles
- Experience with Linux and/or Windows systems, virtualization, and containerization
- Experience with enterprise observability and monitoring platforms
- Proven ability to operate strategically while delivering hands-on technical solutions
- Strong written, verbal, and presentation communication skills
Preferred Skills
- Experience building and mentoring high-performing engineering teams
- Experience designing and implementing elastic architectures in Azure and/or Google Cloud Platform
- Experience working in large-scale or regulated enterprise environments
- Experience with Google Cloud Platform (GCP) or Google Distributed Cloud (GDCE)
- Awareness of industry trends and competitive DevOps landscape
Site Reliability Engineer (SRE) | San Francisco Bay Area | Hybrid, 2 Days in Fremont Office
We're partnering with a technology-driven organisation modernising its infrastructure and operations. They're hiring an experienced Site Reliability Engineer to help transition from legacy systems to modern, automated, Infrastructure-as-Code environments.
What You'll Do
- Build automation software in Python for cloud infrastructure
- Drive Infrastructure as Code adoption using Terraform
- Implement configuration management (Ansible/SaltStack)
- Improve CI/CD pipelines, monitoring, and system reliability
- Participate in a sustainable on-call rotation
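Automation software for cloud infrastructure, as described above, tends to lean on retry-with-backoff around flaky remote calls. A minimal stdlib sketch with an injectable `sleep` so tests and dry runs avoid real waiting; the flaky call is a stand-in for any cloud API request:

```python
import time

def retry(fn, attempts=4, base_delay=0.01, sleep=time.sleep):
    """Call fn, retrying on exception with exponential backoff.
    `sleep` is injectable so tests and dry runs avoid real delays."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # budget exhausted: surface the last error
            sleep(base_delay * (2 ** attempt))

# Simulate a flaky infrastructure call that succeeds on the third try.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

delays = []
result = retry(flaky, sleep=delays.append)
```

Production versions usually add jitter to the delay and retry only on error types known to be transient, so permanent failures fail fast.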
What You'll Bring
- 3+ years' experience in AWS and/or GCP production environments
- Strong Python and Linux skills
- Hands-on Terraform and IaC experience
- Experience with CI/CD and configuration management tools
- Collaborative mindset and interest in solving complex problems
Why Apply
- High-impact role with real ownership
- Opportunity to modernise and automate at scale
- Strong technical challenges and growth potential