Terraform GitHub Remote Jobs in the USA
37 positions found
Location: Alpharetta, GA (3 days a week onsite)
Duration: 6 months
Job Description:
We are seeking a skilled Site Reliability Engineer to join our team and help build, maintain, and scale our cloud-native infrastructure. You will work closely with development and operations teams to ensure our systems are reliable, scalable, and efficient. The ideal candidate is passionate about automation, observability, and infrastructure-as-code, and thrives in a collaborative, fast-paced environment.
Key Responsibilities
Design, implement, and manage cloud infrastructure on Azure using Terraform and Terragrunt.
Maintain and optimize Kubernetes clusters on Azure Kubernetes Service (AKS).
Build and manage CI/CD pipelines using GitHub Actions/Workflows and ArgoCD for GitOps deployments.
Enhance system reliability by implementing monitoring, alerting, and observability solutions with Grafana.
Automate operational tasks to reduce toil and improve team efficiency.
Participate in on-call rotations, incident response, and post-mortem analysis.
Collaborate with development teams to improve application performance, scalability, and resilience.
Implement and advocate for SRE best practices, including SLIs, SLOs, and error budgets.
Continuously improve system performance, cost efficiency, and security.
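The SLO and error-budget practice named above reduces to simple arithmetic; as an illustrative sketch (the SLO target and window values are hypothetical, not from this posting):

```python
def error_budget(slo_target: float, window_minutes: int) -> float:
    """Allowed downtime (minutes) in the window for a given availability SLO."""
    return window_minutes * (1.0 - slo_target)

def budget_remaining(slo_target: float, window_minutes: int, downtime_minutes: float) -> float:
    """Fraction of the error budget still unspent (negative means the SLO is breached)."""
    budget = error_budget(slo_target, window_minutes)
    return (budget - downtime_minutes) / budget

# A 99.9% availability SLO over a 30-day window allows roughly 43.2 minutes of downtime.
minutes_in_30_days = 30 * 24 * 60
print(error_budget(0.999, minutes_in_30_days))          # ~43.2
print(budget_remaining(0.999, minutes_in_30_days, 10))  # ~0.77 of the budget left
```

Teams typically alert on the rate at which this budget is being consumed rather than on raw downtime.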
Required Skills & Qualifications
3+ years of experience in an SRE, DevOps, or cloud infrastructure role.
Strong experience with Azure cloud services and infrastructure.
Hands-on experience with Java, Terraform, and Terragrunt for infrastructure-as-code.
Proficiency with Kubernetes (preferably AKS) and container orchestration.
Experience with CI/CD tools, especially GitHub Workflows/Actions and ArgoCD.
Solid understanding of observability tools like Grafana (Prometheus, Loki, Tempo experience is a plus).
Education Requirements: Bachelor's degree required (Master's preferred).
As a Senior IT Engineer, you will play a pivotal role in our organization by driving the design, implementation, and optimization of cloud-based solutions.
You will leverage your expertise in Azure technologies to develop innovative and scalable architectures that meet business objectives while adhering to industry best practices.
Your primary focus will be on creating robust, secure, and efficient cloud environments that enhance operational excellence and performance efficiency.
You will work closely with cross-functional teams to integrate solutions, streamline data management processes, and automate infrastructure provisioning.
The key responsibilities of this position are to:
Design and implement cloud-based solutions adhering to the Cloud Well-Architected Framework (Reliability, Security, Cost Optimization, Operational Excellence, Performance Efficiency).
Demonstrated ability to architect, develop, and execute cloud-based solutions from conception through successful deployment.
Develop and manage data pipelines, notebooks, and data flows using Microsoft Fabric and PowerBI.
Optimize cloud resources for efficiency, reliability, and cost savings through providing technical assessments to cloud environment’s design and integration challenges and recommending mitigation approaches.
Utilize AI to integrate intelligent solutions into applications.
Automate infrastructure provisioning and configuration using Terraform, Puppet, Ansible, or Chef.
Configure and manage networking solutions, including ExpressRoute, to ensure secure and efficient connectivity.
Implement monitoring and observability using monitoring tools such as Dynatrace, Application Insights, Splunk, CloudWatch, and Azure Monitor.
Develop and maintain cloud-native applications using microservices and APIs.
Write scripts in Python, PowerShell, and Bash to automate processes.
Build and deploy applications using Node.js.
Implement CI/CD pipelines and manage source control for efficient development workflows.
Administer Identity & Access Management using EntraID.
Ensure security and compliance using Microsoft Defender and Purview, adhering to NIST, CUI, HIPAA, and FERPA standards.
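As one illustration of the scripted automation this role describes, here is a minimal Python sketch that flags resources missing required tags; the tag policy and resource inventory are hypothetical, not this employer's actual standard:

```python
REQUIRED_TAGS = {"owner", "environment", "cost-center"}  # hypothetical tag policy

def untagged(resources: list[dict]) -> list[str]:
    """Return IDs of resources missing any required tag."""
    return [r["id"] for r in resources if not REQUIRED_TAGS <= set(r.get("tags", {}))]

inventory = [  # hypothetical resource inventory
    {"id": "vm-001", "tags": {"owner": "sre", "environment": "prod", "cost-center": "42"}},
    {"id": "vm-002", "tags": {"owner": "sre"}},
]
print(untagged(inventory))  # ['vm-002']
```

The same check could just as easily be written in PowerShell or Bash, which the posting also names.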
This position is considered essential and may be required to work at the normal work location or an alternative location during a major catastrophic event, weather emergency, or other operational emergency to help maintain the continuity of University services.
May be required to work evenings, nights, weekends, or different shifts for extended periods.
Physical Demands: Work is performed in an office environment and requires the ability to operate standard office equipment and keyboards.
Must have the ability to walk short distances, and/or drive a vehicle to deliver and pick up materials.
KNOWLEDGE, SKILLS, & ABILITIES: Knowledge of all aspects of software and cloud infrastructure.
Knowledge of various enterprise solutions.
Skill in oral and written communication.
Skill in the use of office productivity software such as Office 365 or Google Workspace.
Ability to lead presentations with small to medium-sized groups. Able to organize and provide direction to junior staff members. Ability to bridge the gap between team manager and team members on technical topics.
Ability to effectively manage workload and delivery assignments on time.
Additional Job Details
Preferences: Demonstrated proficiency with cloud-based services. Experience designing, implementing, and maintaining CI/CD pipelines using Azure DevOps Pipelines and/or GitHub Actions. Demonstrated understanding of Microsoft Fabric, Power BI, Azure AI, and Windows server operating systems. Demonstrated understanding of Microsoft 365 services and functionality, including expertise with Entra ID for Identity & Access Management.
Hands-on experience with infrastructure-as-code tools (Terraform, Puppet, Ansible, Chef).
Experience with Python, PowerShell, Bash scripting, and Node.js. Experience with microservices architecture and API development.
Strong problem-solving skills and ability to work in a fast-paced environment.
Excellent communication and collaboration skills to work with cross-functional teams.
Experience with Agile methodologies and DevOps practices.
Knowledge of compliance standards (NIST, CUI, HIPAA, FERPA).
Knowledge of cloud infrastructure monitoring tools (e.g., AWS CloudWatch, Azure Monitor, Google Cloud Operations Suite). Strong knowledge of networking concepts and private cloud connectivity configuration.
Use cloud cost management tools to track and analyze cloud spending.
Additional Certifications: AZ-305: Designing Microsoft Azure Infrastructure Solutions; AZ-104: Microsoft Azure Administrator; AZ-500: Azure Security Engineer Associate (preferred for security-focused responsibilities).
Licenses/Certification: N/A
Minimum Qualifications: Education: Bachelor's degree from an accredited college or university.
Experience: Four (4) years of professional experience implementing and managing enterprise solutions.
Other: Additional work experience as defined above may be substituted on a year-for-year basis for up to four (4) years of the required education.
Strong written and oral communication skills.
Required Application Materials: List of Three References, Resume, Cover Letter
Best Consideration Date: March 30, 2026
Posting Close Date: N/A
Open Until Filled: Yes
Salary Range: $152,480.00 - $182,976.00
Please apply at:
Additional Information: Please note that all positions within the Division of Information Technology (DIT) have an in-person component, with expected time each week in our College Park, MD location.
Telework is not a guaranteed work arrangement.
Visa Sponsorship Information: DIT will not sponsor the successful candidate for work authorization in the United States now or in the future.
F1 STEM OPT support is not available for this position.
Job Risks: Not Applicable to This Position Financial Disclosure Required: No For more information on Financial Disclosure, please visit Maryland's State Ethics Commission website .
Department: DIT-EE-Platform Services Worker Sub-Type: Staff Regular Benefits Summary: For more information on Regular Exempt benefits, select this link .
Background Checks: Offers of employment are contingent on completion of a background check.
Information reported by the background check will not automatically disqualify anyone from employment.
Before any adverse decision, the finalist will have an opportunity to provide information to the University regarding disclosable background check information.
The University reserves the right to rescind the offer of employment or otherwise decline or terminate employment if the information reported by the background check is deemed incompatible with the position, regardless of when the background check is completed.
Employment Eligibility: The successful candidate must complete employment eligibility verification (on Form I-9) by presenting documents that establish identity and work authorization within the timeframe required by federal immigration law, and where applicable, to demonstrate renewed employment authorization.
Failure to complete employment eligibility verification or reverification within the timeframe set forth by law may result in suspension or termination of employment.
EEO Statement : The University of Maryland, College Park is an Equal Opportunity Employer.
All qualified applicants will receive equal consideration for employment.
Please read the University’s Equal Employment Opportunity Statement of Policy.
Title IX Non-Discrimination Notice See above description for requirements.
Compensation: $150-200k Responsibilities: • Finding and improving operational efficiencies to best suit Cloud resource delivery, access management and security implementations.
• Support of production workloads within a multi-Cloud environment, including, but not limited to monitoring, patching, backup and restoration of Cloud resources.
• Delivery of Cloud infrastructure for partner solutions.
• Creation and support of automation and infrastructure as code solutions for resource creation and policy, leveraging PowerShell/Azure DevOps, Ansible and other approved corporate automation and orchestration platforms.
• Sound familiarity with orchestration and automation practices at scale, with a heavy emphasis on designing solutions for other teams to consume.
• Prior experience working with version control tools such as GitHub / Azure DevOps, including repository management, pipeline-as-code, and SDLC workflows.
• Basic understanding/debugging experience when it comes to application infrastructure, databases, networking, and DNS.
• Design, creation and maintenance of complex Infrastructure as Code and Pipelines as Code solutions in a highly reusable capacity.
• Work with internal teams and vendors on the integration of a diverse set of systems into the central ITSM/ITOM platform, ServiceNow.
• Participate in the automated implementation of monitoring systems, for event monitoring, alerting and metrics.
• Identify opportunities and help facilitate a mechanism to change, evolve, improve, and simplify the infrastructure and supporting processes/procedures.
• Directly interface with support teams across all disciplines, to facilitate a closer relationship for collaborative implementations and knowledge sharing.
• Ensure handover of new/updated systems/documentation to team providing 24x7x365 support.
Qualifications: • Proficiency in Cloud services related to one or more Cloud providers including IaaS, PaaS and SaaS.
• Strong automation skillset with the ability to identify and create automation workflows.
• Strong PowerShell or other scripting experience, especially with the goal of automation.
• Experience with infrastructure as code development and iteration in either Terraform/ARM/Bicep/CloudFormation.
• Hands on operational experience and knowledge with Azure or AWS.
• Experience with monitoring and log aggregation tools such as Azure Monitor, Log Analytics, CloudWatch, CloudTrail, Splunk, ELK, etc.
• Basic knowledge of foundational IT tooling across a wealth of domains to facilitate understanding of automation creation.
• Fundamental understanding of public vs private networking in the Cloud.
• Strong knowledge on Git merging and branching strategies.
• Ability to execute proof of concepts and deploy complex solutions.
• Understanding of typical SDLC processes and workflows as they pertain to infrastructure.
• Basic understanding of Atlassian Suite (Jira/Confluence) is a plus.
• Prior experience working with orchestration tools (Rundeck/Cutover preferred) and infrastructure-as-code tools (HashiCorp Terraform).
• Excellent verbal and written communication skills and ability to articulate requirements, concepts and ideas to business and technology partners.
• Strong technical ability for diagnosis, triage, troubleshooting and problem analysis with the ability to communicate results to business stakeholders, IT support teams to resolve issues both quickly and effectively.
• Ability to influence people outside the immediate span of control, negotiate and resolve conflicts, and work with business users, IT partners and vendors.
• High-Level customer service mindset and commitment to deliver quality results to internal stakeholders in a demanding environment.
• A strong sense of urgency and accountability with exceptional time management skills.
• Comfortable in effectively communicating with business end users, technical IT Teams, network partners and vendors.
• Comfortable in fast-paced environment with changing priorities and schedules.
Compensation: $130-155k Responsibilities:
- Participate in the full software development lifecycle (SDLC) and implement all DevOps procedures to manage and support the CI/CD process, including automation of the build, test, and deploy pipelines and configuration management.
- Employ best practices for designing automation processes and utilities that can be easily used by the development teams.
- Design and develop a best practice release management process that employs separation of control and proper approvals.
- Closely partner with the security and infrastructure teams to incorporate corporate standards into the CI/CD and provisioning processes.
- Maintain the source control management system and integrate it with software build and deployment.
- Take responsibility for the build environment: resolve build issues and help coordinate complex software test environments and software releases.
- Monitor application operational processes, escalating and facilitating failure resolution as appropriate.
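The separation-of-control requirement in the release process above can be sketched as a simple approval gate; the function and names below are purely illustrative, not any specific tool's API:

```python
def can_promote(author: str, approvers: set[str], required: int = 1) -> bool:
    """A release may be promoted only with enough approvals from people other than its author."""
    return len(approvers - {author}) >= required

print(can_promote("alice", {"alice"}))                     # False: self-approval doesn't count
print(can_promote("alice", {"bob"}))                       # True
print(can_promote("alice", {"alice", "bob"}, required=2))  # False: only one independent approver
```

In practice this rule is usually enforced by the CI/CD platform's protected-branch or environment-approval settings rather than custom code.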
Qualifications: Required
- 5+ years of professional experience of working with the full software development life cycle and designing/developing best practice CI/CD pipelines, GitHub Actions, Ansible (IaC), Terraform/CloudFormation, K8s, test automation, static code analysis, Artifactory and release management processes.
- Proficient in at least two of the following: Windows batch/PowerShell, Bash, Python.
- Knowledgeable about networking (TCP, UDP, ICMP, ARP, DNS, TLS, HTTP, SSH, NAT, firewall, load balancing, etc).
- Strong experience with managing and support of Windows/Linux Servers.
- Good understanding of deployment of various platforms such as web/REST API, messaging bus/queue, application services, Microservices and Cloud Serverless components/managed platform.
- Experience working with relational databases/SQL and NoSQL; other database technologies are a plus.
- A curiosity concerning technology and the ability to learn new systems and tools quickly.
- Excellent communication skills and the ability to work in a collaborative environment.
Preferred
- Experience with Cloud solutions, e.g., Azure (VNet, Private Link, Blob Storage, Azure SQL, Web App, Data Factory, AKS, ARO, SQL Server/Cosmos DB) / AWS (VPC, EC2, S3, Route 53, ECS, EKS, RDS, ALB/NLB).
- Experience with code-quality tooling (SonarQube, GitHub Enterprise Advanced Security/CodeQL, JFrog Artifactory + Xray).
- Experience with containers and orchestration technologies (Docker, K8s, OpenShift).
- Experience with application telemetry, monitoring and alerting solutions (Splunk, LogicMonitor, AWS CloudWatch, Azure Insight or similar).
*Securian Financial Group's internal position title is Infrastructure Consultant.
Position Summary:
We are looking for an AWS DevOps Engineer with hands-on experience in AWS tools to help build and maintain cloud infrastructure that supports our data and analytics platforms. This role is ideal for someone with a solid foundation in cloud engineering who is ready to take ownership of infrastructure components and collaborate across teams to deliver scalable, secure, and efficient solutions. The engineer executes projects to research, proof-of-concept, and implement new solutions, and maintains awareness of trends and technologies to meet new and emerging stakeholder requirements.
Responsibilities include but not limited to:
- Champion the selection of Data and Analytics platforms and tools to be used across the enterprise.
- Engage with users and vendors to execute technology proof of concepts that validate use cases and the value of new solutions.
- Onboard new platforms and tools to ensure solutions are adopted effectively across the organization.
- Collaborate with Data Engineering and Data Science teams to identify tooling gaps and inefficiencies and partner to implement solutions for their needs.
- Build and maintain AWS infrastructure using Infrastructure as Code (IaC) tools such as Terraform or CloudFormation.
- Configure and manage core AWS services including EC2, S3, IAM, RDS, Lambda, Glue, Redshift and VPC Networking.
- Support data engineering and analytics teams by provisioning and optimizing cloud resources for data pipelines and analytics workloads.
- Implement monitoring, logging, and alerting solutions using tools like CloudWatch.
- Participate in CI/CD pipeline development and deployment automation.
- Ensure infrastructure security through proper IAM policies, encryption, and network configurations.
- Collaborate with data engineers, data scientists, and solution architects to improve infrastructure design and performance.
- Document infrastructure processes and configurations for operational transparency and team knowledge sharing.
- Ability to work with ambiguity and organize requirements to identify options.
- Ability to build productive relationships and collaborate with partners and stakeholders.
- Proficient in managing and maintaining Linux-based systems, including installation, configuration, performance tuning, and troubleshooting. Experienced with shell scripting, system security, user management, and automation tools to ensure reliable and efficient server operations.
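The IAM responsibility above often comes down to generating least-privilege policy documents; as a sketch using the standard AWS IAM policy JSON shape (the bucket name is hypothetical):

```python
import json

def s3_read_only_policy(bucket: str) -> dict:
    """Build an IAM policy document granting read-only access to one S3 bucket."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            # Bucket-level and object-level ARNs are distinct resources in IAM.
            "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
        }],
    }

policy = s3_read_only_policy("analytics-raw-data")  # hypothetical bucket
print(json.dumps(policy, indent=2))
```

In a Terraform or CloudFormation workflow the same document would typically live in the IaC template rather than be generated by a script.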
Qualifications:
- 2-5 years of experience in cloud infrastructure engineering, with a focus on AWS.
- Experience working with structured and unstructured data and data storage technologies (e.g., RDS Postgres, S3).
- Proficiency in Infrastructure as Code tools (Terraform, CloudFormation, or AWS CDK).
- Experience with AWS services relevant to data and analytics (e.g., Glue, Redshift).
- Familiarity with DevOps tools and practices (e.g., Git, GitHub, CI/CD, Docker).
- Basic scripting skills in Python, Bash, or PowerShell.
- Understanding of cloud networking, security, and cost optimization principles.
- Strong communication and collaboration skills.
Preferred Qualifications:
- AWS certification (e.g., AWS Certified Solutions Architect - Associate, AWS Certified Developer - Associate, or AWS Certified Machine Learning Engineer - Associate).
- Exposure to big data tools and frameworks (e.g., Spark, Hive, Airflow, Control-M, StreamSets).
- Cloud Data Platforms: Snowflake, AWS Redshift, RDS, Postgres
- Data Governance & Cataloging: Informatica Intelligent Data Management Cloud (IDMC), Cloud Data Governance and Catalog (CDGC)
- Master Data Management: Reltio MDM for authoritative data domains
- AI Enablement: SageMaker, AWS Bedrock, Textract, etc. for AI development, MLflow for model lifecycle management
- Experience working in agile or cross-functional teams.
#LI-Hybrid
**This position will have a hybrid working arrangement, working in-office for a minimum of 3 days a week.**
Securian Financial believes in hybrid work as an integral part of our culture. Associates get the benefit of working both virtually and in our offices. If you're in a commutable distance (90 minutes), you'll join us 3 days each week in our offices to collaborate and build relationships. Our policy allows flexibility for the reality of business and personal schedules.
The estimated base pay range for this job is:
$89,000.00 - $164,300.00
Pay may vary depending on job-related factors and individual experience, skills, knowledge, etc. More information on base pay and incentive pay (if applicable) can be discussed with a member of the Securian Financial Talent Acquisition team.
Be you. With us. At Securian Financial, we understand that attracting top talent means offering more than just a job - it means providing a rewarding and fulfilling career. As a valued member of our high-performing team, we want you to connect with your work, your relationships and your community. Enjoy our comprehensive range of benefits designed to enhance your professional growth, well-being and work-life balance, including the advantages listed here:
Paid time off:
We want you to take time off for what matters most to you. Our PTO program provides flexibility for associates to take meaningful time away from work to relax, recharge and spend time doing what's important to them. And Securian Financial rewards associates for their service by providing additional PTO the longer you stay at Securian.
Leave programs: Securian's flexible leave programs allow time off from work for parental leave, caregiver leave for family members, bereavement and military leave.
Holidays: Securian provides nine company paid holidays.
Company-funded pension plan and a 401(k) retirement plan: Share in the success of our company. Securian's 401(k) company contribution is tied to our performance up to 10 percent of eligible earnings, with a target of 5 percent. The amount is based on company results compared to goals related to earnings, sales and service.
Health insurance: From the first day of employment, associates and their eligible family members - including spouses, domestic partners and children - are eligible for medical, dental and vision coverage.
Volunteer time: We know the importance of community. Through company-sponsored events, volunteer paid time off, a dollar-for-dollar matching gift program and more, we encourage you to support organizations important to you.
Associate Resource Groups: Build connections, be yourself and develop meaningful relationships at work through associate-led ARGs. Dedicated groups focus on a variety of interests and affinities, including:
Mental Wellness and Disability
Pride at Securian Financial
Securian Young Professionals Network
Securian Multicultural Network
Securian Women and Allies Network
Servicemember Associate Resource Group
For more information regarding Securian's benefits, please review our Benefits page.
This information is not intended to explain all the provisions of coverage available under these plans. In all cases, the plan document dictates coverage and provisions.
Securian Financial Group, Inc. does not discriminate based on race, color, religion, national origin, sex, gender, gender identity, sexual orientation, age, marital or familial status, pregnancy, disability, genetic information, political affiliation, veteran status, status in regard to public assistance or any other protected status. If you are a job seeker with a disability and require an accommodation to apply for one of our jobs, please contact us by email at , by telephone (voice), or 711 (Relay/TTY).
To view our privacy statement click here
To view our legal statement click here
Remote working/work at home options are available for this role.
Role - Contact center engineer - Five9
Location : Phoenix, AZ (Onsite)
JD:
8+ years of commercial software development experience
Design and implement scalable CCaaS and IVA solutions leveraging Google Cloud and Google CX (Dialogflow CX), including conversational IVR design, NLU/NLP modeling, intent and flow orchestration, webhook integrations, speech-to-text/text-to-speech, and seamless integration with Five9 and enterprise systems.
* Architect secure, resilient cloud infrastructure on GCP using services such as GKE, Cloud Run, Cloud Functions, Pub/Sub, Apigee, and BigQuery, implementing IAM, VPC design, encryption, multi-region high availability, and Infrastructure as Code (Terraform) to support enterprise-grade customer experience platforms.
* Architect, implement, and optimize Five9 CCaaS solutions, including ACD, skills-based routing, dialer, omnichannel capabilities, and campaign management, ensuring scalable, secure, and compliant contact center operations.
* Lead integrations and migrations leveraging Five9 APIs and telephony capabilities, including CRM/CTI integrations, webhooks, SIP/WebRTC, security configuration (RBAC, PCI/HIPAA), and transition from legacy contact center platforms to Five9.
* Experience with Agile development, Continuous Integration, and Continuous Delivery, including working knowledge of various tools in the CI/CD pipeline, DevOps, and Observability.
* Experience with automated release management using GitHub Actions.
* Experience in architecture design and modeling; should possess strong skills in designing and modeling complex systems and architectures.
* Good understanding of data structures, algorithms, and design patterns
* Great written communication and documentation abilities
* Strong understanding of cloud security architecture, encryption, and OAuth.
* Looks proactively beyond the obvious for continuous improvement opportunities.
* Leadership and communication: lead teams and collaborate with stakeholders, so strong leadership and communication skills are essential.
* Excellent communication skills, with the ability to influence at all levels across functions, from both technical and non-technical perspectives alike.
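For the Dialogflow CX webhook integrations mentioned above, a fulfillment endpoint returns a JSON body along the lines of the sketch below; the helper function and reply text are hypothetical, and the authoritative schema is the Dialogflow CX WebhookResponse reference:

```python
def cx_text_response(reply: str) -> dict:
    """Build a minimal Dialogflow CX WebhookResponse carrying one text message."""
    return {"fulfillmentResponse": {"messages": [{"text": {"text": [reply]}}]}}

resp = cx_text_response("Your order has shipped.")
print(resp["fulfillmentResponse"]["messages"][0]["text"]["text"])
```

In a real deployment this dict would be serialized and returned by the webhook service (e.g., on Cloud Run or Cloud Functions) that Dialogflow CX calls during flow execution.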
Business Area:
Engineering
Seniority Level: Mid-Senior level
Job Description:
At Cloudera, we empower people to transform complex data into clear and actionable insights. With as much data under management as the hyperscalers, we're the preferred data partner for the top companies in almost every industry. Powered by the relentless innovation of the open source community, Cloudera advances digital transformation for the world's largest enterprises.
The Product Security group ensures our platforms are secure by design and compliant with the world's most rigorous industry and government standards. As a Staff Product Security Engineer, you will serve as a technical architect of trust and the primary connective tissue between Security, Product, and Engineering teams. You will be responsible for translating complex global security requirements into actionable, automated engineering solutions, acting as the "go-to" expert for the Security Features team.
As a senior technical member of the team, you will exercise significant latitude in defining technical objectives and architectural approaches to complex challenges. Leveraging a deep understanding of distributed systems and cloud-native platforms, you will lead high-impact, security-driven initiatives across the entire Cloudera product suite.
As a Staff Software Engineer, you will:
Architect and maintain advanced build tooling to automate and accelerate vulnerability remediation across all engineering pillars.
Lead Proof of Concepts (POCs) and evaluate third-party security tools to enhance our security posture without compromising developer velocity.
Design and develop core security features, including FIPS compliance, TLS/Encryption, Secrets Rotation, Identity & Access Management (IAM), and Certificate Management.
Drive root-cause analysis and triage for complex, product-wide stability issues related to security infrastructure.
Engineer specialized observability tools, such as encryption inventories, to audit and measure security standards during feature delivery.
Author comprehensive design specifications and test plans for cross-component security features, providing technical clarity in the face of ambiguity.
Elevate the team's technical bar through high-quality code reviews, documentation standards, and active mentorship of engineering talent.
Partner across organizational lines, collaborating with internal stakeholders and senior management to resolve customer escalations and align with long-term objectives.
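An encryption/certificate inventory of the kind described above is, at its core, a date comparison over collected certificate metadata; as a minimal sketch (the inventory records and threshold are hypothetical):

```python
from datetime import date

def expiring(certs: dict[str, date], today: date, within_days: int = 30) -> list[str]:
    """Return names of certificates expiring within the window (or already expired)."""
    return sorted(name for name, exp in certs.items() if (exp - today).days <= within_days)

inventory = {  # hypothetical certificate inventory
    "api.example.com": date(2025, 7, 1),
    "db.internal": date(2026, 1, 15),
}
print(expiring(inventory, today=date(2025, 6, 20)))  # ['api.example.com']
```

A production tool would populate the inventory by scraping endpoints or secret stores and feed the results into alerting, rather than hard-coding dates.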
We're excited about you if you have (Required Qualifications):
Bachelor's degree in Computer Science or a related field (or equivalent experience) with 6+ years of professional software engineering experience.
Deep technical expertise in containerized environments, specifically Kubernetes (EKS) and Docker.
Strong command of general-purpose and scripting languages, including Java, Python, Go, and Bash.
Proven experience with Infrastructure-as-Code (IaC) tools such as Terraform and Helm to automate secure infrastructure rollouts.
Expert-level experience automating complex CI/CD pipelines using platforms such as GitLab CI/CD, Jenkins, or GitHub Actions.
Exceptional troubleshooting skills with a track record of identifying root causes for site outages and resolving P1 escalations.
You may also have (Preferred Qualifications):
Experience with Post-Quantum Cryptography to support upcoming product transitions.
Practical experience with FIPS 140-3, TLS 1.3, and modern encryption standards.
Proven ability to automate CVE remediation and integrate SAST/DAST scanning tools (such as Trivy, Aquasec, Tenable, or Fortify) into developer workflows.
Familiarity with government compliance frameworks and industry standards including FedRAMP, ISO 27001, and SOC 2.
Deep understanding of secure coding practices and common vulnerabilities as outlined in the OWASP Top 10.
Experience working with Identity and Access Management (IAM) or Identity Governance platforms.
Strong management skills with a demonstrated ability to influence cross-functional teams and drive results in a remote environment.
This role is not eligible for immigration sponsorship
What you can expect from us:
Generous PTO Policy
Support work life balance with Unplugged Days
Flexible WFH Policy
Mental & Physical Wellness programs
Phone and Internet Reimbursement program
Access to Continued Career Development
Comprehensive Benefits and Competitive Packages
Paid Volunteer Time
Employee Resource Groups
EEO/VEVRAA
#LI-BV1
#LI-REMOTE
Splunk Engineer/Cloud Logging Engineer (CLS Support)
Job ID
2026-2158
# of Openings
1
Overview
Pyramid Systems is seeking a Cloud Logging Engineer (Splunk & AWS) who is responsible for ensuring the availability, performance, and security of its Centralized Logging Solution (CLS).
Responsibilities
- Advise on cost efficiency for future usage and cost optimization for current infrastructure.
- Automate the management and enforcement of policies.
- Create and maintain documentation related to architecture and operational processes for CLS (Centralized Logging Solution).
- Develop a set of best practices and architecture patterns.
- Help maintain regulatory compliance of the CLS (Centralized Logging Solution) infrastructure.
- Help monitor and maintain CLS performance, availability, and capacity.
- Help maintain application container images.
- Offer solutions for ingestion of logs to Splunk via cloud native solutions.
- Maintain all infrastructure as code.
- Provide operations monitoring of CLS platform to enable proactive issue identification, response, and resolution.
- Recommend and execute improvements to the existing CLS architecture and design with growth and scalability in mind to optimize performance, stability, reliability, and agility.
- Responsible for reporting on current infrastructure status, and planning for future usage.
- Responsible for Beats agent deployments and container infrastructure analysis, optimization, and capacity planning.
- Maintain CI/CD pipelines for configuration deployments to applications.
- Support large-scale deployments with data feeds from multiple on-premises and cloud data centers.
- Upgrade, install, and configure monitoring solutions on AWS for Windows and Linux servers.
- Utilize automation tools such as Terraform, Ansible, AWS CloudFormation, Azure Resource Manager, or similar.
- Participate in a rotating on-call schedule and weekly off-hours maintenance.
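The log-ingestion responsibilities above typically come down to forwarding events to Splunk's HTTP Event Collector (HEC). A minimal Python sketch of batching events into a HEC POST request; the `/services/collector/event` path and `Authorization: Splunk <token>` header follow Splunk's HEC conventions, while the URL, token, and sourcetype values here are placeholders.

```python
import json
import time
import urllib.request

def build_hec_event(message, sourcetype="aws:cloudwatch", index="main"):
    """Build a single Splunk HTTP Event Collector (HEC) event payload."""
    return {
        "time": time.time(),       # epoch timestamp for the event
        "sourcetype": sourcetype,  # tells Splunk how to parse the event
        "index": index,            # target Splunk index
        "event": message,          # the log payload itself
    }

def build_hec_request(hec_url, token, events):
    """Prepare a batched HEC POST; the caller passes it to urllib.request.urlopen."""
    # HEC accepts newline-delimited JSON events in a single POST body
    body = "\n".join(json.dumps(e) for e in events).encode("utf-8")
    return urllib.request.Request(
        url=hec_url,  # e.g. https://splunk.example.gov:8088/services/collector/event
        data=body,
        headers={"Authorization": f"Splunk {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

events = [build_hec_event({"msg": "instance started", "host": "ip-10-0-0-1"})]
req = build_hec_request("https://splunk.example.gov:8088/services/collector/event",
                        "00000000-0000-0000-0000-000000000000", events)
print(req.get_method())  # POST
```

Batching events into one request keeps ingestion cost-efficient at high volume, which matters for the cost-optimization responsibilities listed above.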
Qualifications
- Splunk certification required.
- Candidates must be a United States citizen or Permanent Resident who has lived in the United States for at least 3 years, have a clean criminal background, and be able to obtain a Public Trust (High-Risk) position.
Bachelor's degree in computer science, electronics engineering, or another engineering or technical discipline; OR an AWS/Azure certification (AWS Professional/Specialty OR Azure Expert/Advanced); OR 4 years of relevant experience in one of the VAECOT suite of tools (ScienceLogic, Dynatrace, Turbot, AppDynamics)
Minimum of three (3) years of experience in leading technical teams to achieve objectives and outcomes.
Minimum of six (6) years setting up, configuring, and using AWS cloud operational tools to ensure service level agreements and performance targets are met, and continued compliance with policies, standards and guidelines.
Minimum of three (3) years specific to monitoring the Centralized Logging Solution (CLS)/Splunk.
Subject matter expertise with all VAEC Cloud Service Providers, which currently include Microsoft Azure and Amazon Web Services (AWS).
Experience programming with Splunk Search Processing Language (SPL) or equivalent (e.g., Python, PowerShell, AWS or Azure CLI).
One or more of these Splunk certifications: Splunk Core Certified Power User, Splunk Core Certified Advanced Power User, Splunk Enterprise Certified Admin, Splunk Enterprise Certified Architect, Splunk Enterprise Security Certified Admin, Splunk IT Service Intelligence Certified Admin.
Knowledge of enterprise logging, with a focus on security event logging.
Solid understanding of cloud concepts, either using Azure or AWS semantics.
Experience in one or more of the VAECOT suite of tools, shown below:
VAEC Operational Tools (VAECOT)
Some experience in one or more of the following tools:
Third party tools
* Application Performance Monitoring: Dynatrace, AppDynamics
* Cloud Security: Nessus, NetSkope, Enterprise Security External Change Council, Identity and Access Management, Continuous Monitoring as a Service, McAfee, eMASS, Centrify
* Cloud Governance: Turbot
* DevOps/Configuration Management/Help Desk: Ansible, Service Desk, ScienceLogic, ServiceNow, SPLUNK, Jira ServiceDesk, Cloudockit, GitHub
* Containerization: Red Hat OpenShift
* Migration: CloudKey, Version One
* Reporting: Apptio
Cloud Service Provider (CSP) Operational Tools/Services
* AWS Security: System Manager (Explorer and OpsCenter), CloudWatch, Config, CloudTrail, Elasticsearch (Kinesis DataStreams), GuardDuty, Inspector, Key Management Service (KMS), Security Hub, Directory Service, Identity and Access Management, Resource Access Manager, Cognito, Secrets Manager, Certificate Manager, Artifact
* AWS Monitoring and Logging: QuickSight, EventBridge (AWS Kinesis Data Streams), Simple Notification Service (SNS), Elasticsearch (AWS Kinesis Data Streams), CloudTrail, CloudWatch
* AWS Networking: Virtual Private Cloud (VPC), Route 53, API Gateway, Direct Connect, AppStream 2.0, Transit Gateway, Elastic Load Balancer, Firewall Manager, WAF & Shield
* AWS Storage: Cloud Tiering Services to S3 from On-Prem, Simple Storage Services (S3), S3 Glacier, Storage Gateway, Elastic File System (EFS), Backup
* Azure Security: Monitor (Log Analytics and ASC), Event Hubs, Security Center (ASC), Information Protection (AIP), Key Vault, Power BI, Network Watcher (Performance Monitor)
* Azure Monitoring and Logging: Information Protection (AIP), Advanced Threat Protection, Security Center (ASC), Key Vault, Active Directory, Role Based Access Control (RBAC), Resource Manager (ARM), Resource Graph (ARG), Active Directory B2C, App Service, Service Trust Portal
* Azure Networking: Virtual Network, Traffic Manager, DNS, Application Gateway, ExpressRoute, Web Apps, Front Door, VPN Gateway, Load Balancer, Firewall
* Azure Storage: NetApp File Service, Storage (Blobs, Disks, Files, Queues, Tables), Storage Archive Access Tier, StorSimple, Files, Backup
Target Pay Range
The below listed pay range for this position is not a guarantee of compensation or salary. The final offered salary will be influenced by a host of factors including, but not limited to, geographic location, Federal Government contract labor categories and contract wage rates, relevant prior work experience, specific skills and competencies, education, and certifications. Our employees value the flexibility at Pyramid Systems that allows them to balance quality work and their personal lives. We offer competitive compensation and benefits, including our Employee Stock Ownership Program, FlexPTO, and learning and development opportunities.
Pyramid Min
USD $92,168.00/Yr.
Pyramid Max
USD $138,252.00/Yr.
Why Pyramid?
Pyramid Systems, Inc. is an award-winning technology leader driving digital transformation across federal agencies. We empower forward-thinking innovations, accelerate production-ready software, and deliver secure solutions so federal agencies can meet their mission goals. Voted a Top Workplace both regionally (Washington, DC) and nationally (USA) for the past 2 years (2023 and 2024) based on feedback from our employees, we are headquartered in Fairfax, VA, and have a growing national footprint. We value and promote our Flexible Workplace approach because of the positive impacts it has on work-life integration. We remain committed to ensuring every employee's voice is heard, performance and results are recognized and rewarded, development and advancement is a focus, and diversity, equity and inclusion is a company priority. We offer competitive compensation and benefits (including a recently launched Employee Stock Ownership Plan - ESOP), a robust performance-based rewards program, and we know how to have fun! Our people and culture have endured and delivered for our clients for nearly three decades.
EEO Statement
Pyramid Systems, Inc. is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, or protected veteran status and will not be discriminated against on the basis of disability.
We are seeking an experienced and forward-thinking Solution Architect - Data Engineering to lead the design and implementation of scalable, secure, and high-performance data solutions. The ideal candidate will have deep expertise with Python and SQL, experience with data warehouses (Snowflake or something similar), a strong command of engineering best practices (including linters and code formatters, project organization, and managing environments), and practical experience building CI/CD pipelines to ensure robust, automated delivery of data pipelines and services.
Responsibilities
- Architect Scalable Data Solutions
Design and implement end-to-end data engineering architectures that are scalable, maintainable, and performant across batch and real-time processing systems.
- Engineering Leadership
Lead by example with high-quality Python code, utilizing linters and formatters (e.g., pylint, flake8, black) and enforcing code cleanliness, readability, and best practices across teams.
- CI/CD Pipeline Development
Build, manage, and optimize CI/CD pipelines using tools such as GitHub Actions, GitLab CI, CircleCI, or Jenkins to automate testing, code quality checks, and deployment of data engineering components.
- Data Governance & Quality
Establish data validation, logging, and monitoring strategies to ensure data integrity and reliability at scale.
- Collaborate Cross-Functionally
Work closely with data scientists, software engineers, DevOps, and business stakeholders to translate requirements into technical solutions and ensure alignment with overall enterprise architecture.
- Mentorship & Code Reviews
Provide guidance to junior developers, lead technical reviews, and enforce clean coding standards throughout the data engineering team.
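The data validation and monitoring strategy described above can start as simple rule-driven checks that report failure counts per batch. A minimal, dependency-free Python sketch; the rule names and record fields (an orders feed) are purely illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class ValidationResult:
    total: int = 0
    failures: dict = field(default_factory=dict)  # rule name -> failing row count

def validate_batch(rows, rules):
    """Run each named rule (a predicate) over every row; collect failure counts."""
    result = ValidationResult(total=len(rows))
    for name, predicate in rules.items():
        bad = sum(1 for r in rows if not predicate(r))
        if bad:
            result.failures[name] = bad
    return result

# Hypothetical rules for an orders feed
rules = {
    "amount_non_negative": lambda r: r.get("amount", 0) >= 0,
    "order_id_present": lambda r: bool(r.get("order_id")),
}

rows = [{"order_id": "A1", "amount": 10.0},
        {"order_id": "", "amount": -3.0}]
report = validate_batch(rows, rules)
print(report.failures)  # {'amount_non_negative': 1, 'order_id_present': 1}
```

Emitting the failure counts to logging and metrics backends is what turns checks like these into the monitoring strategy the role calls for.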
Required Skills & Experience
- 7+ years of experience in software or data engineering, with 3+ years in an architectural or technical leadership role.
- Expert-level proficiency in Python and SQL, with a deep understanding of best practices, performance tuning, and maintainable code patterns.
- Proven experience with linters, formatters, and other static analysis tools to ensure code quality and compliance.
- Hands-on experience designing and implementing CI/CD pipelines for data pipelines, APIs, and other backend services.
- Solid knowledge of modern data platforms and technologies (e.g., Spark, Airflow, dbt, Kafka, Snowflake, BigQuery, etc.).
- Strong understanding of software engineering practices such as version control, testing, and continuous integration.
Desired Skills & Experience
- Experience working in cloud environments (AWS, GCP, or Azure).
- Familiarity with Infrastructure as Code (IaC) tools like Terraform or CloudFormation.
- Understanding of security, compliance, and governance in data pipelines.
- Excellent communication and documentation skills.
- Strong leadership presence with the ability to mentor and influence teams.
- Problem-solver with a focus on delivering value and simplicity through technology.
Wage and Benefits
We offer a Total Rewards package that includes medical and dental coverage, 401(k) plans, flex spending, life insurance, disability, employee discount program, employee stock purchase program and paid family benefits to support you and your family. The salary range for this position is posted below. Where an employee or prospective employee is paid within this range will depend on, among other factors, actual ranges for current/former employees in the subject position, market considerations, budgetary considerations, tenure and standing with the Company (applicable to current employees), as well as the employee's/applicant's skill set, level of experience, and qualifications.
Employment Transparency
It is the policy of our company to provide equal employment opportunities to all employees and applicants for employment without regard to race, color, ethnicity, gender, age, religion, creed, national origin, sexual orientation, gender identity, marital status, citizenship, genetic information, veteran status, disability, or any other basis prohibited by applicable federal, state, or local law.
Please note this job description is not designed to cover or contain a comprehensive listing of activities, duties, or responsibilities that are required of the employee for this job. Duties, responsibilities, and activities may change at any time with or without notice.
The employer will make reasonable accommodations in compliance with the Americans with Disabilities Act of 1990. The job description will be reviewed periodically as duties and responsibilities change with business necessity. Essential and other job functions are subject to modification. Reasonable accommodations may be provided to enable individuals with disabilities to perform the essential functions.
For applicants to jobs in the United States: In compliance with the current Americans with Disabilities Act and state and local laws, if you have a disability and would like to request an accommodation to apply for a position with our company, please email .
Salary Range: $200,000–$220,000 USD
Job Description:
Principal Azure Engineer, Platform & Delivery:
The Principal Azure Engineer, Platform & Delivery is a senior technical leader responsible for designing, building, and delivering enterprise-scale Microsoft Azure solutions. This role combines deep hands-on Azure engineering expertise with ownership of delivery outcomes, often serving as the technical lead for initiatives without dedicated project management. The ideal candidate can translate complex or ambiguous business needs into secure, scalable Azure solutions and ensure they are executed predictably and effectively.
Required Qualifications:
- Deep technical experience designing and operating high-availability, scalable infrastructure including networking, storage, virtualization, and identity.
- Developing and maintaining automated deployment modules using tools like Terraform or ARM templates.
- Optimizing delivery pipelines (e.g., Azure DevOps, GitHub Actions) to ensure repeatable, secure platform services.
- Proven experience implementing enterprise Azure networking architectures.
- Experience migrating and modernizing workloads from on-premises environments to Azure.
- Implementing governance frameworks, RBAC, and security baselines using Microsoft Defender for Cloud and Azure Policy.
- Demonstrated ability to lead engineers and influence stakeholders without formal authority.
- Experience defining and implementing monitoring and observability solutions.
- Lead end-to-end delivery of multiple concurrent Azure initiatives from intake and design through implementation and operational handoff.
- Act as the technical project lead for Azure initiatives where no formal project manager is assigned.
- Maintain visibility into all in-flight Azure work and provide regular status updates, risk reporting, and summaries.
- Coordinate work across infrastructure, security, networking, application, and vendor teams.
- Proactively identify delivery risks and blockers and drive resolution to keep initiatives moving forward.
- Balance speed, cost, risk, and compliance when making technical and delivery tradeoff decisions.
- Mentor and guide engineers, establishing technical standards, patterns, and best practices.
- Produce high-quality technical documentation, architectural artifacts, and operational runbooks.
- Foster strong partnerships with application teams to enable successful Azure adoption.
Additional Skills and Experience:
- Deep proficiency in Azure compute (VMs, AKS), storage, networking (VNETs, NSGs), and identity (Microsoft Entra ID).
- Experience operating in regulated environments such as healthcare, financial services, or higher education, including frameworks like HIPAA, HITRUST, SOC 2, or GDPR.
- Working knowledge of IT service management concepts.
- Experience with Azure Cost Management and FinOps practices.
- Strong problem investigation, root cause analysis, and decision-making skills.
Education and Experience:
- Bachelor’s degree or equivalent experience.
- Minimum of 10 years of professional IT experience, with at least 5 years in a senior, architect-level, or principal cloud engineering role.
- Demonstrated experience leading enterprise-scale Azure initiatives with multiple parallel workstreams.
Advanced Software Engineering Manager – DevOps / CI/CD
Location: Onsite – Cincinnati, Ohio (Blue Ash)
Employment Type: Direct Hire
Compensation: $145,000–$160,000/year
Benefits: Health, Dental, Vision, 401(k) with Match, PTO, Paid Holidays, and additional enterprise benefits
Travel: None
Start: ASAP
Open Role Due To: Technology modernization and enterprise platform expansion
About the Role
Our client, a global leader in enterprise technology and innovation, is seeking an Advanced Software Engineering Manager – DevOps / CI/CD to architect, operate, and continuously enhance a modern CI/CD ecosystem that enables fast, secure, and reliable software delivery across the organization.
This role will drive pipeline scalability, automation maturity, platform modernization, and operational excellence. You will partner closely with development, QA, architecture, IT, security, and product teams to accelerate engineering productivity while ensuring enterprise-grade security, compliance, and governance standards are met.
This is a highly visible leadership role requiring both strategic vision and strong hands-on technical expertise.
What You’ll Do
- Lead the design, development, and delivery of scalable, high-quality DevOps and CI/CD solutions aligned to enterprise architecture standards
- Architect and modernize CI/CD pipelines using Azure DevOps, GitHub Actions, Jenkins, and/or GitLab CI
- Develop Infrastructure as Code solutions using Terraform, Ansible, ARM/Bicep, and related tools
- Translate technology strategy into actionable roadmaps supporting enterprise project portfolios
- Drive modernization initiatives across cloud, infrastructure, automation, and security domains
- Lead migration strategies from current-state to future-state architecture aligned with capital planning cycles
- Present architectural recommendations to executive leadership, including cost/benefit and risk analysis
- Mentor engineers on software engineering best practices, reusable design patterns, and architectural principles
- Promote reusable frameworks, shared libraries, and automation standards
- Author and review architectural artifacts including system diagrams, interface specifications, and technical design documentation
Why This Role Stands Out
- Enterprise-wide impact with high visibility to senior leadership
- Opportunity to modernize and scale CI/CD platforms in a large, complex environment
- Strong influence over cloud architecture, automation strategy, and operational excellence
- Competitive compensation and comprehensive benefits package
What We’re Looking For
- Bachelor’s Degree in Computer Science or STEM-related field
- 8+ years of experience in DevOps, CI/CD, Infrastructure Engineering, or Site Reliability Engineering
- 5+ years leading the design and delivery of complex, large-scale, high-quality software or automation systems
- Strong hands-on expertise in software or infrastructure engineering, including design patterns and code structure
- Expertise with CI/CD platforms (Azure DevOps, GitHub Actions, Jenkins, GitLab CI)
- Strong proficiency in Infrastructure as Code tools (Terraform, Ansible, ARM/Bicep)
- Deep knowledge in at least two of the following: cloud platforms, APIs, application development, infrastructure/network design, middleware, servers & storage, database management, or operations
- Solid understanding of network and security architecture principles
- Experience with Linux and/or Windows systems, virtualization, and containerization
- Experience with enterprise observability and monitoring platforms
- Proven ability to operate strategically while delivering hands-on technical solutions
- Strong written, verbal, and presentation communication skills
Preferred Skills
- Experience building and mentoring high-performing engineering teams
- Experience designing and implementing elastic architectures in Azure and/or Google Cloud Platform
- Experience working in large-scale or regulated enterprise environments
- Experience with Google Cloud Platform (GCP) or Google Distributed Cloud (GDCE)
- Awareness of industry trends and competitive DevOps landscape
Role: Engineering Lead (Java/DevOps)
Location: Burbank, CA - Onsite, Hybrid
1) 8+ years in full-stack development
a. Experience building distributed systems with strong proficiency in Java/Spring Boot, Angular (or React/Vue)
2) DevOps fluency: Proven track record designing and delivering AWS-native architectures (Lambda, API Gateway/AppSync, Step Functions, EventBridge, DynamoDB/RDS, S3)
a. 8+ years of experience.
3) 8+ years of CI/CD fluency: Hands-on with CI/CD pipelines.
a. (GitHub Actions or AWS CodePipeline), Infrastructure as Code (CloudFormation/CDK/Terraform), and automated testing frameworks. You help the team maintain stability, automate deployments, and manage the integrity of development across environments (Dev, QA, Prod).
Technology Requirements:
1) Full-Stack Depth & Breadth
Experience building distributed systems with strong proficiency in Java/Spring Boot, Angular (or React/Vue), and AWS-native architectures (e.g., Lambda, Step Functions, EventBridge, AppSync, DynamoDB).
2) DevOps and Automation Expertise
You've led teams through CI/CD transformations, working with tools like GitHub Actions, AWS CodePipeline, or Terraform/CDK, and helped establish reliable, repeatable release pipelines.
LTIMindtree is an equal opportunity employer that is committed to diversity in the workplace. Our employment decisions are made without regard to race, color, creed, religion, sex (including pregnancy, childbirth or related medical conditions), gender identity or expression, national origin, ancestry, age, family-care status, veteran status, marital status, civil union status, domestic partnership status, military service, handicap or disability or history of handicap or disability, genetic information, atypical hereditary cellular or blood trait, union affiliation, affectional or sexual orientation or preference, or any other characteristic protected by applicable federal, state, or local law, except where such considerations are bona fide occupational qualifications permitted by law.
A little about us...
Role: Azure DevOps Engineer
Location: Berkeley Heights, NJ
Job Description:
1. Extensive hands-on experience with GitHub Actions, writing workflows in YAML using reusable templates
2. Extensive hands-on experience with application CI/CD pipelines both for Azure and on-prem for different frameworks
3. Hands on experience with Azure DevOps and migration programs of CI/CD pipelines preferably from Azure DevOps to GitHub Actions
4. Proficiency in integrating and consuming REST APIs to achieve automation through scripting
5. Hands-on experience with at least one scripting language, and has built out-of-the-box automations for platforms like PeopleSoft, SharePoint, MDM, etc.
6. Hands on experience with CI/CD of databases
7. Good to have: experience with infrastructure-as-code, including ARM templates, Terraform, Azure CLI, and Azure PowerShell modules
8. Exposure to monitoring tools like ELK, Prometheus, and Grafana
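Item 4 above (consuming REST APIs for automation) can be illustrated against the Azure DevOps REST API, which authenticates a personal access token (PAT) as the password of an empty-username Basic credential. A hedged Python sketch; the organization, project, pipeline id, and token are placeholders:

```python
import base64
import json
import urllib.request

def ado_request(org, project, pat, route, payload=None, api_version="7.1"):
    """Build an authenticated Azure DevOps REST request (PAT over Basic auth)."""
    # Azure DevOps PATs are sent as the password of an empty-username credential
    token = base64.b64encode(f":{pat}".encode()).decode()
    url = f"https://dev.azure.com/{org}/{project}/_apis/{route}?api-version={api_version}"
    data = json.dumps(payload).encode() if payload is not None else None
    return urllib.request.Request(
        url,
        data=data,
        method="POST" if data else "GET",  # POST when a body is supplied
        headers={"Authorization": f"Basic {token}",
                 "Content-Type": "application/json"},
    )

# Hypothetical: queue a run of pipeline 42 (caller passes req to urlopen)
req = ado_request("contoso", "platform", "my-pat", "pipelines/42/runs", payload={})
print(req.full_url)
```

The same request-builder pattern generalizes to GitHub's REST API or any platform exposed over HTTP, which is how one-off scripts grow into the migration tooling the role describes.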
As a Data Science/Data Engineer Intern, you will work on cutting-edge analytical and data engineering projects that drive measurable business impact across pricing, underwriting, marketing, and claims.
This internship is ideal for a technically curious, motivated problem-solver who wants hands-on data science experience.
RESPONSIBILITIES
- Support the design, construction, and optimization of robust data pipelines to enable machine learning and analytical modeling.
- Contribute to the design and implementation of data and ML workflows using orchestration tools such as Dagster, Airflow, or similar frameworks.
- Help implement data quality checks, validation routines, and monitoring for automated data workflows.
- Assist in organizing and managing internal GitHub repositories to standardize ML project structures and best practices.
- Collaborate with data scientists and engineers to automate the ingestion, transformation, and delivery of data for model development.
- Contribute to initiatives migrating analytical processes into cloud-based data lake architectures and modern platforms such as AWS or Snowflake.
- Develop reusable and well-tested code to support analytical pipelines and internal tools using Python and SQL.
- Conduct data mining, cleansing, and preparation tasks to build high-quality analytical datasets.
- Participate in model development, including data profiling, model training, validation, and interpretation.
- Build and evaluate predictive models that enhance profitability through improved segmentation and estimation of insurance risk.
- Assist in studies evaluating new business models for customer segmentation, retention, and lifetime value.
- Collaborate with business leaders to translate insights into operational improvements and cost efficiencies.
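The orchestration tools named above (Dagster, Airflow) model a pipeline as a dependency graph and run steps in topological order. A toy illustration using Python's standard-library `graphlib`; the step names are hypothetical:

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline steps mapped to their upstream dependencies
pipeline = {
    "extract": set(),
    "clean": {"extract"},
    "features": {"clean"},
    "train": {"features"},
    "report": {"train", "clean"},
}

def execution_order(dag):
    """Return a linear order that respects every upstream dependency."""
    return list(TopologicalSorter(dag).static_order())

order = execution_order(pipeline)
print(order)
```

Real orchestrators add scheduling, retries, and parallel execution of independent steps on top of exactly this ordering idea.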
QUALIFICATIONS
- Currently pursuing or recently completed a Master’s in Data Science, Computer Science, Statistics, Economics, or related field.
- Proficiency in Python (Pandas, NumPy, Scikit-learn, XGBoost, or PyTorch) and SQL.
- Understanding of data engineering concepts, ETL/ELT workflows, and machine learning deployment.
- Exposure to workflow orchestration tools (e.g., Airflow, Dagster, Prefect) and Git/GitHub for collaborative development.
- Familiarity with Docker, CI/CD pipelines, and infrastructure-as-code tools such as Terraform preferred.
- Knowledge of AWS cloud services such as S3, Lambda, EC2, or SageMaker a plus.
- Experience with common modeling techniques (e.g., GLM, tree-based models, Bayesian statistics, NLP, deep learning) through coursework or projects.
- Strong analytical, communication, and problem-solving skills.
- A self-starter mindset, with attention to detail and enthusiasm for learning new technologies.
SALARY RANGE
The pay for this position is $35 per hour.
ABOUT THE COMPANY
The Plymouth Rock Company and its affiliated group of companies write and manage over $2 billion in personal and commercial auto and homeowner’s insurance throughout the Northeast and mid-Atlantic, where we have built an unparalleled reputation for service. We continuously invest in technology, our employees thrive in our empowering environment, and our customers are among the most loyal in the industry. The Plymouth Rock group of companies employs more than 1,900 people and is headquartered in Boston, Massachusetts. Plymouth Rock Assurance Corporation holds an A.M. Best rating of “A-/Excellent."
Job Title: Cloud Systems Engineer
Location: Pleasanton, CA - Onsite 80%
Duration: 9 Months
Pay Range: $70/hr -$90/hr on W2 (DOE)
Description:
This hybrid cloud role will be part of a talented team responsible for the growth, ongoing maintenance, and development of IaaS running primarily on Linux and Windows, both in the cloud and on-premises. Provides level 3 support (on an on-call rotation model) for complex systems and applications. Works across product teams and operations to design and implement tools that automate infrastructure configuration, provisioning, builds, and deployments.
This position will be remote and will need to work nights and weekends.
Responsibilities:
- Updating, designing, and adding automation for provisioning and managing infrastructure, operating system, and middleware components.
- Designing, engineering, and maintaining a highly complex and secure cloud environment for Oracle and Azure Cloud.
- Provide support for complex system and applications, rolling out new systems, maintaining, migrating, upgrading and improving the long-term performance of the systems.
- Perform as a member of the systems administration team to support the installation, optimization, integration, troubleshooting, backup, recovery, modification, security, and upgrading of IT systems and components to provide services that enable customers to effectively apply IT to business requirements
- Have knowledge of tools and mechanisms for distributing new or upgraded software to ensure customers receive current versions of supported software, as they become available.
- Shall be able to work with other senior staff to recommend and design systems architecture and topology from both general and specific perspectives.
- Provide an on-call support rotation model.
- Collaboration & Influencing
- Learning Agility
- Drives Results
- Customer Impact
Skills and Experience:
- Bachelor's degree or equivalent experience in IT preferred, plus 6+ years of progressive experience in engineering, design, and documentation.
- The ideal candidate must be a self-starter with strong work habits and mid-level career experience maintaining large numbers of Linux and Windows servers in bare-metal, ESX, and cloud environments.
- Deep understanding of Windows/Linux fundamentals, including reading system logs, running tracing and debugging tools, and analyzing network packet captures to resolve root cause.
- Implementing and administering VMware ESX/ESXi/vCenter; creating vSwitches, port groups, NIC teaming, bonding, and VLAN or virtual network problem management.
- Experience with public cloud platforms - IaaS, PaaS, Kubernetes, Docker, and Vagrant (Azure & OCI preferred).
- Experience with automation technologies such as infrastructure as code with Chef, Terraform, and Jenkins; deployment technologies such as PXE Kickstart and Windows SCCM; and configuration management for mobile devices. Scripting experience, specifically with Bash/Python/Ruby/Perl/PowerShell, preferred.
- Experience with warehouse or manufacturing automation is a plus, including working with PLCs, conveyance, and warehouse control systems; understanding of Warehouse Management Systems and prior work with real-time systems preferred.
- Experience with monitoring tools like Nagios or New Relic, and Kafka or other streaming services.
Additional Skills:
- Github/Github Actions
- Ansible/Chef
- Azure/OCI Cloud
- Linux Containers/Kubernetes/Docker
- Oracle KVM / KVM
- Oracle Enterprise Linux
Benefits Info: Russell Tobin/Pride Global offers eligible employees comprehensive healthcare coverage (medical, dental, and vision plans), supplemental coverage (accident insurance, critical illness insurance and hospital indemnity), 401(k) retirement savings, life & disability insurance, an employee assistance program, legal support, auto and home insurance, pet insurance, and employee discounts with preferred vendors.
Title: Full Stack Developer with AI
Duration: 12 Months+
Location: Spring, TX
Type: Onsite
We are seeking a Full Stack Developer who will contribute to building scalable backend services, including platform and utility modules. You will also play an active role in implementing GenAI use cases using modern agentic frameworks.
You will collaborate with product owners, trading fusion developers, data engineers, and other full stack developers across regions.
Responsibilities:
- Platform Engineering & Support
- Develop, enhance, and support components of the Global Trading App platform
- Implement monitoring, alerting, and telemetry capabilities using modern observability tools
- Improve platform reliability, scalability, and performance through proactive engineering
- Author infrastructure-as-code using Terraform for cloud resources
Application & Service Development
- Build secure and scalable backend APIs (primarily in Python / FastAPI)
- Create responsive and efficient React-based UI components
- Develop reusable utility modules for fusion teams to accelerate delivery
GenAI & Agentic Solutions
- Implement GenAI-powered features using LLMs, vector databases, and multi-agent frameworks
- Develop "agentic" workflows for automation, troubleshooting, and developer productivity
- Build model integration and evaluation
Collaboration & Standards
- Contribute to engineering best practices and documentation
- Work closely with global trading fusion teams to ensure alignment and technical excellence
Qualifications?:
- Python (advanced): APIs, data processing, async programming
- React: modern component-based UI development
- FastAPI: building high performance backend services
- DBT: data engineering and transformation
- GitHub/CI/CD: strong version control and build pipeline experience
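The "async programming" qualification above can be illustrated with a minimal, stdlib-only sketch (the record IDs and simulated lookup are hypothetical, not part of this role's codebase): `asyncio.gather` fans out several I/O-bound calls concurrently instead of awaiting them one at a time.

```python
import asyncio

async def fetch_record(record_id: int) -> dict:
    """Simulate an I/O-bound lookup (e.g., an API or database call)."""
    await asyncio.sleep(0.01)  # stand-in for network latency
    return {"id": record_id, "value": record_id * 2}

async def process_batch(record_ids: list[int]) -> list[dict]:
    """Fetch all records concurrently instead of one at a time."""
    return await asyncio.gather(*(fetch_record(i) for i in record_ids))

results = asyncio.run(process_batch([1, 2, 3]))
print(results)  # → [{'id': 1, 'value': 2}, {'id': 2, 'value': 4}, {'id': 3, 'value': 6}]
```

`asyncio.gather` preserves input order, so the batch result lines up with the IDs passed in even though the lookups overlap in time.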
Preferred Skills:
- Terraform, Azure, AWS: infrastructure provisioning and automation
- Databricks, Snowflake
- GenAI / Multi-Agent
- Experience implementing solutions using LLMs, embeddings, prompt engineering
- Familiarity with agentic coding frameworks (e.g., LangChain, AutoGen, OpenAI agents, etc.)
- Understanding of RAG, model orchestration, and AI application patterns
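The retrieval step behind the RAG pattern mentioned above can be sketched with a toy example. The "embeddings" here are hand-written 3-dimensional vectors and the corpus is invented for illustration; a real pipeline would produce vectors with an embedding model and store them in a vector database. The shape of the logic, ranking candidate chunks by cosine similarity to a query vector, is the same.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "embeddings": a real pipeline would produce these with an embedding model.
corpus = {
    "reset your password in settings": [0.9, 0.1, 0.0],
    "trading hours are 9am to 4pm":    [0.1, 0.9, 0.2],
    "contact support via the portal":  [0.2, 0.2, 0.9],
}

def retrieve(query_vec: list[float], k: int = 1) -> list[str]:
    """Return the k corpus chunks most similar to the query vector."""
    ranked = sorted(corpus,
                    key=lambda text: cosine_similarity(query_vec, corpus[text]),
                    reverse=True)
    return ranked[:k]

print(retrieve([0.85, 0.15, 0.05]))  # → ['reset your password in settings']
```

The retrieved chunks would then be inserted into the LLM prompt as grounding context.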
Soft Skills:
- Strong problem-solving skills and ownership mindset
- Ability to work in global, cross-functional teams
- Clear communication and documentation abilities
- Comfort operating in fast-paced, high-availability environments
- Adaptability and willingness to learn new technologies and methodologies
This role supports the full software development lifecycle, including front-end development, back-end services, database design, system integration, deployment, and ongoing operational support.
The engineer collaborates with cross-functional teams to deliver reliable, integrated technology solutions aligned with business needs.
Candidates will be considered at Level III, IV, or V depending on experience and demonstrated technical leadership.
Key Responsibilities
Full Stack Development
• Design, develop, test, and maintain enterprise-grade applications across the technology stack.
• Build modern, responsive, and user-friendly interfaces using React or similar frameworks.
• Develop backend services, RESTful APIs, and microservices using Java (Spring Boot), Node.js, and/or Python.
• Ensure applications are optimized for performance, scalability, reliability, and maintainability.
Architecture & Integration
• Contribute to system design and architectural decisions.
• Develop and maintain integrations between enterprise platforms to ensure data accuracy and operational efficiency.
• Participate in API design, microservices architecture, and modernization initiatives.
Cloud & DevOps
• Deploy and support applications in Azure environments.
• Implement and maintain CI/CD pipelines to support automated builds, testing, and deployments.
• Utilize containerization and orchestration tools such as Docker and Kubernetes.
• Support infrastructure-as-code and DevOps best practices.
Operational Excellence & Support
• Monitor system performance and troubleshoot issues across the stack.
• Perform root cause analysis and implement long-term solutions.
• Plan and execute upgrades, enhancements, and system optimizations.
• Provide visibility into application health and performance metrics.
Collaboration & Leadership
• Partner with business stakeholders, analysts, and technical teams to translate requirements into scalable solutions.
• Participate in Agile/Scrum ceremonies and iterative development processes.
• Mentor junior engineers and contribute to knowledge sharing.
• Lead technical initiatives or projects based on level and experience.
Compliance & Security
• Ensure adherence to corporate policies and regulatory standards (including RUS, OSHA, SOX, NERC, FERC, and ITS requirements).
• Apply secure coding practices and support application and infrastructure security initiatives.
• Promote a culture of compliance, accountability, and continuous improvement.
Qualifications
Education
Bachelor's degree in Computer Science, Engineering, Information Systems, or a related technical field.
Experience by Level
Level III
• 4+ years of full stack development experience.
• Independently manages development tasks and production support.
• Leads smaller initiatives and contributes to team projects.
Level IV
• 6+ years of experience, including application architecture and system optimization.
• Leads development projects and provides technical direction.
• Collaborates cross-functionally to deliver integrated enterprise solutions.
Level V
• 8+ years of experience architecting and managing enterprise-scale applications.
• Oversees major technical initiatives.
• Provides strategic technical leadership and drives innovation across IT functions.
Technical Expertise
• Java (Spring Boot), React.js or similar framework, Python, Node.js
• Microservices architecture and API management
• MSSQL, Oracle, MongoDB
• Azure or AWS/GCP (cloud-native architectures preferred)
• CI/CD pipelines, GitHub
• Docker, Kubernetes, Terraform
• Secure coding practices (OAuth, JWT, SSL)
• Observability, logging, and monitoring tools
• Familiarity with ML/AI technologies
Key Competencies
• Strong analytical and troubleshooting skills
• Excellent written and verbal communication abilities
• Customer-focused mindset
• Ability to work independently and collaboratively
• Commitment to continuous learning and technical growth
Why Join OPC, GTC, and GSOC?
• Work on impactful, mission-critical enterprise systems
• Contribute to modernization and cloud transformation initiatives
• Grow your technical leadership capabilities
• Be part of a collaborative, innovation-driven IT organization
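As an illustration of the JWT structure behind the secure-coding bullet above (not of this team's code), the following stdlib-only sketch decodes a token's payload. The unsigned token built here is a hypothetical example; in production, signature verification should always be done with an established library such as PyJWT, and unverified claims should never be trusted.

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode (NOT verify) a JWT's payload segment.
    Real verification requires a library such as PyJWT."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a hypothetical unsigned token purely to demonstrate the format.
header = base64.urlsafe_b64encode(b'{"alg":"none"}').rstrip(b"=").decode()
claims = base64.urlsafe_b64encode(b'{"sub":"user42"}').rstrip(b"=").decode()
token = f"{header}.{claims}."
print(decode_jwt_payload(token))  # → {'sub': 'user42'}
```

The three dot-separated segments (header, payload, signature) are base64url-encoded, which is why padding must be restored before decoding.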
Senior Technical Support Engineer
Location: San Francisco, CA | Raleigh, NC | Dallas, TX | Boston, MA
Schedule: Hybrid – 3 days onsite required
Employment Type: 6-Month Contract-to-Hire
Pay Rate: $65–68/hour
Start Date: ASAP
About the Role
The Technical Solutions team is focused on advancing care and research innovation. We support new business initiatives by expanding product capabilities in strategic areas and delivering a scalable technical support framework across multiple product portfolios.
As a Senior Technical Support Engineer, you will partner closely with internal stakeholders to identify, reproduce, troubleshoot, and resolve complex technical issues. You will support infrastructure, permissions, and configuration changes while delivering high-level technical support and sustaining engineering services that help customers achieve meaningful business outcomes.
This role offers the opportunity to collaborate with customers, developers, architects, and operations teams to solve challenging, high-impact problems. You will also contribute to building support tooling and infrastructure to improve operational efficiency.
Travel up to 10% may be required.
Key Responsibilities
- Own and manage technical customer issues from identification through full resolution
- Reproduce and troubleshoot complex technical problems, including reviewing and analyzing code to determine root cause
- Project manage new client deployment issues through to completion
- Implement infrastructure, security, and permissions configuration changes
- Drive operational efficiencies by identifying improvements in process, tooling, and product functionality
- Develop playbooks and knowledge base documentation to streamline issue resolution
- Create internal reports and dashboards for issue tracking and performance monitoring
Minimum Qualifications
- Bachelor’s degree in Computer Science, Information Systems, Mathematics, Statistics, or related field
- Cloud operations experience (creating buckets, virtual machines, and managing security access controls/IAM)
- 3+ years of experience with Python or another object-oriented programming language
- 3+ years of experience working with SQL
- Experience troubleshooting data-related issues
- Proficiency with GitHub and Jira
- Strong troubleshooting skills with the ability to track complex technical details
- Excellent communication skills with the ability to translate technical findings for both senior developers and non-technical stakeholders
Preferred Qualifications
- 4+ years of experience in healthcare technology
- Experience supporting highly regulated software environments
- Experience with R
- Infrastructure-as-Code (IaC) experience such as Terraform, Ansible, or similar tools
- Self-starter mindset with strong ownership and a passion for driving issues through to resolution
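The "troubleshooting data-related issues" and SQL qualifications above can be sketched with a self-contained example. The `orders` table and its duplicated key are invented for illustration; the `GROUP BY ... HAVING` pattern for surfacing duplicate keys is the same against any SQL engine.

```python
import sqlite3

# In-memory table with a deliberately duplicated business key, the kind
# of data issue a support engineer might be asked to chase down.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, 10.0), (2, 15.5), (2, 15.5), (3, 7.25)])

# GROUP BY / HAVING surfaces keys that appear more than once.
dupes = conn.execute(
    "SELECT order_id, COUNT(*) FROM orders "
    "GROUP BY order_id HAVING COUNT(*) > 1"
).fetchall()
print(dupes)  # → [(2, 2)]
```

From here, the fix usually depends on root cause: a retried insert, a missing unique constraint, or an upstream extract run twice.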
Our Ideal Candidate
We are seeking an experienced cloud and DevOps engineer with over 5 years of experience designing, automating, and maintaining scalable AWS infrastructure, CI/CD pipelines, and secure cloud environments. In the role of Senior Cloud Platform Engineer, you should demonstrate expertise in Infrastructure as Code, scripting, containerization, and modern monitoring or alerting platforms, as well as strong skills working across teams. Success in this position requires a talent for optimizing cloud resources, ensuring security and compliance, and facilitating fast, reliable software deployments. Having experience with HIPAA-compliant systems, .NET platforms, or serverless computing is considered a significant advantage.
Responsibilities
- Design, implement, and maintain CI/CD pipelines using tools like AWS CDK, AWS CodePipeline, or GitHub Actions.
- Manage infrastructure as code (IaC) using Terraform, CloudFormation, or similar tools.
- Monitor system performance and availability using tools like CloudWatch, Prometheus, Grafana, or Datadog.
- Automate repetitive tasks and deployment processes to improve team efficiency.
- Collaborate with software engineers, QA, and product teams to ensure smooth deployments and rapid iteration.
- Implement and enforce security best practices and compliance across infrastructure and deployment pipelines.
- Identify optimizations to reduce cloud resource usage across AWS accounts.
- Maintain documentation for infrastructure, processes, and compliance requirements.
- Work with multiple teams to implement their deployments using common practices.
- Manage builds and the corresponding documentation
- Monitor package versions, track EOL dates, and upgrade to keep infrastructure current
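The EOL-tracking responsibility above can be sketched in a few lines. The inventory below is hypothetical; a real workflow might pull dates from vendor documentation or a service such as endoflife.date.

```python
from datetime import date

# Hypothetical inventory: package -> published end-of-life date.
eol_dates = {
    "python3.8":  date(2024, 10, 7),
    "python3.12": date(2028, 10, 31),
    "node18":     date(2025, 4, 30),
}

def needs_upgrade(today: date) -> list[str]:
    """Return packages whose EOL date has passed as of `today`."""
    return sorted(name for name, eol in eol_dates.items() if eol <= today)

print(needs_upgrade(date(2025, 6, 1)))  # → ['node18', 'python3.8']
```

Running a check like this on a schedule (and alerting on a non-empty result) turns EOL tracking from a manual audit into a routine signal.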
Qualifications
- B.S. Computer Science degree or equivalent experience.
- 5+ years of experience in DevOps, Site Reliability Engineering, or related roles.
- 2+ years of hands-on AWS experience
- Strong experience with cloud platforms (AWS, Azure, or GCP).
- Proficiency in scripting languages such as Bash, Python, or PowerShell.
- Experience with containerization and orchestration (Docker, Kubernetes).
- Familiarity with monitoring, logging, and alerting tools.
- Solid understanding of networking, security, and system administration.
- Strong communication skills and ability to work cross-functionally.
Salary: $80,000 - $150,000 per year
A bit about us:
Are you interested in working for the world's largest cannabis market with a footprint that covers the entire breadth of the state of California? Are you someone who wants to be part of the growth of a fast-growing industry? At client company, our mission is clear: to provide the ultimate one stop shop cannabis experience by offering exceptional customer service and diversified products.
We strive to build long-term customer loyalty.
We’re building a consumer-centric organization that is focused on sharing the transformational potential of cannabis with the world.
Our product line is one of the best-selling cannabis brands in the market today and has claimed the title of the best-selling vape brand across all BDSA-tracked markets and best-selling brand overall in the California market! We are rooted in California and have expanded our operations across the United States, with even more growth on the horizon! Additionally, we’re building distribution networks to bring our products to over 60 countries worldwide.
We recognize that our employees are at the center of our success, and we take pride in a corporate culture that emphasizes our core values: Influence, Inspire, Innovate, Win, & Grow! Our employees come from a wide range of retail backgrounds, each bringing their own unique skills and talents to the table as we work together to continue our incredible growth.
If you are interested in partaking in the journey of building a nationally recognized and leading brand, we want to hear from you! Why join us? Benefits & Compensation: Additional details about compensation and benefits eligibility for this role will be provided during the hiring process.
All employees are provided competitive compensation, paid training, and employee discounts on our products and services.
We offer a range of benefits packages based on employee eligibility, including: Paid Vacation Time, Paid Sick Leave, Paid Holidays, Parental Leave.
Health, Dental, and Vision Insurance.
Employee Assistance Program.
401k with generous employer match.
Life Insurance.
Job Details
Data Scientist
Why this role matters
You'll architect, build, and operate the data infrastructure that powers analytics, machine learning, and operational reporting across the organization.
This is a hands-on role for a data scientist who thrives at the intersection of engineering, modeling, and production-grade systems.
What you'll do
- Design and evolve our lakehouse architecture (Delta Lake on AWS/Azure), including storage layout, compute strategy (Databricks, EMR, Synapse), and performance SLAs
- Build robust batch and streaming data pipelines using Apache Spark (PySpark/Scala), Databricks Workflows, Azure Data Factory, and event-driven integrations (Kinesis, Event Hubs, Kafka)
- Develop and maintain analytical and ML-ready data models, semantic layers, and dimensional schemas for BI and production use
- Implement observability and reliability features, including custom Spark metrics, anomaly detection, logging, and automated data quality checks
- Optimize compute and storage performance through cluster tuning, caching, partitioning, and format selection, with measurable cost savings
- Enforce data governance policies, including RBAC, row-level security, cataloging, lineage tracking, and compliance with GDPR, HIPAA/FHIR, and CCPA/CDPA
- Collaborate with analytics, product, and platform teams to define SLAs, manage incident response, and guide data contract best practices
- Mentor junior engineers and lead design reviews, setting standards for scalable, maintainable data systems
What you bring
- 8+ years of experience in data science or data engineering
- Expert-level Python and SQL; working knowledge of Scala/Java for Spark
- 3–5+ years of experience with Apache Spark and Databricks (including Delta Live Tables and performance tuning)
- Strong cloud experience in AWS (S3, EMR, Glue, Lambda, Kinesis) and/or Azure (ADLS Gen2, ADF, Event Hubs, Synapse)
- Experience with orchestration tools (Airflow, ADF) and transformation frameworks (dbt)
- Familiarity with data warehouses such as Snowflake, Redshift, or Databricks SQL
- Proven track record of cost and performance optimization in cloud data environments
- Solid foundation in data modeling, CI/CD, Docker/Linux, and Git
- Ability to translate business needs into scalable data products and lead projects end-to-end
Nice to have
- Experience with custom Spark listeners, low-latency streaming, or event-driven architectures
- Exposure to healthcare data pipelines (FHIR) and DLT troubleshooting
- Dashboarding experience with Power BI, Tableau, or Looker
- Familiarity with governance frameworks and MLOps (feature stores, model monitoring)
Our stack
Databricks, Spark (PySpark/Scala), Delta Lake, Airflow, ADF, dbt, AWS (S3/EMR/Glue/Kinesis), Azure (ADLS/Synapse/Event Hubs), Terraform, GitHub Actions, Docker, CloudWatch, REST
Success in 6–12 months looks like
Reliable, observable pipelines with
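The automated data quality checks mentioned in this posting can be illustrated framework-free. In practice this logic would run inside a Spark job, a dbt test, or a Delta Live Tables expectation; the column name and threshold below are invented for illustration, but the shape of the gate is the same.

```python
def null_rate(rows: list[dict], column: str) -> float:
    """Fraction of rows where `column` is missing or None."""
    if not rows:
        return 0.0
    missing = sum(1 for r in rows if r.get(column) is None)
    return missing / len(rows)

def check_quality(rows: list[dict], column: str, max_null_rate: float = 0.1) -> bool:
    """A minimal data-quality gate: fail the batch if too many nulls."""
    return null_rate(rows, column) <= max_null_rate

batch = [{"patient_id": 1}, {"patient_id": None}, {"patient_id": 3}, {"patient_id": 4}]
print(null_rate(batch, "patient_id"))      # → 0.25
print(check_quality(batch, "patient_id"))  # → False (0.25 exceeds the 0.1 threshold)
```

Wiring a check like this into the pipeline (and alerting on failure) is what turns data quality from an after-the-fact audit into an enforced contract.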