Prometheus relabel_configs Drop Example Jobs in USA
Responsible for assisting the Count Room supervisory staff and participating in dropping and counting the money pulled off the casino floor and from other revenue sources.
How You Will Create the Extraordinary
- Sorts, counts, and records contents of slot, table game, and poker drop boxes, bill changers and currency drop boxes, and Sportsbook Kiosk currency according to set procedures.
- Wraps all moneys picked up daily and transfers to vault.
- Transfers drop devices/carts from casino floor to count rooms.
- Removes drop boxes from carts and assembles in numerical order to determine if all boxes have been delivered and accounted for.
- Sorts, counts, straps, and records the results of daily counts in accordance with departmental and regulatory policies.
- Compares totals from the physical count to those entered in the computer as well as to the numbers recorded on currency counters.
- May be assigned to verify, strap, and seal back currency.
- Prepares reports recording any discrepancies between the computer totals and the actual physical totals, notifying the lead or manager of any variance between the two.
- When assigned as computer operator, will enter all receipts.
- Identifies different denominations of gaming chips and currency; also required to count and stock chips and currency.
- Retrieves full and empty drop boxes.
- Performs minor repairs and maintenance on count room equipment and drop boxes.
- Notifies count room leadership of malfunctioning equipment.
- Maintains ethical work habits, adhering to regulatory, departmental, and company policies.
- Performs other duties as assigned, always presenting oneself as a credit to Caesars and encouraging others to do the same.
- Must present a well-groomed appearance.
- Compares information contained in drop boxes to data stored on computer terminal.
- Meets the attendance guidelines of the job and adheres to regulatory, departmental and company policies.
- High school diploma or equivalent required.
- Must be 21+
- Prior count room or money handling experience (casino or bank) is preferred.
- Must be able to work any day of the week due to demand.
- Adding machine, computer terminal operation, and currency counting machine skills helpful.
- Basic mechanical (repair) ability preferred.
- Must possess a team mentality with the ability to work in a secured, surveilled area for prolonged periods of time with coworkers.
- Must have the manual dexterity to open small locks, grip as well as remove and replace slot boxes while maintaining a fast pace to meet time constraints.
- Must be able to maintain a fast pace under stressful conditions.
- Must be able to read, write, speak and understand English.
- Must be able to obtain a LA Gaming License.
Additional Requirements
- Must be able to stoop, bend, kneel, crouch, and pick up money dropped on the floor.
- Must be able to grip objects and have good finger movement when counting and handling currency.
- Must be able to differentiate denominations of chips and authenticity of currency.
- Must be able to stand for extended periods of time.
- Must be able to operate a computer, ten key adding machine and money counter.
- Must be able to respond to visual and aural cues.
- Ability to continuously maneuver in and around the casino, and around all count rooms.
- Ability to tolerate areas containing secondary smoke, high noise levels, bright lights, and dust.
- Ability to work at a fast pace in mentally and physically stressful situations.
Job Identification 79092
Job Schedule Part time
Locations Horseshoe Bossier City (On-site)
We're looking for teams that want to get on a contracted dedicated account that has nothing but long hauls and great pay! If you're a team that is looking for consistent lanes and miles, APPLY TODAY!
Qualifications:
- 3 months Class A CDL experience
- No more than 3 moving violations in last 36 months
- No more than 2 accidents in last 36 months
- No speeding tickets over 15 MPH in last 12 months
Info on the run:
- GA to NV/CA - And back
- Or OTR lanes depending on where you live
- Drop and hook on both ends
- No touch
- Dedicated customer
- Same lanes every week
- 53ft dry van
- $2000+ weekly per driver if you go company W2
- $2500+ weekly per driver if you go 1099 Lease Op
We have options for both company and lease teams for this dedicated account. If you're in the market for a dedicated lane - APPLY NOW!
About Prometheus Group
Prometheus Group is a team of self-starters centered on being resourceful, accountable, and results-focused. Career progress is based on merit and not years of service or attaining certifications. Our drive and dedication to creating great products for our global customers are at the heart of all we do! In joining Prometheus, you will become a part of the largest global provider of comprehensive enterprise asset management (EAM) software solutions that support the management life cycle for equipment maintenance and operations.
Job Summary
The Associate Account Manager is responsible for selling Prometheus Group products and services to new and existing Prometheus Group customers as well as developing pipeline to drive revenue growth.
Responsibilities
- Sales and Territory Development
- Achieve or exceed quota targets.
- Develop effective and specific opportunity/account plans to ensure revenue target delivery and sustainable growth
- Establish and maintain key relationships at multiple levels within the assigned accounts
- Develop and deliver overviews, presentations, and coordinate high level functional product demonstrations to new and existing accounts.
- Participate in the development of the assigned region, including accounts, account relationships, prospect profiling and sales cycles.
- Participate in the development and delivery of a comprehensive business case to address customer and prospect priorities and pain points.
- Follow a disciplined approach to maintaining a rolling pipeline. Keep pipeline current and moving up the pipeline curve.
- Support all Prometheus promotions and events in the assigned region.
- Maintain the Salesforce system with accurate customer and pipeline information.
- Territory Development
- Find and nurture opportunities for the company’s solutions by cold calling into targeted companies and following up with leads
- Work in coordination with Account Executives to target high-value prospects in the assigned territory
- Generate a sufficient number of new opportunities per month
- Articulate business value to prospects
- Gather prospects’ needs and understand their initiatives as they relate to the solution
- Occasionally travel for trade shows and travel to customers
- Document all activity related to accounts
- Keep current with industry knowledge and best practices to effectively communicate with prospects
Skills and Experience
- B.S/B.A. in Business, Communication or related field
- 2-4 years’ sales experience in the software industry
- Customer-focused, results-driven, excellent communicator
- High energy, self-starter who can learn quickly and easily engage clients
- Able to work effectively as an individual contributor as well as within a team
- Foreign language skills a plus
Benefits Overview
We offer an attractive benefits program to meet the diverse needs of our teammates:
- Employee base HSA plan, dental, life and short-term disability coverage 100% paid for by Prometheus Group
- HSA & FSA plan options
- Retirement Savings with Generous Company Match & Immediate Vesting
- Gym membership to O2 Fitness
- Casual dress attire
- Half-Day Fridays
- Generous Paid Time Off
- Company Outings, Trips & Activities
- Paid Parental Leave
Prometheus Group is proud to be an Equal Employment Opportunity and Affirmative Action employer. We do not discriminate based upon race, religion, color, national origin, gender (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics.
Company Description
Prometheus Materials develops innovative sustainable building materials to drive the transition toward a carbon-negative future. Using nature-inspired processes, the company utilizes microalgae to produce its ProZERO™ line of carbon-negative supplemental cement blends, designed for ready-mix concrete applications, manufactured products, and licensed material solutions. These cutting-edge materials address the environmental challenges of traditional construction while offering scalable solutions for concrete manufacturers.
Role Description
The Director of Business Development is responsible for identifying and developing the sales and marketing strategies leading to long-term, profitable growth. You will evaluate and execute new business opportunities which align with Prometheus Materials’ overall market growth strategies. This position will work closely with distributors, vendors, and customers. Additionally, close collaboration with internal business units (biotechnology, research and development, manufacturing, and product management) will be essential to the success of the Director of Business Development.
Responsibilities:
This is a summary of activities and is not intended to be an all-inclusive list of responsibilities.
· Develop, own, and execute a formal business plan aligned with company objectives
· Develop, maintain, and track product backlog and bid activity
· Establish revenue goal KPIs and deliver results
· Manage strategic relationships to maximize revenue performance
· Create and manage key account plans, including defined goals, activities, and timelines
· Communicate regular updates on key performance indicators, including volume, revenue, and strategic initiatives
· Identify, secure, grow, and manage key licensing opportunities across multiple industries
· Research, analyze, and implement key market trends within low-embodied carbon building materials
· Monitor and maintain competitive intelligence, including competitor products, pricing strategies, and development activities
· Regularly review the sales cycle and implement continuous improvement strategies
· Travel up to 40% as required
Qualifications:
Candidates should be able to use an existing network, or develop a robust network of key stakeholders, to increase market awareness, market share, and the success of the formal business plan.
· Bachelor’s degree in Business or a related field, or equivalent experience
· Minimum of 5 years of experience in sales, marketing, or product management
· Experience within the building materials industry preferred (e.g., sand and gravel, cement, ready mix, or admixtures)
· Proven experience collaborating with industry experts (Architects and Engineers)
· Working knowledge of key high-level industry standards relating to cement, concrete, and aggregates
· Demonstrated experience developing, managing, and executing sales strategies to drive revenue growth
· Strong understanding of business-to-business sales cycles, sales strategies, and key performance metrics
· Experience building, leading, and managing multi-dimensional sales team
· Proficiency with Customer Relationship Management (CRM) software and sales reporting
· Solid financial and business acumen, including budgeting, forecasting, and pricing strategies
· Strong negotiation, presentation, and facilitation skills
· Knowledge or experience with sustainability initiatives, LEED certification, and carbon reduction targets
Please send resume and cover letter to
L3Harris is the Trusted Disruptor in defense tech. With customers' mission-critical needs always in mind, our employees deliver end-to-end technology solutions connecting the space, air, land, sea and cyber domains in the interest of national security.
Job Title: Specialist, Software Engineering (Service Reliability Engineer)
Job Code: 33584
Job Location: Melbourne, FL or Chantilly, VA (on-site)
Job Schedule: Rotational shifts 24x7
Job Description:
L3Harris is seeking an experienced Software Engineer to join our dynamic team, focusing on operating, maintaining, and sustaining a Cloud-based 24x7 operational system. The role encompasses live monitoring and real-time anomaly troubleshooting of Cloud data operations, and participation in sustainment development efforts (patching, upgrades) for that environment, using an agile development process.
Essential Functions:
* Develop, maintain, and enhance cloud applications using Python, TypeScript, and Java
* Provide 24x7 real-time monitoring and troubleshooting of an operational Cloud-based Data Operations system (shift work; nights and weekends as required on a rotational schedule).
* Shifts include: 1st Shift Day (6:00am - 2:30pm EST), 2nd Shift Evening with 10% differential (2:00pm - 10:30pm EST), and 3rd Shift Night with 12% differential (10:00pm - 6:30am EST).
* First-line anomaly lead: reporting, resolving, and planning future mitigation of any issues encountered.
* Collaborate with cross-functional teams to define and implement engineering changes (enhancements, automation, capability improvements and bug fixes)
* Develop and maintain technical documentation related to Cloud sustainment and operations.
* Organize and support training sessions for operations personnel as required.
* Ensure compliance with NIST, Department of Commerce, and NOAA IT security standards such as FISMA, FedRAMP, and NIST 800-53
* Ability to obtain and maintain a Public Trust.
* Tool familiarity: Grafana, Prometheus, InfluxDB, Postgres, CDK (Cloud Development Kit), CloudFormation, Ansible, Git, Active Directory, Networking, GRE (Global Accelerator Resolvers & Endpoints), EKS (Elastic Kubernetes Service), RDS (Relational Database Service), Lambda, IAM (Identity and Access Management)
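As a small illustration of the Prometheus configuration work a role like this can involve, the sketch below shows a scrape config that uses relabel_configs with action: drop to skip certain targets, and metric_relabel_configs to discard a noisy metric family before ingestion. The job name, targets, and regexes are illustrative assumptions, not part of this posting.

```yaml
scrape_configs:
  - job_name: 'cloud-data-ops'        # hypothetical job name
    static_configs:
      - targets: ['prod-node:9100', 'test-node:9100']
    relabel_configs:
      # Drop scrape targets whose address matches test hosts,
      # so they never appear in the target list
      - source_labels: [__address__]
        regex: 'test-.*'
        action: drop
    metric_relabel_configs:
      # Drop a high-cardinality metric family after scraping,
      # before samples are written to storage
      - source_labels: [__name__]
        regex: 'go_gc_duration_seconds.*'
        action: drop
```

relabel_configs run before the scrape (deciding which targets exist), while metric_relabel_configs run on each scraped sample, which is why the target filter uses __address__ and the metric filter uses __name__.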
Qualifications:
* Bachelor's Degree and a minimum of 4 years of prior related experience. Graduate Degree or equivalent with 2 years of prior related experience. In lieu of a degree, minimum of 8 years of prior related experience.
* Experience with commercial cloud systems (Cloud Linux SYS Admin/coding) and services (AWS, Azure, etc.).
* Experience with any of the following tools: Grafana, Prometheus, InfluxDB, Postgres, CDK (Cloud Development Kit), CloudFormation, Ansible, Git, Active Directory, Networking, GRE (Global Accelerator Resolvers & Endpoints), EKS (Elastic Kubernetes Service), RDS (Relational Database Service), Lambda, IAM (Identity and Access Management).
Preferred Additional Skills:
* Experience with Agile software development best practices and tools (SAFe, Jira, Git, etc.) and participation in continuous Agile planning and coordination.
* Experience with containerization technologies (Docker, Kubernetes).
* Familiarity with container observability tools (Prometheus, Grafana, InfluxDB, PromQL).
* Background in security architecture and secure coding practices.
* Experience in domain-driven design (DDD) and API-first development.
#LI-KB1
L3Harris Technologies is proud to be an Equal Opportunity Employer. L3Harris is committed to treating all employees and applicants for employment with respect and dignity and maintaining a workplace that is free from unlawful discrimination. All applicants will be considered for employment without regard to race, color, religion, age, national origin, ancestry, ethnicity, gender (including pregnancy, childbirth, breastfeeding or other related medical conditions), gender identity, gender expression, sexual orientation, marital status, veteran status, disability, genetic information, citizenship status, characteristic or membership in any other group protected by federal, state or local laws. L3Harris maintains a drug-free workplace and performs pre-employment substance abuse testing and background checks, where permitted by law.
Please be aware many of our positions require the ability to obtain a security clearance. Security clearances may only be granted to U.S. citizens. In addition, applicants who accept a conditional offer of employment may be subject to government security investigation(s) and must meet eligibility requirements for access to classified information.
By submitting your resume for this position, you understand and agree that L3Harris Technologies may share your resume, as well as any other related personal information or documentation you provide, with its subsidiaries and affiliated companies for the purpose of considering you for other available positions.
L3Harris Technologies is an E-Verify Employer. Please click here for the E-Verify Poster in English or Spanish. For information regarding your Right To Work, please click here for English or Spanish.
Compensation: $150-195k
Responsibilities:
• Design, deploy, and manage container orchestration platforms using OpenShift and AKS.
• Administer and optimize Linux-based systems in hybrid and multi-cloud environments.
• Automate infrastructure provisioning and configuration using Ansible Automation Platform.
• Develop and maintain Infrastructure as Code (IaC) using Terraform, Helm, and GitOps workflows.
• Collaborate with DevOps and application teams to implement CI/CD pipelines and DevSecOps practices.
• Monitor system performance, troubleshoot issues, and ensure high availability and disaster recovery.
• Implement security best practices for containerized workloads and cloud environments.
• Provide technical leadership and mentorship to junior engineers.
• Stay current with emerging technologies and contribute to strategic cloud initiatives.
• Assist with migrations to cloud, ensuring best practices are followed and architecture is compliant with company standards.
Qualifications:
Required:
• Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
• 5+ years of professional experience in Linux system administration and cloud engineering.
• 3+ years of hands-on experience with OpenShift and AKS in production environments.
• Strong proficiency in scripting languages (e.g., Bash, Python).
• Experience with CI/CD tools (e.g., Jenkins, GitLab CI, ArgoCD).
• Deep understanding of Kubernetes architecture, networking, and security.
• Familiarity with cloud platforms (Azure, AWS, GCP) and hybrid cloud strategies.
• Knowledge of monitoring and logging tools (Prometheus, Grafana, ELK stack).
• Excellent problem-solving and communication skills.
• Linux Administration: Deep expertise in RHEL environment.
• Container Platforms: 3+ years of hands-on experience with OpenShift and AKS.
• Automation: Proficiency with Ansible, Ansible Tower/AAP, and scripting (Bash, Python).
• Infrastructure as Code: Experience with Terraform, Helm, and GitOps tools (e.g., ArgoCD, Flux).
• CI/CD: Familiarity with Jenkins, GitLab CI, Azure DevOps, or similar tools.
• Cloud Platforms: Strong knowledge of Azure, with exposure to AWS or GCP a plus.
• Monitoring & Logging: Experience with Prometheus, Grafana, ELK/EFK, and Azure Monitor.
• Security: Understanding of container security, RBAC, network policies, and compliance frameworks.
• Networking: Solid grasp of Kubernetes networking, service mesh (e.g., Istio), and ingress controllers.
Preferred:
• Red Hat Certified Specialist in OpenShift Administration.
• Microsoft Certified: Azure Kubernetes Service Specialist.
• Experience with service mesh technologies (e.g., Istio, Linkerd).
• Experience in regulated industries (e.g., finance, healthcare) is a plus.
We're Hiring: Director of IT Architecture (Remote, with onsite meetings as needed)
We are seeking a Director of IT Architecture | Enterprise Architecture | Cloud & Systems Leader to lead and shape the IT architecture strategy for a growing healthcare organization. This is a unique opportunity to design and implement technology solutions that support business goals, regulatory compliance, and modern healthcare delivery.
Key Responsibilities:
- Define and execute a comprehensive IT architecture strategy aligned with clinical, operational, and business objectives.
- Lead and manage a team of network, cloud, and systems architects, fostering collaboration and high performance.
- Oversee network, cloud, and systems architecture initiatives, ensuring security, scalability, and interoperability.
- Evaluate, test, and implement modern platform visibility solutions (DataDog, Dynatrace, New Relic, Prometheus / Grafana).
- Collaborate with IT leadership, business stakeholders, vendors, and cloud providers to optimize technology investments.
- Establish IT governance, standards, and best practices to ensure compliance with industry regulations (HIPAA, HITECH, HITRUST).
- Monitor performance, risks, and cost optimization across all IT architecture initiatives.
Required Qualifications:
- Bachelor’s degree in Computer Science, IT, Healthcare Informatics, or related field.
- 10+ years of progressive experience in IT architecture, including at least 5 years in a leadership role managing network, cloud, and systems architecture teams, preferably in healthcare.
- Hands-on experience with cloud platforms (AWS, Azure) and hybrid environments.
- Demonstrated history of assessing, testing, and implementing modern platform visibility solutions (DataDog, Dynatrace, New Relic, Prometheus / Grafana).
- Strong expertise in network architecture (SD-WAN, VPNs, firewalls, healthcare data exchange networks).
- Deep knowledge of systems architecture, including server infrastructure, virtualization, storage, disaster recovery, and healthcare IT standards (HL7, FHIR, DICOM).
- Strong leadership, communication, and stakeholder management skills.
- Strategic thinker with strong problem-solving and analytical abilities.
Preferred Qualifications:
- Master’s degree in a related field.
- Relevant cloud certifications (AWS Solutions Architect Professional, AWS Security Specialty, Microsoft Azure Solutions Architect).
- Security and architecture certifications such as CISSP, CCNP, HITRUST, or FinOps.
This is a remote role with the flexibility to work from home, while requiring occasional onsite meetings for leadership collaboration and strategic planning.
If you are a visionary IT leader with a strong healthcare background, experience leading cloud, network, and systems architecture teams, and a passion for building scalable, secure IT platforms, we want to hear from you!
Lead Enterprise Tooling Engineer — Tenant Inc.
Overview
Tenant Inc. is modernizing its enterprise tooling, automation, and visibility ecosystem to better support our engineering, operations, finance, sales, and customer support teams. The Lead Enterprise Tooling Engineer plays a critical role in this transformation by owning the strategy, architecture, and execution of integrations across Jira, Microsoft 365, HubSpot, Zendesk, Intuit Enterprise, ERP systems, and internal platforms. This role ensures that our business systems work together seamlessly, data flows reliably across the organization, and leaders have a unified view of operational performance.
By connecting enterprise tools with application telemetry and APM insights, this position enables a single source of truth for workflow health, customer impact, and cross-system reliability. The ideal candidate blends technical expertise with business acumen, ensuring that tooling investments directly support Tenant’s operational goals and modernization roadmap.
Key Responsibilities
Enterprise Tooling Architecture & Integration
• Design and maintain the integrations that connect our core business systems, ensuring information flows consistently across Jira, Microsoft 365, HubSpot, Zendesk, Intuit Enterprise, ERP platforms, and internal applications.
• Build automated workflows and API-driven processes that reduce manual effort, eliminate redundant work, and improve data accuracy.
• Lead the unification of identity, permissions, and user lifecycle management across enterprise tools to support operational efficiency and compliance.
• Oversee cross-platform data synchronization for contacts, leases, tickets, financial data, and operational workflows to ensure a consistent and reliable customer and business experience.
APM, Observability & Unified Visibility
• Integrate observability and APM platforms (OpenSearch, Prometheus, Grafana, New Relic, Catchpoint, CloudWatch, clickstream analytics) with enterprise systems to provide end-to-end visibility across the business.
• Connect system telemetry with business workflows—linking application performance to Jira issues, Zendesk tickets, HubSpot activities, and ERP events.
• Develop executive-ready dashboards that consolidate operational KPIs, workflow performance, integration health, and customer impact into a single pane of glass.
• Implement alerting and automated correlation to help teams identify issues faster and understand their business implications.
• Partner with DevOps and SRE to ensure observability data is actionable and accessible across the organization.
Workflow Automation & Process Optimization
• Design automated workflows that streamline processes across engineering, support, sales, finance, and operations.
• Build Jira workflows, dashboards, and governance structures that support predictable releases and cross-team alignment.
• Automate HubSpot → Jira → Zendesk → ERP workflows to reduce handoffs, shorten cycle times, and improve customer responsiveness.
• Partner with Finance to automate Intuit Enterprise and ERP processes such as invoicing, reconciliation, and reporting.
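As a hedged sketch of one step in such a cross-system workflow, the Python below translates a HubSpot-style webhook event into a Jira "create issue" payload. The field names, project key, and label are illustrative assumptions, not Tenant's actual schema or integration code.

```python
# Sketch: map a (hypothetical) HubSpot deal-stage webhook event to a
# Jira REST "create issue" payload. Field names are illustrative only.

def hubspot_event_to_jira_issue(event: dict, project_key: str = "OPS") -> dict:
    """Translate a HubSpot-style webhook event into a Jira issue payload."""
    deal = event.get("properties", {})
    summary = f"Follow up: {deal.get('dealname', 'unnamed deal')}"
    return {
        "fields": {
            "project": {"key": project_key},
            "issuetype": {"name": "Task"},
            "summary": summary,
            "description": (
                f"Deal stage changed to {deal.get('dealstage', 'unknown')} "
                f"for amount {deal.get('amount', 'n/a')}."
            ),
            # A shared label lets downstream Zendesk/ERP automations find
            # issues created by this sync (hypothetical convention)
            "labels": ["hubspot-sync"],
        }
    }

event = {"properties": {"dealname": "Acme renewal",
                        "dealstage": "closedwon", "amount": "12000"}}
payload = hubspot_event_to_jira_issue(event)
print(payload["fields"]["summary"])  # Follow up: Acme renewal
```

In a real integration this payload would be POSTed to Jira's REST API by a webhook handler or queue consumer; the translation layer is kept pure so it can be unit-tested without network calls.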
API Engineering & Custom Development
• Develop and maintain custom integrations, middleware, and internal tools that improve operational efficiency and reduce manual work.
• Implement reliable error handling, monitoring, and logging to ensure integrations remain stable and transparent.
• Ensure all integrations meet security, scalability, and compliance requirements.
Data Quality, Governance & Observability
• Establish data governance standards that ensure accuracy, consistency, and auditability across enterprise tools.
• Implement monitoring and alerting for integration health and workflow performance.
• Partner with Security and Compliance to maintain SOC2, PCI, and internal governance standards.
Cross-Functional Leadership & Collaboration
• Serve as the strategic and technical leader for enterprise tooling, automation, and observability initiatives.
• Partner with Engineering, Product, Support, Sales, Finance, and Operations to understand business needs and translate them into scalable solutions.
• Mentor engineers and administrators across Jira, HubSpot, Zendesk, and Microsoft 365.
• Promote best practices for automation, documentation, and cross-system reliability.
Operational Excellence
• Lead root cause analysis for integration and workflow issues, ensuring long-term solutions rather than short-term fixes.
• Reduce manual effort across departments through automation and improved tooling.
• Maintain clear documentation for integrations, workflows, and system dependencies.
• Evaluate new tools, vendors, and opportunities to improve operational efficiency and business outcomes.
Required Qualifications
• 7+ years in enterprise tooling, business systems engineering, DevOps, or integration engineering.
• Deep experience with APIs for Jira, Microsoft 365, PowerBI, HubSpot, Zendesk, and similar SaaS platforms.
• Hands-on experience with observability and APM platforms (OpenSearch, Prometheus, Grafana, New Relic, Catchpoint, CloudWatch, clickstream analytics).
• Strong scripting and automation skills (Python, Node.js, PowerShell).
• Experience designing workflow automation across multiple business systems.
• Strong understanding of identity management, SSO, and permission models.
• Experience with data governance, monitoring, and integration reliability.
• Proven ability to lead cross-functional initiatives and collaborate with business stakeholders.
Preferred Qualifications
• Experience with Intuit Enterprise, ERP systems, or financial system integrations.
• Background in multi-tenant SaaS environments.
• Experience improving customer experience through event-driven architectures (webhooks, queues, EventBridge, SNS/SQS).
• Familiarity with ETL pipelines, data warehousing, and analytics platforms.
• Experience supporting engineering release workflows and IT DevOps processes.
Success Indicators at Tenant Inc.
• A unified, executive-ready view of operational performance that connects APM telemetry, enterprise workflows, and business outcomes.
• Automated, reliable workflows across Jira, HubSpot, Zendesk, Microsoft 365, and ERP systems.
• Significant reduction in manual work across engineering, support, sales, and finance.
• Clean, consistent, and governed data across enterprise tools.
• Reliable integrations with clear dashboards, alerting, and business impact visibility.
• Strong cross-team alignment and measurable improvements in operational efficiency.
• A scalable, well-documented tooling architecture that supports Tenant’s modernization strategy.
#EnterpriseEngineering #BusinessSystems #ToolingEngineering #AutomationEngineering
#SystemsIntegration #APM #Observability
RESPONSIBILITIES:
* Architect, design, and maintain scalable CI/CD pipelines using Azure/AWS DevSecOps.
* Build and optimize Docker-based microservices, images, and deployment pipelines.
* Lead deployments across Docker Swarm, Kubernetes/EKS, and multi-location environments.
* Develop infrastructure automation using Ansible, Bash scripting, Terraform, and Git-based workflows.
* Manage release pipelines using container registries, artifact feeds, template pipelines, and multi-stage workflows.
* Design multi-environment strategies for dev, QA, staging, and production deployment.
* Implement cloud-native services with AWS & Azure cloud platforms.
* Implement basic security practices, including IAM roles, secrets management, and access controls.
* Develop secure, modular, reusable build and release systems.
* Work closely with full-stack engineering teams (Angular, Java, Python, backend APIs, database engineers).
* Mentor junior DevOps engineers and lead DevOps roadmap decisions.
KNOWLEDGE REQUIREMENTS:
DevOps Expertise:
Azure DevOps pipelines, YAML templating, CI/CD strategy, Git branching models.
Containerization & Orchestration:
Docker images, Docker Compose, Docker Swarm, multi-node/multi-location deployments.
Cloud Technologies:
Azure deployments & infrastructure, AWS (IAM, Lambda, S3, CloudWatch).
Programming / Scripting Languages:
Python, Bash, Linux/Unix administration, awk, shell automation, Groovy.
Infrastructure Automation:
Ansible playbooks, tasks/roles, inventory design, configuration management.
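For illustration, a minimal Ansible playbook of the sort this requirement describes might look like the sketch below; the inventory group and package name are hypothetical placeholders, not part of this posting.

```yaml
# Minimal playbook sketch: install and start a monitoring agent on app hosts.
# "app_servers" and the package name are hypothetical placeholders.
- name: Baseline monitoring agents
  hosts: app_servers
  become: true
  tasks:
    - name: Install the node exporter package
      ansible.builtin.package:
        name: prometheus-node-exporter
        state: present

    - name: Ensure the exporter service is enabled and running
      ansible.builtin.service:
        name: prometheus-node-exporter
        state: started
        enabled: true
```

Both tasks are idempotent, so the playbook can run repeatedly from a scheduler or CI pipeline without side effects, which is the usual configuration-management pattern.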
Distributed Deployment Architecture:
Multi-site replication, node selection by IP, dynamic service routing.
Database Stack Experience:
PostgreSQL, MySQL, MariaDB operations & migrations.
Observability & Logging:
CloudWatch monitoring, log collection, Prometheus, Grafana, reporting & metrics.
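A hypothetical example of the Prometheus/Grafana side of this requirement: an alerting rule that fires when an API's 5xx error rate stays above 5% for ten minutes. The metric name, job label, and thresholds are illustrative assumptions, not part of this posting.

```yaml
# Sketch of a Prometheus alerting rule, typically surfaced in Grafana
# or routed to a pager via Alertmanager. Values are illustrative.
groups:
  - name: service-health
    rules:
      - alert: HighErrorRate
        expr: |
          sum(rate(http_requests_total{status=~"5..", job="api"}[5m]))
            / sum(rate(http_requests_total{job="api"}[5m])) > 0.05
        for: 10m        # must stay above threshold for 10 minutes
        labels:
          severity: page
        annotations:
          summary: "API 5xx error rate above 5% for 10 minutes"
```

The `for:` clause suppresses transient spikes, a common way to keep reporting and on-call alerting focused on sustained problems.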
Version Control & Build Systems:
Azure DevOps, Git, Git submodules, artifact storage, registry solutions, secrets management.
Nice to have: AI knowledge/experience and a willingness to learn.
EDUCATION & EXPERIENCE REQUIREMENTS
* BS degree in Electrical/Computer Engineering, Computer Science or related field. MS preferred.
* 7+ years' experience in a software DevOps/development/test capacity with enterprise server, storage, or networking products.
Job Description
At Boeing, we innovate and collaborate to make the world a better place. We’re committed to fostering an environment for every teammate that’s welcoming, respectful and inclusive, with great opportunity for professional growth. Find your future with us.
The Boeing Company is currently seeking a Software Engineer–DevSecOps (Senior or Lead) to support our E7A Software Development Environments team located in Tukwila, Washington.
This team is responsible for building and maintaining multiple secure development environments that enable continuous integration and secure software development for the U.S. Air Force E-7A Rapid Prototype Program. This position will focus on developing and maintaining RKE2- and EKS-backed software development tools supporting Maven, Gradle, and Python development pipelines in both AWS and onsite disconnected environments.
Position Responsibilities:
Lead the design, development, testing and verification of E-7A software engineering tools
Develop and enhance our DevSecOps operations across multiple environments
Collaborate with cross-functional teams to integrate and implement solutions for software development
Provide end user support and troubleshooting to software development teams using program defined pipeline configurations and development environments
Monitor and maintain software development environments and tools
Participate in peer reviews and provide technical leadership to junior engineers
Utilize analytical problem-solving skills to resolve issues and improve processes and tools
Our team is currently hiring for a broad range of experience levels, including Senior and Lead Engineers.
This position is expected to be 100% onsite. The selected candidate will be required to work onsite at one of the listed location options.
Security Clearance and Export Control Requirements:
This position requires an active Secret U.S. Security Clearance. (A U.S. Security Clearance that has been active in the past 24 months is considered active.)
Basic Qualifications (Required Skills/ Experience):
3+ years of experience administering Linux systems
3+ years of experience administering Kubernetes (EKS or RKE2) deployments
3+ years of experience with AWS cloud development and containerization
Programming or scripting experience in Java or C++ and Python
Preferred Qualifications (Desired Skills/Experience):
Bachelor of Science degree from an accredited course of study in engineering, engineering technology (includes manufacturing engineering technology), chemistry, physics, mathematics, data science, or computer science
Ability to set up and manage a toolchain that uses GitLab, GitLab-CI, Gradle, Maven, Nexus/Artifactory, SonarQube, Prometheus, FluentBit, CloudWatch, Helm, Terraform, Ansible, OpenShift, Docker, and more
AWS Certification is preferred
Containerization knowledge including Docker, Kubernetes, Helm, Rancher, Istio, Big Bang
Understanding of designing and implementing full stack/Microservice infrastructure
Understanding of secure software development methodologies and Security First mindset
Understanding of cloud architecture and design methodologies
Strong working knowledge of the CI/CD process, including debugging, test, and integration of software tools
Knowledge of general software development and testing tools, including compilers, linkers, debuggers, and requirements management tools
Experience with Confluence, Jira
Drug Free Workplace:
Boeing is a Drug Free Workplace (DFW) where post-offer applicants and employees are subject to testing for marijuana, cocaine, opioids, amphetamines, PCP, and alcohol when criteria are met as outlined in our policies.
Union:
This is a union-represented position.
CodeVue Coding Challenge:
To be considered for this position you will be required to complete a technical assessment as part of the selection process. Failure to complete the assessment will remove you from consideration.
Pay & Benefits:
At Boeing, we strive to deliver a Total Rewards package that will attract, engage and retain the top talent. Elements of the Total Rewards package include competitive base pay and variable compensation opportunities.
The Boeing Company also provides eligible employees with an opportunity to enroll in a variety of benefit programs, generally including health insurance, flexible spending accounts, health savings accounts, retirement savings plans, life and disability insurance programs, and a number of programs that provide for both paid and unpaid time away from work.
The specific programs and options available to any given employee may vary depending on eligibility factors such as geographic location, date of hire, and the applicability of collective bargaining agreements.
Pay is based upon candidate experience and qualifications, as well as market and business considerations.
Summary pay range for Senior Level: $130,900 - $177,100.
Summary pay range for Lead Level: $161,500 - $218,500.
Applications for this position will be accepted until Mar. 30, 2026.
Export Control Requirements:
This position must meet U.S. export control compliance requirements. To meet U.S. export control compliance requirements, a “U.S. Person” as defined by 22 C.F.R. §120.62 is required. “U.S. Person” includes U.S. Citizen, U.S. National, lawful permanent resident, refugee, or asylee.
Export Control Details:
US based job, US Person required
Relocation
This position offers relocation based on candidate eligibility.
Security Clearance
This position requires an active U.S. Secret Security Clearance (U.S. Citizenship Required). (A U.S. Security Clearance that has been active in the past 24 months is considered active)
Visa Sponsorship
Employer will not sponsor applicants for employment visa status.
Shift
This position is for 1st shift
Equal Opportunity Employer:
Boeing is an Equal Opportunity Employer. Employment decisions are made without regard to race, color, religion, national origin, gender, sexual orientation, gender identity, age, physical or mental disability, genetic factors, military/veteran status or other characteristics protected by law.
Remote working/work at home options are available for this role.
Responsibilities
The DevOps / SRE will:
- Build and maintain the infrastructure that supports the firm's trading systems
- Collaborate with development teams to design and implement automated build and deployment pipelines
- Drive the rapid adoption of new processes/systems
- Provide hands-on support to the trading team
Qualifications
- BS/MS in Computer Science, Engineering, or related discipline
- 5+ years of experience in the Platform, SRE, Production, or Systems Engineering fields
- Excellent knowledge of all aspects of the software engineering process, including coding, testing, deployment, scalability, security, and maintainability
- Ability to set up and manage CI/CD activities and tools (e.g. GitLab, Bitbucket), as well as build your own solutions (e.g. Java/Gradle)
- Track record of working with distributed systems in a trading environment, e.g. Aeron, Kafka, and RabbitMQ
- Deep understanding of best practices, design patterns, and principles for highly decoupled and scalable systems
- Good knowledge of Unix systems / Bash / networks
- Experience with infrastructure and application observability tooling, e.g. Datadog, Prometheus, and Grafana
- Strong knowledge in coding/scripting (Java, Python, Go, or Bash)
- Experience with automation/configuration frameworks using Terraform, Kustomize, Ansible, Helm, or an equivalent
Desired skills
- Experience with cloud platforms (ideally AWS)
- Experience in API management (routing, gateways, versioning) with a profound understanding of API development aspects
- Ability to apply strategies for efficient communication, data consistency, and resilience across microservices, including experience with API design, message-based communication, and event-driven architectures
- Experience in defining and enforcing architectural patterns (SOA, CQRS, Event Sourcing, etc.)
- Experience in performance/stress testing and system tuning
Key Responsibilities:
- Design and deploy observability frameworks leveraging tools such as Grafana, Dynatrace, Prometheus, ELK, Splunk, etc. Define best practices for monitoring, alerting, and visualization across hybrid and multi-cloud environments.
- Develop strategies for monitoring KPIs tied to business outcomes (e.g., sales performance, supply chain efficiency, customer experience).
- Collaborate with business and IT teams to identify key metrics and integrate them into dashboards and alerting systems.
- Implement AIOps solutions using industry-leading platforms like OpenAI, AWS Bedrock, Google Gemini, Anthropic, and similar technologies.
- Develop predictive analytics and anomaly detection models to proactively identify and resolve operational issues.
- Integrate observability tools with ITSM platforms and automation workflows. Enable automated root cause analysis and remediation using AI/ML models.
- Provide observability strategies for infrastructure (servers, storage, cloud), applications (microservices, APIs), and networks (LAN/WAN, SD-WAN). Collaborate with DevOps, SRE, and IT operations teams to ensure end-to-end visibility and reliability.
- Establish observability standards, KPIs, and SLAs for performance and availability. Ensure compliance with security and regulatory requirements in monitoring solutions.
- Develop scalable architecture using LLMs, agentic frameworks, and multi-modal AI technologies.
- Build AI-powered analytics platforms for IT operations analysis, anomaly detection, and predictive insights.
- Architect and deploy intelligent chatbots for IT support and self-service capabilities.
- Integrate AI solutions with existing IT operations tools and workflows.
- Implement automated remediation and root cause analysis using AI/ML models.
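Anomaly detection over operational metrics, as listed above, need not start with an LLM: a statistical baseline is often the first layer. A minimal sketch (the function, data, and threshold are illustrative, not from any posting):

```python
# A statistical baseline for metric anomaly detection: flag points more than
# `threshold` standard deviations from the series mean. Illustrative only.
from statistics import mean, stdev

def zscore_anomalies(series, threshold=2.5):
    mu = mean(series)
    sigma = stdev(series)
    if sigma == 0:
        return []  # a flat series has no outliers
    return [i for i, x in enumerate(series) if abs(x - mu) / sigma > threshold]

# Steady ~100 ms latencies with one spike at index 6.
latencies_ms = [101, 99, 103, 100, 98, 102, 480, 101, 97, 100]
print(zscore_anomalies(latencies_ms))  # [6]
```

In practice a detector like this runs per metric over a rolling window, with AI/ML models layered on top for seasonality and correlated failures.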
Qualifications:
- 10-13 years of relevant experience
- Hands-on experience with Grafana, Dynatrace, and other monitoring platforms.
- Practical experience implementing AI-based solutions for anomaly detection, predictive maintenance, and automated remediation. Familiarity with OpenAI, Bedrock, Gemini, Anthropic, or similar AI platforms.
- Strong understanding of infrastructure, application architectures, and networking. Experience with cloud platforms (AWS, Azure, GCP) and container orchestration (Kubernetes).
- Proficiency in Python, Bash, or similar scripting languages for automation and integration.
- Strong experience with LLMs (OpenAI, Anthropic, Gemini, Bedrock) and agentic AI solutions.
- Hands-on experience in designing AI architectures for enterprise IT environments.
- Proficiency in Python or similar languages for AI model integration and automation.
Location: Alpharetta, GA (3 days a week onsite)
Duration: 6 months
Job Description:
We are seeking a skilled Site Reliability Engineer to join our team and help build, maintain, and scale our cloud-native infrastructure. You will work closely with development and operations teams to ensure our systems are reliable, scalable, and efficient. The ideal candidate is passionate about automation, observability, and infrastructure-as-code, and thrives in a collaborative, fast-paced environment.
Key Responsibilities
Design, implement, and manage cloud infrastructure on Azure using Terraform and Terragrunt.
Maintain and optimize Kubernetes clusters on Azure Kubernetes Service (AKS).
Build and manage CI/CD pipelines using GitHub Actions/Workflows and ArgoCD for GitOps deployments.
Enhance system reliability by implementing monitoring, alerting, and observability solutions with Grafana.
Automate operational tasks to reduce toil and improve team efficiency.
Participate in on-call rotations, incident response, and post-mortem analysis.
Collaborate with development teams to improve application performance, scalability, and resilience.
Implement and advocate for SRE best practices, including SLIs, SLOs, and error budgets.
Continuously improve system performance, cost efficiency, and security.
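The SRE practices mentioned above (SLIs, SLOs, and error budgets) come down to simple arithmetic; a minimal sketch with invented numbers:

```python
# Minimal error-budget arithmetic for a ratio SLI (request-based availability).
# All numbers below are invented for illustration.

def error_budget(slo_target: float, total_requests: int, failed_requests: int):
    """Return (allowed_failures, consumed_fraction) for the period."""
    allowed = total_requests * (1.0 - slo_target)   # budget, in requests
    consumed = failed_requests / allowed if allowed else float("inf")
    return allowed, consumed

# A 99.9% SLO over 1,000,000 requests allows ~1,000 failures.
allowed, consumed = error_budget(0.999, 1_000_000, 250)
print(round(allowed))      # 1000
print(round(consumed, 4))  # 0.25 -> a quarter of the budget is burned
```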
Required Skills & Qualifications
3+ years of experience in an SRE, DevOps, or cloud infrastructure role.
Strong experience with Azure cloud services and infrastructure.
Hands-on experience with Java, and with Terraform and Terragrunt for infrastructure-as-code.
Proficiency with Kubernetes (preferably AKS) and container orchestration.
Experience with CI/CD tools, especially GitHub Workflows/Actions and ArgoCD.
Solid understanding of observability tools like Grafana (Prometheus, Loki, Tempo experience is a plus).
Education Requirements: Bachelor's degree required (Master's preferred)
About Pinterest:
Millions of people around the world come to our platform to find creative ideas, dream about new possibilities and plan for memories that will last a lifetime. At Pinterest, we're on a mission to bring everyone the inspiration to create a life they love, and that starts with the people behind the product.
Discover a career where you ignite innovation for millions, transform passion into growth opportunities, celebrate each other's unique experiences and embrace the flexibility to do your best work. Creating a career you love? It's Possible.
At Pinterest, AI isn't just a feature, it's a powerful partner that augments our creativity and amplifies our impact, and we're looking for candidates who are excited to be a part of that. To get a complete picture of your experience and abilities, we'll explore your foundational skills and how you collaborate with AI.
Through our interview process, what matters most is that you can always explain your approach, showing us not just what you know, but how you think. You can read more about our AI interview philosophy and how we use AI in our recruiting process here.
We're seeking an exceptional Staff Software Engineer to join our Observability team at Pinterest. This role combines deep technical expertise in distributed systems and data engineering with a product-oriented mindset to build world-class observability solutions that empower our engineering organization. As a Staff Engineer on the Observability team, you'll be responsible for designing and building the infrastructure and tools that provide visibility into Pinterest's large-scale distributed systems, helping thousands of engineers understand, debug, and optimize their services.
What you'll do:
- Product Strategy: Define and execute the observability roadmap, treating it as a product. Understand engineering team needs and translate them into technical solutions with measurable impact.
- Platform Architecture: Architect, build, and scale distributed observability infrastructure (metrics, logs, traces) to handle massive volumes across Pinterest's distributed systems.
- Data Engineering: Build high-performance data pipelines and storage for real-time and historical telemetry analysis at Pinterest scale.
- Champion Best Practices: Establish observability standards and patterns across the organization, making it easy for teams to instrument their services and gain actionable insights.
- Technical Leadership: Mentor engineers, lead architectural reviews, and influence technical decisions across teams to improve overall system reliability and performance.
- Cross-functional Collaboration: Partner with SRE, Infrastructure, Product Engineering, and other teams to understand pain points and deliver solutions that improve developer productivity and system reliability.
- Innovation: Stay current with observability trends and technologies, evaluating and adopting cutting-edge tools and techniques to keep Pinterest at the forefront.
What we're looking for:
- Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent experience.
- Product Mindset: Demonstrated ability to work backwards from customer needs: understanding user needs, prioritizing features, measuring success, and iterating based on feedback. Experience building internal platforms or tools with strong adoption.
- Distributed Systems Expertise: 7+ years of experience designing and operating large-scale distributed systems, with a deep understanding of consistency, availability, scalability, and failure modes.
- Data Engineering Skills: Strong background in building data pipelines, working with time-series databases, columnar storage, stream processing (Kafka, Flink, etc.), and data modeling at scale.
- Observability Domain Knowledge: Hands-on experience with modern observability tools and practices, including metrics, logging, tracing, and profiling. Familiarity with OpenTelemetry, Prometheus, Grafana, or similar technologies.
- Programming Proficiency: Expert-level coding skills in languages like Java, Python, Go, or Scala, with the ability to write production-quality code.
- Systems Thinking: Ability to see the big picture while managing complex technical details, balancing trade-offs between cost, performance, and reliability.
- Experience building observability platforms from the ground up or significantly scaling existing solutions.
- Familiarity with cloud-native architectures and technologies (Kubernetes, service mesh, etc.).
- Track record of driving adoption of internal platforms through excellent documentation, UX, and developer advocacy.
- Experience with machine learning or anomaly detection applied to observability use cases.
- Strong communication skills, with the ability to influence stakeholders at all levels.
- Contributions to open-source observability projects are a plus.
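For candidates brushing up on the Prometheus familiarity mentioned above: a common cost-control pattern on observability teams is dropping noisy, high-cardinality series at scrape time with `metric_relabel_configs`. A minimal sketch, where the job name, target, and `debug_` metric prefix are hypothetical placeholders rather than anything specific to Pinterest:

```yaml
scrape_configs:
  - job_name: "example-service"        # hypothetical scrape job
    static_configs:
      - targets: ["localhost:9090"]    # hypothetical target
    metric_relabel_configs:
      # Drop any series whose metric name starts with debug_
      # before ingestion, keeping storage and cardinality in check.
      - source_labels: [__name__]
        regex: "debug_.*"
        action: drop
```

Because `metric_relabel_configs` runs after the scrape but before storage, dropped series never count against ingestion or retention costs.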
In-Office Requirement Statement:
- We let the type of work you do guide the collaboration style. That means we're not always working in an office, but we continue to gather for key moments of collaboration and connection.
- This role will need to be in the office for in-person collaboration 1-2 times/quarter and therefore can be situated anywhere in the country.
Relocation Statement:
- This position is not eligible for relocation assistance. Visit our PinFlex page to learn more about our working model.
#LI-REMOTE
#LI-JT1
At Pinterest we believe the workplace should be equitable, inclusive, and inspiring for every employee. In an effort to provide greater transparency, we are sharing the base salary range for this position. The position is also eligible for equity. Final salary is based on a number of factors, including location, travel, relevant prior experience, and particular skills and expertise.
Information regarding the culture at Pinterest and benefits available for this position can be found here.
US-based applicants only: $177,185–$364,795 USD.
Our Commitment to Inclusion:
Pinterest is an equal opportunity employer and makes employment decisions on the basis of merit. We want to have the best qualified people in every job. All qualified applicants will receive consideration for employment without regard to race, color, ancestry, national origin, religion or religious creed, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, age, marital status, status as a protected veteran, physical or mental disability, medical condition, genetic information or characteristics (or those of a family member) or any other consideration made unlawful by applicable federal, state or local laws. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. If you require a medical or religious accommodation during the job application process, please complete this form for support.