Prometheus relabel_configs Example Jobs in USA
53 positions found
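Many of the listings below ask for hands-on Prometheus experience. For context on the page's title, a minimal scrape configuration using relabel_configs looks like the sketch below (the job name, target addresses, and label values are illustrative only and are not taken from any listing):

```yaml
# Illustrative Prometheus scrape config demonstrating relabel_configs.
scrape_configs:
  - job_name: node
    static_configs:
      - targets: ['10.0.0.1:9100', '10.0.0.2:9100']
        labels:
          env: prod
    relabel_configs:
      # Keep only targets whose "env" label is exactly "prod".
      - source_labels: [env]
        regex: prod
        action: keep
      # Copy the scrape address into an "instance_ip" label, dropping the port.
      - source_labels: [__address__]
        regex: '([^:]+):\d+'
        target_label: instance_ip
        replacement: '$1'
```

relabel_configs run before each scrape, so `keep`/`drop` rules like the first one are a common way to filter discovered targets without changing the service-discovery source itself.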
Company Description
Prometheus Materials develops innovative sustainable building materials to drive the transition toward a carbon-negative future. Using nature-inspired processes, the company utilizes microalgae to produce its ProZERO™ line of carbon-negative supplemental cement blends, designed for ready-mix concrete applications, manufactured products, and licensed material solutions. These cutting-edge materials address the environmental challenges of traditional construction while offering scalable solutions for concrete manufacturers.
Role Description
The Director of Business Development is responsible for identifying and developing sales and marketing strategies that lead to long-term, profitable growth. You will evaluate and execute new business opportunities that align with Prometheus Materials’ overall market growth strategies. This position will work closely with distributors, vendors, and customers. Additionally, close collaboration with internal business units (biotechnology, research and development, manufacturing, and product management) will be essential to the success of the Director of Business Development.
Responsibilities:
This is a summary of activities and is not intended to be all-inclusive of all responsibilities.
· Develop, own, and execute a formal business plan aligned with company objectives
· Develop, maintain, and track product backlog and bid activity
· Establish revenue goal KPIs and deliver results
· Manage strategic relationships to maximize revenue performance
· Create and manage key account plans, including defined goals, activities, and timelines
· Communicate regular updates on key performance indicators, including volume, revenue, and strategic initiatives
· Identify, secure, grow, and manage key licensing opportunities across multiple industries
· Research, analyze, and act on key market trends within low-embodied-carbon building materials
· Monitor and maintain competitive intelligence, including competitor products, pricing strategies, and development activities
· Regularly review the sales cycle and implement continuous improvement strategies
· Travel up to 40% as required
Qualifications:
Use your existing network or develop a robust network of key stakeholders to increase market awareness, market share, and success of the formal business plan.
· Bachelor’s degree in Business or a related field, or equivalent experience
· Minimum of 5 years of experience in sales, marketing, or product management
· Experience within the building materials industry preferred (e.g., sand and gravel, cement, ready mix, or admixtures)
· Proven experience collaborating with industry experts (Architects and Engineers)
· Working knowledge of key high-level industry standards relating to cement, concrete, and aggregates
· Demonstrated experience developing, managing, and executing sales strategies to drive revenue growth
· Strong understanding of business-to-business sales cycles, sales strategies, and key performance metrics
· Experience building, leading, and managing multi-dimensional sales teams
· Proficiency with Customer Relationship Management (CRM) software and sales reporting
· Solid financial and business acumen, including budgeting, forecasting, and pricing strategies
· Strong negotiation, presentation, and facilitation skills
· Knowledge or experience with sustainability initiatives, LEED certification, and carbon reduction targets
Please send resume and cover letter to
L3Harris is the Trusted Disruptor in defense tech. With customers' mission-critical needs always in mind, our employees deliver end-to-end technology solutions connecting the space, air, land, sea and cyber domains in the interest of national security.
Job Title: Specialist, Software Engineering (Service Reliability Engineer)
Job Code: 33584
Job Location: Melbourne, FL or Chantilly, VA (on-site)
Job Schedule: Rotational shifts 24x7
Job Description:
L3Harris is seeking an experienced Software Engineer to join our dynamic team, focusing on operating, maintaining, and sustaining a Cloud-based 24x7 operational system. The role encompasses live monitoring and real-time anomaly troubleshooting of Cloud data operations, as well as participation in sustainment development efforts (patching, upgrades) for that environment, using an agile development process.
Essential Functions:
* Develop, maintain, and enhance cloud applications using Python, TypeScript, and Java
* Provide 24x7 real-time monitoring and troubleshooting of an operational Cloud-based Data Operations system (shift work; nights and weekends as required on a rotational schedule).
* Shifts include: First Shift, Day (6:00am-2:30pm EST); Second Shift, Evening, 10% differential (2:00pm-10:30pm EST); Third Shift, Night, 12% differential (10:00pm-6:30am EST).
* First-line anomaly lead: reporting and resolving any issues encountered, and mitigating future recurrence.
* Collaborate with cross-functional teams to define and implement engineering changes (enhancements, automation, capability improvements and bug fixes)
* Develop and maintain technical documentation related to Cloud sustainment and operations.
* Organize and support training sessions for operations personnel as required.
* Ensure compliance with NIST, Department of Commerce, and NOAA IT security standards such as FISMA, FedRAMP, and NIST SP 800-53
* Ability to obtain and maintain a Public Trust.
* Tool familiarity: Grafana, Prometheus, InfluxDB, Postgres, CDK (Cloud Development Kit), CloudFormation, Ansible, Git, Active Directory, Networking, GRE (Global Accelerator Resolvers & Endpoints), EKS (Elastic Kubernetes Service), RDS (Relational Database Service), Lambda, IAM (Identity and Access Management)
Qualifications:
* Bachelor's degree and a minimum of 4 years of prior related experience, or a graduate degree (or equivalent) with a minimum of 2 years of prior related experience; in lieu of a degree, a minimum of 8 years of prior related experience.
* Experience with commercial cloud systems (Cloud Linux SYS Admin/coding) and services (AWS, Azure, etc.).
* Experience with any of the following tools: Grafana, Prometheus, InfluxDB, Postgres, CDK (Cloud Development Kit), CloudFormation, Ansible, Git, Active Directory, Networking, GRE (Global Accelerator Resolvers & Endpoints), EKS (Elastic Kubernetes Service), RDS (Relational Database Service), Lambda, IAM (Identity and Access Management).
Preferred Additional Skills:
* Experience with Agile software development best practices and tools (SAFe, Jira, Git, etc.) and participation in continuous Agile planning and coordination.
* Experience with containerization technologies (Docker, Kubernetes).
* Familiarity with container observability tools (Prometheus, Grafana, InfluxDB, PromQL).
* Background in security architecture and secure coding practices.
* Experience in domain-driven design (DDD) and API-first development.
#LI-KB1
L3Harris Technologies is proud to be an Equal Opportunity Employer. L3Harris is committed to treating all employees and applicants for employment with respect and dignity and maintaining a workplace that is free from unlawful discrimination. All applicants will be considered for employment without regard to race, color, religion, age, national origin, ancestry, ethnicity, gender (including pregnancy, childbirth, breastfeeding or other related medical conditions), gender identity, gender expression, sexual orientation, marital status, veteran status, disability, genetic information, citizenship status, characteristic or membership in any other group protected by federal, state or local laws. L3Harris maintains a drug-free workplace and performs pre-employment substance abuse testing and background checks, where permitted by law.
Please be aware many of our positions require the ability to obtain a security clearance. Security clearances may only be granted to U.S. citizens. In addition, applicants who accept a conditional offer of employment may be subject to government security investigation(s) and must meet eligibility requirements for access to classified information.
By submitting your resume for this position, you understand and agree that L3Harris Technologies may share your resume, as well as any other related personal information or documentation you provide, with its subsidiaries and affiliated companies for the purpose of considering you for other available positions.
L3Harris Technologies is an E-Verify Employer. Please click here for the E-Verify Poster in English or Spanish. For information regarding your Right To Work, please click here for English or Spanish.
Compensation: $150-195k
Responsibilities:
• Design, deploy, and manage container orchestration platforms using OpenShift and AKS.
• Administer and optimize Linux-based systems in hybrid and multi-cloud environments.
• Automate infrastructure provisioning and configuration using Ansible Automation Platform.
• Develop and maintain Infrastructure as Code (IaC) using Terraform, Helm, and GitOps workflows.
• Collaborate with DevOps and application teams to implement CI/CD pipelines and DevSecOps practices.
• Monitor system performance, troubleshoot issues, and ensure high availability and disaster recovery.
• Implement security best practices for containerized workloads and cloud environments.
• Provide technical leadership and mentorship to junior engineers.
• Stay current with emerging technologies and contribute to strategic cloud initiatives.
• Assist with migrations to cloud, ensuring best practices are followed and architecture is compliant with company standards.
Qualifications:
Required:
• Bachelor's degree in Computer Science, Engineering, or related field (or equivalent experience).
• 5+ years of professional experience in Linux system administration and cloud engineering.
• 3+ years of hands-on experience with OpenShift and AKS in production environments.
• Strong proficiency in scripting languages (e.g., Bash, Python).
• Experience with CI/CD tools (e.g., Jenkins, GitLab CI, ArgoCD).
• Deep understanding of Kubernetes architecture, networking, and security.
• Familiarity with cloud platforms (Azure, AWS, GCP) and hybrid cloud strategies.
• Knowledge of monitoring and logging tools (Prometheus, Grafana, ELK stack).
• Excellent problem-solving and communication skills.
• Linux Administration: Deep expertise in RHEL environment.
• Container Platforms: 3+ years of hands-on experience with OpenShift and AKS.
• Automation: Proficiency with Ansible, Ansible Tower/AAP, and scripting (Bash, Python).
• Infrastructure as Code: Experience with Terraform, Helm, and GitOps tools (e.g., ArgoCD, Flux).
• CI/CD: Familiarity with Jenkins, GitLab CI, Azure DevOps, or similar tools.
• Cloud Platforms: Strong knowledge of Azure, with exposure to AWS or GCP a plus.
• Monitoring & Logging: Experience with Prometheus, Grafana, ELK/EFK, and Azure Monitor.
• Security: Understanding of container security, RBAC, network policies, and compliance frameworks.
• Networking: Solid grasp of Kubernetes networking, service mesh (e.g., Istio), and ingress controllers.
Preferred:
• Red Hat Certified Specialist in OpenShift Administration.
• Microsoft Certified: Azure Kubernetes Service Specialist.
• Experience with service mesh technologies (e.g., Istio, Linkerd).
• Experience in regulated industries (e.g., finance, healthcare) is a plus.
Job Title: Senior DevOps Engineer – Kubernetes & Platform Engineering
Location: Bethesda, MD
Duration: Direct Hire
Job Description:
We are seeking a Senior DevOps Engineer to join a high-performing Platform Engineering team responsible for building and scaling a secure, multi-cloud infrastructure. This role will focus on Kubernetes (AKS & GKE), CI/CD automation, and Internal Developer Platform (IDP) enablement to improve developer experience and system reliability.
The ideal candidate will have deep expertise in cloud-native technologies, distributed systems, and observability, with a strong focus on automation, security, and scalability.
Key Responsibilities
- Design, build, and manage multi-cloud Kubernetes platforms (Azure AKS & GKE) with a focus on scalability, security, and cost optimization
- Develop and enhance an Internal Developer Platform using Backstage, enabling self-service capabilities and standardized development workflows
- Build and optimize CI/CD pipelines using Jenkins and GitHub Actions (pipelines-as-code, artifact management, environment promotion)
- Orchestrate data and batch workflows using Argo Workflows, ensuring efficient scheduling and execution of complex jobs
- Manage and optimize distributed systems including Kafka, CockroachDB, Couchbase, and Elasticsearch (performance tuning, backup/restore, disaster recovery)
- Implement and maintain observability frameworks using Prometheus, Grafana, and Tempo (metrics, logging, tracing, alerting)
- Develop platform tools and automation using Go, Node.js, or C#/.NET
- Enforce best practices for security, compliance, and reliability, including secrets management, RBAC, and incident response
- Collaborate cross-functionally with engineering teams to improve developer experience and platform adoption
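The observability responsibility above names Prometheus, Grafana, and Tempo. A minimal Prometheus alerting rule of the kind such a role would maintain might look like the following sketch (the metric name, threshold, and labels are illustrative, not taken from the listing):

```yaml
# Illustrative Prometheus alerting-rule file.
groups:
  - name: platform-alerts
    rules:
      - alert: HighErrorRate
        # Fire when more than 5% of requests over the last 5 minutes are 5xx.
        expr: >
          sum(rate(http_requests_total{status=~"5.."}[5m]))
            / sum(rate(http_requests_total[5m])) > 0.05
        for: 10m
        labels:
          severity: page
        annotations:
          summary: "5xx error rate above 5% for 10 minutes"
```

The `for: 10m` clause keeps the alert in a pending state until the condition has held continuously, which avoids paging on transient spikes.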
Qualifications
- 5–8+ years of experience in DevOps, SRE, or Platform Engineering
- Hands-on experience with Kubernetes in multi-cloud environments (AKS & GKE)
- Strong experience with CI/CD tools such as Jenkins and GitHub Actions
- Experience with Backstage or Internal Developer Platform (IDP) tools
- Hands-on experience with Argo Workflows (not limited to ArgoCD)
- Strong knowledge of distributed systems (Kafka, CockroachDB/Postgres, Couchbase, Elasticsearch)
- Experience implementing observability solutions (Prometheus, Grafana, Tempo)
- Proficiency in at least one programming language: Go, Node.js, or C#/.NET
Ideal Candidate Profile
- Proven experience building and scaling cloud-native platforms in multi-cloud environments
- Strong understanding of Kubernetes architecture, networking, and security best practices
- Experience driving developer self-service and platform standardization using Backstage
- Hands-on with workflow orchestration and large-scale data pipelines
- Deep understanding of SRE principles (SLOs, SLIs, error budgets, incident management)
- Ability to balance performance, reliability, and cost-efficiency in production systems
- Strong communication skills with the ability to work across engineering and product teams
Dexian is a leading provider of staffing, IT, and workforce solutions with over 12,000 employees and 70 locations worldwide. As one of the largest IT staffing companies and the 2nd largest minority-owned staffing company in the U.S., Dexian was formed in 2023 through the merger of DISYS and Signature Consultants. Combining the best elements of its core companies, Dexian's platform connects talent, technology, and organizations to produce game-changing results that help everyone achieve their ambitions and goals. Dexian's brands include Dexian DISYS, Dexian Signature Consultants, Dexian Government Solutions, Dexian Talent Development and Dexian IT Solutions. Visit to learn more. Dexian is an Equal Opportunity Employer that recruits and hires qualified candidates without regard to race, religion, sex, sexual orientation, gender identity, age, national origin, ancestry, citizenship, disability, or veteran status.
We're Hiring: Director of IT Architecture (Remote, with onsite meetings as needed)
We are seeking a Director of IT Architecture | Enterprise Architecture | Cloud & Systems Leader to lead and shape the IT architecture strategy for a growing healthcare organization. This is a unique opportunity to design and implement technology solutions that support business goals, regulatory compliance, and modern healthcare delivery.
Key Responsibilities:
- Define and execute a comprehensive IT architecture strategy aligned with clinical, operational, and business objectives.
- Lead and manage a team of network, cloud, and systems architects, fostering collaboration and high performance.
- Oversee network, cloud, and systems architecture initiatives, ensuring security, scalability, and interoperability.
- Evaluate, test, and implement modern platform visibility solutions (DataDog, Dynatrace, New Relic, Prometheus / Grafana).
- Collaborate with IT leadership, business stakeholders, vendors, and cloud providers to optimize technology investments.
- Establish IT governance, standards, and best practices to ensure compliance with industry regulations (HIPAA, HITECH, HITRUST).
- Monitor performance, risks, and cost optimization across all IT architecture initiatives.
Required Qualifications:
- Bachelor’s degree in Computer Science, IT, Healthcare Informatics, or related field.
- 10+ years of progressive experience in IT architecture, including at least 5 years in a leadership role managing network, cloud, and systems architecture teams, preferably in healthcare.
- Hands-on experience with cloud platforms (AWS, Azure) and hybrid environments.
- Demonstrated history of assessing, testing, and implementing modern platform visibility solutions (DataDog, Dynatrace, New Relic, Prometheus / Grafana).
- Strong expertise in network architecture (SD-WAN, VPNs, firewalls, healthcare data exchange networks).
- Deep knowledge of systems architecture, including server infrastructure, virtualization, storage, disaster recovery, and healthcare IT standards (HL7, FHIR, DICOM).
- Strong leadership, communication, and stakeholder management skills.
- Strategic thinker with strong problem-solving and analytical abilities.
Preferred Qualifications:
- Master’s degree in a related field.
- Relevant cloud certifications (AWS Solutions Architect Professional, AWS Security Specialty, Microsoft Azure Solutions Architect).
- Security and architecture certifications such as CISSP, CCNP, HITRUST, or FinOps.
This is a remote role with the flexibility to work from home, while requiring occasional onsite meetings for leadership collaboration and strategic planning.
If you are a visionary IT leader with a strong healthcare background, experience leading cloud, network, and systems architecture teams, and a passion for building scalable, secure IT platforms, we want to hear from you!
Lead Enterprise Tooling Engineer — Tenant Inc.
Overview
Tenant Inc. is modernizing its enterprise tooling, automation, and visibility ecosystem to better support our engineering, operations, finance, sales, and customer support teams. The Lead Enterprise Tooling Engineer plays a critical role in this transformation by owning the strategy, architecture, and execution of integrations across Jira, Microsoft 365, HubSpot, Zendesk, Intuit Enterprise, ERP systems, and internal platforms. This role ensures that our business systems work together seamlessly, data flows reliably across the organization, and leaders have a unified view of operational performance.
By connecting enterprise tools with application telemetry and APM insights, this position enables a single source of truth for workflow health, customer impact, and cross-system reliability. The ideal candidate blends technical expertise with business acumen, ensuring that tooling investments directly support Tenant’s operational goals and modernization roadmap.
Key Responsibilities
Enterprise Tooling Architecture & Integration
• Design and maintain the integrations that connect our core business systems, ensuring information flows consistently across Jira, Microsoft 365, HubSpot, Zendesk, Intuit Enterprise, ERP platforms, and internal applications.
• Build automated workflows and API-driven processes that reduce manual effort, eliminate redundant work, and improve data accuracy.
• Lead the unification of identity, permissions, and user lifecycle management across enterprise tools to support operational efficiency and compliance.
• Oversee cross-platform data synchronization for contacts, leases, tickets, financial data, and operational workflows to ensure a consistent and reliable customer and business experience.
APM, Observability & Unified Visibility
• Integrate observability and APM platforms (OpenSearch, Prometheus, Grafana, New Relic, Catchpoint, CloudWatch, clickstream analytics) with enterprise systems to provide end-to-end visibility across the business.
• Connect system telemetry with business workflows—linking application performance to Jira issues, Zendesk tickets, HubSpot activities, and ERP events.
• Develop executive-ready dashboards that consolidate operational KPIs, workflow performance, integration health, and customer impact into a single pane of glass.
• Implement alerting and automated correlation to help teams identify issues faster and understand their business implications.
• Partner with DevOps and SRE to ensure observability data is actionable and accessible across the organization.
Workflow Automation & Process Optimization
• Design automated workflows that streamline processes across engineering, support, sales, finance, and operations.
• Build Jira workflows, dashboards, and governance structures that support predictable releases and cross-team alignment.
• Automate HubSpot → Jira → Zendesk → ERP workflows to reduce handoffs, shorten cycle times, and improve customer responsiveness.
• Partner with Finance to automate Intuit Enterprise and ERP processes such as invoicing, reconciliation, and reporting.
API Engineering & Custom Development
• Develop and maintain custom integrations, middleware, and internal tools that improve operational efficiency and reduce manual work.
• Implement reliable error handling, monitoring, and logging to ensure integrations remain stable and transparent.
• Ensure all integrations meet security, scalability, and compliance requirements.
Data Quality, Governance & Observability
• Establish data governance standards that ensure accuracy, consistency, and auditability across enterprise tools.
• Implement monitoring and alerting for integration health and workflow performance.
• Partner with Security and Compliance to maintain SOC2, PCI, and internal governance standards.
Cross-Functional Leadership & Collaboration
• Serve as the strategic and technical leader for enterprise tooling, automation, and observability initiatives.
• Partner with Engineering, Product, Support, Sales, Finance, and Operations to understand business needs and translate them into scalable solutions.
• Mentor engineers and administrators across Jira, HubSpot, Zendesk, and Microsoft 365.
• Promote best practices for automation, documentation, and cross-system reliability.
Operational Excellence
• Lead root cause analysis for integration and workflow issues, ensuring long-term solutions rather than short-term fixes.
• Reduce manual effort across departments through automation and improved tooling.
• Maintain clear documentation for integrations, workflows, and system dependencies.
• Evaluate new tools, vendors, and opportunities to improve operational efficiency and business outcomes.
Required Qualifications
• 7+ years in enterprise tooling, business systems engineering, DevOps, or integration engineering.
• Deep experience with APIs for Jira, Microsoft 365, PowerBI, HubSpot, Zendesk, and similar SaaS platforms.
• Hands-on experience with observability and APM platforms (OpenSearch, Prometheus, Grafana, New Relic, Catchpoint, CloudWatch, clickstream analytics).
• Strong scripting and automation skills (Python, Node.js, PowerShell).
• Experience designing workflow automation across multiple business systems.
• Strong understanding of identity management, SSO, and permission models.
• Experience with data governance, monitoring, and integration reliability.
• Proven ability to lead cross-functional initiatives and collaborate with business stakeholders.
Preferred Qualifications
• Experience with Intuit Enterprise, ERP systems, or financial system integrations.
• Background in multi-tenant SaaS environments.
• Experience improving customer experience through event-driven architectures (webhooks, queues, EventBridge, SNS/SQS).
• Familiarity with ETL pipelines, data warehousing, and analytics platforms.
• Experience supporting engineering release workflows and IT DevOps processes.
Success Indicators at Tenant Inc.
• A unified, executive-ready view of operational performance that connects APM telemetry, enterprise workflows, and business outcomes.
• Automated, reliable workflows across Jira, HubSpot, Zendesk, Microsoft 365, and ERP systems.
• Significant reduction in manual work across engineering, support, sales, and finance.
• Clean, consistent, and governed data across enterprise tools.
• Reliable integrations with clear dashboards, alerting, and business impact visibility.
• Strong cross-team alignment and measurable improvements in operational efficiency.
• A scalable, well-documented tooling architecture that supports Tenant’s modernization strategy.
#EnterpriseEngineering #BusinessSystems #ToolingEngineering #AutomationEngineering
#SystemsIntegration #APM #Observability
Be a part of a team that’s ensuring Dell Technologies' product integrity and customer satisfaction. Our IT Software Engineer team turns business requirements into technology solutions by designing, coding and testing/debugging applications, as well as documenting procedures for use and constantly seeking quality improvements.
Dell Technologies is seeking an accomplished Senior Principal Software Engineer to lead and evolve our mission‑critical Kubernetes on Bare Metal platform that powers internal applications, Federal workloads, and product engineering teams. This is a high‑visibility role influencing core platform strategy, architecture, reliability, and automation for one of Dell’s most critical infrastructure services.
You will be the technical authority (SME) for our enterprise container ecosystem—driving innovation, solving complex engineering challenges at scale, and elevating the developer experience for thousands of internal customers.
Join us to do the best work of your career and make a profound social impact as a Senior Principal - Kubernetes Platform Engineer on our Software Engineer-IT Team in Round Rock, Texas.
What you’ll achieve
In this Senior Principal role, you will shape the future of our container platform and ensure its resilience, scalability, and operational excellence. You will work at the intersection of platform engineering, infrastructure architecture, and Kubernetes operations on bare metal systems.
You will:
Serve as the Subject Matter Expert for Dell’s Kubernetes on Bare Metal container platform across IT, Federal, and Product engineering organizations
Partner with application teams to enable adoption, tune platform capabilities, and troubleshoot complex distributed systems issues
Architect, design, and implement automation‑first solutions for provisioning, scaling, and operating bare‑metal Kubernetes clusters
Develop high‑quality technical solutions aligned with business and platform roadmaps
Lead design reviews, technical deep dives, and incident retrospectives to drive continuous improvement
Mentor, train, and guide junior and mid‑level engineers to elevate overall team capability
Act as a domain expert in Kubernetes, container networking, platform reliability, and infrastructure automation
Take the first step towards your dream career
Every Dell Technologies team member brings something unique to the table. Here’s what we are looking for with this role:
Essential Requirements
8–12+ years of professional experience in software engineering, platform engineering, or infrastructure engineering
U.S. Citizen (required for participation in Federal projects)
Hands‑on expertise with Infrastructure‑as‑Code (IaC) using Ansible, Terraform, or similar tools in on‑premise/bare‑metal environments
Deep proficiency in Linux internals, storage fundamentals, and Kubernetes, including: Calico, CNI, BGP/eBGP, Storage, RBAC, Pod Security, Operators, Operator SDK / Fabric8 APIs
Strong programming skills in Shell, Python, Helm, and related automation frameworks
Knowledge of enterprise infrastructure services such as HA/DR, DNS, Load Balancers, Firewalls, TLS, and Git‑based workflows
Experience with observability and telemetry tools (ELK, Prometheus, Grafana, Dynatrace, Splunk, etc.)
Demonstrated success working in Agile teams—both independently and in highly distributed environments
Desirable Requirements
Bachelor’s degree in Computer Science, Engineering, or equivalent professional experience
Experience with AI and emerging Agentic AI tooling for automation or operations
Compensation
Dell is committed to fair and equitable compensation practices. The salary range for this position is $156,400 to $202,400.
Benefits and Perks of working at Dell Technologies
Your life. Your health. Supported by your benefits. You can explore the overall benefits experience that awaits you as a Dell Technologies team member — right now at
Who we are
We believe that each of us has the power to make an impact. That’s why we put our team members at the center of everything we do. If you’re looking for an opportunity to grow your career with some of the best minds and most advanced tech in the industry, we’re looking for you.
Dell Technologies is a unique family of businesses that helps individuals and organizations transform how they work, live and play. Join us to build a future that works for everyone because Progress Takes All of Us.
Dell Technologies is committed to the principle of equal employment opportunity for all employees and to providing employees with a work environment free of discrimination and harassment. Read the full Equal Employment Opportunity Policy here.
Job ID: R286341
RESPONSIBILITIES:
- Architect, design, and maintain scalable CI/CD pipelines using Azure/AWS DevSecOps.
- Build and optimize Docker-based microservices, images, and deployment pipelines.
- Lead deployments across Docker Swarm, Kubernetes/EKS, and multi-location environments.
- Develop infrastructure automation using Ansible, bash scripting, Terraform and Git-based workflow.
- Manage release pipelines using container registries, artifact feeds, template pipelines, and multi-stage workflows.
- Design multi-environment strategies for dev, QA, staging, and production deployment.
- Implement cloud-native services with AWS & Azure cloud platforms.
- Implement basic security practices, including IAM roles, secrets management, and access controls.
- Develop secure, modular, reusable build and release systems.
- Work closely with full-stack engineering teams (Angular, Java, Python, backend APIs, database engineers).
- Mentor junior DevOps engineers and lead DevOps roadmap decisions.
KNOWLEDGE REQUIREMENTS:
DevOps Expertise:
Azure DevOps pipelines, YAML templating, CI/CD strategy, Git branching models.
Containerization & Orchestration:
Docker images, Docker Compose, Docker Swarm, multi-node/multi-location deployments.
Cloud Technologies:
Azure deployments & infrastructure, AWS (IAM, Lambda, S3, CloudWatch).
Programming / Scripting Languages:
Python, Bash, Linux/Unix administration, awk, shell automation, Groovy.
Infrastructure Automation:
Ansible playbooks, tasks/roles, inventory design, configuration management.
Distributed Deployment Architecture:
Multi-site replication, node selection by IP, dynamic service routing.
Database Stack Experience:
PostgreSQL, MySQL, MariaDB operations & migrations.
Observability & Logging:
CloudWatch monitoring, log collection, Prometheus, Grafana, reporting & metrics.
Version Control & Build Systems:
Azure DevOps, Git, Git submodules, artifact storage, registry solutions, secrets management.
Nice to have: AI knowledge/experience and a willingness to learn.
EDUCATION & EXPERIENCE REQUIREMENTS
- BS degree in Electrical/Computer Engineering, Computer Science or related field. MS preferred.
- 7+ years experience in a software devops/development/test capacity with enterprise server, storage or networking products.