Tvheadend Docker Container Jobs in USA
RESPONSIBILITIES:
- Architect, design, and maintain scalable CI/CD pipelines using Azure/AWS DevSecOps.
- Build and optimize Docker-based microservices, images, and deployment pipelines.
- Lead deployments across Docker Swarm, Kubernetes/EKS, and multi-location environments.
- Develop infrastructure automation using Ansible, Bash scripting, Terraform, and Git-based workflows.
- Manage release pipelines using container registries, artifact feeds, template pipelines, and multi-stage workflows.
- Design multi-environment strategies for dev, QA, staging, and production deployment.
- Implement cloud-native services with AWS & Azure cloud platforms.
- Implement basic security practices, including IAM roles, secrets management, and access controls.
- Develop secure, modular, reusable build and release systems.
- Work closely with full-stack engineering teams (Angular, Java, Python, backend APIs, database engineers).
- Mentor junior DevOps engineers and lead DevOps roadmap decisions.
KNOWLEDGE REQUIREMENTS:
DevOps Expertise:
Azure DevOps pipelines, YAML templating, CI/CD strategy, Git branching models.
Containerization & Orchestration:
Docker images, Docker Compose, Docker Swarm, multi-node/multi-location deployments.
Cloud Technologies:
Azure deployments & infrastructure, AWS (IAM, Lambda, S3, CloudWatch).
Programming / Scripting Languages:
Python, Bash, Linux/Unix administration, awk, shell automation, Groovy.
Infrastructure Automation:
Ansible playbooks, tasks/roles, inventory design, configuration management.
Distributed Deployment Architecture:
Multi-site replication, node selection by IP, dynamic service routing.
Database Stack Experience:
PostgreSQL, MySQL, MariaDB operations & migrations.
Observability & Logging:
CloudWatch monitoring, log collection, Prometheus, Grafana, reporting & metrics.
Version Control & Build Systems:
Azure DevOps, Git, Git submodules, artifact storage, registry solutions, secrets management.
Nice to have: AI knowledge/experience and a willingness to learn.
EDUCATION & EXPERIENCE REQUIREMENTS
- BS degree in Electrical/Computer Engineering, Computer Science, or a related field; MS preferred.
- 7+ years of experience in a software DevOps/development/test capacity with enterprise server, storage, or networking products.
Remote working/work at home options are available for this role.
At Northrop Grumman, our employees have incredible opportunities to work on revolutionary systems in air and space that impact people’s lives around the world today, and for generations to come. Our work preserves freedom, democracy, and advances human discovery and our understanding of the universe. We look for people who have bold new ideas, courage and a pioneering spirit to join forces to invent the future and have a lot of fun along the way. Our culture thrives on intellectual curiosity, cognitive diversity and bringing your whole self to work — and we have an insatiable drive to do what others think is impossible. Our employees are not only part of history, they’re making history.
Northrop Grumman has an opening for a Principal or Senior Principal DevOps Engineer to join our team of qualified, diverse individuals. This position can be located in Roy, UT, Bellevue, NE, or Huntsville, AL.
As a DevOps Engineer, you will:
Develop scripts, workflows, and playbooks for storage provisioning, automation, and backup integration.
Design, build, maintain and own SDP‑compliant CI/CD pipelines, ensuring they incorporate security checks, automated patching, and governance requirements.
Develop containers with Podman and orchestrate deployments on Kubernetes (or similar platforms).
Create and maintain IaC using Ansible (or comparable tools) to provision and configure cloud resources, containers, and networking components.
Write automation scripts and playbooks in Python, Go, Bash, or other languages for building images, running SAST scans, and automating repetitive tasks.
Develop and evolve build, deployment, and release processes, including versioning, artifact storage, and promotion across environments.
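A hedged sketch of the kind of automation script described above: it assembles, but does not execute, the Podman commands a build playbook might run. The image name, tag, and registry are hypothetical placeholders:

```python
# Sketch only: compose the argv lists for a build-then-push sequence.
# A real playbook would run these via subprocess and check return codes.

def build_commands(image: str, tag: str, registry: str) -> list[list[str]]:
    """Return the command lines for building and pushing one image."""
    ref = f"{registry}/{image}:{tag}"
    return [
        ["podman", "build", "-t", ref, "."],  # build from the local Containerfile
        ["podman", "push", ref],              # push to the target registry
    ]

cmds = build_commands("hello", "1.0.0", "registry.example.com")
```

Keeping the command assembly separate from execution makes the sequence easy to unit-test and to log for auditing.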
Basic Qualifications
Must have an active U.S. Government DoD Secret security clearance at time of application, current and within scope, with an ability to obtain and maintain Special Access Program (SAP) approval as determined by the company to meet its business need
Hold or have the ability to obtain Security+ CE (or another DoD 8570/8140 certification)
Level 3 (T3): 5 years with a Bachelor of Science from an accredited university; 3 years with a Master's; 1 year with a PhD; or 4 additional years of relevant experience in lieu of a degree.
Level 4 (T4): 8 years with a Bachelor of Science from an accredited university; 6 years with a Master's; 4 years with a PhD; or 4 additional years of relevant experience in lieu of a degree.
Experience with developing CI/CD workflows and utilizing tools such as Nexus, Maven, Jira, GitLab, and Release Management
Hands-on experience with Infrastructure as Code tools (e.g., Puppet, Chef, Ansible.)
Programming and scripting experience in a UNIX environment (Go, C++, Perl, Python, Bash, Ruby, shell scripts)
Preferred Qualifications
Experience with Kubernetes, Docker, and/or other cloud orchestration tools and technologies
Experience with Podman, Buildah, Skopeo and/or other container tools and technologies
Experience with CI/CD best practices, automated builds and tests, quality gates, software quality, and CI tools, e.g., Jenkins
Experience with configuration management tools, e.g., Git, GitHub, GitLab, Bitbucket, and others
Familiarity with branching strategies, gated commits, source-controlled management, etc.
Familiarity with the principles of DevSecOps
Familiarity with the Atlassian tool suite (Jira, Confluence)
Familiarity with using a Nexus repository
Familiarity with secure coding standard best practices and static and dynamic scanning tools, e.g., SonarQube, Fortify, Coverity, PCLint, Anchore, Nexus Lifecycle
Familiarity with Agile Development
We are seeking a Senior Lead Developer to lead the development and deployment of our backend services. In this role, you will be the bridge between our PostgreSQL database and React frontend, responsible not only for writing high-performance Python code but also for architecting the CI/CD pipelines that bring our applications to life. You will ensure our integration layers are scalable, secure, and automatically deployed.
Key Responsibilities
• API & Backend Development: Design and maintain production-grade RESTful APIs using Python (FastAPI, Flask) with a focus on asynchronous processing.
• Database Engineering: Architect relational schemas and write optimized SQL in PostgreSQL, ensuring data integrity and query performance.
• React Integration: Partner with frontend teams to define API contracts, handle state-consistent data fetching, and implement secure authentication (JWT/OAuth2).
• CI/CD & Deployment: Build and manage automated deployment pipelines (e.g., Azure DevOps or Jenkins) to move code from local environments to staging and production.
• Containerization & Cloud: Package applications using Docker and manage deployments on cloud platforms or container orchestrators (Kubernetes/ECS).
• System Reliability: Implement automated testing (PyTest), logging, and monitoring to ensure high availability of integration services.
Technical Requirements
• Experience: 10+ years of professional backend development with a heavy emphasis on Python and API architecture.
• PostgreSQL Expert: Advanced SQL knowledge, including indexing strategies, migrations (Alembic/Flyway), and performance profiling.
• DevOps Tooling: Hands-on experience with Docker and building CI/CD pipelines for Python applications.
• Frontend Literacy: Solid understanding of React (Hooks, Context API) and how it consumes complex JSON structures.
• Infrastructure as Code (Bonus): Familiarity with Terraform or AWS CloudFormation is a significant plus.
The "Lead" Expectation
At the 10-year mark, we expect more than just "feature delivery." We are looking for a candidate who:
• Automates Everything: If a task is done twice, they write a script or a CI job for it.
• Designs for Failure: Implements proper error handling, retries, and health checks in the API layer.
• Collaborates Across the Stack: Can jump into a React component or a Postgres execution plan to find the root cause of a bottleneck.
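The "Designs for Failure" expectation, retries in particular, might look like this minimal sketch of a retry decorator with exponential backoff (a toy, not a production library):

```python
import time
from functools import wraps

def retry(attempts: int = 3, base_delay: float = 0.0):
    """Retry a flaky call, doubling the delay after each failure (sketch)."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == attempts - 1:
                        raise  # out of attempts: surface the error
                    time.sleep(base_delay * (2 ** attempt))
        return wrapper
    return decorator

calls = {"n": 0}

@retry(attempts=3)
def flaky():
    """Simulated transient failure: succeeds on the third call."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"
```

A production version would typically also add jitter, catch only retryable exception types, and emit metrics on each attempt.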
AI Innovation Architect
Location: Hybrid; Ashburn, VA; Springfield, VA; Washington, D.C.
Clearance: U.S. Citizen; Must have an active Top-Secret Clearance or DHS Public Trust Clearance.
InDev is seeking a senior strategic and technical AI Architect responsible for designing, building, and deploying artificial intelligence solutions that support mission outcomes across the homeland security market. In this role, you will bridge advanced AI capabilities, including machine learning, natural language processing, and data engineering, with operational requirements, ensuring solutions are secure, scalable, and aligned with the homeland security mission.
YOUR FUTURE DUTIES AND RESPONSIBILITIES
- Define overall system architecture, selecting and governing Artificial Intelligence / Machine Learning (AI/ML) and platform technologies, and ensuring solutions are scalable, secure, and production-ready
- Lead end-to-end technical design, development, and implementation of an agentic AI system to orchestrate user queries across enterprise data sources
- Partner closely with development, DevOps and data engineering teams to translate project requirements into an extensible AI architecture
- Create and promote AI strategies that align with business objectives
- Develop and coordinate POCs to test new technologies
- Evaluate and select appropriate AI tools, frameworks, and platforms (e.g., AWS, Azure, Google) to drive innovation
QUALIFICATIONS
- U.S. Citizen; Active Top-Secret Clearance or DHS Public Trust Clearance
- 8+ years of experience delivering AI solutions across federal agencies
- Bachelor’s degree in Computer Science, Engineering, or Data Science
- Deep understanding of machine learning (ML), deep learning, Natural Language Processing (NLP), and neural networks
- Experience with cloud platforms (AWS, Google Cloud, Azure) and container orchestration tools like Kubernetes and Docker
- Ability to identify high-impact AI use cases and translate them into technical requirements
- Experience designing, building, and deploying advanced AI systems including Generative AI, AI Agents, LLMs, Reinforcement Learning, and computer vision models
- Ability to apply cloud and engineering expertise across AWS, GCP, Kubernetes, Docker, Terraform, Helm, Linux, and AI services, such as SageMaker, Vertex AI, Bedrock, or Gemini
- Experience with Python, agent frameworks, data engineering, APIs/microservices, vector databases, SQL engines, distributed systems, cloud services, RAG
- Experience developing and maintaining AI/ML roadmaps, performing Analysis of Alternatives, and making defensible technical tradeoff decisions
- Experience leading multidisciplinary teams, including data scientists, engineers, and business stakeholders
- Excellent written and oral communication skills
- Ability to tailor and present information across multiple stakeholders
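As a toy illustration of the RAG retrieval step named in the qualifications, here is cosine-similarity ranking over hand-made stand-in embeddings; a real system would use a vector database and learned embeddings rather than these two-dimensional vectors:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query, docs, k=1):
    """docs: mapping of doc id -> embedding; return ids ranked by similarity."""
    ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
    return ranked[:k]

# Hypothetical document embeddings (stand-ins, not real model output).
docs = {"visa-policy": [0.9, 0.1], "budget-memo": [0.1, 0.9]}
```

The retrieved ids would then be used to fetch passages that are injected into the LLM prompt, which is the "retrieval-augmented" part of RAG.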
NICE TO HAVES
- Experience integrating AI solutions with SaaS/PaaS platforms (e.g., ServiceNow, Salesforce, etc.)
- Experience implementing virtual agents within SaaS/PaaS platforms (e.g., ServiceNow Virtual Agent, Salesforce Agentforce, etc.)
- Experience with Google Gemini
ABOUT US
At InDev, we’re not just a company; we’re a trailblazing force transforming the way data shapes the future. As a dynamic player in the federal government sector, we’re on a mission to empower agencies with cutting-edge data solutions that drive innovation, efficiency, and progress. Our team thrives on collaboration, innovation, and embracing challenges head-on to create a meaningful impact on the world around us.
WHY INDEV
- Innovative Environment: Join a team that thrives on creativity and innovation, where your ideas are not only heard but encouraged.
- Meaningful Impact: Contribute to projects that directly impact federal agencies, driving positive change on a national scale.
- Dynamic Collaboration: Work alongside diverse experts who are passionate about pushing boundaries and making a difference.
- Agile Mindset: Embrace Agile methodologies that encourage flexibility, adaptability, and rapid growth.
- Learning Culture: Enjoy ongoing learning opportunities and professional development to expand your skill set.
- Cutting-edge Tech: Engage with the latest technologies and tools in the data integration landscape.
If you’re ready to embark on a journey of innovation, collaboration, and impact, InDev welcomes you to join our team. Let’s shape the future together.
DevOps Architect
Los Angeles, CA - Onsite (Day 1)
Long Term Contract
Skills Required:
- AWS & GCP
- Docker & Kubernetes
- Pulumi
Job Description
We are seeking a highly skilled Senior DevOps Architect with deep expertise across multi‑cloud environments and modern DevOps tooling. The ideal candidate is an SME with strong hands‑on experience building, automating, deploying, and optimizing infrastructure at scale.
Key Responsibilities
- Serve as a DevOps SME with 8+ years of multi‑cloud experience, including AWS, GCP, and hypervisor frameworks.
- Expertise in managed cloud services such as Lambda, Cloud Functions, S3 (large volumes), Elasticsearch, Step Functions, DynamoDB, Aurora, and other RDS services.
- Strong background in Docker-based container platforms and CI/CD workflows.
- Advanced scripting and automation capabilities with Terraform, IaC, and Pulumi.
- Ability to write reusable modules and infrastructure code (Python preferred).
- Strong SQL skills and understanding of relational and non-relational databases; proficiency in database tuning.
- Experience working with multiple build systems: npm, Maven, Poetry, Mono, ReactJS, VueKit.
- Proficient in all aspects of Kubernetes, including deployment automation using Helm and Kustomize.
- Ability to understand APIs, create reusable CI/CD modules, and document work in GitHub.
- Experience leading offshore DevOps/System Engineers and enforcing IaC adoption.
- Collaborate with AWS, GCP, Apple Cloud/Hybrid Cloud teams to troubleshoot 3PC issues.
- Skilled in observability tooling, incident management, and performance optimization.
- Strong knowledge of networking (DNS, load balancers, VPNs, VPCs, firewalls, access control).
- Experience building modules using Terraform, AWS CDK, or Pulumi.
- Knowledge of Java is a plus.
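A minimal sketch of the "reusable modules and infrastructure code (Python preferred)" idea, expressed as plain dicts rather than the Pulumi or Terraform SDKs; the resource shape and property names are assumptions for illustration only:

```python
# Sketch of a reusable infrastructure "module": one parameterized function
# that emits a declarative resource definition per environment.

def bucket_module(name: str, env: str, versioned: bool = True) -> dict:
    """Return one storage-bucket definition, parameterized per environment."""
    return {
        "type": "storage-bucket",
        "name": f"{name}-{env}",
        "properties": {"versioning": versioned},
        "tags": {"environment": env, "managed-by": "iac"},
    }

# Instantiate the module once per environment, like a Pulumi/Terraform stack.
stack = [bucket_module("logs", env) for env in ("dev", "prod")]
```

The point of the pattern is that environment differences live in parameters, not in copy-pasted resource blocks, which is what "enforcing IaC adoption" tends to mean in practice.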
Must Have Qualifications
- Proven leadership and mentoring experience.
- Deep understanding of security best practices, vulnerability mitigation, and risk management.
- Performance tuning and optimization expertise.
- Experience with disaster recovery and backup strategies.
- Strong experience in hybrid cloud environments.
- Scope includes Static Application Security Testing (SAST), Dynamic Application Security Testing (DAST), API security testing, AI/ML platforms, and penetration testing.
- Ensure compliance with industry standards such as the OWASP Top 10, CWE, CVE, and NIST guidelines.
Required Technical Knowledge & Competencies
- Expertise in SAST, DAST, API security testing, and penetration testing.
- Strong programming knowledge (Java, .NET, Python, JavaScript) for code-level analysis.
- Development background.
- Build, maintain, and secure automation pipelines using tools like Jenkins, GitLab CI, or GitHub Actions, ensuring security scans occur at every code commit.
- Implement and manage security tools, including Static Application Security Testing (SAST), Dynamic Application Security Testing (DAST), Container Security (e.g., Trivy), and dependency scanning
- Use tools like Terraform or Ansible to deploy secure, compliant infrastructure.
- Proactively identify, prioritize, and remediate security vulnerabilities in application code and infrastructure.
- Ensure compliance with industry standards (e.g., PCI-DSS, GDPR) by embedding compliance-as-code into the development workflow.
- Act as a security advocate, working with DevOps and development teams to foster a "security first" culture.
- Familiarity with cloud security testing (AWS, Azure, GCP).
- Experience with container security (Docker, Kubernetes).
- Excellent communication and stakeholder management skills.
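One way to picture "compliance-as-code" embedded in the workflow is a pipeline gate that fails the build when a declared dependency is on a denylist; the package names and versions below are made up for illustration, not real advisories:

```python
# Sketch of a compliance-as-code gate. In CI, a non-empty violation list
# would fail the job before the artifact is built.

DENYLIST = {("leftpad", "1.0.0"), ("oldcrypto", "2.3.1")}  # hypothetical

def check_dependencies(deps: list[tuple[str, str]]) -> list[str]:
    """Return human-readable violations; an empty list means the gate passes."""
    return [
        f"{name}=={ver} is denylisted"
        for name, ver in deps
        if (name, ver) in DENYLIST
    ]

violations = check_dependencies([("requests", "2.31.0"), ("leftpad", "1.0.0")])
```

Real pipelines delegate this to dependency scanners fed by vulnerability databases; the sketch only shows where the check sits relative to the commit.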
Qualifications
- Bachelor’s degree in Computer Science, Information Security, or a related field.
- 6-8 years of IT experience, with at least 5+ years in application security testing.
- Preferred certifications: OSCP, CEH, GWAPT, CISSP
Role – Senior AWS PySpark Developer
Location – Hybrid – South San Francisco, CA
We are seeking an experienced Sr. AWS PySpark Developer with 8-10 years of experience to design, build, and optimize our data pipelines and analytics architecture. The ideal candidate will have a strong background in data wrangling and analysis, with a deep understanding of AWS data services.
Key Responsibilities:
- Design, build, and optimize robust data pipelines and data architecture on the AWS cloud platform.
- Wrangle, explore, and analyze large datasets to identify trends, answer business questions, and pinpoint areas for improvement.
- Develop and maintain a next-generation analytics environment, providing a self-service, centralized platform for all data-centric activities.
- Formulate and implement distributed algorithms for effective data processing and trend identification.
- Configure and manage Identity and Access Management (IAM) on the AWS platform.
- Collaborate with stakeholders to understand data requirements and deliver effective solutions.
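The distributed aggregation described above can be sketched in pure Python as a stand-in for a PySpark groupBy/agg; the field names and sample records are illustrative, and at scale the same shape would run partitioned across executors:

```python
from collections import defaultdict

def avg_by_key(records, key_field, value_field):
    """Average value_field per key_field, like df.groupBy(key).avg(value)."""
    sums, counts = defaultdict(float), defaultdict(int)
    for rec in records:
        sums[rec[key_field]] += rec[value_field]
        counts[rec[key_field]] += 1
    return {k: sums[k] / counts[k] for k in sums}

# Hypothetical event records standing in for a large dataset.
events = [
    {"region": "us-east", "latency_ms": 100},
    {"region": "us-east", "latency_ms": 300},
    {"region": "us-west", "latency_ms": 50},
]
```

Because sums and counts combine associatively, each partition can compute partial (sum, count) pairs that are merged at the end, which is exactly why this aggregation distributes well.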
Required Skills & Experience:
- 8-10 years of experience as a Data Engineer or Developer.
- Proven experience building and optimizing data pipelines on AWS.
- Proficiency in scripting with Python.
- Strong working knowledge of:
- Big Data Tools: AWS Athena.
- Relational & NoSQL Databases: AWS Redshift and PostgreSQL.
- Data Pipeline Tools: AWS Glue, AWS Data Pipeline, or AWS Lake Formation.
- Container Orchestration: Kubernetes, Docker, Amazon ECR/ECS/EKS.
- Experience with wrangling, exploring, and analyzing data.
- Strong organizational and problem-solving skills.
Preferred Skills:
- Experience with machine learning tools (SageMaker, TensorFlow).
- Working knowledge of stream processing (Kinesis, Spark-Streaming).
- Experience with analytics and visualization tools (Tableau, Power BI).
- Knowledge of optimizing AWS Redshift performance.
Education
- Bachelor’s or Master’s Degree in Information Technology, Computer Science or relevant field.
Work Environment
This job operates in a professional office environment. This role routinely uses standard office equipment, including but not limited to, computers, phones, and photocopiers.
Physical Demands
This position requires the frequent and repetitive use of a computer, keyboard, and mouse. Hand and finger dexterity is required.
Other Duties
Please note this job description is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the employee for this job. Duties, responsibilities, and activities may change at any time with or without notice.
EEO
Saama provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation, and training.
As a Hivemind Build and CI engineer, you will design and implement engineering-centric automation across the organization. You will work closely with the product development teams, implementing policies and guidelines into the continuous integration and delivery systems. This role requires you to be very hands-on and to contribute to discussions with cross-functional teams across the organization. We embrace an attitude that focuses on solving the root cause of problems efficiently. A large part of your day-to-day will be in our build pipelines and build configuration management, focusing on changes that shorten developer iteration time.
What you'll do:
- Own the Hivemind CI/CD system, engaging with autonomy developers and the Developer Experience team daily to support as needed.
- Manage and own the monitoring tool stack, focusing on build infrastructure metrics
- Manage and own internal build containers used by all of the Hivemind organization
- Support AI engineers with build issues, and educate on best practices
- Work closely with the Developer Experience team to improve engineering efficiency
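The build-infrastructure metrics mentioned above might be summarized like this sketch, which reduces CI build durations to a median and an approximate p95; the sample durations are made up:

```python
import statistics

def summarize(durations: list[float]) -> dict:
    """Median and ~p95 of build durations in seconds (sketch)."""
    qs = statistics.quantiles(durations, n=20)  # 5%..95% cut points
    return {"median": statistics.median(durations), "p95": qs[-1]}

# Hypothetical build durations; one slow outlier dominates the tail.
builds = [42.0, 40.0, 41.0, 39.0, 300.0]
```

Tracking the tail (p95) separately from the median is what surfaces the occasional slow build that most hurts developer iteration time, even when the typical build is fast.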
Required qualifications:
- BS in computer science or related engineering field with 3+ years of professional experience.
- Experience with build configuration tools (Make, CMake, Conan, Bazel, etc.)
- Strong demonstrated proficiency in continuous integration/delivery (e.g., GitHub Actions, ADO, TeamCity, etc.).
- Strong understanding of C++ (or other compiled language), Linux and CMake
- Strong knowledge of APIs, web services, and identity access management
- Strong knowledge of containers (e.g. Docker, Podman, etc.).
- Strong knowledge of scripting languages (Bash, Python, PowerShell).
- Strong knowledge of Git.
- Strong system administration skills on Linux (Windows a bonus).
- Strong desire to learn and grow on the job.
Preferred qualifications:
- Strong Experience with Conan Package Manager
- Experience with Rust in a production environment.
- Experience with Hardware in the Loop build/deploy/test systems
- Experience owning build infrastructure
- Experience with NVIDIA Jetson products
$120,000 - $180,000 a year
Full-time regular employee offer package:
Pay within range listed + Bonus + Benefits + Equity
Temporary employee offer package:
Pay within range listed above + temporary benefits package (applicable after 60 days of employment)
Salary compensation is influenced by a wide array of factors including but not limited to skill set, level of experience, licenses and certifications, and specific work location. All offers are contingent on a cleared background and possible reference check. Military fellows and part-time employees are not eligible for benefits. Please speak to your talent acquisition representative for more information.
Shield AI is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, marital status, disability, gender identity or Veteran status. If you have a disability or special need that requires accommodation, please let us know.