Terraform GitHub Module Jobs in USA
1,367 positions found — Page 10
We're building safety-enhancing technology for aviation that will save lives. Automated aviation systems will enable a future where air transportation is safer, more convenient and fundamentally transformative to the way goods - and eventually people - move around the planet. We are a team of mission-driven engineers with experience across aerospace, robotics and self-driving cars working to make this future a reality.
As a Senior Software Engineer - Engineering Productivity at Reliable Robotics, you will design and implement software to support the development, analysis, and certification of automated aircraft systems. You will work closely with product owners and end users to develop solutions that enable and optimize engineering development workflows. The software you produce will be critical to the development and certification of the first fully autonomous aircraft.
Responsibilities
In your role as an internal tool developer, you will develop applications, infrastructure, and tools used by engineering to capture product requirements and interface definitions, model the product architecture and design, and reduce and analyze flight and lab test data. You will supercharge the engineering organization's efficiency and effectiveness by streamlining tools and processes. You will work with other teams and stakeholders to establish technical and UX design requirements for these projects and own the "plan, code, build, test, release, deploy" lifecycle of these applications and services.
Basic Success Criteria
Bachelor's degree in Computer Science, Computer Engineering, or equivalent experience
5+ years experience with professional full stack web development in a team setting
Professional experience with core browser technologies (JavaScript, HTML, CSS) and TypeScript
Experience structuring dynamic, model-driven data and determining data relationships
Experience working with SQL, NoSQL, and time series databases
Experience designing software architecture for both new and existing projects
Preferred Criteria
Experience using Python and libraries such as pandas, matplotlib, and Django
Experience integrating with cloud platforms and infrastructure tools such as AWS, Terraform, and Docker
Experience designing and implementing ingestion pipelines for high-throughput streams of real-time telemetry
Experience integrating business intelligence and data visualization tools such as Tableau, Power BI, Superset, Metabase
Experience developing React components and reusable libraries/tools for developers
At Reliable Robotics, we believe that our internal tools are key ingredients to our success. Aircraft design, integration, and certification are highly complex processes requiring diligent management of data and their relationships. Traditionally a paper process, our tools enable our system designers to move faster, conduct more thorough and comprehensive analyses, and design safer aircraft systems. Come be a part of taking our products to the next level.
This position requires access to information that is subject to U.S. export controls. An offer of employment will be contingent upon the applicant's capacity to perform in compliance with U.S. export control laws.
All applicants are asked to provide documentation that legally establishes status as a U.S. person or non-U.S. person (and nationalities in the case of a non-U.S. person). Where the applicant is not a U.S. person, meaning not a (i) U.S. citizen or national, (ii) U.S. lawful permanent resident, (iii) refugee under 8 U.S.C. § 1157, or (iv) asylee under 8 U.S.C. § 1158, or not otherwise permitted to access the export-controlled technology without U.S. government authorization, the Company reserves the right not to apply for an export license for such applicants whose access to export-controlled technology or software source code requires authorization and may decline to proceed with the application process and any offer of employment on that basis.
At Reliable Robotics, our goal is to be a diverse and inclusive workforce. As an Equal Opportunity Employer, we do not discriminate on the basis of race, religion, color, creed, ancestry, sex, gender (including pregnancy, childbirth, breastfeeding, or related medical conditions), gender identity, gender expression, sexual orientation, age, non-disqualifying physical or mental disability or medical conditions, national origin, military or veteran status, genetic information, marital status, or any other basis covered by applicable law. All employment and promotion is decided on the basis of qualifications, merit, and business need.
If you require reasonable accommodation in completing an application, interviewing, completing any pre-employment testing, or otherwise participating in the employee selection process, please direct your inquiries to
Compensation Range: $215K - $300K
Sr. Data Engineer (Hybrid)
Chicago, IL
The American Medical Association (AMA) is the nation's largest professional association of physicians and a non-profit organization. We are a unifying voice and powerful ally for America's physicians, the patients they care for, and the promise of a healthier nation. To be part of the AMA is to be part of our mission to promote the art and science of medicine and the betterment of public health.
At AMA, our mission to improve the health of the nation starts with our people. We foster an inclusive, people-first culture where every employee is empowered to perform at their best. Together, we advance meaningful change in health care and the communities we serve.
We encourage and support professional development for our employees, and we are dedicated to social responsibility. We invite you to learn more about us and we look forward to getting to know you.
We have an opportunity at our corporate offices in Chicago for a Sr. Data Engineer (Hybrid) on our Information Technology team. This is a hybrid position reporting into our Chicago, IL office, requiring 3 days a week in the office.
As a Sr. Data Engineer, you will play a key role in implementing and maintaining AMA's enterprise data platform to support analytics, interoperability, and responsible AI adoption. This role partners closely with platform engineering, data governance, data science, IT security, and business stakeholders to deliver high-quality, reliable, and secure data products. This role contributes to AMA's modern lakehouse architecture, optimizes data operations, and embeds governance and quality standards into engineering workflows. This role serves as a senior technical contributor within the team, providing mentorship to junior engineers and implementing engineering best practices within the data platform function, in alignment with architectural direction set by leadership.
RESPONSIBILITIES:
Data Engineering & AI Enablement
- Build and maintain scalable data pipelines and ETL/ELT workflows supporting analytics, operational reporting, and AI/ML use cases.
- Implement best-practice patterns for ingestion, transformation, modeling, and orchestration within a modern lakehouse environment (e.g., Databricks, Delta Lake, Azure Data Lake).
- Develop high-performance data models and curated datasets with strong attention to quality, usability, and interoperability; create reusable engineering components and automation.
- Collaborate with the Architecture Team, the Data Platform Lead, and federated IT teams to optimize storage, compute, and architectural patterns for performance and cost-efficiency.
- Build model-ready datasets and feature pipelines to support AI/ML use cases; serve as a technical coordination point supporting business units' AI-related infrastructure needs.
- Collaborate with data scientists and the AI Working Group to operationalize models responsibly and maintain ongoing monitoring signals.
Governance, Quality & Compliance
- Embed data governance, metadata standards, lineage tracking, and quality controls directly into engineering workflows, and ensure their technical implementation and alignment.
- Work with the Data Governance Lead and business stakeholders to operationalize stewardship, classification, validation, retention, and access standards.
- Implement privacy-by-design and security-by-design principles, ensuring compliance with internal policies and regulatory obligations.
- Maintain documentation for pipelines, datasets, and transformations to support transparency and audit requirements.
Platform Reliability, Observability & Optimization
- Monitor and troubleshoot pipeline failures, performance bottlenecks, data anomalies, and platform-level issues.
- Implement observability tooling, alerts, logging, and dashboards to ensure end-to-end reliability.
- Support cost governance by optimizing compute resources, refining job schedules, and advising on efficient architecture.
- Collaborate with the Data Platform Lead on scaling, configuration management, CI/CD pipelines, and environment management.
- Collaborate with business units to understand data needs, translate them into engineering requirements, and deliver fit-for-purpose data solutions; share and apply best practices and emerging technologies within assigned initiatives.
- Work with IT Security and Legal/Compliance to ensure the platform and datasets meet risk and regulatory standards.
Staff Management
- Lead, mentor, and provide management oversight for staff.
- Set objectives, evaluate employee performance, and foster a collaborative team environment.
- Develop staff knowledge and skills to support career development.
May include other responsibilities as assigned.
REQUIREMENTS:
- Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field preferred; equivalent work experience with an HS diploma/equivalent education required.
- 5+ years of experience in data engineering within cloud environments.
- Experience in people management preferred.
- Demonstrated hands-on experience with modern data platforms (Databricks preferred).
- Proficiency in Python, SQL, and data transformation frameworks.
- Experience designing and operationalizing ETL/ELT pipelines, orchestration workflows (Airflow, Databricks Workflows), and CI/CD processes.
- Solid understanding of data modeling, structured/unstructured data patterns, and schema design.
- Experience implementing governance and quality controls: metadata, lineage, validation, stewardship workflows.
- Working knowledge of cloud architecture, IAM, networking, and security best practices.
- Demonstrated ability to collaborate across technical and business teams.
- Exposure to AI/ML engineering concepts, feature stores, model monitoring, or MLOps patterns.
- Experience with infrastructure-as-code (Terraform, CloudFormation) or DevOps tooling.
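The governance and quality requirements above can be made concrete with a small sketch: a validation gate that runs rule checks on incoming records before they land in a curated dataset. This is a hypothetical, stdlib-only illustration; the field names and rules are invented for the example and are not AMA's actual schema.

```python
# Hypothetical sketch: a simple data-quality gate embedded in an ETL step.
# Records that pass every rule flow through; failures are tallied per rule.
from dataclasses import dataclass, field

@dataclass
class QualityReport:
    total: int = 0
    passed: int = 0
    failures: dict = field(default_factory=dict)  # rule name -> failure count

def validate_records(records, rules):
    """Run each named rule against each record; return clean rows plus a report."""
    report = QualityReport()
    clean = []
    for rec in records:
        report.total += 1
        bad = [name for name, rule in rules.items() if not rule(rec)]
        if bad:
            for name in bad:
                report.failures[name] = report.failures.get(name, 0) + 1
        else:
            report.passed += 1
            clean.append(rec)
    return clean, report

# Illustrative rules for an invented curated dataset
rules = {
    "has_id": lambda r: bool(r.get("id")),
    "valid_year": lambda r: isinstance(r.get("year"), int) and 1900 < r["year"] < 2100,
}

rows = [
    {"id": "a1", "year": 2024},
    {"id": "", "year": 2024},   # fails has_id
    {"id": "b2", "year": 99},   # fails valid_year
]
clean, report = validate_records(rows, rules)
```

In a real lakehouse pipeline this kind of check would typically run as a pipeline task, with the report emitted to observability tooling rather than held in memory.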
The American Medical Association is located at 330 N. Wabash Avenue, Chicago, IL 60611 and is convenient to all public transportation in Chicago.
This role is an exempt position, and the salary range for this position is $115,523.42-$150,972.44. This is the lowest to highest salary we believe we would pay for this role at the time of this posting. An employee's pay within the salary range will be determined by a variety of factors including but not limited to business consideration and geographical location, as well as candidate qualifications, such as skills, education, and experience. Employees are also eligible to participate in an incentive plan. To learn more about the American Medical Association's benefits offerings, please click here.
We are an equal opportunity employer, committed to diversity in our workforce. All qualified applicants will receive consideration for employment. As an EOE/AA employer, the American Medical Association will not discriminate in its employment practices due to an applicant's race, color, religion, sex, age, national origin, sexual orientation, gender identity and veteran or disability status.
THE AMA IS COMMITTED TO IMPROVING THE HEALTH OF THE NATION
Remote working/work at home options are available for this role.
About Pinterest:
Millions of people around the world come to our platform to find creative ideas, dream about new possibilities and plan for memories that will last a lifetime. At Pinterest, we're on a mission to bring everyone the inspiration to create a life they love, and that starts with the people behind the product.
Discover a career where you ignite innovation for millions, transform passion into growth opportunities, celebrate each other's unique experiences and embrace the flexibility to do your best work. Creating a career you love? It's Possible.
At Pinterest, AI isn't just a feature, it's a powerful partner that augments our creativity and amplifies our impact, and we're looking for candidates who are excited to be a part of that. To get a complete picture of your experience and abilities, we'll explore your foundational skills and how you collaborate with AI.
Through our interview process, what matters most is that you can always explain your approach, showing us not just what you know, but how you think. You can read more about our AI interview philosophy and how we use AI in our recruiting process here.
About tvScientific
tvScientific is the first and only CTV advertising platform purpose-built for performance marketers. We leverage massive data and cutting-edge science to automate and optimize TV advertising to drive business outcomes. Our solution combines media buying, optimization, measurement, and attribution in one, efficient platform. Our platform is built by industry leaders with a long history in programmatic advertising, digital media, and ad verification who have now purpose-built a CTV performance platform advertisers can trust to grow their business.
tvScientific is looking for a Senior MLOps Engineer! You'll be working with a distributed engineering team on our Connected TV ad-buying platform, as we scale our Machine Learning practice. We've cracked the code on optimizing TV ad campaigns. We're scaling massively and we need your help to make that scale sustainable.
An Idealab company, tvScientific was co-founded by executives with deep roots in programmatic advertising and digital media. tvScientific helps our clients buy ads across the CTV universe, from Hulu to PlutoTV to the ad-supported tier of Disney+ and (HBO) Max. Since our acquisition by Pinterest, we're expanding our work on CTV to lift search & social advertising performance.
What you'll do:
- Scale the decision-making process for tools for the tvScientific AI team, from our workflows to our training infrastructure to our Kubernetes deployments
- Improve the developer experience for the data science team
- Upgrade our observability tooling
- Serve as a technical lead and mentor to the team
- Make every deployment smooth as our infrastructure evolves.
What we're looking for:
- Deep understanding of Linux
- Excellent writing skills
- A systems-oriented mindset
- Experience in high-performance software (RTB, HFT, etc.)
- Software engineering experience + reliability (e.g. CI/CD) expertise
- Strong observability instincts
Nice-To-Haves:
- Reverse-engineering experience
- Terraform, EKS, or MLOps experience
- Python, Scala, or Zig experience
- NixOS experience
- Adtech or CTV experience
- Experience deploying a distributed system across multiple clouds
- Experience in hard real-time, low-latency systems
Location: Remote
Duration: 12 Months (Possibility of extension)
Job Description:
Your Team & Role As an Infrastructure Security Engineer for the Global Technology Workforce Identity and Access Management (IAM) Team, you will partner with technology leads and engineers to develop and improve IAM solutions. You will design, test, and engineer new and existing solutions and streamline our day-to-day workflows, and help the Identity engineering team to meet project timelines and complete net new business as usual (BAU) work. The work will be around Security and Engineering of our Tier Zero Identity platform and Active Directory Domain Services (AD DS) systems.
Here is What You Can Expect on a Typical Day:
- Develop high-quality, well-documented engineering configuration and infrastructure solutions that adhere to all applicable client security standards
- Ensure product and infrastructure security is maintained throughout the system lifecycle, integrating new security features, patches, and updates into existing environments
- Collaborate with tech leads in understanding system requirements, defining stories, creation of technical designs, and deployment of solutions
- Support engineering and other team members to understand systems end-to-end and deliver robust solutions that support positive business impact
- Write scripts and automation code to support areas such as operational excellence, production validation, and security for our Windows servers and identity infrastructure platform
- Research problems discovered internally or by stakeholders and consumers, ideate and develop solutions to mitigate without negative impact to the business
- Bring an applied understanding of relevant and emerging technologies, begin to identify opportunities to provide input to the team and coach others, and embed learning and innovation in the day-to-day
- Work on complex problems in which analysis of situations or data requires an in-depth evaluation of various factors
- Use programming languages including but not limited to PowerShell, Terraform, ARM templates, etc.
The work requires 5-8 years of experience in the following areas:
- Infrastructure: Hyper-V and Windows Server Operating systems
- Observability: System Center Operations Manager (SCOM) / Azure Monitoring
- Configuration: System Center Configuration Manager (SCCM) / Azure Arc
- Identity: Active Directory Domain Services (AD DS) / Microsoft Entra ID
- Automation: PowerShell, Infrastructure and Configuration as Code (e.g. Chef, Ansible)
- Databases: SQL Server
Experience in:
Triaging and troubleshooting identity and infrastructure issues
Development of Automation and Deployments for Hyper-V and Windows Server Infrastructure
Automated testing and validation to support non-production and production changes
Writing clear engineering and system documentation (e.g. in Confluence)
Granite delivers advanced communications and technology solutions to businesses and government agencies throughout the United States and Canada. We provide exceptional customized service with an emphasis on reliability and outstanding customer support and our customers include over 85 of the Fortune 100. Granite has over $1.85 Billion in revenue with more than 2,100 employees and is headquartered in Quincy, MA. Our mission is to be the leading telecommunications company wherever we offer services as well as provide an environment where the value of each individual is recognized and where each person has the opportunity to further their growth and achieve success.
Granite has been recognized by the Boston Business Journal as one of the "Healthiest Companies" in Massachusetts for the past 15 consecutive years.
Our offices have fully equipped, state-of-the-art onsite gyms available to employees at zero cost.
Granite's philanthropy is unparalleled with over $300 million in donations to organizations such as Dana Farber Cancer Institute, The ALS Foundation and the Alzheimer's Association to name a few.
We have been consistently rated a "Fastest Growing Company" by Inc. Magazine.
Granite was named to Forbes List of America's Best Employers 2022, 2023 and 2024.
Granite was recently named One of Forbes Best Employers for Diversity.
Our company's insurance package includes health, dental, vision, life, disability coverage, 401K retirement with company match, childcare benefits, tuition assistance, and more.
If you are a highly motivated individual who wants to grow your career with a fast paced and progressive company, Granite has countless opportunities for you.
EOE/M/F/Vets/Disabled
General Summary of Position:
Granite is a leading provider of managed security and networking platforms. The NetOps team supports a wide range of technologies and services including Fortinet SD-WAN, Palo Alto, Cisco, and Juniper. Within this team, the Application Automation practice focuses on improving operational efficiency and scalability through scripting, automation, and tool development. This position plays a key role in supporting large enterprise environments by developing automation frameworks, streamlining complex deployments, and reducing manual intervention in day-to-day operations. Responsibilities include solution design, automation development, support for new product rollouts, Tier 3 escalations, and collaboration with engineering teams to build repeatable, efficient processes.
Duties and Responsibilities:
- Design, develop, and maintain automation scripts and tools to support network deployments, configuration management, and troubleshooting.
- Collaborate with engineering and operations teams to identify repetitive tasks and build automated solutions to reduce manual workload.
- Support new technology rollouts by creating scalable deployment frameworks and configuration templates.
- Respond to Tier 3 technical escalations, particularly those involving process inefficiencies, integration challenges, or repeat issues solvable via automation.
- Research and test emerging network automation technologies to improve toolsets and practices.
- Document automation procedures, scripts, and best practices for operational consistency and knowledge sharing.
- Ensure all automated solutions align with security and compliance standards.
- Participate in audits and reviews to ensure reliability and maintainability of automation systems.
Required Qualifications:
- Bachelor's degree in Computer Science, Network Engineering, or a related field.
- Proficiency in scripting languages such as Python, Bash, or PowerShell.
- Strong understanding of APIs, network configuration models (e.g., NETCONF, RESTCONF), and configuration automation tools (e.g., Ansible, Terraform).
- Experience supporting enterprise-grade networks, preferably in a service provider or managed services environment.
- Demonstrated ability to troubleshoot complex network issues and implement automated solutions.
- Excellent communication and documentation skills
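The deployment-framework and configuration-template responsibilities above can be sketched with a minimal example: rendering per-device configurations from one shared template. This is a hypothetical illustration using only the Python standard library; the template syntax and variable names are invented and do not correspond to any vendor's actual CLI.

```python
# Hypothetical sketch: render per-device network configs from one template,
# the core idea behind scalable deployment frameworks. All names are illustrative.
from string import Template

BASE_TEMPLATE = Template(
    "hostname $hostname\n"
    "interface $uplink\n"
    " description uplink to $peer\n"
    " ip address $ip\n"
)

def render_configs(devices):
    """Render one config string per device dict, keyed by hostname."""
    return {d["hostname"]: BASE_TEMPLATE.substitute(d) for d in devices}

devices = [
    {"hostname": "edge-01", "uplink": "ge-0/0/0", "peer": "core-01", "ip": "10.0.0.1/31"},
    {"hostname": "edge-02", "uplink": "ge-0/0/0", "peer": "core-01", "ip": "10.0.0.3/31"},
]
configs = render_configs(devices)
```

In practice this pattern is usually delegated to tools like Ansible or Jinja2 templates, but the principle is the same: one audited template, many generated configurations, no manual per-device editing.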
At MVP Health Care, our commitment to meeting the needs of our customers goes beyond our health plans. We're shaping the future of health care-and as an intern, you'll be part of it! Dive into a world of innovation, working alongside experienced professionals who are passionate about making a difference.
This is more than an internship; it's an opportunity to build skills, gain confidence, and make a meaningful impact while discovering what a career in a transforming industry can look like.
What's in it for you:
Our internship program is designed to provide a comprehensive learning experience.
- Build Real Skills: Gain hands‑on experience, practical skills, and industry knowledge through meaningful work and targeted learning opportunities.
- Work on Impactful Projects: Contribute to real projects that support business priorities and address real‑world health care challenges.
- Grow Your Network: Connect with leaders, mentors, and fellow interns through networking events and everyday collaboration.
- Learn from Mentors: Receive guidance and feedback from experienced professionals who are invested in your growth.
- Give Back: Participate in community service initiatives and be part of an organization committed to making a difference.
- Support Your Well‑Being: Experience a supportive culture with programs that promote balance and well‑being.
- Launch Your Career: Join an award‑winning, inclusive workplace and explore a future in a growing, evolving industry.
Qualifications you'll bring:
- Currently pursuing a degree in Computer Science, Information Technology, or a related field.
- The availability to work full-time, hybrid in our Schenectady, NY or Rochester, NY Office. Program Duration: Mon. 6/1 - Fri. 8/7
- Basic understanding of network and server infrastructure.
- Familiarity with operating systems (Windows, Linux).
- Strong problem-solving skills.
- Excellent communication and teamwork abilities.
- Commitment to being the difference for our customers in every interaction
Your key responsibilities:
- Cloud Migration Support - Participate in migration execution tasks, including validation, testing, and post‑migration checks to ensure systems meet operational standards.
- Azure Administration and Support - Assist with the administration and operational support of Azure environments, including virtual machines, storage, and networking components.
- On-Prem Environment Support - Assist with on‑premises infrastructure support, including server builds, decommissions, and basic configuration tasks.
- Infrastructure Automation - Implement automation practices using Terraform, Ansible, and Semaphore to assist the team in building the next generation of infrastructure deployments on-premises or in Azure cloud.
- Flexera Cost Optimization & Azure Tagging - Assist with tagging Azure resources to better optimize costs and improve our tagging strategy in Azure.
- Penetration Test Remediation - Assist in addressing vulnerabilities found in recent security penetration testing to better protect and harden MVP assets.
- Data Center Modernization - Assist in upgrading and optimizing data center infrastructure, working with a team of Cloud Infrastructure Engineers.
- Infrastructure Technical Debt Reduction - Identify and remediate legacy infrastructure issues to move forward with modernizing MVP technology and services.
Where you'll be:
Hybrid- Schenectady or Rochester, NY
Pay Transparency
MVP Health Care is committed to providing competitive employee compensation and benefits packages. The base pay range provided for this role reflects our good faith compensation estimate at the time of posting. MVP adheres to pay transparency nondiscrimination principles. Specific employment offers and associated compensation will be extended individually based on several factors, including but not limited to geographic location; relevant experience, education, and training; and the nature of and demand for the role.
We do not request current or historical salary information from candidates.
Pay Rate: $18 - $25 per hour
MVP's Inclusion Statement
At MVP Health Care, we believe creating healthier communities begins with nurturing a healthy workplace. As an organization, we strive to create space for individuals from diverse backgrounds and all walks of life to have a voice and thrive. Our shared curiosity and connectedness make us stronger, and our unique perspectives are catalysts for creativity and collaboration.
MVP is an equal opportunity employer and recruits, employs, trains, compensates, and promotes without discrimination based on race, color, creed, national origin, citizenship, ethnicity, ancestry, sex, gender identity, gender expression, religion, age, marital status, personal appearance, sexual orientation, family responsibilities, familial status, physical or mental disability, handicapping condition, medical condition, pregnancy status, predisposing genetic characteristics or information, domestic violence victim status, political affiliation, military or veteran status, Vietnam-era or special disabled Veteran or other legally protected classifications.
To support a safe, drug-free workplace, pre-employment criminal background checks and drug testing are part of our hiring process. If you require accommodations during the application process due to a disability, please contact our Talent team at .
Are you an experienced Back End Developer with a desire to excel? If so, then Talent Software Services may have the job for you! Our client is seeking an experienced Back End Developer to work at their company in Richfield, MN.
Position Summary: We are seeking a DevOps Engineer to join our Enterprise API Management team. The successful candidate will be responsible for building, deploying, and operating our platform and services, with an emphasis on automation, reliability, and secure-by-default delivery. This role partners closely with engineers through collaboration and pair programming to improve CI/CD pipelines, troubleshooting practices, and operational readiness, and it also requires some software development experience (e.g., ability to read/debug code and contribute when needed). Technologies involved include Kubernetes, Helm charts, Java/Spring Boot, AWS offerings (certification preferred), and API platform capabilities such as API Gateway and security.
Qualifications:
- Kubernetes 5+ Years
- Helm Charts 4+ Years
- AWS (core offerings; certification preferred) 5+ Years
- Java and Spring Boot 5+ Years
- API Security 5+ Years
Preferred:
- CI/CD and GitOps
- Infrastructure-as-Code experience (e.g., Terraform/CloudFormation)
- API Gateway/API Management experience
- Observability tooling (logging/metrics/tracing) and on-call readiness
- Lua Programming
- Collaborative delivery practices (pair programming, code reviews)
Job Title: Site Reliability Engineer (SRE) – DataHub & GraphQL
Location: Austin, TX & Sunnyvale, CA
Open only to candidates with independent work authorization.
Role Overview
We are seeking a highly skilled Site Reliability Engineer (SRE) with strong expertise in DataHub ingestion pipelines and GraphQL APIs. The ideal candidate will be responsible for designing, building, and maintaining scalable data ingestion frameworks, ensuring reliability and performance of enterprise data platforms, and enabling seamless integration with downstream applications. This role requires a balance of software engineering, systems reliability, and data platform knowledge.
Key Responsibilities
- Design, implement, and optimize DataHub ingestion pipelines for large-scale enterprise data systems.
- Develop and maintain GraphQL APIs to support data discovery, metadata management, and integration.
- Ensure high availability, scalability, and performance of data services across cloud and on-prem environments.
- Collaborate with data engineering, product, and infrastructure teams to deliver reliable data solutions.
- Automate monitoring, alerting, and incident response processes to improve system resilience.
- Drive best practices in observability, logging, and distributed system reliability.
- Troubleshoot complex production issues and implement long-term fixes.
Must-Have Skills
- 5+ years of experience as an SRE, DevOps Engineer, or Software Engineer with a focus on reliability and scalability.
- Strong hands-on experience with DataHub ingestion frameworks and metadata pipelines.
- Proficiency in GraphQL API design and implementation.
- Solid understanding of cloud platforms (AWS, GCP, or Azure) and container orchestration (Kubernetes, Docker).
- Expertise in monitoring tools (Prometheus, Grafana, ELK, Datadog, etc.).
- Strong programming skills in Python, Java, or Go.
- Experience with CI/CD pipelines and infrastructure-as-code (Terraform, Ansible).
Good-to-Have Skills
- Familiarity with data governance and metadata management tools.
- Experience integrating with data platforms like Kafka, Spark, or Snowflake.
- Knowledge of REST APIs and microservices architecture.
- Exposure to security and compliance practices in data systems.
Qualifications
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
- Proven track record of delivering reliable, scalable data infrastructure solutions.
Now Hiring: Senior Splunk Engineer
Location: Irving, TX (75063)
Duration: 12 Months (Potential Extension)
Role: Contract
About the Role
We’re looking for an experienced Senior Splunk Engineer to lead the administration and optimization of Splunk Enterprise Security in a cloud-hosted environment. If you’re passionate about SIEM operations, security monitoring, and building scalable Splunk architectures, this opportunity is for you!
Required Skills & Experience
5+ years of hands-on Splunk platform administration
Active Splunk Enterprise Certified Admin and/or Splunk ES Certified Admin certification
Experience managing Splunk in AWS / Azure / GCP environments
Strong knowledge of SIEM operations, log management, and event correlation
Advanced SPL (Search Processing Language) skills
Experience with Splunk components:
• Indexers
• Search Heads
• Heavy/Universal Forwarders
• Deployment Servers
• Cluster Management
Familiarity with compliance frameworks: PCI DSS, SOX, NIST CSF
Strong communication skills for collaborating with technical & non-technical stakeholders
Nice to Have
Experience in large-scale retail or high-transaction environments
Knowledge of Splunk SOAR (Phantom) and security automation workflows
Background in Threat Hunting, SOC Operations, or Detection Engineering
Certifications such as CISSP, GIAC (GCIA/GCIH), AWS Security Specialty, AZ-500
Experience with Infrastructure as Code (Terraform, Ansible)
Scripting skills in Python, Bash, or PowerShell
Key Responsibilities
Lead end-to-end administration of Splunk Enterprise Security
Design & manage notable events, risk-based alerting, and threat intelligence integrations
Build and optimize correlation searches, dashboards, and investigations
Onboard enterprise log sources and ensure CIM compliance
Support PCI DSS, SOX, and NIST CSF audit and reporting requirements
Monitor environment health: indexing, search performance, forwarders, licensing
Maintain documentation, runbooks, and troubleshooting guides
Serve as the escalation point for complex Splunk issues
Collaborate with security architecture teams to enhance the overall security ecosystem
We are seeking a Senior Lead Developer to lead the development and deployment of our backend services. In this role, you will be the bridge between our PostgreSQL database and React frontend, responsible not only for writing high-performance Python code but also for architecting the CI/CD pipelines that bring our applications to life. You will ensure our integration layers are scalable, secure, and automatically deployed.
Key Responsibilities
• API & Backend Development: Design and maintain production-grade RESTful APIs using Python (FastAPI, Flask) with a focus on asynchronous processing.
• Database Engineering: Architect relational schemas and write optimized SQL in PostgreSQL, ensuring data integrity and query performance.
• React Integration: Partner with frontend teams to define API contracts, handle state-consistent data fetching, and implement secure authentication (JWT/OAuth2).
• CI/CD & Deployment: Build and manage automated deployment pipelines (e.g., Azure DevOps or Jenkins) to move code from local environments to staging and production.
• Containerization & Cloud: Package applications using Docker and manage deployments on cloud platforms or container orchestrators (Kubernetes/ECS).
• System Reliability: Implement automated testing (PyTest), logging, and monitoring to ensure high availability of integration services.
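The asynchronous-processing focus in the API bullet above can be sketched with stdlib `asyncio` alone (the FastAPI/Flask layer is omitted; `fetch_record` is a hypothetical stand-in for an async database or HTTP call):

```python
import asyncio

async def fetch_record(record_id: int) -> dict:
    # Stand-in for an I/O-bound call (database query, downstream API, etc.).
    await asyncio.sleep(0.01)
    return {"id": record_id, "status": "ok"}

async def fetch_all(ids):
    # Issue the calls concurrently rather than one at a time;
    # gather() preserves the input order in its results.
    return await asyncio.gather(*(fetch_record(i) for i in ids))

results = asyncio.run(fetch_all([1, 2, 3]))
print([r["id"] for r in results])  # [1, 2, 3]
```

An async framework endpoint would typically `await fetch_all(...)` directly inside the request handler instead of calling `asyncio.run`.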
Technical Requirements
• Experience: 10+ years of professional backend development with a heavy emphasis on Python and API architecture.
• PostgreSQL Expert: Advanced SQL knowledge, including indexing strategies, migrations (Alembic/Flyway), and performance profiling.
• DevOps Tooling: Hands-on experience with Docker and building CI/CD pipelines for Python applications.
• Frontend Literacy: Solid understanding of React (Hooks, Context API) and how it consumes complex JSON structures.
• Infrastructure as Code (Bonus): Familiarity with Terraform or AWS CloudFormation is a significant plus.
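The indexing-strategy expectation above boils down to one idea: a B-tree index turns a full table scan into a direct lookup. A hedged demonstration using stdlib `sqlite3` as a stand-in for PostgreSQL (SQLite's `EXPLAIN QUERY PLAN` plays the role of Postgres's `EXPLAIN`; exact plan wording varies by version):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany(
    "INSERT INTO users (email) VALUES (?)",
    [(f"user{i}@example.com",) for i in range(1000)],
)

query = "EXPLAIN QUERY PLAN SELECT id FROM users WHERE email = ?"

# Without an index, the equality predicate forces a full table scan.
plan_before = conn.execute(query, ("user500@example.com",)).fetchone()[-1]

# An index on the filtered column turns the scan into a lookup.
conn.execute("CREATE INDEX idx_users_email ON users (email)")
plan_after = conn.execute(query, ("user500@example.com",)).fetchone()[-1]

print(plan_before)  # e.g. "SCAN users"
print(plan_after)   # e.g. "SEARCH users USING COVERING INDEX idx_users_email (email=?)"
```

In PostgreSQL the same check would use `EXPLAIN (ANALYZE)` and `CREATE INDEX`, with Alembic or Flyway managing the index as a migration.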
The "Lead" Expectation
At the 10-year mark, we expect more than just "feature delivery." We are looking for a candidate who:
• Automates Everything: If a task is done twice, they write a script or a CI job for it.
• Designs for Failure: Implements proper error handling, retries, and health checks in the API layer.
• Collaborates Across the Stack: Can jump into a React component or a Postgres execution plan to find the root cause of a bottleneck.