Prometheus SQL Examples Jobs in USA

1,615 positions found — Page 6

Data Scientist
✦ New
Salary not disclosed
Newark, NJ 1 day ago
Job Title: Data Scientist

Duration: 12 Months (Temp to Hire)

Location: Newark, NJ 07102


Job Description:

Are you interested in building capabilities that enable the organization with innovation, speed, agility, scalability and efficiency? When you join our organization at Prudential, you'll unlock an exciting and impactful career - all while growing your skills and advancing your profession at one of the world's leading financial services institutions.

As a Data Scientist on the US Businesses PruAdvisors Data Science Team, you will partner with Machine Learning Engineers, Data Engineers, Business Leaders, and other professionals to build GenAI and ML models that improve the advisor experience, perform lead scoring, and increase sales revenue. You will implement AI and machine learning models that deliver stability, scalability, and integration with other advisor products and services. You will implement capabilities to solve sophisticated business problems and deploy innovative products, services, and experiences to delight our customers. In addition to deep technical expertise and experience, you will bring excellent problem-solving, communication, and teamwork skills, along with agile ways of working, strong business insight, an inclusive leadership attitude, and a continuous learning focus to all that you do.

Responsibilities:


  • Provide deep technical leadership to a portfolio of high-impact data science initiatives involving sales and advisor experience. Identify the optimal sets of data, models, training, and testing techniques required for successful product delivery. Remove complex technical impediments.
  • Leverage your experience and skills to identify new opportunities where data science and AI can improve experiences, gain efficiencies, and generate sales.
  • Manage team members in AI/ML and model development, testing, training, and tuning. Apply hands-on experience to ensuring best-in-class model development. Mentor team members in technical skill development and product ownership.
  • Communicate clearly and concisely, in writing and verbally, all facets of model design and development. Continuously look for insights in models developed and generate new ideas for model improvement.
  • Manage external vendors in the execution of parts of the data science development process as needed.
  • Leverage continuous integration and continuous deployment best practices, including test automation and monitoring, to ensure successful deployment of ML models and application code on Prudential's AI/ML platform.
  • Bring a deep understanding of relevant and emerging technologies, give technical direction to team members and embed learning and innovation in the day-to-day.
  • Work on significant and unique issues where analysis of situations or data requires an evaluation of intangible variables and may impact future concepts, products or technologies.
  • Familiarity with Python, SQL, AWS, and JIRA.
  • Familiarity with LLMs, deployment of LLMs, RAG, LangChain, LangGraph, and Agentic AI concepts.
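The RAG concepts listed above can be illustrated with a toy sketch. Retrieval here is plain bag-of-words cosine similarity standing in for a real embedding model and vector store; the documents, scoring, and prompt format are all invented for illustration, not Prudential's actual stack:

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = Counter(query.lower().split())
    ranked = sorted(docs, key=lambda d: cosine(q, Counter(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Ground the model's answer in the retrieved context (the RAG step).
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."

docs = [
    "Lead scoring ranks prospects by their likelihood to convert.",
    "Expense reports must be filed within 30 days.",
]
print(build_prompt("How does lead scoring work?", docs))
```

A production system would swap the cosine step for dense embeddings and a vector database, but the retrieve-then-prompt shape stays the same.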

The Skills and expertise you bring:


  • A degree in Applied Statistics, Computer Science, or Engineering, or experience in related fields with a focus on machine learning, AI, and LLMs.
  • Junior category industry experience with responsibility for developing and delivering advanced quantitative, AI/ML, analytical and statistical solutions.
  • Ability to lead a small team with minimal guidance and effectively leverage diverse ideas, experiences, thoughts and perspectives to the benefit of the organization to deliver AI products.
  • Ability to influence business stakeholders and to drive adoption of AI/ML solutions.
  • Experience with agile development methodologies, Test-Driven Development (TDD), and product management.
  • Knowledge of business concepts, tools, and processes needed for making sound decisions in the context of the company's business.
  • Demonstrated ability to mentor and operationally manage a data science team based on project requirements, resourcing needs, and planning dependencies; anticipates risks and bottlenecks and proactively takes action.
  • Excellent problem solving, communication and collaboration skills, and stakeholder management
  • Significant experience and/or deep expertise with several of the following:
  • Machine Learning and AI: Understanding of machine learning theory, including the mathematics underlying machine learning algorithms. Expertise in the application of machine learning theory to building, training, testing, interpreting and monitoring machine learning models. Expertise in traditional machine learning models (unsupervised, XGBoost, etc.) and Large Language Models (OpenAI, Claude).
  • Model Deployment: Understanding of model development life cycle, CI/CD/CT pipelines (using tools like Jenkins, CloudBees, Harness, etc.), A/B testing, and pipeline frameworks such as AWS SageMaker, and newer AWS/Azure Agentic AI infrastructure products.
  • Data Acquisition and Transformation: Acquiring data from disparate data sources using APIs and SQL. Transform data using SQL and Python. Visualizing data using a diverse tool set including but not limited to Python.
  • Database Management Systems: Knowledge of how databases are structured and function in order to use them efficiently. May include multiple data environments, cloud/AWS, primary and foreign key relationships, table design, database schemas, etc.
  • Data Analysis and Insights: Analyzing structured and unstructured data using data visualization, manipulation, and statistical methods to identify patterns, anomalies, relationships, and trends.
  • Programming Languages: Python and SQL
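As a rough illustration of the data acquisition and transformation pattern described above (SQL for aggregation, Python for reshaping), the sketch below uses an in-memory SQLite database as a stand-in for a real warehouse; the table and field names are hypothetical:

```python
import sqlite3

# In-memory database standing in for a real warehouse (illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (advisor TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("a. smith", 120.0), ("a. smith", 80.0), ("b. jones", 50.0)])

# Acquire and aggregate with SQL...
rows = conn.execute(
    "SELECT advisor, SUM(amount) AS total FROM sales "
    "GROUP BY advisor ORDER BY total DESC"
).fetchall()

# ...then transform in Python (normalize names, tag top performers).
report = [{"advisor": name.title(), "total": total, "top": total >= 100}
          for name, total in rows]
print(report)
```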
Not Specified
Procurement Consultant
✦ New
Salary not disclosed
Sunnyvale, California 14 hours ago

We're Hiring: Ivalua Techno-Functional Lead Consultant

Location: Sunnyvale, CA (Onsite – 3 days/week)

We are looking for an experienced Ivalua Techno-Functional Lead Consultant with strong expertise in procurement systems, SQL, and integrations. The ideal candidate will have deep hands-on experience working with the Ivalua platform, supporting procurement processes, and collaborating with cross-functional teams to deliver scalable solutions.

This role requires a strong understanding of sourcing and procurement workflows, technical integration capabilities, and the ability to work closely with stakeholders and third-party solution providers.

Key Responsibilities

  • Lead and support Ivalua implementation, configuration, and enhancement projects.
  • Write and optimize SQL queries to generate reports, retrieve data, and support integrations.
  • Build and support API integrations and EAI connections with other enterprise applications.
  • Configure Ivalua modules, including workflows, callbacks, and solution design.
  • Provide production support, including troubleshooting priority bugs and fixes.
  • Work with stakeholders to improve sourcing and procurement processes.
  • Participate in project release processes and system improvements.
  • Maintain strong collaboration with clients and third-party solution providers.

Required Skills & Experience

  • 8+ years of hands-on experience with Ivalua in a project role within industry or consulting.
  • L2 (preferred) or L3 certification in any of the following areas from Ivalua: P2P, INT, or SQL.
  • Strong experience working with SQL queries to build reports, retrieve data, and support integrations.
  • Deep understanding of Ivalua solutions and API development.
  • Strong knowledge of sourcing and procurement processes and their business value drivers.
  • Experience with configuration modules including design, workflows, and callback creation.
  • Experience supporting production issues, priority bugs, and fixes.
  • Knowledge of project release processes.
  • Excellent analytical, problem-solving, and communication skills.
  • Experience building strong client relationships and collaborating with third-party solution providers.
Not Specified
Data Engineer - GCP
✦ New
Salary not disclosed
Phoenix, Arizona 14 hours ago

Job Summary

We are seeking a skilled Data Engineer with 5+ years of hands-on experience designing, building, and maintaining scalable data pipelines and data platforms. The ideal candidate has strong experience working with DAG-based orchestration, cloud technologies (preferably Google Cloud Platform), SQL-driven data processing, Apache Spark, and Python-based API development using FastAPI. You will play a key role in enabling reliable data ingestion, transformation, and quality assurance across enterprise systems.

Key Responsibilities

  • Design, develop, and maintain DAG-based data pipelines (Airflow or similar orchestration tools).
  • Build and optimize SQL-based data transformations for analytics and reporting.
  • Develop and manage batch and streaming data pipelines using Apache Spark.
  • Implement Python-based REST APIs using FastAPI for data services and integrations.
  • Perform data quality checks, validation, reconciliation, and anomaly detection.
  • Work with cloud platforms (preferably Google Cloud Platform) for storage, compute, and orchestration.
  • Architect and implement cloud-native data platforms on GCP, leveraging services such as BigQuery, Bigtable, Dataflow, Dataproc, Pub/Sub, and Cloud Storage.
  • Monitor pipeline performance, troubleshoot failures, and optimize processing efficiency.
  • Collaborate with analytics, application, and business teams to understand data requirements.
  • Ensure best practices around security, scalability, and maintainability.
  • Ensure data quality, reliability, security, governance, and compliance with enterprise standards
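A minimal sketch of the data quality checks, validation, and reconciliation the responsibilities above describe; the rules and field names here are invented for illustration and are not a specific framework:

```python
def quality_checks(rows, source_total):
    """Run simple validation and reconciliation checks on a batch of records."""
    issues = []
    for i, row in enumerate(rows):
        if row.get("id") is None:
            issues.append(f"row {i}: missing id")          # completeness check
        if not isinstance(row.get("amount"), (int, float)):
            issues.append(f"row {i}: non-numeric amount")  # validity check
    # Reconciliation: loaded totals must match what the source system reported.
    loaded_total = sum(r["amount"] for r in rows
                       if isinstance(r.get("amount"), (int, float)))
    if abs(loaded_total - source_total) > 1e-9:
        issues.append(f"total mismatch: loaded {loaded_total}, source {source_total}")
    return issues

batch = [{"id": 1, "amount": 10.0},
         {"id": None, "amount": 5.0},
         {"id": 3, "amount": "x"}]
print(quality_checks(batch, source_total=15.0))
```

In a real pipeline, checks like these typically run as a gated step in the DAG, failing the run (or quarantining rows) before bad data reaches downstream consumers.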

Required Skills & Experience

  • 5+ years of experience as a Data Engineer.
  • Strong experience with DAG orchestration (e.g., Apache Airflow).
  • Solid understanding of cloud technologies, preferably Google Cloud Platform (GCP).
  • Advanced proficiency in SQL for data processing and transformations.
  • Hands-on experience running and tuning Apache Spark jobs.
  • Experience developing APIs using Python and FastAPI.
  • Strong understanding of data quality frameworks, checks, and validation techniques.
  • Proficiency in Python, Java, Scala, or PySpark, with strong SQL expertise.
  • Hands-on experience with GCP data services, including BigQuery, Bigtable, Dataproc, Dataflow, and cloud-native ETL patterns.
  • Experience with software delivery methodologies such as Agile, Scrum, and CI/CD practices.
  • Strong analytical and problem-solving skills.
  • Ability to work independently and in cross-functional teams.
  • Good communication and documentation skills.
Not Specified
Databricks Data Engineer
✦ New
Salary not disclosed

**Must be able to be onsite in Farmington, CT 2 days a week for collaboration**

The Opportunity: We are seeking a software engineer/developer or ETL/data integration/big data developer with experience in projects emphasizing data processing and storage. This person will be responsible for supporting the data ingestion, transformation, and distribution to end consumers. Candidate will perform requirements analysis, design/develop process flow, unit and integration tests, and create/update process documentation.

· Work with the Business Intelligence team and operational stakeholders to design and implement both the data presentation layer available to the user community and the underlying technical architecture of the data warehousing environment.

· Develop scalable and reliable data solutions to move data across systems from multiple sources in both real-time and batch modes.

· Design and develop database objects: tables, stored procedures, views, etc.

· Independently analyze, solve, and correct issues in real time, providing end-to-end problem resolution.

· Design and develop ETL processes that transform a variety of raw data, flat files, and Excel spreadsheets into SQL databases.

· Understand the concepts of data marts and data lakes, with experience migrating legacy systems to data marts/lakes.

· Use additional cloud technologies (e.g., understand the concept of cloud services like Azure SQL Server).

· Maintain comprehensive project documentation.

· Aptitude to learn new technologies and the ability to perform continuous research, analysis, and process improvement.

· Strong interpersonal and communication skills; able to work in a team environment including customer and contractor technical staff, end users, and management.

· Manage multiple projects, responsibilities, and competing priorities.

Requirements / Experience Needed:

· Programming languages, frameworks, and file formats such as Python, SQL, PL/SQL, and VB

· Database platforms such as Oracle, SQL Server, MySQL

· Big data concepts and technologies such as Synapse and Databricks

· AWS and Azure cloud computing

· HVR data replication

Not Specified
Data Analyst
Salary not disclosed
Knoxville, TN 2 days ago

Summary:

The Data Analyst is a key member of the IT department, responsible for delivering accurate, actionable insights through data reporting and analysis. This role focuses on developing and maintaining Power BI reports, ensuring data accuracy, and supporting reporting needs using SQL. The position embraces the use of AI-enabled tools to enhance productivity, improve analysis, and support more efficient reporting workflows. The ideal candidate has a strong foundation in information technology, excellent organizational skills, and the ability to work independently while managing priorities in a fast-paced environment.


Key Responsibilities:

· Partner with business users and department stakeholders to gather reporting needs and translate them into clear Power BI reports and dashboards

· Build, maintain, and update Power BI reports using existing datasets and models, ensuring reports are accurate and easy to use

· Write and maintain basic SQL queries against SQL Server to support reporting, validate data, and investigate discrepancies

· Review report outputs with stakeholders to confirm accuracy, explain results, and make iterative improvements

· Assist with light automation using Power Automate (e.g., report notifications, scheduled emails, simple workflows tied to reporting)


Qualifications:

· Associate or Bachelor’s degree in Information Systems, Information Technology, Data Management, Analytics, or a related field

· Hands-on experience with Power BI (academic projects, internships, or 1–2 years professional experience acceptable)

· Working knowledge of SQL and relational databases (SELECT statements, JOINs, filtering, aggregations)

· Strong attention to detail with an emphasis on data accuracy and validation

· Ability to communicate clearly with both technical and non-technical users

· Willingness to learn internal systems, data sources, and reporting standards

· Exposure to Power Automate or other Power Platform tools is a plus, but not required
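The SQL knowledge listed above (SELECT statements, JOINs, filtering, aggregations) can be sketched with a toy example. SQLite stands in for SQL Server here, and the tables and columns are hypothetical:

```python
import sqlite3

# Toy database standing in for a reporting SQL Server instance.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE departments (dept_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (order_id INTEGER, dept_id INTEGER, amount REAL);
    INSERT INTO departments VALUES (1, 'Sales'), (2, 'Support');
    INSERT INTO orders VALUES (10, 1, 250.0), (11, 1, 100.0), (12, 2, 75.0);
""")

# JOIN the tables, filter, and aggregate -- the pattern behind most
# report datasets that feed a Power BI model.
query = """
    SELECT d.name, COUNT(*) AS n_orders, SUM(o.amount) AS total
    FROM orders o
    JOIN departments d ON d.dept_id = o.dept_id
    WHERE o.amount > 50
    GROUP BY d.name
    ORDER BY total DESC
"""
for name, n_orders, total in conn.execute(query):
    print(name, n_orders, total)
```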


Candidates who accept an offer of employment must successfully complete a pre-employment physical examination. This examination is conducted by a certified medical professional to ensure the candidate is physically capable of safely performing the essential duties of the position.


Completion and clearance of the physical are mandatory steps in the hiring process.

Not Specified
Oracle EBS Functional Consultant (Finance modules)
✦ New
Salary not disclosed
Redmond, WA 8 hours ago

Role: Oracle EBS Functional Consultant (Finance modules)

Term: Fulltime-Permanent

Location: Redmond, WA (Onsite)


You will join the Enterprise Applications Finance Systems team, responsible for supporting and enhancing Oracle E-Business Suite (EBS) Finance solutions that power critical financial operations across the organization. The team partners closely with Finance, Accounting, and IT teams to ensure reliable system performance, strong financial controls, and efficient business processes across the Order-to-Cash (O2C) and Procure-to-Pay (P2P) cycles.


Working within a collaborative environment, the team focuses on optimizing Oracle EBS R12 functionality, improving financial reporting accuracy, and enabling smooth period-close operations through well-designed system configurations and integrations.


As an Oracle EBS Functional Consultant, you will serve as a key functional expert supporting and enhancing the Oracle EBS Finance platform. You will work closely with finance stakeholders and technical teams to design, configure, and optimize financial modules to support business operations and reporting requirements.


Key responsibilities include:

  • Provide functional expertise across Oracle EBS Finance modules including General Ledger (GL), Accounts Payable (AP), Accounts Receivable (AR), Fixed Assets (FA), Cash Management (CM), and Subledger Accounting (SLA).
  • Analyze and support core financial processes such as period close, Procure-to-Pay (P2P), and Order-to-Cash (O2C) workflows.
  • Configure Oracle EBS modules, define system parameters, and document solutions through functional design documents such as MD50 and BR100.
  • Partner with technical teams to translate business requirements into system configurations and enhancements.
  • Support System Integration Testing (SIT) and User Acceptance Testing (UAT) activities to ensure solution quality and business readiness.
  • Participate in troubleshooting, data analysis, and issue resolution using SQL and PL/SQL when required.
  • Collaborate with cross-functional teams to support system improvements and ongoing Finance transformation initiatives.


What You’ll Bring

  • 8–10+ years of hands-on experience working with Oracle EBS R12 Finance modules.
  • Deep functional knowledge of GL, AP, AR, FA, CM, and Subledger Accounting (SLA).
  • Strong understanding of financial processes including period close, O2C, and P2P lifecycles.
  • Experience configuring Oracle EBS modules and developing functional design documentation such as MD50 and BR100.
  • Working knowledge of SQL and PL/SQL for data validation, analysis, and troubleshooting.
  • Experience working within SDLC environments, including participation in SIT and UAT testing cycles.
  • Strong communication and stakeholder management skills with the ability to bridge technical teams and finance business users.
  • Proven ability to support complex enterprise financial systems in a fast-paced business environment.
Not Specified
Site Reliability Engineer
✦ New
Salary not disclosed
Austin, TX 1 day ago

Job Title: Site Reliability Engineer (SRE) – DataHub & GraphQL

Location: Austin, TX & Sunnyvale, CA


Looking for candidates with independent visas only.


Role Overview

We are seeking a highly skilled Site Reliability Engineer (SRE) with strong expertise in DataHub ingestion pipelines and GraphQL APIs. The ideal candidate will be responsible for designing, building, and maintaining scalable data ingestion frameworks, ensuring reliability and performance of enterprise data platforms, and enabling seamless integration with downstream applications. This role requires a balance of software engineering, systems reliability, and data platform knowledge.

Key Responsibilities

  • Design, implement, and optimize DataHub ingestion pipelines for large-scale enterprise data systems.
  • Develop and maintain GraphQL APIs to support data discovery, metadata management, and integration.
  • Ensure high availability, scalability, and performance of data services across cloud and on-prem environments.
  • Collaborate with data engineering, product, and infrastructure teams to deliver reliable data solutions.
  • Automate monitoring, alerting, and incident response processes to improve system resilience.
  • Drive best practices in observability, logging, and distributed system reliability.
  • Troubleshoot complex production issues and implement long-term fixes.

Must-Have Skills

  • 5+ years of experience as an SRE, DevOps Engineer, or Software Engineer with a focus on reliability and scalability.
  • Strong hands-on experience with DataHub ingestion frameworks and metadata pipelines.
  • Proficiency in GraphQL API design and implementation.
  • Solid understanding of cloud platforms (AWS, GCP, or Azure) and container orchestration (Kubernetes, Docker).
  • Expertise in monitoring tools (Prometheus, Grafana, ELK, Datadog, etc.).
  • Strong programming skills in Python, Java, or Go.
  • Experience with CI/CD pipelines and infrastructure-as-code (Terraform, Ansible).

Good-to-Have Skills

  • Familiarity with data governance and metadata management tools.
  • Experience integrating with data platforms like Kafka, Spark, or Snowflake.
  • Knowledge of REST APIs and microservices architecture.
  • Exposure to security and compliance practices in data systems.

Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
  • Proven track record of delivering reliable, scalable data infrastructure solutions.
Not Specified
Cassandra Database Engineer/Administrator
✦ New
Salary not disclosed
Austin, TX 8 hours ago

Role: Database Engineer

Duration: 11-12 months

Location: Austin, TX (78759). Hybrid role - in office Mon, Wed, Thurs is a must (no flexibility on these days).


Job Description:


The Cassandra Database Engineer is an expert across NOSQL database technologies, but specifically a specialist on Cassandra database administration.

For this position, NOSQL database expertise is mandatory with a primary focus on Cassandra databases, as well as expertise in Public Cloud technology (AWS and/or GCP).

For this mission, the engineer will primarily be responsible for database operational activities.


Essential Functions / Key Areas of Responsibility

The Database Engineer primary responsibility footprint:

· Database performance analysis and operations review for production database platforms

· Manage database operations activities including incident response, database alert resolution, and managing third party support engagement

· Deploy and maintain database monitoring solutions.

· Test and build database restore and recovery procedures

· Database platform deployment, installation, patching, change management, and third-party software upgrades.

· Responsible for database hardening procedure identification and deployment on public cloud, hosted, and on-premises platforms.

· Responsible for providing database expertise and operations support to the technical support teams and project delivery teams.

· Responsible for participating in database platform review, bench and tuning exercises, security evaluation, provide technical analysis and proactive recommendations for improvements and/or design changes for production platforms


Minimum Requirements: Skills, Experience & Education

· HS diploma with 8+ years' experience in Cassandra administration (NOT architecture or design)

· College degree in Computer Science preferred, with 8-10 years' experience

· NOSQL Database: 8-10 years Cassandra administration

· Extensive background with public cloud database deployment, management and migration.

· Expertise in database concepts, defining standards, processes, and procedures in database deployment methodologies

· Expert in operations of high-profile production database platforms with high SLAs and high performance expectations

· High level of experience managing change on production database platforms across hosted, on-premises, and cloud environments

· Expert in deploying high availability database architectures

· Proactive, team player, and leadership qualities with strong technical background

· Excellent verbal and written communication skills


Preferred Qualifications

· Highly skilled in Cassandra database administration

· DataStax enterprise Cassandra administration a plus

· Strong production operations and troubleshooting skills

· Linux operating system background

· Skilled in Public Cloud deployment methods/tools (Gitlab, Terraform, Datadog)

· Knowledge of Kubernetes and Docker.

· Database performance evaluation and platform bench participation


Special Position Requirements:

Candidate will need to be able to multitask and quickly switch, if needed, to work on emergency incidents on production platforms. The position requires the ability to manage tight deadlines, maintain visibility on project delivery goals, and communicate effectively with project teams and management. The candidate must be able to thrive in a fast-paced work environment.

  1. Looking for a candidate who is currently maintaining Cassandra clusters (avoid those who last worked with them years ago)
  2. How many clusters are maintained today?
  3. How many nodes?
  4. What Cassandra version are they?
  5. How many years have you worked on Cassandra (ideally 5+)?
  6. Candidate has operations experience and can speak to challenges in their environment today
  7. Manages patching/upgrades
  8. Is called upon in a crisis to manage
  9. Delivers new environments
  10. Performance tuning experience with Cassandra
  11. Familiar with backup and recovery
  12. Familiar with monitoring Cassandra (Prometheus or Datadog a plus)
  13. Is the go-to for other teams on Cassandra database topics
  14. Candidate is adaptable to working in a fast-paced environment; context switching is normal
  15. Candidate is OK being in stressful/challenging situations: outages, crisis teams, war rooms
Not Specified
Principal Software Engineering Lead (AI)
✦ New
Salary not disclosed
Chicago, IL 4 hours ago

Be a part of our success story. Launch offers talented and motivated people the opportunity to do the best work of their lives in a dynamic and growing company. Through competitive salaries, outstanding benefits, internal advancement opportunities, and recognized community involvement, you will have the chance to create a career you can be proud of. Your new trajectory starts here at Launch!


The Role:

Launch is actively seeking a visionary Solutions Architect / Principal Software Engineering Lead (AI) to design and deliver modern engineering and applied AI solutions across client engagements. This role blends deep hands-on engineering, architectural leadership, AI system design, and client advisory. You will operate across system design, production-grade engineering, multi-agent architectures, cloud platform strategy, and the development of Launch's AI practice.



Responsibilities Include:


Architecture & Technical Strategy

  • Define the technical direction for client engagements end-to-end: discovery, design, build, and production hardening.
  • Assess client technology ecosystems and identify high-impact opportunities for AI/ML integration.
  • Lead architecture reviews, design sessions, and technology selection across cross-functional stakeholder groups.
  • Translate ambiguous business problems into concrete engineering plans with clear scope, milestones, and risk callouts.


AI Engineering & Delivery

  • Architect production agentic systems including multi-agent orchestration, agent harnesses, skill/tool composition, human-in-the-loop checkpoints, and inter-agent communication protocols (e.g., A2A, MCP).
  • Build and govern MCP server ecosystems: design, deploy, and secure Model Context Protocol integrations connecting AI agents to enterprise data sources, internal APIs, and third-party platforms.
  • Define agent skill and capability frameworks including reusable skill libraries, prompt engineering standards, and evaluation harnesses for consistent agent behavior across engagements.
  • Architect RAG pipelines, fine-tuning workflows, and model lifecycle infrastructure (training, serving, experiment tracking) as foundational components of agentic systems.
  • Integrate AI platforms and APIs (Azure OpenAI, Amazon Bedrock, Anthropic, Vertex AI) into production systems with enterprise-grade reliability, cost governance, and observability.
  • Establish AI-native development practices: embed tools such as Claude Code, Cursor, and GitHub Copilot into team workflows with standards for AI-assisted code review, test generation, and documentation.
  • Design evaluation and observability infrastructure including LLM eval frameworks, red-teaming, behavioral drift detection, and production monitoring across tool call chains, latency, and failure modes.
  • Apply responsible AI governance: define guardrails, access controls, and audit patterns for agentic workflows in enterprise environments including scope containment and escalation paths.


Hands-On Engineering

  • Write production code and lead by example — this role requires someone who is still close to the code.
  • Design cloud-native architectures across multiple hyperscalers (AWS and Azure primarily): microservices, event-driven systems, serverless, and containerized workloads.
  • Define and implement infrastructure-as-code using tools such as Terraform, Pulumi, CloudFormation, or Bicep.
  • Design and optimize CI/CD pipelines, GitOps workflows, and container orchestration using Docker and Kubernetes.
  • Establish observability and reliability practices using tools such as Datadog, Prometheus, Grafana, CloudWatch, or Azure Monitor.
  • Drive security-by-design across the delivery lifecycle including IAM, network architecture, secrets management, and compliance automation.


Leadership & Client Advisory

  • Lead engineering teams ranging from small squads to 10+ person delivery teams, scaling leadership approach to the needs of each engagement.
  • Mentor and develop engineers at all levels through code reviews, pairing, and design coaching.
  • Operate as a trusted advisor to client technical leadership and executive stakeholders. Communicate trade-offs clearly and build confidence.
  • Influence without direct authority — driving alignment across cross-functional teams through technical credibility and stakeholder management.
  • Lead discovery and requirements elicitation, surfacing the underlying business need beyond the stated request.
  • Produce clear written artifacts: technical proposals, architecture decision records, SOWs, and executive-level status communication.
  • Grow client relationships and identify follow-on opportunities through proposal contributions and delivery-driven account expansion.
  • Contribute to Launch's growth — practice development, thought leadership, and hiring.


Qualifications:


Must-Haves:

  • Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
  • 10+ years in software engineering with demonstrated experience in architecture and technical leadership roles.
  • 3+ years hands-on with AI/ML in production. Broad fluency across generative AI (LLMs, RAG, fine-tuning, agents), MLOps (model serving, pipelines, experiment tracking), and AI-integrated product development.
  • Consulting or client-facing delivery experience with a proven ability to integrate into client organizations and establish credibility with technical and executive stakeholders.
  • Full-stack engineering capability across frontend, backend, infrastructure, and data layers. Proficiency in multiple modern languages (e.g., Python, TypeScript/Node.js, C#/.NET, Java, or Go) with the ability to move between them as engagements require.
  • Multi-hyperscaler depth across AWS and Azure, including their respective AI/ML service ecosystems (Bedrock, SageMaker, Azure OpenAI, Azure ML). GCP experience is a plus.
  • Strong fundamentals in distributed systems, event-driven architecture, API design, and DevOps/platform engineering.
  • Experience leading engineering teams in agile delivery environments.
  • Business acumen with the ability to connect architecture decisions to cost, timeline, and organizational impact.
  • Executive presence and communication skills effective with both technical and non-technical audiences.
  • Proven ability to operate in ambiguous environments and adapt to diverse client cultures.


Strong Differentiators

  • Experience contributing to the development of AI engineering practices, reusable frameworks, or internal accelerators within a consulting or enterprise environment.
  • Experience advising C-suite or VP-level stakeholders on AI strategy, investment prioritization, and organizational readiness.
  • Depth with agentic AI frameworks (LangChain, LangGraph, LangSmith, LlamaIndex, Semantic Kernel, CrewAI) and emerging standards like MCP (Model Context Protocol).
  • Experience with enterprise data platforms (Databricks, Snowflake, BigQuery) in the context of AI/ML workloads.
  • Cloud architecture certifications across AWS and Azure (AWS SA Professional, Azure Solutions Architect Expert).
  • Published writing, open-source contributions, or conference speaking that demonstrates thought leadership in AI or software architecture.
  • Domain depth in industries such as healthcare, financial services, retail, or public sector.



Compensation & Benefits:

As an employee at Launch, you will grow your skills and experience through a variety of exciting project work (across industries and technologies) with some of the top companies in the world! Our employees receive full benefits—medical, dental, vision, short-term disability, long-term disability, life insurance, and matched 401k. We also have an uncapped, take-what-you-need PTO policy. The anticipated base wage range for this role is $190,000 to $230,000. Education and experience will be highly considered, and we are happy to discuss your wage expectations in more detail throughout our internal interview process.

Not Specified
Site Reliability Engineer II
🏢 Spectraforce Technologies
Salary not disclosed
Alpharetta, GA 3 days ago
Title: Site Reliability Engineer II

Location: Alpharetta, GA (3 days a week onsite)

Duration: 6 months


Job Description:

We are seeking a skilled Site Reliability Engineer to join our team and help build, maintain, and scale our cloud-native infrastructure. You will work closely with development and operations teams to ensure our systems are reliable, scalable, and efficient. The ideal candidate is passionate about automation, observability, and infrastructure-as-code, and thrives in a collaborative, fast-paced environment.

Key Responsibilities



  • Design, implement, and manage cloud infrastructure on Azure using Terraform and Terragrunt.
  • Maintain and optimize Kubernetes clusters on Azure Kubernetes Service (AKS).
  • Build and manage CI/CD pipelines using GitHub Actions/Workflows and ArgoCD for GitOps deployments.
  • Enhance system reliability by implementing monitoring, alerting, and observability solutions with Grafana.
  • Automate operational tasks to reduce toil and improve team efficiency.
  • Participate in on-call rotations, incident response, and post-mortem analysis.
  • Collaborate with development teams to improve application performance, scalability, and resilience.
  • Implement and advocate for SRE best practices, including SLIs, SLOs, and error budgets.
  • Continuously improve system performance, cost efficiency, and security.
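The SLI/SLO/error-budget practice mentioned in the responsibilities above reduces to simple arithmetic. A hedged sketch, assuming a request-based availability SLO (the numbers are illustrative):

```python
def error_budget(slo: float, total_requests: int, failed_requests: int) -> dict:
    """Compute remaining error budget for an availability SLO over a window."""
    allowed_failures = total_requests * (1 - slo)  # total budget for the window
    remaining = allowed_failures - failed_requests
    return {
        "availability": 1 - failed_requests / total_requests,  # the SLI
        "allowed_failures": allowed_failures,
        "budget_remaining_pct": 100 * remaining / allowed_failures,
    }

# A 99.9% SLO over 1,000,000 requests permits roughly 1,000 failures;
# 400 failures so far leaves about 60% of the budget.
print(error_budget(slo=0.999, total_requests=1_000_000, failed_requests=400))
```

Teams typically alert on the budget burn rate rather than raw failures, so a fast-burning incident pages immediately while slow background errors surface in review.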



Required Skills & Qualifications



  • 3+ years of experience in an SRE, DevOps, or cloud infrastructure role.
  • Strong experience with Azure cloud services and infrastructure.
  • Hands-on experience with Java, Terraform, and Terragrunt for infrastructure-as-code.
  • Proficiency with Kubernetes (preferably AKS) and container orchestration.
  • Experience with CI/CD tools, especially GitHub Workflows/Actions and ArgoCD.
  • Solid understanding of observability tools like Grafana (Prometheus, Loki, Tempo experience is a plus).

Education Requirements: Bachelor's degree required (Master's preferred)

Not Specified
jobs by JobLookup