The Iowa Farm Bureau Federation (IFBF) is Iowa's largest farm organization, established in 1918.
We remain a statewide, non-profit, grassroots farm organization dedicated to creating a vibrant future for agriculture, farm families, and rural communities.
The Information Resources department is responsible for creating systems to manage memberships and support the ongoing business of Iowa Farm Bureau.
Key Responsibilities:
UI/UX Design & Development: Design and implement modern, visually appealing user interfaces using Angular.
Ensure adherence to UI/UX best practices, including color theory, typography, and layout design.
Work closely with designers to translate wireframes and prototypes into functional front-end code.
Front-End Development: Develop scalable and maintainable front-end applications using Angular, TypeScript, HTML, and CSS.
Implement responsive design to ensure cross-platform and cross-device compatibility.
Optimize performance by employing the best coding practices, lazy loading, and caching techniques.
Backend Development Support (.NET): Collaborate with backend developers to integrate APIs and ensure seamless data flow.
Work with C# and .NET for minor backend modifications and API enhancements.
Assist in debugging and troubleshooting front-end and backend interactions.
Code Quality & Testing: Write clean, maintainable, and well-documented code following best practices.
Conduct unit testing using frameworks like Jasmine/Karma to ensure code stability.
Perform cross-browser and accessibility testing to meet WCAG compliance.
Collaboration & Continuous Learning: Work with cross-functional teams, including UX designers, product managers, and backend engineers.
Stay up to date with the latest Angular updates, UI trends, and best practices.
What It Takes to Join Our Team:
Required Skills & Experience: Expertise in Angular (components, modules, services, routing, RxJS).
State Management: Experience with Redux or NgRx for efficient state handling.
Build Tools: Knowledge of Webpack, Gulp, or other bundling tools.
Strong knowledge of HTML, CSS, JavaScript, and TypeScript.
UI/UX Design Principles: Experience with design tools and usability best practices.
Responsive Web Development: Ability to create adaptive and mobile-friendly applications.
API Integration: Experience working with RESTful APIs and handling authentication.
Version Control: Proficiency in Git and collaborative workflows.
Testing Frameworks: Familiarity with Jasmine/Karma for unit testing.
Desired Skills (Nice to Have):
Backend Development: Familiarity with C#/.NET, basic API development, and SQL.
Accessibility Standards: Understanding of WCAG and ARIA for accessible web development.
Azure Experience: Familiarity with Azure DevOps, CI/CD pipelines, and cloud deployment.
About Pinterest:
Millions of people around the world come to our platform to find creative ideas, dream about new possibilities and plan for memories that will last a lifetime. At Pinterest, we're on a mission to bring everyone the inspiration to create a life they love, and that starts with the people behind the product.
Discover a career where you ignite innovation for millions, transform passion into growth opportunities, celebrate each other's unique experiences and embrace the flexibility to do your best work. Creating a career you love? It's Possible.
At Pinterest, AI isn't just a feature, it's a powerful partner that augments our creativity and amplifies our impact, and we're looking for candidates who are excited to be a part of that. To get a complete picture of your experience and abilities, we'll explore your foundational skills and how you collaborate with AI.
Through our interview process, what matters most is that you can always explain your approach, showing us not just what you know, but how you think. You can read more about our AI interview philosophy and how we use AI in our recruiting process here.
About tvScientific
tvScientific is the first and only CTV advertising platform purpose-built for performance marketers. We leverage massive data and cutting-edge science to automate and optimize TV advertising to drive business outcomes. Our solution combines media buying, optimization, measurement, and attribution in one, efficient platform. Our platform is built by industry leaders with a long history in programmatic advertising, digital media, and ad verification who have now purpose-built a CTV performance platform advertisers can trust to grow their business.
As a Staff Data Engineer at tvScientific, you will be a key player in implementing the robust data infrastructure to power our data-heavy company. You will collaborate with our cross-functional teams to evolve our core data pipelines, design for efficiency as we scale, and store data in optimal engines and formats. This is an individual contributor role, where you will work to define and implement a strategic vision for data engineering within the organization.
What you'll do:
- Design and implement robust data infrastructure in AWS, using Spark with Scala
- Evolve our core data pipelines to efficiently scale for our massive growth
- Store data in optimal engines and formats, matching your designs to our performance needs and cost factors (see the sketch after this list)
- Collaborate with our cross-functional teams to design data solutions that meet business needs
- Design and implement knowledge graphs, exposing their functionality both via Batch Processing and APIs
- Leverage and optimize AWS resources while designing for scale
- Collaborate closely with our Data Science and Product teams
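As a rough illustration of the "store data in optimal engines and formats" bullet above: a minimal PySpark sketch (the posting calls for Spark with Scala; Python is used here only for brevity) that rolls raw events up to a daily grain and writes date-partitioned Parquet. The bucket, table, and column names are invented for illustration.

```python
# Hypothetical rollup job: names and paths are placeholders, not from the posting.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("impressions-daily-rollup").getOrCreate()

# Read raw impression events (location is an assumption for illustration).
events = spark.read.parquet("s3://example-bucket/raw/impressions/")

# Aggregate to the grain downstream consumers query most often.
daily = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "campaign_id")
    .agg(F.count("*").alias("impressions"))
)

# Partitioning by date keeps scans cheap and simplifies retention.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/impressions_daily/"
)
```

Matching the partitioning scheme and file format to how the data is actually queried is usually the cheapest lever on both scan cost and latency, which is the trade-off that bullet is pointing at.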
How we'll define success:
- Successful design and implementation of scalable and efficient data infrastructure
- Timely delivery and optimization of data assets and APIs
- High attention to detail in implementation of automated data quality checks
- Effective collaboration with cross-functional teams
What we're looking for:
- Production data engineering experience
- Proficiency in Spark and Scala, with proven experience building data infrastructure in Spark using Scala
- Experience in delivering significant technical initiatives and building reliable, large scale services
- Experience in delivering APIs backed by relationship-heavy datasets
- Familiarity with data lakes, cloud warehouses, and storage formats
- Strong proficiency in AWS services
- Expertise in SQL for data manipulation and extraction
- Excellent written and verbal communication skills
- Bachelor's degree in Computer Science or a related field
Nice-to-haves:
- Experience in adtech
- Experience implementing data governance practices, including data quality, metadata management, and access controls
- Strong understanding of privacy-by-design principles and handling of sensitive or regulated data
- Familiarity with data table formats like Apache Iceberg, Delta
- Previous experience building out a Data Engineering function
- Proven experience working closely with Data Science teams on machine learning pipelines
In-Office Requirement Statement:
- We recognize that the ideal environment for work is situational and may differ across departments. What this looks like day-to-day can vary based on the needs of each organization or role.
Relocation Statement:
- This position is not eligible for relocation assistance. Visit our PinFlex page to learn more about our working model.
At Pinterest we believe the workplace should be equitable, inclusive, and inspiring for every employee. In an effort to provide greater transparency, we are sharing the base salary range for this position. The position is also eligible for equity. Final salary is based on a number of factors including location, travel, relevant prior experience, or particular skills and expertise.
Information regarding the culture at Pinterest and benefits available for this position can be found here.
US based applicants only: $155,584—$320,320 USD
Our Commitment to Inclusion:
Pinterest is an equal opportunity employer and makes employment decisions on the basis of merit. We want to have the best qualified people in every job. All qualified applicants will receive consideration for employment without regard to race, color, ancestry, national origin, religion or religious creed, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, age, marital status, status as a protected veteran, physical or mental disability, medical condition, genetic information or characteristics (or those of a family member) or any other consideration made unlawful by applicable federal, state or local laws. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. If you require a medical or religious accommodation during the job application process, please complete this form for support.
HCLTech is looking for a highly talented and self-motivated Sr Java Architect to join it in advancing the technological world through innovation and creativity.
Job Title: Sr Java Architect role
Req ID: 73747
Position Type: Full-time
Location: Raleigh, NC
Onsite: Onsite
Experience level:
15+ yrs
Job Description:
Must have skills:
Extensive experience in Java-based enterprise application architecture
Strong expertise in API design and enterprise integrations
Strong experience in microservices-based development and domain-driven design
Proven experience architecting solutions on AWS
Experience with workflow orchestration and event-driven architectures
Preferred skills:
Experience with Drools or rules-based decisioning engines
Exposure to insurance or underwriting platforms
Detailed Job Description:
The Technical Architect is accountable for defining and governing the end-to-end technical architecture for the project. The role ensures a scalable, secure, API-driven, and cloud-native solution aligned with the client's workflows, enterprise integration standards, and AWS cloud strategy.
Key Responsibilities
Define and validate the overall solution architecture across Java services, APIs, workflow orchestration, and rules-based decisioning.
Design and develop system architecture, technology architecture, integration architecture, and deployment architecture.
Design and govern API-first integration patterns (sync/async) with administration systems, portals, CRM, external KYC vendors, and downstream platforms.
Provide architectural guidance for rules engine integration, including Drools-based or equivalent decisioning frameworks, ensuring proper separation of orchestration and rules logic.
Define AWS cloud architecture leveraging API Gateway, workflow orchestration, messaging, security, monitoring, and logging services.
Oversee detailed solution design and guide development teams, ensuring adherence to coding standards, architectural principles, and enterprise security requirements.
Own nonfunctional architecture considerations including performance, scalability, resiliency, security, auditability, and regulatory compliance.
Identify technical risks early and drive mitigation strategies to ensure smooth transition from POC to production-ready implementation.
Pay and Benefits
Pay Range Minimum: $135,000 per year
Pay Range Maximum: $158,000 per year
HCLTech is an equal opportunity employer, committed to providing equal employment opportunities to all applicants and employees regardless of race, religion, sex, color, age, national origin, pregnancy, sexual orientation, physical disability or genetic information, military or veteran status, or any other protected classification, in accordance with federal, state, and/or local law. Should any applicant have concerns about discrimination in the hiring process, they should provide a detailed report of those concerns to for investigation.
Compensation and Benefits
A candidate’s pay within the range will depend on their work location, skills, experience, education, and other factors permitted by law. This role may also be eligible for performance-based bonuses subject to company policies. In addition, this role is eligible for the following benefits subject to company policies: medical, dental, vision, pharmacy, life, accidental death & dismemberment, and disability insurance; employee assistance program; 401(k) retirement plan; 10 days of paid time off per year (some positions are eligible for need-based leave with no designated number of leave days per year); and 10 paid holidays per year.
How You’ll Grow
At HCLTech, we offer continuous opportunities for you to find your spark and grow with us. We want you to be happy and satisfied with your role and to really learn what type of work sparks your brilliance the best. Throughout your time with us, we offer transparent communication with senior-level employees, learning and career development programs at every level, and opportunities to experiment in different roles or even pivot industries. We believe that you should be in control of your career with unlimited opportunities to find the role that fits you best.
Role: Gemini Enterprise SME
Location: Remote
Position Type: Contract
Role Summary
- Seeking a Gemini Enterprise Experience Engineer to design, build, and operationalize enterprise-grade Gemini-powered solutions on Google Cloud Platform (GCP).
- This role focuses on Gemini APIs, Vertex AI, and agentic AI frameworks to deliver secure, scalable, and production‑ready AI experiences for enterprise users.
Key Responsibilities
- Design and implement Gemini Enterprise solutions using Gemini APIs and Vertex AI (see the sketch after this list)
- Build and deploy agentic AI workflows using Agent Builder, ADK, and LangGraph-style orchestration
- Integrate Gemini with enterprise data sources, APIs, and business systems
- Productionize AI experiences on GCP with strong focus on security, governance, and observability
- Collaborate with engineering and customer teams to translate business needs into scalable AI experiences
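As a rough sketch of what the first responsibility above can look like at the code level, here is a minimal Gemini call through the Vertex AI Python SDK. The project ID, region, model name, and prompt are placeholders, and the exact SDK surface may vary by version, so treat this as an assumption-laden illustration rather than a prescribed pattern.

```python
# Minimal sketch, assuming the Vertex AI Python SDK (google-cloud-aiplatform).
# Project, location, model name, and prompt are placeholders.
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="my-gcp-project", location="us-central1")

model = GenerativeModel("gemini-1.5-pro")

# In an enterprise setup, retrieved/grounded context from approved data
# sources would be spliced into the prompt here.
response = model.generate_content(
    "Summarize the key obligations in the attached supplier contract."
)
print(response.text)
```

A production version would add grounding against enterprise data sources, service-account authentication, and the security, governance, and observability controls the posting emphasizes.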
Required Skills
- Hands‑on experience with Gemini Enterprise / Gemini APIs
- Strong experience with GCP, especially Vertex AI
- Proficiency in Python and API‑based AI integration
- Experience building enterprise‑grade GenAI applications
Nice to Have
- Experience with Agent Builder, Agent Development Kit (ADK), or Agent Engine
- Familiarity with RAG patterns, structured outputs, and tool‑calling
- Experience with secure or privacy‑sensitive enterprise data
- Exposure to CI/CD and cloud‑native deployment on GCP
Experience
- 5+ years in cloud, AI/ML, or platform engineering
- Prior experience delivering enterprise AI solutions on Google Cloud preferred
Best Regards,
Bismillah Arzoo (AB)
Turnkey is a key management API optimized for security, flexibility, and ease-of-use. Founded by the leaders who scaled Coinbase Custody from 0 to a $100M+ ARR business, Turnkey is rewriting the rules of crypto infrastructure by tackling core security at its foundational level. We are building a platform that empowers developers to unlock the next wave of mass-market crypto applications, solving the challenge of secure, flexible key management.
Much like AWS transformed computing with the advent of cloud, Turnkey is the trustless, transparent, and decentralized infrastructure that will transform development in crypto.
Your Role
As an early product hire at Turnkey, you will play a pivotal role in the evolution of our existing product and our entry into new product verticals, including:
- Establish 0 to 1 vision for new product verticals
- Evolve our existing developer-first product offering across APIs, SDKs, and our developer dashboard
- Work closely with customers to understand their goals, criteria, and future plans
- Work hand-in-hand with our engineering and leadership team to execute on our product vision
- 5+ years of relevant product experience; crypto and dev tooling experience a plus
- Comfort with technical challenges, and an opinionated view on the evolution of key management, wallets, and crypto UX
- Ability to stretch outside of the traditional product role to do whatever it takes to ensure our product is successful
- Experience driving both individual work and managing others
- Direct and open written and verbal communication
- Willingness to challenge the status quo and preconceived notions of what's possible
- People who think that Web3 / cryptocurrency has the potential to radically change the world for the better and a sincere desire to help facilitate that change
- A self-proclaimed crypto degen who actively tracks developments in the crypto ecosystem
- Prior entrepreneurial experience
- Full benefits, including medical, dental, vision, life, disability, HSA/FSA, 401(k) - detailed benefits overview available as we get further in the process
- Paid parental leave
- Unlimited PTO (and we will force you to take time off!)
- $3,000/yr learning and development budget to attend industry conferences
- Multiple team offsites per year
- Macbook Pro laptop
- Lunch stipend (for those physically in the New York City office)
Please note that while the team is remote, we are only considering candidates who are physically based in the United States and Canada with a strong preference for those who are able to work onsite in our New York City HQ.
Turnkey is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, protected veteran status, or any other characteristic protected by law. We encourage individuals of all backgrounds to apply.
Onsite, 12-month contract. Considering only local candidates who can interview onsite.
We are seeking an Enterprise Agentic Platform Specialist to lead the design, development, and delivery of enterprise-scale Data Science and Generative AI (GenAI) solutions.
This role will drive the implementation of AI agents, LLM orchestration frameworks, and enterprise automation pipelines, working cross-functionally with business stakeholders, data engineers, data scientists, DevOps teams, and UI developers.
The ideal candidate will combine hands-on GenAI engineering expertise with strong program delivery capabilities, ensuring solutions deliver measurable business outcomes while meeting enterprise standards for governance, security, and Responsible AI.
Key Responsibilities
AI Agent and GenAI Development
Lead the end-to-end delivery of enterprise data science and GenAI solutions.
Design, develop, and deploy AI agents using Microsoft Copilot Studio, Claude agent frameworks, and enterprise LLM orchestration patterns.
Implement prompt engineering strategies, grounding techniques, and Retrieval-Augmented Generation (RAG) pipelines.
Architecture and Integration
Define architecture standards for agentic systems, including tool calling schemas, prompt frameworks, grounding flows, and RAG pipelines.
Translate complex workflows into modular, automated, event-driven pipelines.
Integrate AI solutions with enterprise systems via REST APIs, Power Platform connectors, and enterprise data services.
Connect Copilot Studio agents to enterprise data sources such as SharePoint, Dataverse, SQL, and SAP.
Enterprise Platform Integration
Oversee system integration across enterprise platforms including ServiceNow, SharePoint, Microsoft Teams, Power Automate, and Azure APIs.
Design MCP-based agent architectures, integration layers, authentication flows (OAuth / Microsoft Entra), and system messaging frameworks.
Governance, Security and Responsible AI
Collaborate with AI architects, MLOps teams, and security teams to enforce Responsible AI standards, data governance policies, security and access control frameworks, and model safety guidelines.
Implement agent observability frameworks including logging, telemetry instrumentation, latency metrics, error tracking, and automated remediation workflows.
Delivery and Program Management
Lead cross-functional teams delivering AI solutions across data engineering, data science, DevOps, and UX teams.
Manage delivery using Agile / Scrum or hybrid PM methodologies.
Track dependencies, risks, sprint alignment, and release orchestration.
Metrics and Performance Monitoring
Define KPIs and operational dashboards for AI automation, including cycle time reduction, accuracy improvements, governance compliance, and agent uptime and reliability.
Required Qualifications
Hands-on experience delivering enterprise Data Science and GenAI solutions.
Experience designing and deploying AI agents using Microsoft Copilot Studio or similar agent frameworks.
Strong knowledge of LLMs, prompt engineering, and Retrieval-Augmented Generation (RAG).
Experience integrating AI solutions with enterprise platforms and APIs.
Understanding of MLOps, governance frameworks, and Responsible AI standards.
Experience working in Agile delivery environments with cross-functional teams.
Preferred Qualifications
Hands-on expertise building agents in Microsoft Copilot Studio.
Familiarity with agent frameworks such as LangChain, AutoGen, CrewAI, and the OpenAI Assistants / Functions APIs.
Experience implementing enterprise AI observability and monitoring frameworks.
Strong understanding of enterprise security, authentication, and governance models.
Thanks,
Sri Vardhan Chilakamukku
Infobahn SoftWorld Inc.
Generative AI Developer
Location: Dallas, TX / Tampa, FL / New Jersey (Hybrid)
Position Type: Full-time/FTE
Salary: Market
Client: Bank
Role Overview
We are seeking an experienced Senior Generative AI Developer to design and implement cutting-edge AI solutions leveraging Retrieval-Augmented Generation (RAG) techniques.
The ideal candidate will have strong expertise in Python programming, FastAPI, and cloud platforms (AWS, Azure, or GCP).
This role requires a deep understanding of system architecture design, scalable APIs, and end-to-end AI solution development.
Key Responsibilities
Architect and develop Generative AI applications using RAG frameworks for enterprise-scale solutions.
Design and implement robust system architectures for AI-driven platforms ensuring scalability, security, and performance.
Build and optimize APIs using FastAPI for seamless integration with AI models and data pipelines (see the sketch after this list).
Collaborate with cross-functional teams to integrate AI solutions into existing systems and workflows.
Implement data ingestion, preprocessing, and retrieval mechanisms for large-scale knowledge bases.
Ensure compliance with best practices for cloud deployment (AWS, Azure, or GCP).
Conduct performance tuning and optimization of AI models and APIs.
Stay updated with the latest advancements in Generative AI, LLMs, and RAG methodologies.
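To make the FastAPI and RAG responsibilities above concrete, here is a minimal, hypothetical sketch of a retrieval-augmented query endpoint. The FastAPI and pydantic usage is standard; the retriever and answer generator are stubs standing in for a vector-database lookup and an LLM call, and every name here is invented rather than taken from the client's stack.

```python
# Hypothetical RAG endpoint sketch: retrieve() and generate_answer() are stubs.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class Query(BaseModel):
    question: str
    top_k: int = 5


def retrieve(question: str, top_k: int) -> list[str]:
    # Stub: a real implementation would embed the question and search a
    # vector database for the most similar passages.
    return ["<retrieved passage 1>", "<retrieved passage 2>"][:top_k]


def generate_answer(question: str, context: list[str]) -> str:
    # Stub: a real implementation would call an LLM with the question plus
    # the retrieved context spliced into the prompt.
    return f"Answer to {question!r} grounded in {len(context)} passages."


@app.post("/rag/query")
def rag_query(query: Query) -> dict:
    passages = retrieve(query.question, query.top_k)
    return {"answer": generate_answer(query.question, passages), "sources": passages}
```

In a real deployment the stubs would be replaced with calls to an embedding model, a vector store, and the chosen LLM provider, with authentication and rate limiting in front of the endpoint.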
Required Skills & Qualifications
8+ years of professional experience in software development and system design.
Strong proficiency in Python and experience with FastAPI for API development.
Hands-on experience with Generative AI frameworks and RAG architectures.
Solid understanding of system and architecture design principles for distributed applications.
Experience deploying solutions on any major cloud platform (AWS, Azure, GCP).
Familiarity with vector databases, embedding models, and retrieval pipelines.
Strong problem-solving skills and ability to work in a fast-paced environment.
Preferred Qualifications
Experience with LLM fine-tuning, prompt engineering, and model evaluation.
Knowledge of containerization (Docker) and orchestration (Kubernetes).
Exposure to CI/CD pipelines and DevOps practices.
Email:
Location: Remote
Duration: 8+ months
Marketplace Platform Lead
Job Overview
The Marketplace Platform Lead is responsible for driving the end-to-end technical architecture and implementation of the enterprise Data Marketplace platform. This role spans stakeholder engagement, architectural definition, integration design, and hands-on leadership throughout implementation. The ideal candidate is a seasoned technical leader with deep experience designing integration patterns, building scalable platforms, and guiding engineering teams through complex cross-system solutions.
Key Responsibilities
Lead stakeholder meetings to gather business requirements, align on platform objectives, and clarify workflows and user journeys.
Conduct tool evaluations, build scoring frameworks, and make recommendations on platforms, vendors, and integration technologies.
Define end-to-end Marketplace architecture, including data flows, APIs, domain models, integration strategies, and platform components.
Design and lead the implementation of integration patterns, including API-based integrations, event-driven patterns, workflow orchestration, and cross-system interoperability.
Develop technical designs, architectural documents, and standards for Marketplace workflows, user flows, and extensibility patterns.
Provide hands-on architectural guidance to engineering teams throughout solution design, development, and delivery.
Oversee technical quality, scalability, performance, and security across Marketplace components and integrations.
Collaborate with product, engineering, data, and security teams to ensure compliance with enterprise data governance, privacy, and reliability standards.
Lead technical reviews, drive design decisions, and ensure alignment across cross-functional stakeholders.
Required Skills & Qualifications
8+ years of experience in software engineering, platform development, or technical architecture roles.
Strong expertise in designing and implementing integration architectures, including REST/GraphQL APIs, event-driven patterns, synchronous/asynchronous messaging, and workflow engines.
Deep understanding of distributed systems, microservices, and cloud-native solutions (Azure, AWS, or GCP).
Proficiency with API design, messaging systems, and enterprise integration frameworks.
Experience defining technical architecture, data flows, and workflow designs for complex platforms.
Ability to translate business requirements into technical designs, user flows, and actionable engineering plans.
Demonstrated leadership in guiding engineering teams through architectural decisions and implementation.
Strong communication skills with the ability to influence technical and non-technical partners.
Experience evaluating and scoring platforms, tools, or vendor solutions.
Solid knowledge of DevOps practices, CI/CD, infrastructure-as-code, observability, and security best practices.
Preferred Qualifications
Experience building or leading a Data Marketplace platform.
Familiarity with workflow orchestration platforms, rules engines, BPM tools, or catalog management systems.
Experience with enterprise identity systems (OAuth, SAML, SSO), access governance, and data privacy frameworks.
Background working with enterprise data platforms, data governance, or cross-domain integration patterns.
Prior experience leading architectural governance or serving as a platform architect in an enterprise environment.
Length of Contract: 6 months
Location: Remote (Eastern time zone)
What are the top 3-5 skills, experience or education required for this position:
a. Proficiency in databases (SQL) and coding in R/Python
b. Experience with API development
c. Familiarity with AI techniques and strong curiosity for new technologies
d. Experience managing and curating bioinformatics datasets (BulkRNAseq, Proteomics, scRNAseq, CRISPR)
e. Code management, documentation, and version control (e.g., GitHub)
Job Overview: As a Data Analyst, you'll drive data quality and consistency in our central hub for storing OMICS data, address impactful data loading and curation projects and help improve and automate processes using agentic AI. Working closely with researchers, you'll ensure their data needs are met and help accelerate scientific discovery.
Key Responsibilities:
- Contribute to important data loading and curation projects for the department's Omics data server
- Address data quality and consistency issues in the CRISPR database.
- Apply agentic AI approaches for data loading and querying OMICS data
- Database Interaction: Use PostgreSQL to build, manage, and query large genomic datasets (see the sketch after this list).
- API Development: Design and implement APIs for improved data accessibility and integration across platforms.
- Automation: Use Python and R to automate and optimize data workflows, prioritizing data quality and integrity.
- ETL Process Management: Develop and execute ETL processes to integrate high-value datasets in line with organizational standards.
- Collaboration: Work with cross-functional teams and research scientists to gather requirements, align to common data model standards, and facilitate effective data management.
- Documentation: Maintain comprehensive documentation and version control for reproducibility and teamwork.
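As an illustration of the "Database Interaction" responsibility above, here is a small, hypothetical PostgreSQL query issued from Python with psycopg2. The table and column names are invented; the posting does not describe the actual OMICS schema.

```python
# Hypothetical query against an invented CRISPR results table.
import psycopg2

conn = psycopg2.connect(
    host="localhost", dbname="omics", user="analyst", password="..."
)

query = """
    SELECT gene_symbol, COUNT(*) AS n_hits
    FROM crispr_screen_results
    WHERE fdr < %s
    GROUP BY gene_symbol
    ORDER BY n_hits DESC
    LIMIT 20;
"""

with conn, conn.cursor() as cur:
    cur.execute(query, (0.05,))  # parameterized to keep the query safe and reusable
    for gene_symbol, n_hits in cur.fetchall():
        print(gene_symbol, n_hits)

conn.close()
```

The same pattern, parameterized SQL wrapped in small, documented Python functions, extends naturally to the ETL and automation responsibilities listed above.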
Required qualifications:
- Master's degree in computer science, bioinformatics, or a related field, with 3+ years of relevant experience.
- Proven experience working with databases (PostgreSQL proficiency).
- Advanced skills in Python and R for automation and data manipulation.
- Experience handling and curating bioinformatics datasets (BulkRNAseq, Proteomics, scRNAseq, CRISPR).
- Code management, documentation, and usage of GitHub.
- Curiosity and basic knowledge of AI techniques applicable to data loading and querying.
- Excellent communication skills and a collaborative mindset.
- Demonstrated experience with AWS resources.
- Experience in API development
Job Title: Site Reliability Engineer (SRE) – DataHub & GraphQL
Location: Austin, TX & Sunnyvale, CA
Considering only candidates with an independent visa.
Role Overview
We are seeking a highly skilled Site Reliability Engineer (SRE) with strong expertise in DataHub ingestion pipelines and GraphQL APIs. The ideal candidate will be responsible for designing, building, and maintaining scalable data ingestion frameworks, ensuring reliability and performance of enterprise data platforms, and enabling seamless integration with downstream applications. This role requires a balance of software engineering, systems reliability, and data platform knowledge.
Key Responsibilities
- Design, implement, and optimize DataHub ingestion pipelines for large-scale enterprise data systems.
- Develop and maintain GraphQL APIs to support data discovery, metadata management, and integration (see the sketch after this list).
- Ensure high availability, scalability, and performance of data services across cloud and on-prem environments.
- Collaborate with data engineering, product, and infrastructure teams to deliver reliable data solutions.
- Automate monitoring, alerting, and incident response processes to improve system resilience.
- Drive best practices in observability, logging, and distributed system reliability.
- Troubleshoot complex production issues and implement long-term fixes.
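As a rough sketch of the GraphQL side of this role, here is a minimal metadata search issued against a DataHub-style GraphQL endpoint using the requests library. The host, token handling, and exact query shape are assumptions for illustration; the schema of the deployed DataHub instance is authoritative.

```python
# Hypothetical metadata search against a DataHub-style GraphQL endpoint.
import requests

GRAPHQL_URL = "https://datahub.example.com/api/graphql"  # placeholder host
TOKEN = "..."                                            # placeholder access token

query = """
query search($text: String!) {
  search(input: {type: DATASET, query: $text, start: 0, count: 5}) {
    searchResults {
      entity {
        urn
      }
    }
  }
}
"""

resp = requests.post(
    GRAPHQL_URL,
    json={"query": query, "variables": {"text": "orders"}},
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```

Calls like this are typically wrapped in health checks and synthetic probes so that the monitoring and alerting mentioned above can catch ingestion or API regressions before downstream consumers do.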
Must-Have Skills
- 5+ years of experience as an SRE, DevOps Engineer, or Software Engineer with a focus on reliability and scalability.
- Strong hands-on experience with DataHub ingestion frameworks and metadata pipelines.
- Proficiency in GraphQL API design and implementation.
- Solid understanding of cloud platforms (AWS, GCP, or Azure) and container orchestration (Kubernetes, Docker).
- Expertise in monitoring tools (Prometheus, Grafana, ELK, Datadog, etc.).
- Strong programming skills in Python, Java, or Go.
- Experience with CI/CD pipelines and infrastructure-as-code (Terraform, Ansible).
Good-to-Have Skills
- Familiarity with data governance and metadata management tools.
- Experience integrating with data platforms like Kafka, Spark, or Snowflake.
- Knowledge of REST APIs and microservices architecture.
- Exposure to security and compliance practices in data systems.
Qualifications
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
- Proven track record of delivering reliable, scalable data infrastructure solutions.