Suno AI API Documentation Jobs in USA
16,394 positions found
Propy is revolutionizing the real estate industry by building the world's first AI-powered Title and Escrow platform onchain. We have processed over $5B in transactions, and we are on a mission to make closing on a home as easy as buying a stock.
We combine blockchain for security with advanced AI to automate the heavy lifting of closing documents. We aren't just "using" AI; we are building the infrastructure that allows AI agents to securely manage escrow, eliminate fraud, and run 24/7.
We are looking for a pragmatic Applied AI Engineer to join our engineering team.
The role is not about training models and does not involve academic Machine Learning research. It is about building the rails that make AI usable in a high-stakes financial environment. You will bridge the gap between our robust C#/.NET architecture and the probabilistic world of LLMs.
Title and Escrow is a document-heavy industry with zero room for error. Your mission is to use AI to clean up the messiness of real-world real estate data.
You will solve problems like:
- Structured Data Extraction: Converting messy, unstructured data (like emails, PDFs, documents) from various sources into strictly validated JSON schemas with as close to 100% accuracy as possible.
- Escrow Automation: Designing workflows that reduce human intervention by 50% by intelligently routing tasks based on AI analysis.
- Fraud Detection: Implementing deterministic logic checks on bank and financial documents to detect fraud patterns before they happen.
- Engineer the Integration: Write production-grade code that interacts with external AI APIs.
- "Prompt Engineering" as Code: You won't just write prompts; you will version, test, and optimize them. You will define strict schemas to ensure the AI speaks the language of our internal tools.
- Orchestrate & Validate: Build the logic that parses AI responses, validates them against our database (MongoDB), and flags inconsistencies before they reach the user.
- Full-Stack Implementation: Build interfaces that present AI-assisted services and data for user review and approval.
- Collaborate: Work closely with other senior engineers and product owners to translate complex "Title & Escrow" schemas into technical constraints that an AI can understand.
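The extract-then-validate loop described above can be sketched in plain Python. This is a minimal illustration, not Propy's implementation: the field names (`buyer`, `purchase_price`, `closing_date`) and the `validate_extraction` helper are hypothetical, and a production version would validate the raw output of an LLM API call rather than a hard-coded sample string.

```python
import json
from datetime import date

# Hypothetical schema for a purchase-agreement extraction task.
REQUIRED_FIELDS = {"buyer": str, "purchase_price": (int, float), "closing_date": str}

def validate_extraction(raw: str) -> dict:
    """Parse an LLM response and enforce the schema before it reaches downstream systems."""
    data = json.loads(raw)  # raises ValueError on malformed JSON
    errors = []
    for field, expected in REQUIRED_FIELDS.items():
        if field not in data:
            errors.append(f"missing field: {field}")
        elif not isinstance(data[field], expected):
            errors.append(f"wrong type for {field}: {type(data[field]).__name__}")
    # Deterministic sanity checks are done in code, never left to the model.
    if not errors:
        if data["purchase_price"] <= 0:
            errors.append("purchase_price must be positive")
        try:
            date.fromisoformat(data["closing_date"])
        except ValueError:
            errors.append("closing_date must be ISO 8601 (YYYY-MM-DD)")
    if errors:
        # Flag for human review instead of silently passing bad data through.
        raise ValueError("; ".join(errors))
    return data

# A well-formed model response passes; a malformed one is rejected loudly.
good = validate_extraction(
    '{"buyer": "Jane Doe", "purchase_price": 450000, "closing_date": "2025-09-30"}'
)
```

The point of the sketch is the division of labor: the model produces a candidate JSON document, and deterministic code decides whether it is trustworthy.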
- Developer DNA: You are a software engineer first. You have strong experience in Python (C#/.NET is an advantage) and a deep understanding of programming fundamentals.
- Applied AI Experience: You have integrated LLMs into applications via API, with experience spanning not only models but also AI frameworks, workflows, and agent building and orchestration. You understand context windows, token limits, temperature, and guardrails.
- Data Handling: Experience handling complex data structures.
- The "Glue" Mindset: You enjoy writing the code that connects different services (AWS, AI APIs, databases) into seamless features.
- Collaborative Autonomy: You will own the AI domain, but you won't be on an island. You will be embedded in a senior engineering team that supports you with architecture, code reviews, and best practices.
- Experience with AWS infrastructure.
- Familiarity with the US Real Estate, Title, or Escrow process.
- Working in a transparent environment that focuses on solving problems and getting things done.
- The opportunity to work with very smart and driven people.
- The ability to grow your talents and career in a high-growth sector.
- A remuneration package that is based on the candidate's motivation, skills, and experience.
Please submit your resume to this job ad along with a portfolio of your AI-related experience, your GitHub account, and anything else you find applicable.
Position Title: Applied AI Systems Engineer
Location: Orange County, California (Hybrid)
Reports To: Head of Operations
Position Summary
This role is responsible for architecting, building, and deploying a production-grade AI operating system that automates core workflows across leasing, property management, accounting, construction coordination, and asset management.
The engineer will design and implement AI agents, document intelligence systems, and workflow automation pipelines that reduce manual processing, improve accuracy, and increase operational scalability across a commercial real estate portfolio.
This position requires strong systems thinking, rigorous technical execution, and the ability to translate complex operational processes into reliable automation.
Core Objectives
- Build an internal AI platform that automates high-volume operational workflows
- Reduce manual processing time and administrative overhead
- Improve accuracy, speed, and decision visibility across departments
- Establish scalable systems that support portfolio growth without proportional staffing increases
Primary Responsibilities
AI Platform Architecture & Development
- Design and deploy AI agents to automate operational and administrative workflows
- Build LLM-powered systems for document review, data extraction, and decision support
- Develop retrieval-based systems leveraging leases, financial data, contracts, and SOPs
- Implement evaluation, monitoring, and continuous improvement frameworks
Lease & Document Intelligence Automation
- Build tools to extract key lease terms, obligations, and risk clauses
- Automate lease abstraction and document comparison workflows
- Develop compliance and deadline tracking systems
- Enable searchable knowledge retrieval across lease and legal documents
Leasing & Asset Management Automation
- Automate LOI comparison and deal workflow summaries
- Build dashboards summarizing tenant performance, lease milestones, and risk exposure
- Support market intelligence and tenant prospecting research
- Develop underwriting support and reporting tools
Property Management & Financial Workflow Automation
- Automate CAM reconciliation data processing and variance detection
- Streamline tenant reporting and communication workflows
- Track vendor contracts, compliance deadlines, and service obligations
- Extract and structure financial data from operational documents
Data Infrastructure & Knowledge Systems
- Structure internal documents and data for AI retrieval and automation
- Build document ingestion, indexing, and retrieval pipelines
- Implement vector search and knowledge retrieval systems
- Maintain data integrity, access control, and auditability
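The retrieval bullets above can be illustrated with a deliberately tiny sketch. This is not a production design: the bag-of-words `embed` function stands in for a real embedding model, and a real system would use a vector database rather than an in-memory list, but the rank-by-similarity core is the same.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real pipeline would call an embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    qv = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(qv, embed(d)), reverse=True)
    return ranked[:k]

# Hypothetical lease clauses standing in for an indexed document store.
docs = [
    "Tenant shall pay base rent on the first of each month.",
    "Landlord is responsible for roof and structural repairs.",
    "CAM charges are reconciled annually against actual expenses.",
]
top = retrieve("who pays for roof repairs", docs)
```

Swapping `embed` for a learned embedding model and the list for a vector index turns this sketch into the retrieval half of a RAG pipeline.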
Systems Integration & Deployment
- Integrate AI tools with property management, accounting, CRM, and document platforms
- Deploy systems within secure cloud environments
- Implement logging, monitoring, performance, and cost controls
- Ensure reliability and scalability of deployed systems
Collaboration & Implementation
- Translate operational workflows into technical automation solutions
- Work directly with leadership to prioritize automation opportunities
- Train teams and implement adoption workflows
- Establish standards for responsible and secure AI usage
Required Qualifications
- Bachelor’s or advanced degree in Computer Science, Engineering, Mathematics, Statistics, or related quantitative discipline
- Demonstrated success in a rigorous academic or research environment
- 3–7+ years building production software, automation systems, or applied AI solutions
- Strong Python development and API integration experience
- Experience working with structured and unstructured data
- Experience deploying systems in cloud environments
- Strong understanding of system architecture and data pipelines
- Exceptional analytical and problem-solving ability
Preferred Qualifications
- Experience building document intelligence or contract analysis systems
- Experience with retrieval systems and vector databases
- Experience automating financial or operational workflows
- Experience integrating AI into business operations environments
- Experience in real estate, finance, logistics, or operations-heavy industries
- Evidence of research, technical publications, competitive programming, or open-source contributions
Technical Environment (Representative)
- Python and API-based architectures
- LLM platforms and agent orchestration frameworks
- Cloud infrastructure (AWS, Azure, or GCP)
- SQL and vector databases
- Workflow orchestration and automation tools
- Version control, logging, and monitoring systems
Success Metrics
Performance in this role will be evaluated by:
- Reduction in manual administrative workload
- Automation coverage across operational workflows
- Accuracy and reliability of AI-driven outputs
- Adoption and usage across departments
- Operational efficiency gains and cost reductions
Work Environment
- Hybrid work model with in-person collaboration in Orange County
- Direct collaboration with executive leadership and operational teams
- High autonomy in system architecture and implementation decisions
Job Description
Position: Gen AI Engineer
Location: Irving TX
Duration: 12+ months
Job Overview
In this role, you will be responsible for translating AI strategy into tangible, production-ready capabilities that enhance operational efficiencies and drive business value. We're looking for someone who combines deep technical expertise in generative AI with a proven track record of successfully delivering complex technology projects.
Required Technical Skills
Deep knowledge of LLMs and advanced fine-tuning techniques. Proficient in Parameter-Efficient Fine-Tuning (PEFT) methods (LoRA, QLoRA, Adapter Tuning, Prefix Tuning), full fine-tuning, instruction tuning, and alignment techniques such as RLHF and multi-task learning, along with agentic AI techniques.
Expertise in model compression and quantization methods (AWQ, GPTQ, GPTQ-for-LLaMA). Proficiency with optimized inference engines such as vLLM, DeepSpeed, and FP6-LLM.
Adept at advanced prompt engineering techniques and best practices. Familiarity with frameworks that facilitate effective prompt design and management.
Advanced knowledge of RAG techniques, including hybrid search, multi-vector retrieval, Hypothetical Document Embeddings (HyDE), self-querying, query expansion, re-ranking, and relevance filtering.
Proficiency in TensorFlow, PyTorch, and Keras. Knowledge of distributed training, parallel processing, and extensive hands-on experience with AWS services for AI/ML.
Advanced NLP skills (NER, Dependency Parsing, Text Classification, Topic Modeling). Experience with Transfer Learning, Few-shot, and Zero-shot learning. Expertise in containerization (Docker), orchestration (Kubernetes), and CI/CD pipelines for MLOps.
Strong proficiency in data preprocessing, feature engineering, and handling large-scale datasets. Experience with real-time AI applications, streaming data, and designing RESTful APIs for model integration.
Experienced with LangGraph, Autogen, LangChain, LlamaIndex, and Hugging Face Transformers. Familiarity with Gen AI APIs (OpenAI, Gemini, Claude) and version control systems like Git.
Knowledge of AI compliance frameworks and best practices. Experience implementing guardrails to ensure ethical AI usage and mitigate risks (e.g., Microsoft's AI Guidance Framework).
Dexian is a leading provider of staffing, IT, and workforce solutions with over 12,000 employees and 70 locations worldwide. As one of the largest IT staffing companies and the 2nd largest minority-owned staffing company in the U.S., Dexian was formed in 2023 through the merger of DISYS and Signature Consultants. Combining the best elements of its core companies, Dexian's platform connects talent, technology, and organizations to produce game-changing results that help everyone achieve their ambitions and goals.
Dexian's brands include Dexian DISYS, Dexian Signature Consultants, Dexian Government Solutions, Dexian Talent Development and Dexian IT Solutions. Visit to learn more.
Dexian is an Equal Opportunity Employer that recruits and hires qualified candidates without regard to race, religion, sex, sexual orientation, gender identity, age, national origin, ancestry, citizenship, disability, or veteran status.
Note: Dexian Canada will, on request, provide accommodations for disabilities to support your participation in all aspects of our Recruitment and Assessment/Selection Processes.
Generative AI Developer
Location: Dallas, TX / Tampa, FL / New Jersey (Hybrid)
Type: Full-time (FTE)
Salary: Market
Client: Bank
Role Overview
We are seeking an experienced Senior Generative AI Developer to design and implement cutting-edge AI solutions leveraging Retrieval-Augmented Generation (RAG) techniques.
The ideal candidate will have strong expertise in Python programming, FastAPI, and cloud platforms (AWS, Azure, or GCP).
This role requires a deep understanding of system architecture design, scalable APIs, and end-to-end AI solution development.
Key Responsibilities
- Architect and develop Generative AI applications using RAG frameworks for enterprise-scale solutions.
- Design and implement robust system architectures for AI-driven platforms, ensuring scalability, security, and performance.
- Build and optimize APIs using FastAPI for seamless integration with AI models and data pipelines.
- Collaborate with cross-functional teams to integrate AI solutions into existing systems and workflows.
- Implement data ingestion, preprocessing, and retrieval mechanisms for large-scale knowledge bases.
- Ensure compliance with best practices for cloud deployment (AWS, Azure, or GCP).
- Conduct performance tuning and optimization of AI models and APIs.
- Stay updated with the latest advancements in Generative AI, LLMs, and RAG methodologies.
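The ingestion and retrieval responsibilities above can be sketched as two small helpers, assuming the relevant chunks have already been retrieved; in production the output of `build_prompt` would be sent to an LLM, and the flow would typically sit behind a FastAPI route. The chunk sizes and prompt wording here are illustrative only.

```python
def chunk(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split a document into overlapping character windows for indexing."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def build_prompt(question: str, retrieved: list[str]) -> str:
    """Assemble a grounded prompt: numbered context first, then the question."""
    context = "\n\n".join(f"[{i + 1}] {c}" for i, c in enumerate(retrieved))
    return (
        "Answer using only the numbered context below. "
        "Cite sources as [n]; say 'not found' if the context is insufficient.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
```

Instructing the model to cite numbered sources, and to refuse when the context is insufficient, is a common pattern for keeping RAG answers auditable.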
Required Skills & Qualifications
- 8+ years of professional experience in software development and system design.
- Strong proficiency in Python and experience with FastAPI for API development.
- Hands-on experience with Generative AI frameworks and RAG architectures.
- Solid understanding of system and architecture design principles for distributed applications.
- Experience deploying solutions on any major cloud platform (AWS, Azure, GCP).
- Familiarity with vector databases, embedding models, and retrieval pipelines.
- Strong problem-solving skills and the ability to work in a fast-paced environment.
Preferred Qualifications
- Experience with LLM fine-tuning, prompt engineering, and model evaluation.
- Knowledge of containerization (Docker) and orchestration (Kubernetes).
- Exposure to CI/CD pipelines and DevOps practices.
Email:
Must be local to TX
Role Overview
- The hiring manager is ideally looking for someone with 13+ years of experience, strong architecture depth, and the ability to clearly explain designs.
- Must have experience using AI in day-to-day development.
- Must have experience as an API Developer, leading the development and deployment of our backend services. In this role, you will be the bridge between our PostgreSQL database and React frontend, responsible not only for writing high-performance Python code but also for architecting the CI/CD pipelines that bring our applications to life. You will ensure our integration layers are scalable, secure, and automatically deployed.
Job Summary
We are seeking a Principal-level Full Stack Lead Developer with 13+ years of experience to drive high-priority engineering workstreams. This role is for a technical heavyweight who can lead new projects in parallel with existing leadership while maintaining exceptional architecture depth. You will be responsible for the full lifecycle of high-performance FastAPI and React applications, ensuring they are resilient, observable, and scalable. We expect a leader who views AI development tools as a force multiplier for velocity and can clearly articulate complex design decisions to stakeholders.
Key Responsibilities
- Project Sovereignty: Independently lead and deliver new, complex workstreams from inception to launch, acting as a technical peer to existing leadership (e.g., Sai).
- System Architecture: Design and defend distributed microservices and event-driven architectures. You must be able to clearly whiteboard and communicate design patterns to both technical and non-technical audiences.
- Hands-on Execution: Maintain high-velocity output of clean, production-grade code using FastAPI (Python) and React (TypeScript).
- Platform Reliability: Architect and implement global Error Handling frameworks, centralized Logging (e.g., OpenTelemetry, ELK), and API Management strategies including Rate Limiting and versioning.
- Event-Driven Messaging: Oversee the implementation of asynchronous service communication using ActiveMQ or AWS EventBridge.
- AI-Augmented SDLC: Deeply integrate AI coding tools (e.g., Claude Code, Cursor, GitHub Copilot) into daily workflows to accelerate prototyping, refactoring, and automated testing.
- Engineering Mentorship: Foster a culture of excellence through rigorous code reviews and by unblocking senior engineers on complex technical hurdles.
- Product Collaboration: Work closely with Product Managers to turn high-level roadmaps into technical reality, providing accurate estimates and identifying technical risks early.
Required Skills & Qualifications
- Experience: 13+ years of professional software development with a proven track record of leading large-scale products.
- Tech Stack Mastery: Expert-level FastAPI (Async Python) and modern React (Hooks, TypeScript, Performance Profiling).
- Advanced Governance: Hands-on experience with API Gateway patterns, request throttling, and securing distributed systems (OAuth2/JWT).
- Observability & Messaging: Deep knowledge of structured logging, distributed tracing, and message brokers (ActiveMQ or EventBridge).
- AI Tooling: Advanced proficiency in using AI tools for fast development to reduce manual overhead and multiply team output.
- Database & Infrastructure: Expert-level PostgreSQL (tuning/indexing), Redis (for caching/rate-limiting), and container orchestration (Kubernetes/Docker).
- Communication: Exceptional ability to translate technical "scars" and architectural risks into clear business impact.
At Rite-Hite, your work makes an impact. As the global leader in loading dock and door equipment, we design and deliver solutions that keep our customers safe, secure, and productive. Here, you'll find innovation, stability, and the chance to grow your career as part of a team that's always looking ahead.
ESSENTIAL DUTIES AND RESPONSIBILITIES
To perform this job successfully, an individual must be able to perform each essential duty satisfactorily.
- Design and build AI-powered applications using Large Language Models (LLMs) for enterprise use cases.
- Develop Retrieval-Augmented Generation (RAG) solutions using structured and unstructured enterprise data such as documents, manuals, tickets, ERP data, and knowledge bases.
- Build and orchestrate AI agents that can reason, plan, and interact with tools, APIs, and workflows.
- Implement guardrails for AI systems including prompt safety, data protection, hallucination mitigation, access control, and output validation.
- Work with multimodal AI models including text, image, and video use cases such as video analysis, summarization, and optimization.
- Integrate AI solutions with existing enterprise systems such as Salesforce, ERP platforms, data lakes, APIs, and internal applications.
- Partner with security and compliance teams to ensure responsible AI usage, data privacy, and governance.
- Prototype quickly, then harden solutions for production with monitoring, logging, evaluation, and performance optimization.
- Mentor and upskill existing developers on AI concepts, patterns, and best practices.
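One lightweight form of the output validation and hallucination mitigation listed above is to check that every value the model returns appears verbatim in the source document. The `grounded_fields` helper and the invoice fields below are hypothetical examples, not a prescribed approach:

```python
def grounded_fields(extracted: dict[str, str], source: str) -> dict[str, bool]:
    """Mark each extracted value as grounded (present verbatim in the source) or not."""
    normalized = " ".join(source.lower().split())  # normalize case and whitespace
    return {
        key: " ".join(str(value).lower().split()) in normalized
        for key, value in extracted.items()
    }

source = "Invoice 4471: total due $1,250.00, payable net 30 to Acme Door Co."
extracted = {
    "invoice_number": "4471",
    "total": "$1,250.00",
    "vendor": "Acme Window Co.",  # a plausible-looking hallucination
}
checks = grounded_fields(extracted, source)
# "Acme Window Co." is not in the source, so it gets flagged for human review.
```

Substring matching is crude; production systems often layer fuzzy matching or a second model-based check on top, but the principle of verifying outputs against their source survives.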
Required Skills & Experience
- 5+ years of full stack development experience.
- Strong software engineering background with experience building production-grade applications.
- Hands-on experience with modern LLM platforms such as OpenAI, Azure OpenAI, Anthropic, or similar.
- Practical experience building RAG pipelines using vector databases and embedding models.
- Experience with prompt engineering, prompt versioning, and evaluation techniques.
- Solid Python experience for AI development.
- Experience integrating AI services with REST APIs, microservices, and cloud-native architectures.
- Familiarity with cloud platforms such as AWS or Azure, including deployment, scaling, and security concepts.
- Understanding of data formats such as JSON, XML, and document-based data.
- Ability to translate business problems into AI-driven technical solutions.
Preferred Qualifications
- Experience with vector databases such as Pinecone, FAISS, Weaviate, or similar.
- Familiarity with frameworks such as LangChain, LlamaIndex, Semantic Kernel, or equivalent orchestration tools.
- Experience implementing AI safety controls, policy enforcement, and evaluation frameworks.
- Exposure to video or image models and multimodal AI use cases.
- Experience working in enterprise environments with security, compliance, and change management considerations.
- Prior experience mentoring or leading developers in new technical domains.
What We Offer
At Rite-Hite, we take care of our people - because when you're supported, you can do your best work. Our benefits are designed to support your health, your future and your life outside of work:
Health & Well-being: Comprehensive medical, dental, and vision coverage, plus life and disability insurance. A robust well-being program with an opportunity to receive an extra day off and more.
Financial Security: A strong retirement savings program with 401(k), company match, and profit sharing.
Time for You: Paid holidays, vacation time, and personal/sick days each year.
Join us and build a career where you're supported - at work and beyond.
Rite-Hite is proud to be an Equal Opportunity Employer. We consider all qualified applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, veteran status, or any other protected characteristic under federal, state, or local law. In accordance with VEVRAA, we are committed to providing equal employment opportunities for protected veterans. We are also committed to maintaining a drug-free workplace for the safety of our employees and customers.
Proper Hospitality is seeking an AI Workflow Fellow for a three-month, execution-focused program embedded with the CEO, President, and Chief of Staff. This role is responsible for building and deploying production-ready AI workflows across our hotel portfolio, automating real operational processes tied to clear ROI, and integrating across systems including PMS, Snowflake, Microsoft 365, and guest experience platforms. This is hands-on applied AI with live impact on property operations, not research or prototype work.
What This Is
We're not exploring AI at Proper Hotels. We're deploying it. Right now, AI runs our executive briefings, triages guest reviews across 11 properties, automates reporting pipelines, and handles operational workflows that used to eat hundreds of hours. We need someone who can build more of this, faster.
This is a single fellowship seat. You'll work directly with our CEO, President, Chief of Staff and executive team for three months and will be shipping production workflows from week one.
You are the execution engine inside Proper’s broader Workflow & AI operating model.
What You'll Actually Do
Building (80%)
Build & Ship
- Design and deploy agent-based workflows that automate real operational tasks (not demos, not prototypes that sit on a shelf)
- Build within clearly defined ROI hypotheses approved by the Head of Workflow & AI (not speculative experiments)
- Wire together APIs across our stack: PMS, Snowflake, Notion, Microsoft 365, Google Workspace, Revinate, STR
- Build multi-agent systems that handle overnight operations, reporting rollups, task accountability, and guest experience triage
- Create tools that General Managers and department heads actually use daily
Identify & Automate
- Audit departmental workflows across the portfolio and find the manual processes burning the most hours
- Build the automation, test it on-property, iterate based on real feedback
- Transition tasks from "someone does this by hand" to "this runs itself" without losing the human touch that defines Proper
Strategy (20%)
Strategic Input
- Evaluate frontier capabilities weekly, but only deploy those that map to defined operational ROI
- Translate what's happening at the AI frontier into specific, actionable opportunities for luxury hospitality
- Help shape our internal AI skill-building program so the culture evolves with the technology
Who You Are
- You build agents and workflows, not just prompts. Show us something you've built that runs without you babysitting it
- You've shipped applied AI into production environments. Side projects count if they're real and running
- You can wire APIs together before lunch and present to the C-suite after it
- You navigate ambiguity without freezing. If a tool doesn't exist, you build it
- You understand that technology in a hotel should be invisible but felt. "High Tech / High Touch" isn't a slogan to you
- You're hands-on with LLMs (OpenAI, Anthropic, open-source), API orchestration, agent frameworks (e.g., Openclaw), and data pipelines
- Bonus: experience with hospitality systems, revenue management, or guest experience platforms
Education
CS, Data Science, or MBA with a strong technical background preferred but not required. Non-traditional paths welcome if your portfolio speaks for itself.
Program Details
- Duration: 3 months with potential to extend
- Experience: 0-2 years
- Compensation: $7,000 - $10,000/month depending on experience and location
- Access: Direct seat at the table with the CEO, President, and Chief of Staff
- Impact: Your work goes live on-property, affecting real guests and real revenue. This isn't a sandbox.
In your application please include two additional items:
- Something you've built that automates a real workflow (link, repo, or demo)
- A short note on what you'd build first if you had access to a luxury hotel portfolio's entire data stack
Why Join Proper Hospitality
At Proper, we build experiences that move people — and that begins with the team behind them. As a best-in-class employer, we’re committed to creating one of the Best Places to Work in hospitality by nurturing a culture where creativity, excellence, and humanity thrive together.
Everything we do is grounded in the belief that hospitality is more than a profession - it’s an opportunity to care for others and make lives better. Guided by the Pillars of Proper, we show up with warmth and authenticity (Care Proper), strive for excellence in everything we do (Achieve Proper), think creatively and resourcefully (Imagine Proper), and take pride in the style and culture that make us who we are (Present Proper).
We believe our people are our greatest strength, and we invest deeply in their wellbeing, growth, and sense of belonging. From comprehensive benefits to meaningful development programs, Proper is designed to help you build a career, and a life, that feels as inspiring as the experiences we create for our guests.
Our Commitment: Building the Best Place to Work
Our Best Place to Work initiative is a living commitment — a continuous investment in our people, our culture, and our purpose. We listen, learn, and evolve together to create an environment where everyone feels empowered to imagine boldly, achieve confidently, care deeply, and present themselves authentically.
At Proper, joining the team means more than finding a job — it means joining a community that believes in building beautiful experiences together, for our guests and for one another.
About Wakefern
Wakefern Food Corp. is the largest retailer-owned cooperative in the United States and supports its co-operative members' retail operations, trading under the ShopRite®, Price Rite®, The Fresh Grocer®, Dearborn Markets®, and Gourmet Garage® banners.
Employing an innovative approach to wholesale business services, Wakefern focuses on helping the independent retailer compete in a big business world. Providing the tools entrepreneurs need to stay a step ahead of the competition, Wakefern’s co-operative members benefit from the company’s extensive portfolio of services, including innovative technology, private label development, and best-in-class procurement practices.
The ideal candidate will have a strong background in designing, developing, and implementing complex projects, with a focus on automating data processes and driving efficiency within the organization. This role requires close collaboration with application developers, data engineers, data analysts, and data scientists to ensure seamless data integration and automation across various platforms. The Data Integration & AI Engineer is responsible for identifying opportunities to automate repetitive data processes, reduce manual intervention, and improve overall data accessibility.
Essential Functions
- Participate in the development life cycle (requirements definition, project approval, design, development, and implementation) and maintenance of the systems.
- Implement and enforce data quality and governance standards to ensure accuracy and consistency.
- Provide input for project plans and timelines to align with business objectives.
- Monitor project progress, identify risks, and implement mitigation strategies.
- Work with cross-functional teams and ensure effective communication and collaboration.
- Provide regular updates to the management team.
- Follow the standards and procedures according to Architecture Review Board best practices, revising standards and procedures as requirements change and technological advancements are incorporated into the >tech_ structure.
- Communicate and promote the code of ethics and business conduct.
- Ensure completion of required company compliance training programs.
- Be trained, either through formal education or through experience, in software/hardware technologies and development methodologies.
- Stay current through personal development and professional and industry organizations.
Responsibilities
- Design, build, and maintain automated data pipelines and ETL processes to ensure scalability, efficiency, and reliability across data operations.
- Develop and implement robust data integration solutions to streamline data flow between diverse systems and databases.
- Continuously optimize data workflows and automation processes to enhance performance, scalability, and maintainability.
- Design and develop end-to-end data solutions utilizing modern technologies, including scripting languages, databases, APIs, and cloud platforms.
- Ensure data solutions and data sources meet quality, security, and compliance standards.
- Monitor and troubleshoot automation workflows, proactively identifying and resolving issues to minimize downtime.
- Provide technical training, documentation, and ongoing support to end users of data automation systems.
- Prepare and maintain comprehensive technical documentation, including solution designs, specifications, and operational procedures.
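The pipeline responsibilities above can be sketched as a small extract-transform-load flow. The CSV layout, the `daily_sales` table, and the in-memory SQLite database are hypothetical stand-ins for real source systems and a warehouse:

```python
import csv
import io
import sqlite3

# Hypothetical raw feed; note the row with a missing sales value.
RAW = "store,day,sales\nA,2025-01-01,100\nA,2025-01-02,\nB,2025-01-01,80\n"

def extract(raw: str) -> list[dict]:
    """Read the raw feed into dictionaries, one per row."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> list[tuple]:
    """Drop rows with missing sales and cast types; real pipelines would also log rejects."""
    return [(r["store"], r["day"], int(r["sales"])) for r in rows if r["sales"]]

def load(rows: list[tuple]) -> sqlite3.Connection:
    """Load cleaned rows into a warehouse table (in-memory SQLite for the sketch)."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE daily_sales (store TEXT, day TEXT, sales INTEGER)")
    con.executemany("INSERT INTO daily_sales VALUES (?, ?, ?)", rows)
    return con

con = load(transform(extract(RAW)))
total = con.execute("SELECT SUM(sales) FROM daily_sales").fetchone()[0]
```

Keeping extract, transform, and load as separate functions is what makes a pipeline like this testable and schedulable by an orchestrator such as Airflow.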
Qualifications
- A bachelor's degree or higher in computer science, information systems, or a related field.
- Hands-on experience with cloud data platforms (e.g., GCP, Azure, etc.)
- Strong knowledge and skills in data automation technologies, such as Python, SQL, ETL/ELT tools, Kafka, APIs, cloud data pipelines, etc.
- Experience in GCP BigQuery, Dataflow, Pub/Sub, and Cloud storage.
- Experience with workflow orchestration tools such as Cloud Composer or Airflow
- Proficiency in iPaaS (Integration Platform as a Service) platforms, such as Boomi, SAP BTP, etc.
- Experience developing and managing data integrations for AI agents, connecting them to internal and external APIs, databases, and knowledge sources to expand their capabilities.
- Experience building and maintaining scalable Retrieval-Augmented Generation (RAG) pipelines, including the curation and indexing of knowledge bases in vector databases (e.g., Pinecone, Vertex AI Vector Search).
- Experience leveraging cloud-based AI/ML platforms (e.g., Vertex AI, Azure ML) to build, train, and deploy machine learning models at scale.
- Experience establishing and enforcing data quality and governance standards for AI/ML datasets, ensuring the accuracy, completeness, and integrity of data used for model training and validation.
- Ability to collaborate closely with data scientists and machine learning engineers to understand data requirements and deliver optimized data solutions that support the entire machine learning lifecycle.
- Hands-on experience with IBM DataStage and Alteryx is a plus.
- Strong understanding of database design principles, including normalization, indexing, partitioning, and query optimization.
- Ability to design and maintain efficient, scalable, and well-structured database schemas to support both analytical and transactional workloads.
- Familiarity with BI visualization tools such as MicroStrategy, Power BI, Looker, or similar.
- Familiarity with data modeling tools.
- Familiarity with DevOps practices for data (CI/CD pipelines)
- Proficiency in project management software (e.g., JIRA, Clarizen, etc.)
- Strong knowledge and skills in data management, data quality, and data governance.
- Strong communication, collaboration, and problem-solving skills.
- Ability to work on multiple projects and prioritize tasks effectively.
- Ability to work independently and in a team environment.
- Ability to learn new technologies and tools quickly.
- Ability to handle stressful situations.
- Highly developed business acumen.
- Strong critical thinking and decision-making skills.
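One qualification above calls for experience building RAG pipelines. As a purely illustrative sketch, a toy term-frequency index stands in here for a managed vector database such as Pinecone or Vertex AI Vector Search, and a real system would call an embedding model rather than counting words:

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: a term-frequency vector (a real pipeline calls an embedding model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class TinyVectorIndex:
    """Stand-in for a vector database: stores (vector, chunk) pairs."""
    def __init__(self):
        self.items = []

    def add(self, chunk):
        self.items.append((embed(chunk), chunk))

    def top_k(self, query, k=2):
        qv = embed(query)
        ranked = sorted(self.items, key=lambda it: cosine(qv, it[0]), reverse=True)
        return [chunk for _, chunk in ranked[:k]]

def build_prompt(question, index):
    """Retrieval step of RAG: ground the prompt in the top-ranked chunks."""
    context = "\n".join(index.top_k(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

The curation and indexing work the posting describes is about keeping `add()`-style ingestion clean and current; the retrieve-then-generate loop itself stays this simple.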
Working Conditions & Physical Demands
This position requires in-person office presence at least four days a week.
Compensation and Benefits
The salary range for this position is $75,868 - $150,644. Placement in the range depends on several factors, including experience, skills, education, geography, and budget considerations.
Wakefern is proud to offer a comprehensive benefits package designed to support the health, well-being, and professional development of our Associates. Benefits include medical, dental, and vision coverage, life and disability insurance, a 401(k) retirement plan with company match & annual company contribution, paid time off, holidays, and parental leave.
Associates also enjoy access to wellness and family support programs, fitness reimbursement, educational and training opportunities through our corporate university, and a collaborative, team-oriented work environment. Many of these benefits are fully or partially funded by the company, with some subject to eligibility requirements.
Join the team leading the next evolution of virtual care.
At Teladoc Health, you are empowered to bring your true self to work while helping millions of people live their healthiest lives.
Here you will be part of a high-performance culture where colleagues embrace challenges, drive transformative solutions, and create opportunities for growth. Together, we're transforming how better health happens.
Summary of Position
As a Staff Software Engineer, you are a senior individual contributor who leads the design and delivery of significant platform features and raises the bar for engineering quality across the team. You'll work hands-on in code: designing APIs and data flows, building services in Python/FastAPI and React frontends, and guiding solutions from idea to production. You'll mentor engineers, influence architecture and standards within and adjacent to your team, and partner closely with product and design to achieve clear, measurable outcomes. This role blends deep implementation work with pragmatic technical leadership by example.
Essential Duties and Responsibilities
Lead technical design for platform features and services, breaking ambiguous requirements into clear, incremental designs and stories for your team and adjacent partners.
Implement backend services in Python/FastAPI and React frontends end-to-end, owning a continuous stream of stories from idea to production.
Define and use clear API contracts and data flows between services and UIs, creating patterns and templates others can follow.
Champion high-quality engineering practices, including code reviews, documentation, and maintainable, testable designs.
Develop and improve automated testing (unit, integration, end-to-end) and integrate these into everyday development and CI.
Improve CI/CD pipelines and release workflows for your team so the team can ship small, safe changes frequently and confidently.
Own the operational lifecycle of the features and services you build, including monitoring, observability, on-call participation, and incident follow-up.
Design and implement secure-by-default solutions, including robust authentication/authorization, input validation, and safe handling of sensitive data.
Identify and address reliability and performance risks early, proposing concrete technical improvements and sequencing them into the roadmap.
Mentor and unblock engineers through pairing, design discussions, and clear feedback; influence without formal authority.
Partner with product/design to shape requirements into incremental deliverables; escalate tradeoff decisions; propose sequencing that optimizes value/risk.
The time spent on each responsibility reflects an estimate and is subject to change dependent on business needs.
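The duties above emphasize clear API contracts between services and UIs. As an illustrative, framework-free sketch (a real implementation at this role would use FastAPI with Pydantic models; every name below, such as `CreateVisitRequest`, is hypothetical), the essence of a typed contract plus validation is:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CreateVisitRequest:
    patient_id: str
    reason: str

@dataclass(frozen=True)
class CreateVisitResponse:
    visit_id: str
    status: str

def validate_request(payload: dict) -> CreateVisitRequest:
    """Reject payloads that don't match the contract before any business logic runs."""
    missing = {"patient_id", "reason"} - payload.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    if not isinstance(payload["patient_id"], str) or not payload["patient_id"]:
        raise ValueError("patient_id must be a non-empty string")
    return CreateVisitRequest(payload["patient_id"], str(payload["reason"]))

def create_visit(req: CreateVisitRequest) -> CreateVisitResponse:
    """Stub handler: a real service would persist the visit and return its id."""
    return CreateVisitResponse(visit_id=f"visit-{req.patient_id}", status="scheduled")
```

In FastAPI the validation step collapses into declaring the request model as the endpoint parameter; the point is that both the backend and the React frontend code against the same explicit schema.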
Supervisory Responsibilities
No
Required Qualifications
Bachelor's degree in Computer Science, Engineering, or related field; equivalent work experience is acceptable.
7+ years of experience in software engineering.
Strong proficiency with Python and modern web backends (FastAPI, Flask, Django, or similar) and solid understanding of HTTP, API design, and data modeling.
Significant experience with React (or a comparable SPA framework) and building production frontends that talk to backend APIs.
Demonstrated ability to own features end-to-end in a small team: from shaping requirements through design, implementation, testing, deployment, and support.
Experience designing and working with distributed systems or multi-service architectures (e.g., service boundaries, async jobs, integration patterns).
Solid understanding of observability and operations for production systems (metrics, logs, traces, dashboards, alerting, incident response).
Strong understanding of security fundamentals (authentication, authorization, secure data handling) and how they apply to web services and UIs.
Deep familiarity with automated testing and CI/CD, and a track record of improving engineering workflows and quality.
Excellent communication and collaboration skills; comfortable working closely with product, design, and other stakeholders.
Proven ability to provide technical leadership in a hands-on way: unblocking others, making clear decisions, and raising the bar through code and reviews.
Bonus Qualifications
Experience in early-stage or small platform teams where engineers wear multiple hats and balance shipping with building foundations.
Experience with Azure and containerized deployments (or similar cloud-native environments).
Experience building platforms (developer platforms, data platforms, or similar) that serve multiple product teams.
Exposure to AI/ML or data-intensive applications (e.g., integrating with model inference APIs, data pipelines, or analytical data stores).
The base salary range for this position is $180,000 - $200,000. In addition to a base salary, this position is eligible for a performance bonus and benefits (subject to eligibility requirements) listed here: Teladoc Health Benefits 2026. Total compensation is based on several factors including, but not limited to, type of position, location, education level, work experience, and certifications. This information is applicable for all full-time positions.
#LI-SS2 #LI-Remote
We follow a Flexible Vacation Policy, intended for rest, relaxation, and personal time. All time off must be approved by your manager prior to use. You will also receive 80 hours of Paid Sick, Safe, and Caregiver Leave annually. This applies to full-time positions only. If you are applying for a part-time role, your recruiter can provide additional details.
As part of our hiring process, we verify identity and credentials, conduct interviews (live or video), and screen for fraud or misrepresentation. Applicants who falsify information will be disqualified.
Teladoc Health will not sponsor or transfer employment work visas for this position. Applicants must be currently authorized to work in the United States without the need for visa sponsorship now or in the future.
Why join Teladoc Health?
Teladoc Health is transforming how better health happens. Learn how when you join us in pursuit of our impactful mission.
Chart your career path with meaningful opportunities that empower you to grow, lead, and make a difference.
Join a multi-faceted community that celebrates each colleague's unique perspective and is focused on continually improving, each and every day.
Contribute to an innovative culture where fresh ideas are valued as we increase access to care in new ways.
Enjoy an inclusive benefits program centered around you and your family, with tailored programs that address your unique needs.
Explore candidate resources with tips and tricks from Teladoc Health recruiters and learn more about our company culture by exploring #TeamTeladocHealth on LinkedIn.
As an Equal Opportunity Employer, we never have and never will discriminate against any job candidate or employee due to age, race, religion, color, ethnicity, national origin, gender, gender identity/expression, sexual orientation, membership in an employee organization, medical condition, family history, genetic information, veteran status, marital status, parental status, or pregnancy. In our innovative and inclusive workplace, we prohibit discrimination and harassment of any kind.
Teladoc Health respects your privacy and is committed to maintaining the confidentiality and security of your personal information. In furtherance of your employment relationship with Teladoc Health, we collect personal information responsibly and in accordance with applicable data privacy laws, including but not limited to, the California Consumer Privacy Act (CCPA). Personal information is defined as: Any information or set of information relating to you, including (a) all information that identifies you or could reasonably be used to identify you, and (b) all information that any applicable law treats as personal information. Teladoc Health's Notice of Privacy Practices for U.S. Employees' Personal information is available at this link.
AI Data & Python Tools Engineer
We're seeking an AI Data and Python Tools Engineer to develop and deploy intelligent tools that leverage big data infrastructure and modern AI architecture. This role combines strong software engineering fundamentals with the ability to build production-ready AI applications at speed, including integration with Model Context Protocol (MCP) systems.
Responsibilities:
- Develop and deploy AI-powered full-stack applications using Python, React, and modern machine learning frameworks
- Design and streamline data pipelines, train and validate ML models, and implement robust evaluation methods
- Collaborate with cross-functional teams to solve complex problems and integrate scalable, cloud-based AI solutions
- Rapidly prototype, test, and iterate on AI tools with a strong focus on performance, flexibility, and scalability
- Maintain clear technical documentation, perform code reviews, and support the full software development lifecycle
Software Engineering & AI/ML Data, Tools Development
- 3+ years of Python development with a background in back-end services and data processing
- Exposure to AI/ML algorithms
- Familiarity with ML frameworks (TensorFlow, PyTorch, scikit-learn)
- Understanding of LLMs, vector databases, and retrieval systems
- Experience with Model Context Protocol (MCP) integration and server development
Big Data & Cloud Infrastructure
- Knowledge of building and deploying cloud based applications
- Hands-on experience with cloud data platforms (AWS/GCP/Azure)
- Proficiency with big data technologies (Spark, Kafka, or similar streaming platforms)
- Experience with data warehouses (Snowflake, BigQuery, Redshift) and data lakes
- Knowledge of containerization (Docker/Kubernetes) and infrastructure as code
Preferred Experience
- Experience building web applications with modern frameworks (React, Vue, or Angular)
- API development and integration experience
- Basic UX/UI design sensibilities for internal tooling
- Experience with real-time data processing and analytics
- Background in building developer tools or internal platforms
- Familiarity with AI/ML operations (MLOps) practices (e.g., experience using Airflow)
- Experience building MCP servers and integrating with AI assistants
- Knowledge of structured data exchange protocols and API design for AI systems.
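The last item touches structured data exchange for AI systems. A minimal sketch of the idea is validating a model's JSON output against an expected schema before it reaches downstream tools; the schema and field names below (`tool`, `confidence`, `arguments`) are invented for illustration:

```python
import json

# Invented schema: field name -> required Python type.
RESPONSE_SCHEMA = {"tool": str, "confidence": float, "arguments": dict}

def parse_model_output(raw: str) -> dict:
    """Parse and validate an AI model's JSON reply; raise on any deviation."""
    data = json.loads(raw)
    for field, expected in RESPONSE_SCHEMA.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if not isinstance(data[field], expected):
            raise ValueError(f"{field} must be {expected.__name__}")
    return data
```

Real systems typically use JSON Schema or Pydantic for this, but the principle is the same: never let unvalidated model output drive tool execution.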
Type: Full Time
Location: Austin, TX or Cupertino, CA (Monday-Friday onsite)
Relocation assistance can be offered based on individual needs and circumstances.
Join the team leading the next evolution of virtual care.
At Teladoc Health, you are empowered to bring your true self to work while helping millions of people live their healthiest lives.
Here you will be part of a high-performance culture where colleagues embrace challenges, drive transformative solutions, and create opportunities for growth. Together, we're transforming how better health happens.
Summary of Position
The Principal AI Security Engineer is a senior technical leader on the AI Security team, responsible for designing, building, and operating security controls for generative AI and Machine Learning (ML) systems across their full lifecycle: data, training, deployment, and runtime.
This role is deeply hands-on: you will work directly with data science, MLOps, platform, DevOps, and application teams to secure LLMs, RAG systems, AI agents, and AI-enabled products. You will also lead the intake and review process for AI use cases, helping the organization adopt AI safely and at scale in a highly regulated environment.
The ideal candidate combines:
* Strong security engineering and cloud architecture experience
* Deep, current familiarity with modern AI/LLM tooling and practices
* Familiarity with basic coding in the AI tooling space (Python, among others)
* The ability to communicate clearly with senior leadership and influence enterprise-wide strategy
Essential Duties and Responsibilities
Secure AI / ML platforms and workloads
* Lead security architecture and threat modeling for AI/ML systems, including LLMs, RAG pipelines, agents, and AI-powered applications.
* Design and implement security controls as code (services, libraries, infrastructure-as-code, policy-as-code) for AI/ML platforms and workloads.
* Lead and help set up the basic infrastructure needed to safely roll out AI: MCPs, LLMs, pipelines, AI test harnesses (e.g., HarmBench), and intake automation.
* Partner with data science and MLOps teams to harden:
- Data ingestion and labeling
- Training and fine-tuning pipelines
- Model registries and deployment workflows
- Inference APIs, agents, and integrations
* Define and champion secure reference architectures and patterns for common AI use cases and focus on composable architecture.
AI use case intake & governance
* Design, implement, and continuously improve the intake, triage, and review process for AI/ML and generative AI use cases across the organization.
* Build and automate self-service workflows (e.g., request forms, risk questionnaires, routing, approvals) that balance speed of delivery with security, privacy, and compliance with a focus on risk scoring and scorecards.
* Define risk-based criteria for AI use case approval, including data sensitivity, model and vendor selection, integration patterns, and control requirements; this will involve re-mapping the complete end-to-end lifecycle.
* Review proposed AI solutions from concept through deployment, providing clear, actionable guidance to product and engineering teams.
* Maintain visibility into the AI use case portfolio and risk posture, and provide regular reporting to leadership and governance bodies.
Monitoring, detection & assurance
* Establish and maintain monitoring and detection for AI-specific threats, such as:
- Prompt injection and jailbreak attempts
- Data exfiltration and sensitive data exposure
- Misuse or abuse of AI tools and agents
- Anomalous model or pipeline behavior
* Integrate AI/ML systems with existing logging, SIEM, and incident response processes.
* Lead or participate in AI-focused security assessments, red-teaming, and adversarial testing; drive remediation and verification.
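Detection for threats like prompt injection usually layers simple heuristics under model-based classifiers and contextual analysis. A deliberately naive sketch of the heuristic layer (the phrase list is illustrative only, nowhere near a production rule set):

```python
import re

# Illustrative patterns only; real detection combines heuristics,
# trained classifiers, and conversation context.
SUSPECT_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"reveal (your|the) system prompt",
    r"disregard (your|the) (rules|guardrails)",
]

def flag_prompt_injection(user_input: str) -> list:
    """Return the patterns matched, for routing to logging/SIEM and human review."""
    lowered = user_input.lower()
    return [p for p in SUSPECT_PATTERNS if re.search(p, lowered)]
```

The output feeding SIEM and incident-response pipelines, as the bullets above describe, matters more than the matching logic itself: a flagged prompt should produce a structured event, not just a blocked request.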
Strategy, leadership & enablement
* Help define and evolve the organization's AI security strategy, standards, and roadmap in partnership with Security, Engineering, Data, Legal, Privacy, and Risk.
* Translate global privacy, data sovereignty, and regulatory requirements into practical technical controls for AI workloads across multiple cloud environments.
* Prepare and deliver executive-ready briefings and narratives on AI security risks, controls, and progress.
* Mentor other engineers and serve as the primary internal subject matter expert on AI/ML security, generative AI, and LLM-based systems.
Qualifications Expected for Position
- 7+ years of experience in information security, security engineering, or related fields, including significant time building and securing production systems.
- 3+ years of hands-on experience with AI/ML technologies (such as LLMs, RAG, model training/fine-tuning, MLOps, or AI-powered products), including implementation of security controls or guardrails for these systems.
- Strong programming skills in one or more relevant languages (e.g., Python, TypeScript/JavaScript, Go, or similar), with a track record of contributing to production-grade tools, services, or libraries.
- Deep understanding of cloud security architecture and controls on at least one major cloud platform (AWS, Azure, or GCP), including identity, networking, secrets management, data protection, logging, and monitoring.
- Experience designing and implementing controls in a highly regulated environment; healthcare or financial services preferred.
- Demonstrated ability to lead complex technical initiatives across multiple teams, from problem definition through design, implementation, and adoption.
- Proven ability to communicate complex technical and risk topics clearly to both engineering teams and senior leadership.
Preferred Qualifications:
* Practical experience securing LLM- and genAI-based systems, such as:
- RAG architectures backed by internal data
- AI assistants, copilots, or agents integrated with enterprise tools
- Fine-tuned models and model hosting platforms
* Experience with AI IDE tools
- Cursor, Windsurf, and others
- Knows the security problems and has practical solutions that balance innovation with security.
* Familiarity with AI/ML frameworks and ecosystems (e.g., TensorFlow, PyTorch, Scikit-learn) and/or modern LLM development stacks and IDEs (e.g., API-based LLMs, self-hosted models, AI-enhanced coding tools).
* Experience with:
- Security for data pipelines, feature stores, and model registries
- Detection engineering or SIEM tuning for AI-related events
- Red-teaming or adversarial testing of AI systems
* Evidence of ongoing engagement with AI and security (such as side projects, open-source contributions, lab environments, publications, or conference talks).
* Familiarity with emerging AI security and safety standards and forward-looking industry guidance and horizon reports.
* Relevant certifications (e.g., cloud security, security engineering, or governance) are a plus.
* Strong analytical and problem-solving skills, with the ability to operate effectively in a fast-evolving technical and regulatory landscape.
* High level of integrity and ethical conduct.
This role is a fit if you:
* Regularly build, break, or secure AI/ML or LLM-based systems in your day-to-day work or personal projects.
* Are comfortable reading and writing code, experimenting with new AI tools, and wiring them into real systems.
* Enjoy turning ambiguous AI ideas and risks into concrete architectures, controls, and automation.
* Can move fluidly between deep technical discussions and concise, executive-level explanations.
This role is not a fit if you:
* Prefer to focus solely on policy, governance, or vendor assessments without hands-on technical work.
* Do not actively engage with current AI/LLM tooling, research, and emerging practices.
Questions you should be prepared to discuss:
* "Describe a specific LLM or AI/ML system you have secured. What were the main risks and what controls did you implement?"
* "What AI tools, libraries, or environments do you actively use or experiment with today (work or personal), and for what?"
* "What do you see as the most important AI security or safety developments on the horizon over the next few years, and why?"
The base salary range for this position is $180,000 - $190,000. In addition to a base salary, this position is eligible for a performance bonus and benefits (subject to eligibility requirements) listed here: Teladoc Health Benefits 2026. Total compensation is based on several factors including, but not limited to, type of position, location, education level, work experience, and certifications. This information is applicable for all full-time positions.
We follow a Flexible Vacation Policy, intended for rest, relaxation, and personal time. All time off must be approved by your manager prior to use. You will also receive 80 hours of Paid Sick, Safe, and Caregiver Leave annually. This applies to full-time positions only. If you are applying for a part-time role, your recruiter can provide additional details.
As part of our hiring process, we verify identity and credentials, conduct interviews (live or video), and screen for fraud or misrepresentation. Applicants who falsify information will be disqualified.
Teladoc Health will not sponsor or transfer employment work visas for this position. Applicants must be currently authorized to work in the United States without the need for visa sponsorship now or in the future.
Why join Teladoc Health?
Teladoc Health is transforming how better health happens. Learn how when you join us in pursuit of our impactful mission.
Chart your career path with meaningful opportunities that empower you to grow, lead, and make a difference.
Join a multi-faceted community that celebrates each colleague's unique perspective and is focused on continually improving, each and every day.
Contribute to an innovative culture where fresh ideas are valued as we increase access to care in new ways.
Enjoy an inclusive benefits program centered around you and your family, with tailored programs that address your unique needs.
Explore candidate resources with tips and tricks from Teladoc Health recruiters and learn more about our company culture by exploring #TeamTeladocHealth on LinkedIn.
As an Equal Opportunity Employer, we never have and never will discriminate against any job candidate or employee due to age, race, religion, color, ethnicity, national origin, gender, gender identity/expression, sexual orientation, membership in an employee organization, medical condition, family history, genetic information, veteran status, marital status, parental status, or pregnancy. In our innovative and inclusive workplace, we prohibit discrimination and harassment of any kind.
Teladoc Health respects your privacy and is committed to maintaining the confidentiality and security of your personal information. In furtherance of your employment relationship with Teladoc Health, we collect personal information responsibly and in accordance with applicable data privacy laws, including but not limited to, the California Consumer Privacy Act (CCPA). Personal information is defined as: Any information or set of information relating to you, including (a) all information that identifies you or could reasonably be used to identify you, and (b) all information that any applicable law treats as personal information. Teladoc Health's Notice of Privacy Practices for U.S. Employees' Personal information is available at this link.
Join the team leading the next evolution of virtual care.
At Teladoc Health, you are empowered to bring your true self to work while helping millions of people live their healthiest lives.
Here you will be part of a high-performance culture where colleagues embrace challenges, drive transformative solutions, and create opportunities for growth. Together, we're transforming how better health happens.
Summary of Position
The AI Solutions Specialist is responsible for partnering with business and technology stakeholders to design, integrate, and deliver AI-powered conversational agents and workflow automation solutions across the enterprise. This role leads the technical implementation of AI platforms and agent development tools, ensuring secure, scalable, and compliant solutions that drive productivity and business value. Deep coding expertise is not required; however, the candidate must understand modern technology stacks, AI concepts, and system integration terminology. The ideal candidate will thrive in an evolving, fast-changing environment where AI capabilities and standards continue to mature.
Essential Duties and Responsibilities
- Work closely with business stakeholders to identify automation opportunities.
- Lead the technical setup and integration of the conversational AI platform and agent development studio within the enterprise environment (Copilot agents preferred; deployed across the enterprise, not for personal use).
- Analyze business processes, data flows, and system architectures to support AI solution design.
- Support configuration and deployment of AI-powered agents, applications, and workflows.
- Design, build, and customize AI agents to automate workflows and improve productivity.
- Utilize data platforms such as Microsoft Fabric, Snowflake, Databricks, and AWS for data orchestration, governance, and compliance.
- Ensure seamless interoperability of agents across Microsoft and other enterprise applications as required.
- Evaluate and implement secure API integrations with enterprise systems using APIs and connectors to enable data exchange and workflow automation.
- Apply best practices for data security, identity management, and compliance with organizational and regulatory standards.
- Apply analytical judgment to assess feasibility, scalability, data readiness, and risks of AI use cases.
- Collaborate with cybersecurity and product teams to build robust AI solutions.
- Test new AI agent enhancements, integrations, and fixes prior to release to ensure quality and expected behavior.
- Track and analyze performance metrics, including response quality, speed, reliability, and cost-effectiveness of AI agents and automated workflows.
- Continuously optimize AI solutions based on performance data, user feedback, and evolving business needs.
- Document requirements, solution designs, architecture diagrams, and integration approaches in a clear and concise manner.
- Contribute to internal standards, reusable patterns, and best practices for AI agent and automation development.
- Support knowledge sharing and enablement across technical and business teams.
Qualifications Expected for Position
- Bachelor's degree in Computer Science, Information Systems, Engineering, Data Science, or a related field, or an equivalent combination of education and relevant professional experience.
- Advanced certifications or coursework in cloud platforms, data engineering, or AI/ML preferred.
- 3+ years of experience in solution architecture, systems integration, automation engineering, or applied AI roles.
- 1+ year demonstrated ability to design, build, and deploy AI-powered agents, workflows, or conversational applications.
- Proven experience working directly with business stakeholders to translate operational needs into scalable technical solutions.
- Hands-on experience implementing enterprise automation or conversational AI solutions across multiple departments or use cases.
- Experience operating in regulated or security-conscious environments, supporting compliance and governance requirements.
- Strong experience designing and implementing enterprise system integrations using APIs, connectors, and automation frameworks.
- Experience working with modern data platforms (e.g., Microsoft Fabric, Snowflake, Databricks, AWS) to support data orchestration, access control, and compliance.
- Solid understanding of identity management, access controls, and data security best practices.
- Ability to assess AI solution feasibility, including data readiness, scalability, performance, and cost considerations.
- Strong analytical and problem-solving skills with the ability to apply sound judgment to ambiguous or emerging AI use cases.
- Excellent written and verbal communication skills, with the ability to explain technical concepts to nontechnical audiences.
The base salary range for this position is $130,000 - $140,000. In addition to a base salary, this position is eligible for a performance bonus and benefits (subject to eligibility requirements) listed here: Teladoc Health Benefits 2026. Total compensation is based on several factors including, but not limited to, type of position, location, education level, work experience, and certifications. This information is applicable for all full-time positions.
We follow a Flexible Vacation Policy, intended for rest, relaxation, and personal time. All time off must be approved by your manager prior to use. You will also receive 80 hours of Paid Sick, Safe, and Caregiver Leave annually. This applies to full-time positions only. If you are applying for a part-time role, your recruiter can provide additional details.
As part of our hiring process, we verify identity and credentials, conduct interviews (live or video), and screen for fraud or misrepresentation. Applicants who falsify information will be disqualified.
Teladoc Health will not sponsor or transfer employment work visas for this position. Applicants must be currently authorized to work in the United States without the need for visa sponsorship now or in the future.
Why join Teladoc Health?
Teladoc Health is transforming how better health happens. Learn how when you join us in pursuit of our impactful mission.
Chart your career path with meaningful opportunities that empower you to grow, lead, and make a difference.
Join a multi-faceted community that celebrates each colleague's unique perspective and is focused on continually improving, each and every day.
Contribute to an innovative culture where fresh ideas are valued as we increase access to care in new ways.
Enjoy an inclusive benefits program centered around you and your family, with tailored programs that address your unique needs.
Explore candidate resources with tips and tricks from Teladoc Health recruiters and learn more about our company culture by exploring #TeamTeladocHealth on LinkedIn.
As an Equal Opportunity Employer, we never have and never will discriminate against any job candidate or employee due to age, race, religion, color, ethnicity, national origin, gender, gender identity/expression, sexual orientation, membership in an employee organization, medical condition, family history, genetic information, veteran status, marital status, parental status, or pregnancy. In our innovative and inclusive workplace, we prohibit discrimination and harassment of any kind.
Teladoc Health respects your privacy and is committed to maintaining the confidentiality and security of your personal information. In furtherance of your employment relationship with Teladoc Health, we collect personal information responsibly and in accordance with applicable data privacy laws, including but not limited to, the California Consumer Privacy Act (CCPA). Personal information is defined as: Any information or set of information relating to you, including (a) all information that identifies you or could reasonably be used to identify you, and (b) all information that any applicable law treats as personal information. Teladoc Health's Notice of Privacy Practices for U.S. Employees' Personal information is available at this link.
About Us:
Astiva Health, Inc., located in Orange, CA is a premier health plan provider specializing in Medicare and HMO services. With a focus on delivering comprehensive care tailored to the needs of our diverse community, we prioritize accessibility, affordability, and quality in all aspects of our services. Join us in our mission to transform healthcare delivery and make a meaningful difference in the lives of our members.
SUMMARY:
We are seeking a skilled and adaptable Junior AI/ML Engineer to join our fast-moving team building impactful AI solutions in healthcare. Our work focuses on extracting and interpreting data from unstructured medical documents, improving clinical coding accuracy, streamlining administrative processes, and enhancing patient outreach.
Projects evolve rapidly, ranging from fine-tuning large language models (LLMs) on specialized medical PDFs to optimizing OCR pipelines in Azure, and new challenges emerge regularly. This role suits someone who thrives in ambiguity, enjoys hands-on model development, and wants to directly influence healthcare delivery through applied AI/ML.
ESSENTIAL DUTIES AND RESPONSIBILITIES include the following:
- Design, fine-tune, and optimize large language models (LLMs) and multimodal models for healthcare-specific NLP tasks, such as information extraction, classification, and summarization from clinical documents (e.g., medical charts, patient files, scanned forms).
- Develop and improve document understanding pipelines, including fine-tuning OCR / layout-aware models (especially in cloud environments like Azure AI, Azure Foundry) to handle real-world variability in medical forms, handwriting, and scanned PDFs.
- Build and iterate on end-to-end ML solutions that transform unstructured healthcare data into structured, actionable insights.
- Collaborate closely with clinicians, product managers, data annotators, and engineers to define problems, curate/annotate datasets, evaluate model performance against clinical and business metrics, and iterate quickly.
- Deploy models into production environments (cloud-based inference, batch processing, or API endpoints) with attention to latency, cost, scalability, and healthcare compliance considerations (HIPAA, data privacy).
- Stay current with advancements in LLMs, vision-language models, efficient fine-tuning techniques (LoRA/QLoRA, PEFT), RAG, multimodal AI, and domain-specific healthcare AI research.
- Contribute to a culture of rapid prototyping, rigorous evaluation, and continuous improvement in a dynamic project landscape where priorities can shift based on new opportunities or stakeholder needs.
- Other duties as assigned
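For illustration only (not part of the job description), the "unstructured healthcare data into structured, actionable insights" duty above often boils down to an extraction-and-validation step. A minimal, framework-free sketch in Python; the field names, patterns, and sample text here are invented for the example:

```python
import re
from dataclasses import dataclass

@dataclass
class VisitRecord:
    patient_id: str
    visit_date: str  # kept as ISO-format text for simplicity

def extract_visit(text: str) -> VisitRecord:
    """Pull a patient ID and visit date out of free text, validating both."""
    pid = re.search(r"\bPID[:\s]+(\d{6})\b", text)
    date = re.search(r"\b(\d{4}-\d{2}-\d{2})\b", text)
    if not pid or not date:
        raise ValueError("required fields missing from document")
    return VisitRecord(patient_id=pid.group(1), visit_date=date.group(1))

record = extract_visit("Follow-up visit for PID: 123456 on 2024-03-15, BP stable.")
print(record)  # -> VisitRecord(patient_id='123456', visit_date='2024-03-15')
```

In practice an LLM or OCR/layout model would produce the raw text, and a strict validation layer like this catches malformed outputs before they reach downstream systems.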
REQUIRED TECHNICAL SKILLS:
- Proficiency in Python and familiarity with common ML frameworks (e.g., PyTorch, TensorFlow, scikit-learn)
- Experience applying NLP techniques to unstructured text
- Hands-on experience working with LLMs, including:
  - Prompt design and iteration
  - Using pre-trained models for classification or extraction tasks
- Foundational understanding of model fine-tuning, such as:
  - Fine-tuning transformer models or LLMs for classification or information extraction
  - Adapting existing training scripts or examples to new datasets
- Familiarity with model evaluation metrics (precision, recall, F1) and basic error analysis
- Experience working with labeled datasets and annotation outputs, including reviewing label quality
- Understanding of common ML problem types, including binary and multi-label classification
- Awareness of model bias, label noise, and false positives, with the ability to discuss tradeoffs and mitigation strategies
- Basic understanding of production ML workflows (versioning, reproducibility, monitoring concepts)
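As a quick illustration of the evaluation metrics named in the list above (precision, recall, F1), they can be computed from raw prediction counts in a few lines; the labels below are made up:

```python
def precision_recall_f1(y_true, y_pred, positive=1):
    """Compute precision, recall, and F1 for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy task: 1 = "document contains a diagnosis code", 0 = it does not.
y_true = [1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]
p, r, f = precision_recall_f1(y_true, y_pred)
print(f"precision={p:.2f} recall={r:.2f} f1={f:.2f}")  # -> precision=0.75 recall=0.75 f1=0.75
```

Basic error analysis then means inspecting the specific false positives and false negatives behind those counts, not just the aggregate scores.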
OTHER SKILLS and ABILITIES:
- Hands-on fine-tuning experience with LLMs (e.g., Hugging Face, OpenAI fine-tuning, Azure Foundry), even if limited to small-scale or academic projects
- Exposure to cloud ML platforms (Azure ML, AWS SageMaker, or GCP)
- Familiarity with RAG architectures and retrieval-based grounding
- Experience with NLP libraries (spaCy, Hugging Face Transformers, NLTK)
- Introductory experience with weak supervision or noisy-label learning
- Interest in healthcare or biomedical NLP
- Curiosity about knowledge graphs, ontologies, or structured prediction
- Familiarity with secure data handling practices
- Willingness and ability to learn workflows for sensitive or regulated data (e.g., HIPAA-covered healthcare data), including privacy-aware data handling and secure ML workflows
EXPERIENCE:
- Bachelor’s degree in a related field
- 1–2 years of experience in machine learning, applied NLP, or software engineering
- Demonstrated experience training or fine-tuning ML models, not just using APIs
- Ability to collaborate with senior engineers and domain experts and incorporate feedback
BENEFITS:
- 401(k)
- Dental Insurance
- Health Insurance
- Life Insurance
- Vision Insurance
- Paid Time Off
- Free catered lunches
Role:
Join project teams across the U.S. as the on-site catalyst who turns AI ideas into working reality. Partnering with each project’s AI Champion (Project Manager or Superintendent), you’ll uncover pain points, redesign workflows, and deploy AI agents that cut down reporting, accelerate RFIs, and simplify lookahead planning, progress updates, materials tracking, and more. When needed, you will develop user stories and coordinate development with the central AI Studio. You’ll help advance the vision of the “Construction Site of the Future,” showing how agentic AI will transform project operations.
Location: New Haven, Connecticut
Responsibilities:
- Opportunity hunting and workflow redesign – Lead Lean/Six Sigma discovery workshops; map value streams, assess process and data maturity, and log low-effort/high-impact AI use cases.
- Process and data maturity assessment – Evaluate each jobsite’s current workflows and underlying data; surface gaps that block AI adoption and develop phased improvement plans with Operations Excellence to establish the right process baseline before deploying agents.
- Assess the market solutions – Evaluate off-the-shelf and platform tools; launch pilots, measure impact, and scale wins.
- Rapid AI-agent builds – Convert user stories into production-ready agents in Copilot Studio / Power Apps/Automate, ChatGPT Enterprise, or code-first frameworks within days; wire them to Teams/SharePoint on the front end and Databricks Lakehouse or other sources on the back end.
- Enterprise-grade engineering & LLMOps – Build RAG pipelines backed by Delta tables, Unity Catalog, and Databricks Vector Search; automate infra with GitHub Actions / Posit; monitor latency, cost, adoption, and drift.
- Data integrations – Partner with Data Engineering to design and maintain ETL pipelines, API integrations, and event-driven connectors feeding RAG and agents.
- Cross-cloud orchestration – Blend OpenAI, Azure OpenAI, and AWS Bedrock behind secure custom connectors; package agents for seamless rollout.
- Change enablement – Train crews, gather feedback, iterate, and track adoption and ROI metrics; apply influence model principles to embed agents into daily routines and SOPs, and track behavior change KPIs.
- Stakeholder communication – Brief project leadership and clients on agent impact in clear business terms; contribute use cases and playbooks for “Construction Site of the Future.”
- Escalation & hand-off – Draft clear user stories, data specs, and acceptance criteria for any complex solution that requires the central AI Solution Engineers or Data Engineering / Data Science team to lean in.
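The RAG pipelines named in the responsibilities above all share one core step: retrieving the most relevant chunks for a query by vector similarity before prompting the LLM. A dependency-free sketch with made-up, 3-dimensional embeddings (real systems would use Databricks Vector Search or a comparable store):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Tiny in-memory "vector store": (chunk text, embedding).
store = [
    ("RFI #42 is awaiting structural engineer review.", [0.9, 0.1, 0.0]),
    ("Concrete pour scheduled for Friday, weather permitting.", [0.1, 0.8, 0.2]),
    ("Rebar delivery delayed two days by the supplier.", [0.2, 0.2, 0.9]),
]

def retrieve(query_vec, k=1):
    """Return the top-k chunks ranked by cosine similarity to the query vector."""
    return [text for text, vec in sorted(store, key=lambda e: -cosine(query_vec, e[1]))[:k]]

context = retrieve([0.85, 0.15, 0.05])  # pretend embedding of "status of RFI 42?"
print(context)  # the RFI chunk ranks highest; it would be prepended to the LLM prompt
```

Monitoring such a pipeline for latency, cost, and drift then happens around this retrieval step and the generation call that follows it.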
Qualifications:
- 3+ years in AI engineering / full-stack data applications or data science, including 2+ years building production LLM/RAG solutions.
- Bachelor’s in CS, Engineering, Physics, or a related field; Master’s preferred.
- Prior hands-on work in construction or heavy process industries (manufacturing, oil & gas, chemicals) is a significant plus.
- Demonstrated process excellence background (Lean/Six Sigma Green Belt or equivalent) with experience diagnosing process and data gaps and supporting change management plans with Operations Excellence.
- Strong facilitation and communication skills.
- Hands-on expertise with Copilot Studio, Power Apps/Automate, custom connectors, and CoE Toolkit governance.
- Programming & data stack: Python, SQL, Databricks Lakehouse, vector stores.
- DevOps & IaC: GitHub Actions (or Azure DevOps) and Posit Workbench/Connect automation or comparable CI/CD tooling; strong Git/GitHub workflow discipline.
- Integration & ETL skills: Foundational understanding of ETL/ELT design, Airflow or Databricks Workflows, and REST/GraphQL API development; proven collaboration with Data Engineering on source-to-lake and lake-to-agent pipelines.
- Willing and able to travel and work on active jobsites.
Duration: 11 Months (Contract to hire)
Location: Columbia, SC
Onsite Requirements: Partially onsite 3 days per week (Tue, Wed, Thurs) and as needed.
Standard work hours: 8:00 AM - 5:00 PM
**Credit check will be required**
Job Summary:
Day to Day:
- A typical day will involve a mix of hands-on coding, architectural design, and research.
- The engineer will spend a significant portion of their time in Python, building and optimizing agentic AI systems using frameworks like LangChain.
- This includes integrating these agents with our backend services and deploying them using CI/CD pipelines into our cloud environment.
- They will also be responsible for researching and testing new agentic models and frameworks, monitoring agent behavior in production, and collaborating with data scientists and business stakeholders to refine requirements and ensure the ethical deployment of AI solutions.
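Framework specifics aside, the agentic systems described above follow a common loop: the model chooses a tool, the runtime executes it, and the observation is fed back until the model produces an answer. A toy, LLM-free sketch of that loop (the "model" here is a hard-coded stand-in, and the tool names are invented):

```python
# Registry of tools the agent may call; a real agent exposes these to an LLM.
TOOLS = {
    "lookup_order": lambda order_id: {"order_id": order_id, "status": "shipped"},
}

def fake_model(history):
    """Stand-in for an LLM policy: request a tool once, then answer."""
    if not any(step[0] == "observation" for step in history):
        return ("tool", "lookup_order", "A-17")
    obs = history[-1][1]
    return ("answer", f"Order {obs['order_id']} is {obs['status']}.")

def run_agent():
    """Run the tool-use loop until the model emits a final answer."""
    history = []
    while True:
        step = fake_model(history)
        if step[0] == "answer":
            return step[1]
        _, tool_name, arg = step
        history.append(("observation", TOOLS[tool_name](arg)))  # execute tool, record result

print(run_agent())  # -> Order A-17 is shipped.
```

Frameworks like LangChain or LangGraph provide this loop plus prompt templating, tool schemas, and state management; the control flow underneath is the same.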
Team: The team is an innovative, collaborative, and empowering environment. We are building the next generation of AI solutions for the enterprise in a fast-paced, project-oriented setting. This is a multi-platformed environment that values creativity, continuous learning, and a customer-focused mindset. The new engineer will play a crucial role in shaping our AI strategy and building foundational tools and accelerators that will drive innovation across the company.
Job Requirements:
**This is a new role to establish a core competency in agentic AI systems. This engineer will be pivotal in designing and deploying advanced AI agents and will build the foundational frameworks for future AI use cases across the organization.**
Required Experience:
Required Software and Tools (Hands on experience required):
- Python
- JavaScript/TypeScript
- AI Tools and Libraries (e.g. LangGraph, LangChain, Deep Agents, Claude Skills, etc.)
- AI Models (e.g. Claude, OpenAI, etc.)
- AI Concepts (e.g. Prompt Engineering, RAG, Agentic AI, etc.)
- Distributed SDLC/DevOps (e.g. GitHub, pipelines, VS Code, testing frameworks, etc.)
- Platforms (Container Platforms, Cloud Platforms, Document Databases, AWS)
- API Design
Python & AI/ML Libraries:
- Deep hands-on experience in Python for AI/ML development.
- Generative AI Development: Proven experience developing Gen AI or AI/ML solutions, from use case conceptualization to production deployment.
- Infrastructure & DevOps: Strong understanding of cloud environments (AWS preferred), LLM hosting, CI/CD pipelines, Docker, and Kubernetes.
- Agentic AI Concepts: Knowledge of agentic/autonomous systems (e.g., reasoning, planning, tool use).
Minimum Required Education: Bachelor's degree in Computer Science, Information Technology, or another job-related field; or 4 years of relevant experience; or an Associate's degree plus 2 years of relevant experience
Minimum Required Work Experience: 6 years of application development, systems testing, or other job-related experience.
Required Technologies: 3-6 years of hands-on experience in Artificial Intelligence, Machine Learning, or related fields.
Nice to have/Preferred skills:
- Proficiency in Python development and FastAPI/Flask frameworks, along with SQL.
- Familiarity with agentic AI frameworks and concepts such as LangChain, LangGraph, AutoGen, Model Context Protocol (MCP), Chain of Thought prompting, knowledge stores, and embeddings.
- Experience developing autonomous agents using cloud-based AI services.
- Experience with prompt engineering techniques and model fine-tuning.
- Strong understanding of reinforcement learning, planning algorithms, and multi-agent systems.
- Experience working across cloud platforms (AWS, Azure, GCP) and deploying AI solutions at scale.
Primary Skills: Prompt Engineering (Expert), AI automation (Advanced), AI agents (Expert), Supply chain (Intermediate), no code & low code (Proficient).
Contract Type: W2
Duration: 6 Months with possible extension
Location: Boston, MA
Pay Range: $50.00-$58.49 Per Hour
#LP
Job Summary:
This is a dynamic role for a Business Analyst III, focusing on translating supply chain use cases into automated workflows and AI agents using enterprise no-code/low-code platforms. The ideal candidate will design, build, and maintain AI-powered solutions to streamline processes within a $1.8B supply chain operation, working directly with supply chain teams to co-develop solutions and conduct user acceptance testing. Expectations include managing 5-8 projects concurrently with high autonomy, optimizing AI agent performance, and ensuring solution longevity through detailed documentation.
Key Responsibilities:
- Design and implement automated workflows and AI agents for supply chain tasks.
- Conduct iterative testing and user acceptance testing with supply chain teams.
- Configure workflow logic, decision trees, automation sequences, and integration points for AI functionality.
- Develop hybrid solutions integrating analytics dashboards with AI workflows for process automation.
- Document workflow configurations, prompt patterns, and decisions in detail for non-technical user maintenance.
- Expertise in prompt engineering and AI platform management
- Proficiency in no-code/low-code workflow automation tools
- Deep understanding of AI agent training, context windows, model limitations, and hallucination mitigation
- Basic technical understanding (APIs, data structures, integrations)
- Knowledge of supply chain operations (procurement, inventory management, logistics) is strongly preferred
ABOUT AKRAYA
Akraya is an award-winning IT staffing firm consistently recognized for our commitment to excellence and a thriving work environment. Most recently, we were recognized as Stevie Employer of the Year 2025, SIA Best Staffing Firm to Work For 2025, Inc 5000 Best Workplaces in the US (2025 & 2024), and Glassdoor's Best Places to Work (2023 & 2022)!
Industry Leaders in Tech Staffing
As a talent solutions provider for Fortune 100 organizations, Akraya has earned industry recognitions that solidify our leadership position in the IT staffing space. We don't just connect you with great jobs; we connect you with a workplace that inspires!
Join Akraya Today!
Let us lead you to your dream career and experience the Akraya difference. Browse our open positions and join our team!
Purpose
As a foundational member of our AI Center of Excellence, you will serve as the data science lead for enterprise AI initiatives, architecting and deploying AI solutions that make a meaningful impact across our national retail footprint. The Data Scientist Lead will work with other members of the AI COE and business leadership to identify and execute the highest-impact initiatives and own the data science lifecycle, from hypothesis and feature engineering to model validation and performance monitoring, bridging the gap between cutting-edge AI capabilities and practical business applications.
This role requires a rare blend of deep technical mastery and sharp business acumen to translate complex data into actionable insights and intelligent systems that enhance customer experience, optimize commercial operations, and enable smarter decision-making at scale. The ideal candidate is passionate about retail innovation, thrives in ambiguity, and is energized by the opportunity to shape AI strategy from the ground up.
You’ll Be Successful With
- Bachelor's degree in Data Science, Computer Science, Statistics, Mathematics, or a related quantitative field. Master's degree preferred.
- 5+ years in Data Science or Applied AI roles, preferably in retail or a customer-facing industry.
- Proven track record of moving models from development into production which deliver measurable impact to the business.
- Expert proficiency in Python and SQL. Comfortable with version control.
- Expertise in supervised/unsupervised learning and modern frameworks (e.g. scikit-learn, PyTorch, or TensorFlow).
- Hands-on experience building with LLMs, RAG architectures, prompt engineering, or AI agent development.
- Experience deploying and monitoring models at scale. Familiar with cloud data platforms (e.g. Databricks or Snowflake) and cloud infrastructure (Azure experience a plus).
- You understand how to apply AI to commercial problems such as demand forecasting, customer- and associate-facing applications, personalization, labor optimization, etc.
- The ability to translate "black box" model outputs into clear, actionable insights for business leadership.
- Proven ability to communicate complex technical concepts to non-technical stakeholders and influence decision-making through data-driven storytelling.
- Strong intellectual curiosity with a bias toward action and continuous improvement.
- Demonstrated ability to work autonomously while collaborating effectively within cross-functional teams.
Your Day Consists Of
- Buy-vs-build leadership. Evaluate 3rd-party AI platforms and partnerships. Serve as the technical lead in vetting vendor methodologies and guiding in-house vs. external build decisions.
- Partner with business leadership to identify high-value AI opportunities, defining technical specifications and success metrics that align with enterprise strategy.
- Design, develop, and deploy custom machine learning models (impacting merchandising, commercial, labor, digital, etc.) within the Databricks environment.
- Lead experimentation design, including A/B testing and causal inference, to validate model performance and measure true incremental business lift (ROI).
- Collaborate with data engineers on feature development and with AI developers to wrap models into production-grade APIs and applications.
- Partner closely with the Customer Insights team to ensure model outputs are optimized for consumption within Power BI/DAX, turning complex predictions into actionable insights.
- Establish and enforce MLOps standards across the org, including model versioning, automated retraining, and drift monitoring.
- Serve as the primary ML subject matter expert for the broader Data Science & Insights team. Provide active coaching to analytics leaders, empowering them to identify ML-applicable use cases and effectively incorporate predictive outputs into their own functional workstreams.
- Contribute to the enterprise AI governance framework, ensuring ethical AI practices, data privacy compliance, and model transparency.
- Present findings, recommendations, and project updates to leadership and cross-functional partners in clear, compelling formats.
- Be compliant with all appropriate privacy and security protocols.
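As a sketch of the "measure true incremental business lift" responsibility above, a standard A/B comparison is a two-proportion z-test on conversion counts. The numbers below are invented for illustration:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates (pooled SE)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Control converts 500/10,000; the model-driven treatment converts 570/10,000.
z = two_proportion_z(500, 10_000, 570, 10_000)
lift = 570 / 10_000 - 500 / 10_000
print(f"absolute lift={lift:.3%}, z={z:.2f}")  # |z| > 1.96 => significant at the 5% level
```

Causal-inference methods (difference-in-differences, uplift modeling) extend this when a clean randomized split is not available, but the z-test is the baseline sanity check.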
Working Conditions (travel, hours, environment)
- Limited travel required including air and car travel
- While performing the duties of this job, the employee is occasionally exposed to a warehouse environment and moving vehicles. The noise level in the work environment is typically quiet to moderate.
Physical/Sensory Requirements
Sedentary Work – Ability to exert 10 - 20 pounds of force occasionally, and/or negligible amount of force frequently to lift, carry, push, pull or otherwise move objects. Sedentary work involves sitting most of the time but may involve walking or standing for brief periods of time.
Benefits & Rewards
- Bonus opportunities at every level
- Non-traditional retail hours (we close at 7p!)
- Career advancement opportunities
- Relocation opportunities across the country
- 401k with discretionary company match
- Employee Stock Purchase Plan
- Referral Bonus Program
- 80 hrs. annualized paid vacation (full-time associates)
- 4 paid holidays per year (full-time hourly store associates only)
- 1 paid personal holiday of associate’s choice and Volunteer Time Off program
- Medical, Dental, Vision, Life and other Insurance Plans (subject to eligibility criteria)
Equal Employment Opportunity
Floor & Decor provides equal employment opportunities to all associates and applicants without regard to age, race, color, religion or creed, national origin or ancestry, sex (including pregnancy), sexual orientation, gender, gender identity, disability, veteran status, genetic information, ethnicity, citizenship, or any other category protected by law.
This policy applies to all areas of employment, including recruitment, testing, screening, hiring, selection for training, upgrading, transfer, demotion, layoff, discipline, termination, compensation, benefits and all other privileges, terms and conditions of employment. This policy and the law prohibit employment discrimination against any associate or applicant on the basis of any legally protected status outlined above.
AI Innovation Architect
Location: Hybrid; Ashburn, VA; Springfield, VA; Washington, D.C.
Clearance: U.S. Citizen; Must have an active Top-Secret Clearance or DHS Public Trust Clearance.
InDev is seeking a senior strategic and technical AI Architect responsible for designing, building, and deploying artificial intelligence solutions that support mission outcomes across the homeland security market. In this role you will bridge advanced AI capabilities - including machine learning, natural language processing, and data engineering - with operational requirements, ensuring solutions are secure, scalable, and aligned with the homeland security mission.
YOUR FUTURE DUTIES AND RESPONSIBILITIES
- Define overall system architecture, selecting and governing Artificial Intelligence / Machine Learning (AI/ML) and platform technologies, and ensuring solutions are scalable, secure, and production-ready
- Lead end-to-end technical design, development, and implementation of an agentic AI system to orchestrate user queries across enterprise data sources
- Partner closely with development, DevOps and data engineering teams to translate project requirements into an extensible AI architecture
- Create and promote AI strategies that align with business objectives
- Develop and coordinate POCs to test new technologies
- Evaluate and select appropriate AI tools, frameworks, and platforms (e.g., AWS, Azure, Google) to drive innovation
QUALIFICATIONS
- U.S. Citizen; Active Top-Secret Clearance or DHS Public Trust Clearance
- 8+ years of experience delivering AI solutions across federal agencies
- Bachelor’s degree in Computer Science, Engineering, or Data Science
- Deep understanding of machine learning (ML), deep learning, Natural Language Processing (NLP), and neural networks
- Experience with cloud platforms (AWS, Google Cloud, Azure) and container orchestration tools like Kubernetes and Docker
- Ability to identify high-impact AI use cases and translate them into technical requirements
- Experience designing, building, and deploying advanced AI systems including Generative AI, AI Agents, LLMs, Reinforcement Learning, and computer vision models
- Ability to apply cloud and engineering expertise across AWS, GCP, Kubernetes, Docker, Terraform, Helm, Linux, and AI services, such as SageMaker, Vertex AI, Bedrock, or Gemini
- Experience with Python, agent frameworks, data engineering, APIs/microservices, vector databases, SQL engines, distributed systems, cloud services, RAG
- Experience developing and maintaining AI/ML roadmaps, performing Analysis of Alternatives, and making defensible technical tradeoff decisions
- Experience leading multidisciplinary teams, including data scientists, engineers, and business stakeholders
- Excellent written and oral communication skills
- Ability to tailor and present information across multiple stakeholders
NICE TO HAVES
- Experience integrating AI solutions with SaaS/PaaS platforms (e.g., ServiceNow, Salesforce, etc.)
- Experience implementing virtual agents within SaaS/PaaS platforms (e.g., ServiceNow Virtual Agent, Salesforce Agentforce, etc.)
- Experience with Google Gemini
ABOUT US
At InDev, we’re not just a company; we’re a trailblazing force transforming the way data shapes the future. As a dynamic player in the federal government sector, we’re on a mission to empower agencies with cutting-edge data solutions that drive innovation, efficiency, and progress. Our team thrives on collaboration, innovation, and embracing challenges head-on to create a meaningful impact on the world around us.
WHY INDEV
- Innovative Environment: Join a team that thrives on creativity and innovation, where your ideas are not only heard but encouraged.
- Meaningful Impact: Contribute to projects that directly impact federal agencies, driving positive change on a national scale.
- Dynamic Collaboration: Work alongside diverse experts who are passionate about pushing boundaries and making a difference.
- Agile Mindset: Embrace Agile methodologies that encourage flexibility, adaptability, and rapid growth.
- Learning Culture: Enjoy ongoing learning opportunities and professional development to expand your skill set.
- Cutting-edge Tech: Engage with the latest technologies and tools in the data integration landscape.
If you’re ready to embark on a journey of innovation, collaboration, and impact, InDev welcomes you to join our team. Let’s shape the future together.
Onsite AI Engineer - Construction Industry Focus
New Haven, CT - Onsite 5 days per week
- Initial Assignment: Fully onsite 5 days per week at a construction site in Ft. Myers (FL) or New Haven (CT) for 1 year
- Post-Assignment: Relocation to one of the corporate offices for hybrid employment: Boston, MA (preferred), New York City (NY), New Haven (CT), Herndon (VA), West Palm Beach (FL), or Estero (FL)
Role Summary
You will be the on-site catalyst who turns AI ideas into working reality. Partnering with each project’s AI Champion (Project Manager or Superintendent), you’ll uncover pain points, redesign workflows, and deploy AI agents that cut down reporting, accelerate RFIs, and simplify lookahead planning, progress updates, materials tracking, and more. When needed, you will develop user stories and coordinate development with the central AI Studio. You’ll help advance the vision of the “Construction Site of the Future,” showing how agentic AI will transform project operations.
Responsibilities
- Workflow discovery and redesign: Lead Lean/Six Sigma workshops; map value streams; log high-impact AI agent opportunities that improve field efficiency.
- AI agent development: Build and deploy multiple production-ready AI agents using Copilot Studio, Power Apps/Automate, ChatGPT Enterprise, or code-first frameworks. Integrate agents into Teams/SharePoint on the front end and Databricks Lakehouse or other enterprise data sources on the back end.
- RAG pipelines and LLMOps: Design and operate retrieval-augmented generation (RAG) pipelines with Databricks Delta Tables, Unity Catalog, and Vector Search (or Spark/Hadoop equivalents). Monitor cost, latency, adoption, and model drift.
- Cross-cloud orchestration: Blend OpenAI, Azure OpenAI, and AWS Bedrock services through secure custom connectors to maximize flexibility and adoption.
- Data integration: Partner with Data Engineering to deliver ETL/ELT pipelines, API integrations, and event-driven connectors that feed RAG pipelines and AI agents.
- Change management and adoption: Train field teams, gather feedback, iterate quickly, and embed agents into SOPs. Track usage and ROI with adoption metrics and behavior-change KPIs.
- Stakeholder communication: Translate technical results into business value for leadership and clients. Contribute use cases and playbooks for the “Construction Site of the Future.”
- Compliance and hand-offs: Ensure all AI solutions meet the company’s data governance and security standards. Draft clear user stories and specs for escalation to central AI/Data Engineering teams when necessary.
Qualifications
- 4+ years in AI engineering, data science, or ML-focused software engineering.
- Proven experience building multiple AI agents in production environments.
- 2+ years of hands-on experience with LLMs, RAG pipelines, and LLMOps practices.
- Must have a strong traditional software engineering background in Python.
Bonus Points
- Experience in construction, manufacturing, or other process-heavy industries.
- Advanced degree in a technical field.
The Dell Security & Resiliency organization manages the security risk across all aspects of Dell’s business. You will have an excellent opportunity to influence the security culture at Dell and further develop your career.
Join us as an AI Security Engineer (IAM), Senior Advisor on our Cybersecurity Engineering & Operations team in Round Rock, Texas to do the best work of your career and make a profound social impact.
What you’ll achieve
As an AI Security Engineer, you will be a member of our internal-facing Cybersecurity organization, responsible for contributing your advanced experience and technical skills to Dell's security infrastructure environment. Your focus will be on engineering and operating identity and access management AI tools, which will include engaging and collaborating with internal stakeholders, customers, partners, and vendors.
You will:
Manage processes and technologies to implement identity lifecycle operations for AI agents and service principals, including creation, rotation, revocation, and decommissioning with strong auditability.
Administer RBAC and ABAC policies for agentic workflows and enforce guardrails across model endpoints, data stores, and tool integrations.
Manage secrets and credentials used by AI agents, including rotation schedules, vault policies, and detection of credential misuse.
Collaborate with product teams to capture use cases and translate them into concrete IAM controls for agents, models, and data access paths.
Assist in investigations and incident response involving agentic AI, correlating logs across prompts, actions, tool calls, and data access events.
Take the first step towards your dream career
Every Dell Technologies team member brings something unique to the table. Here’s what we are looking for with this role:
Essential Requirements
Own agent and service identity with modern auth (OAuth 2.0 + OIDC): manage the full lifecycle for AI agents (Joiner/Mover/Leaver), implement machine-to-machine flows, and wire up enterprise IdPs and API gateways (for example, Okta and Kong Gateway). You need to know OAuth and OIDC flows and be able to understand AD/Entra group-backed access (emitting group claims in JWT/SAML with appropriate filtering). Fundamental IAM knowledge (authentication, authorization, PAM) is essential for this job role.
Preferred: light scripting (Python/TypeScript) to automate integrations and reviews
Prove compliance: align agent access with data residency/consent/retention requirements and continuously produce evidence against Dell Agentic AI Standards; work with AI/IAM Architects to maintain current IAM configs, runbooks, and flows as capabilities evolve.
Hands‑on with agent frameworks (LangGraph/LangChain, CrewAI, AutoGen) and/or agent platforms (e.g., Lindy) to understand where policy decision points and policy enforcement points are applied at the right layer, and to understand developer/platform teams.
Experience working with AI governance and MLOps platforms (e.g., DataRobot, Dataiku) supporting approvals, audit trails, and compliance sign‑offs. Strong cross‑functional collaboration skills and the ability to translate requirements into secure IAM and agent architectures in partnership with application, platform, security, and data teams.
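To illustrate the group-claim filtering mentioned in the requirements above: decoding a JWT payload and keeping only agent-relevant group claims needs nothing beyond the standard library. The token, claim names, and group prefix below are made up for the example, and production code must verify the token's signature before trusting any claim:

```python
import base64
import json

def decode_payload(jwt: str) -> dict:
    """Decode the (unverified) payload segment of a JWT."""
    payload_b64 = jwt.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def agent_groups(claims: dict, prefix: str = "ai-agent-") -> list:
    """Keep only the group claims relevant to agent authorization."""
    return [g for g in claims.get("groups", []) if g.startswith(prefix)]

# Build a toy unsigned token: header.payload.signature (signature left empty).
payload = {"sub": "agent-svc-01", "groups": ["ai-agent-readers", "hr-staff", "ai-agent-tools"]}
body = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode().rstrip("=")
token = "eyJhbGciOiJub25lIn0." + body + "."

print(agent_groups(decode_payload(token)))  # -> ['ai-agent-readers', 'ai-agent-tools']
```

In a real deployment the IdP (e.g., Okta or Entra) would emit these group claims, and the filtering policy would live at the gateway or policy enforcement point rather than in application code.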
Desirable Requirements
Bachelor’s degree in Computer Science, Management of Information Systems, Cybersecurity, Information Assurance, or a related field; or equivalent experience
12+ years of information security experience; 4+ years in Identity and Access Management or similar roles
Industry-standard cybersecurity certification from ISC2, SANS, or a similar entity
Compensation
Dell is committed to fair and equitable compensation practices. The compensation range for this position is $152,000 to $196,000.
Benefits and Perks of working at Dell Technologies
Your life. Your health. Supported by your benefits. You can explore the overall benefits experience that awaits you as a Dell Technologies team member - right now at
Who we are
We believe that each of us has the power to make an impact. That’s why we put our team members at the center of everything we do. If you’re looking for an opportunity to grow your career with some of the best minds and most advanced tech in the industry, we’re looking for you.
Dell Technologies is a unique family of businesses that helps individuals and organizations transform how they work, live and play. Join us to build a future that works for everyone because Progress Takes All of Us.
Dell Technologies is committed to the principle of equal employment opportunity for all employees and to providing employees with a work environment free of discrimination and harassment. Read the full Equal Employment Opportunity Policy here .
Job ID: R283934