OpenAI Jobs in USA
76 positions found — Page 3
The Global Account Manager is responsible for developing and maintaining key customer accounts, working both independently and in collaboration with an account team. This role has both strategic and tactical responsibilities. The Global Account Manager ensures that Cohu is positioned for long‑term success with assigned customers by aligning internal resources and coordinating operational execution to meet or exceed customer requirements and expectations.
Essential Functions / Major Responsibilities
• Maintain overall responsibility for managing Cohu’s business and relationships with assigned customer accounts. Collaborate with global cross‑functional teams (Engineering, Manufacturing, Service, Sales, Finance, Operations) to communicate customer expectations and ensure Cohu is meeting bookings targets and achieving desired market share.
• Serve as the primary internal and external contact for customer issues. Lead meetings to define and present technical information and drive delivery schedule communications.
• Schedule and coordinate regular product reviews, management reviews, technology roadmap discussions, and other meetings to understand customer requirements, identify growth opportunities, and influence future business.
• Communicate regularly with customers as their primary point of escalation and incident management. Own customer issues, ensure timely resolution, and escalate to senior management when necessary.
• Build and cultivate strong relationships across multiple levels within customer organizations to achieve strategic selling objectives by influencing key stakeholders.
• Maintain visibility into customer operations and plans. Identify and address potential gaps in Cohu’s performance before they escalate. Monitor and communicate customer strategy shifts that may impact Cohu’s business.
• Create and deliver technical presentations as needed.
• Prepare and distribute regular reports documenting account activities, key events, status updates, and action items.
• Take ownership of customer satisfaction scorecards; address issues with urgency to maintain a high level of customer satisfaction.
• Prepare timely responses to RFQs and RFIs.
• Lead contract negotiations, collaborating with internal stakeholders to define negotiation strategies and achieve optimal results.
• Provide leadership in setting work priorities and schedules across the organization to support customer needs.
• Identify, define, and develop new business opportunities.
• Prepare accurate and timely forecasts.
• Coordinate and host customer meetings and conference calls. Lead or participate in Equipment User Group meetings as appropriate.
Qualifications
Education
• Bachelor’s degree, preferably in Engineering with emphasis in Mechanical, Electrical, or Mechatronics.
Experience
• Minimum of 5 years in the semiconductor equipment industry or related business, serving in a sales, service, or marketing capacity as a supplier or user of back‑end equipment.
• Experience working with customers manufacturing AI-, ML-, or HPC-class semiconductor devices, such as Nvidia, Google, Microsoft, Apple, OpenAI (ChatGPT), or similar advanced computing chipmakers.
Skills / Technical Requirements
• Ability to function successfully in a dynamic, high‑pressure environment while remaining calm, confident, and solutions‑focused.
• Strong interpersonal, communication (written and verbal), and negotiation skills.
• Demonstrated ability to apply situational leadership and collaborate effectively with all levels of internal and external stakeholders.
• Strong organizational and problem‑solving skills.
• Ability to maintain a sense of urgency and motivate cross‑functional teams to achieve objectives.
• Proficiency with Microsoft Office applications, particularly Excel and PowerPoint.
Job Conditions / Physical Demands
• Work is primarily performed in a typical office environment but includes regular time at customer sites and on factory floors.
• Domestic and international travel is required.
Protective Equipment
• Required in designated areas.
With more than 3000 employees worldwide, we offer challenging and rewarding work experiences, generous employee benefits and a strong company culture. If you are looking for a global publicly traded company that provides you with international experience and a challenging work environment, then Cohu is your choice.
Connect with Cohu…
Connect with your future…
Cohu firmly supports the U.S. national and various state and local policies of equal employment opportunity which are designed to provide equality of employment and advancement opportunities to every individual without regard to unlawful considerations of race, color, religion, national origin, citizenship status, ancestry, gender, gender identity or gender expression, age, marital status, sexual orientation, disability, medical conditions, pregnancy, genetic information, military or veteran status or any other legally protected category.
In addition, reasonable accommodations are available to qualified disabled individuals, upon request.
Globally, Cohu is committed to full compliance with all applicable laws and regulations governing employment, in the U.S. and in all other locations around the world where we have operations.
Senior Machine Learning Engineer (GenAI and Agentic Systems)
Location: Palo Alto, CA (Hybrid)
Role Type: Full-Time / Permanent
Summary
Our client, a pioneering HealthTech AI firm in the Bay Area, is seeking a high-calibre Senior Applied AI Engineer to bridge the gap between advanced Machine Learning and robust Software Engineering. This is an end-to-end ownership role: you will be responsible for designing the logic, building the architecture, and deploying the final services.
Core Responsibilities
- Architect AI Workflows: Design and implement sophisticated agentic workflows and automation sequences that power clinical decision-making.
- System Design & Integration: Build the backend infrastructure, scalable REST APIs, and data services required to support high-concurrency AI applications.
- Rapid Deployment: Maintain a high-velocity shipping cycle, moving from prototype to production-grade implementation in days.
- Model Orchestration: Select, fine-tune, and evaluate the performance of various LLMs (including OpenAI, Anthropic, and open-source models) for specific healthcare tasks.
- Full-Stack ML: Own the pipeline from data ingestion and time-series forecasting to real-time classification and model monitoring.
Technical Profile
- Computer Science Mastery: Expert knowledge of algorithms, data structures, and distributed systems.
- Software-Heavy Background: Professional-grade Python skills. You should be comfortable with software design patterns, testing, and CI/CD.
- Machine Learning Fundamentals: deep understanding of core ML topics (classification, regression, and clustering).
- Specific experience in Time Series Forecasting and temporal data analysis.
- Proficiency in Generative AI: RAG architectures, prompt optimization, and agent frameworks.
- Infrastructure: Experience deploying services to cloud environments (GCP preferred) and a solid grasp of MLOps and pipeline automation.
- Education: BS in Computer Science or related field + 4 years of experience, or an MS + 2 years of experience.
Cultural Fit
- Startup Agility: You possess the "scrappiness" to solve problems with limited resources but the rigor to ensure those solutions are enterprise-grade.
- The "Generalist" Mindset: You enjoy working across the entire stack and are not afraid to dive into data engineering or infrastructure when needed.
- Mission-Oriented: You are motivated by the prospect of using AI to significantly improve patient outcomes and healthcare efficiency.
What’s Offered
Our client provides a highly competitive package, including a strong base salary, meaningful equity, and comprehensive premium healthcare benefits. You will join a world-class team of engineers in a collaborative, hybrid environment.
About Us:
Astiva Health, Inc., located in Orange, CA, is a premier health plan provider specializing in Medicare and HMO services. With a focus on delivering comprehensive care tailored to the needs of our diverse community, we prioritize accessibility, affordability, and quality in all aspects of our services. Join us in our mission to transform healthcare delivery and make a meaningful difference in the lives of our members.
SUMMARY:
We are seeking a skilled and adaptable Junior AI/ML Engineer to join our fast-moving team building impactful AI solutions in healthcare. Our work focuses on extracting and interpreting data from unstructured medical documents, improving clinical coding accuracy, streamlining administrative processes, and enhancing patient outreach.
Projects evolve rapidly, ranging from fine-tuning large language models (LLMs) on specialized medical PDFs to optimizing OCR pipelines in Azure, and new challenges emerge regularly. This role suits someone who thrives in ambiguity, enjoys hands-on model development, and wants to directly influence healthcare delivery through applied AI/ML.
ESSENTIAL DUTIES AND RESPONSIBILITIES include the following:
- Design, fine-tune, and optimize large language models (LLMs) and multimodal models for healthcare-specific NLP tasks, such as information extraction, classification, and summarization from clinical documents (e.g., medical charts, patient files, scanned forms).
- Develop and improve document understanding pipelines, including fine-tuning OCR / layout-aware models (especially in cloud environments like Azure AI, Azure Foundry) to handle real-world variability in medical forms, handwriting, and scanned PDFs.
- Build and iterate on end-to-end ML solutions that transform unstructured healthcare data into structured, actionable insights.
- Collaborate closely with clinicians, product managers, data annotators, and engineers to define problems, curate/annotate datasets, evaluate model performance against clinical and business metrics, and iterate quickly.
- Deploy models into production environments (cloud-based inference, batch processing, or API endpoints) with attention to latency, cost, scalability, and healthcare compliance considerations (HIPAA, data privacy).
- Stay current with advancements in LLMs, vision-language models, efficient fine-tuning techniques (LoRA/QLoRA, PEFT), RAG, multimodal AI, and domain-specific healthcare AI research.
- Contribute to a culture of rapid prototyping, rigorous evaluation, and continuous improvement in a dynamic project landscape where priorities can shift based on new opportunities or stakeholder needs.
- Other duties as assigned
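The efficient fine-tuning techniques named in the duties above (LoRA/QLoRA, PEFT) get their efficiency from freezing the base weights and training only a low-rank update. A back-of-the-envelope sketch in plain Python shows the parameter savings; the layer dimensions and rank below are illustrative, not taken from any specific model:

```python
# LoRA replaces a full weight update dW (d x k) with two small factors
# B (d x r) and A (r x k), so only r * (d + k) parameters are trained
# instead of d * k. Dimensions below are illustrative.

def full_update_params(d: int, k: int) -> int:
    """Trainable parameters when fine-tuning the full d x k matrix."""
    return d * k

def lora_params(d: int, k: int, r: int) -> int:
    """Trainable parameters for a rank-r LoRA adapter on the same matrix."""
    return r * (d + k)

# A 4096 x 4096 projection, typical of a ~7B-parameter transformer:
d = k = 4096
r = 8  # a commonly used LoRA rank

full = full_update_params(d, k)  # 16,777,216 params
lora = lora_params(d, k, r)      # 65,536 params
print(f"full: {full:,}  lora: {lora:,}  ratio: {full // lora}x")
```

At rank 8 the adapter trains roughly 0.4% of the parameters of a full update for that layer, which is why fine-tuning on modest hardware becomes practical.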
REQUIRED TECHNICAL SKILLS:
- Proficiency in Python and familiarity with common ML frameworks (e.g., PyTorch, TensorFlow, scikit-learn)
- Experience applying NLP techniques to unstructured text
- Hands-on experience working with LLMs, including:
- Prompt design and iteration
- Using pre-trained models for classification or extraction tasks
- Foundational understanding of model fine-tuning, such as:
- Fine-tuning transformer models or LLMs for classification or information extraction
- Adapting existing training scripts or examples to new datasets
- Familiarity with model evaluation metrics (precision, recall, F1) and basic error analysis
- Experience working with labeled datasets and annotation outputs, including reviewing label quality
- Understanding of common ML problem types, including binary and multi-label classification
- Awareness of model bias, label noise, and false positives, with the ability to discuss tradeoffs and mitigation strategies
- Basic understanding of production ML workflows (versioning, reproducibility, monitoring concepts)
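The evaluation metrics called out above (precision, recall, F1) reduce to a few lines of counting over true/false positives and negatives. A minimal pure-Python sketch for binary labels (a library such as scikit-learn would normally be used in practice):

```python
def precision_recall_f1(y_true, y_pred):
    """Compute precision, recall, and F1 for binary labels (lists of 0/1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy example: did the model flag the right documents?
p, r, f = precision_recall_f1([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
print(f"precision={p:.2f} recall={r:.2f} f1={f:.2f}")
```

Basic error analysis starts from the same counts: false positives drive precision down, false negatives drive recall down, and F1 balances the two.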
OTHER SKILLS and ABILITIES:
- Hands-on fine-tuning experience with LLMs (e.g., Hugging Face, OpenAI fine-tuning, Azure Foundry), even if limited to small-scale or academic projects
- Exposure to cloud ML platforms (Azure ML, AWS SageMaker, or GCP)
- Familiarity with RAG architectures and retrieval-based grounding
- Experience with NLP libraries (spaCy, Hugging Face Transformers, NLTK)
- Introductory experience with weak supervision or noisy-label learning
- Interest in healthcare or biomedical NLP
- Curiosity about knowledge graphs, ontologies, or structured prediction
- Familiarity with secure data handling practices
- Willingness and ability to learn workflows for sensitive or regulated data (e.g., HIPAA-covered healthcare data), including privacy-aware data handling and secure ML workflows
EXPERIENCE:
- Bachelor’s degree in a related field
- 1–2 years of experience in machine learning, applied NLP, or software engineering
- Some demonstrated experience training or fine-tuning ML models, not just calling APIs
- Ability to collaborate with senior engineers and domain experts and incorporate feedback
BENEFITS:
- 401(k)
- Dental Insurance
- Health Insurance
- Life Insurance
- Vision Insurance
- Paid Time Off
- Free catered lunches
Onsite AI Engineer - Construction Industry Focus
New Haven, CT - Onsite 5 days per week
- Initial Assignment: Fully onsite 5 days per week at a construction site in Ft. Myers (FL) or New Haven (CT) for 1 year
- Post-Assignment: Relocation to one of the corporate offices for hybrid employment: Boston, MA (preferred), New York City (NY), New Haven (CT), Herndon (VA), West Palm Beach (FL), or Estero (FL)
Role Summary
You will be the on-site catalyst who turns AI ideas into working reality. Partnering with each project’s AI Champion (Project Manager or Superintendent), you’ll uncover pain points, redesign workflows, and deploy AI agents that reduce reporting overhead, accelerate RFIs, and simplify lookahead planning, progress updates, materials tracking, and more. When needed, you will develop user stories and coordinate development with the central AI Studio. You’ll help advance the vision of the “Construction Site of the Future,” showing how agentic AI will transform project operations.
Responsibilities
- Workflow discovery and redesign: Lead Lean/Six Sigma workshops; map value streams; log high-impact AI agent opportunities that improve field efficiency.
- AI agent development: Build and deploy multiple production-ready AI agents using Copilot Studio, Power Apps/Automate, ChatGPT Enterprise, or code-first frameworks. Integrate agents into Teams/SharePoint on the front end and Databricks Lakehouse or other enterprise data sources on the back end.
- RAG pipelines and LLMOps: Design and operate retrieval-augmented generation (RAG) pipelines with Databricks Delta Tables, Unity Catalog, and Vector Search (or Spark/Hadoop equivalents). Monitor cost, latency, adoption, and model drift.
- Cross-cloud orchestration: Blend OpenAI, Azure OpenAI, and AWS Bedrock services through secure custom connectors to maximize flexibility and adoption.
- Data integration: Partner with Data Engineering to deliver ETL/ELT pipelines, API integrations, and event-driven connectors that feed RAG pipelines and AI agents.
- Change management and adoption: Train field teams, gather feedback, iterate quickly, and embed agents into SOPs. Track usage and ROI with adoption metrics and behavior-change KPIs.
- Stakeholder communication: Translate technical results into business value for leadership and clients. Contribute use cases and playbooks for the “Construction Site of the Future.”
- Compliance and hand-offs: Ensure all AI solutions meet the company’s data governance and security standards. Draft clear user stories and specs for escalation to central AI/Data Engineering teams when necessary.
Qualifications
- 4+ years in AI engineering, data science, or ML-focused software engineering.
- Proven experience building multiple AI agents in production environments.
- 2+ years of hands-on experience with LLMs, RAG pipelines, and LLMOps practices.
- Strong traditional software engineering background in Python.
Bonus Points
- Experience in construction, manufacturing, or other process-heavy industries.
- Advanced degree in a technical field.
Role:
Join project teams across the U.S. as the on-site catalyst who turns AI ideas into working reality. Partnering with each project’s AI Champion (Project Manager or Superintendent), you’ll uncover pain points, redesign workflows, and deploy AI agents that reduce reporting overhead, accelerate RFIs, and simplify lookahead planning, progress updates, materials tracking, and more. When needed, you will develop user stories and coordinate development with the central AI Studio. You’ll help advance the vision of the “Construction Site of the Future,” showing how agentic AI will transform project operations.
Location: New Haven, Connecticut
Responsibilities:
- Opportunity hunting and workflow redesign – Lead Lean/Six Sigma discovery workshops; map value streams, assess process and data maturity, and log low-effort/high-impact AI use cases.
- Process and data maturity assessment – Evaluate each jobsite’s current workflows and underlying data; surface gaps that block AI adoption and develop phased improvement plans with Operations Excellence to establish the right process baseline before deploying agents.
- Market solution assessment – Evaluate off-the-shelf and platform tools; launch pilots, measure impact, and scale wins.
- Rapid AI-agent builds – Convert user stories into production-ready agents in Copilot Studio / Power Apps/Automate, ChatGPT Enterprise, or code-first frameworks within days; wire them to Teams/SharePoint on the front end and Databricks Lakehouse or other sources on the back end.
- Enterprise-grade engineering & LLMOps – Build RAG pipelines backed by Delta tables, Unity Catalog, and Databricks Vector Search; automate infra with GitHub Actions / Posit; monitor latency, cost, adoption, and drift.
- Data integrations – Partner with Data Engineering to design and maintain ETL pipelines, API integrations, and event-driven connectors feeding RAG and agents.
- Cross-cloud orchestration – Blend OpenAI, Azure OpenAI, and AWS Bedrock behind secure custom connectors; package agents for seamless rollout.
- Change enablement – Train crews, gather feedback, iterate, and track adoption and ROI metrics; apply influence model principles to embed agents into daily routines and SOPs, and track behavior change KPIs.
- Stakeholder communication – Brief project leadership and clients on agent impact in clear business terms; contribute use cases and playbooks for “Construction Site of the Future.”
- Escalation & hand-off – Draft clear user stories, data specs, and acceptance criteria for any complex solution that requires the central AI Solution Engineers or Data Engineering / Data Science team to lean in.
Qualifications:
- 3+ years in AI engineering / full-stack data applications or data science, including 2+ years building production LLM/RAG solutions.
- Bachelor’s in CS, Engineering, Physics, or a related field; Master’s preferred.
- Prior hands-on work in construction or heavy process industries (manufacturing, oil & gas, chemicals) is a significant plus.
- Demonstrated process excellence background (Lean/Six Sigma Green Belt or equivalent) with experience diagnosing process and data gaps and supporting change management plans with Operations Excellence.
- Strong facilitation and communication skills.
- Hands-on expertise with Copilot Studio, Power Apps/Automate, custom connectors, and CoE Toolkit governance.
- Programming & data stack: Python, SQL, Databricks Lakehouse, vector stores.
- DevOps & IaC: GitHub Actions (or Azure DevOps) and Posit Workbench/Connect automation or comparable CI/CD tooling; strong Git/GitHub workflow discipline.
- Integration & ETL skills: Foundational understanding of ETL/ELT design, Airflow or Databricks Workflows, and REST/GraphQL API development; proven collaboration with Data Engineering on source-to-lake and lake-to-agent pipelines.
- Willing and able to travel and work on active jobsites.
Role:
The Technical Product Manager, Functional AI, will lead the definition and delivery of AI solutions that transform our core business functions, including Finance, HR, Legal, Marketing, and others. This role bridges functional expertise and technical execution—partnering with business leaders to identify opportunities, shaping requirements into scalable AI solutions, and ensuring adoption that delivers measurable value. The Technical Product Manager will collaborate closely with engineers and data teams to design, pilot, and scale solutions, while maintaining clear visibility into ROI and impact for leadership. Success in this role requires strong product management discipline, applied AI expertise, and the ability to translate complex technical concepts into business outcomes.
Responsibilities:
Product Management & Business Partnership:
- Lead discovery and scoping sessions with business stakeholders across corporate functions (Finance, HR, Marketing, etc.) to identify high-value AI opportunities.
- Build strong relationships with functional leaders to understand workflows, pain points, and success measures.
- Translate business requirements into clear technical requirements that guide design, engineering, and vendor evaluation.
- Drive user experience design by ensuring solutions are intuitive, accessible, and aligned with employee needs.
- Prepare clear documentation of requirements, workflows, and decision rationale to support transparent delivery.
- Lead Agile sprint planning, backlog grooming, and retrospectives to ensure timely and high-quality delivery of product features in collaboration with cross-functional teams.
AI Solution Design & Delivery Support:
- Partner with engineers to shape solution approaches, balancing build/buy/partner considerations.
- Contribute to solution architecture discussions, ensuring designs are scalable, secure, and compliant with standards.
- Collaborate closely with delivery teams to validate functionality against requirements, proactively evaluate feature effectiveness and accuracy, and resolve scope or design ambiguities to ensure product quality and alignment with user needs.
- Support testing, pilot deployment, and adoption efforts, incorporating user feedback into iterative improvements.
- Document and communicate lessons learned, value metrics, and impact stories to demonstrate business outcomes.
Value & Impact Measurement:
- Define success metrics and measurable outcomes for each AI initiative in partnership with business stakeholders.
- Work closely with the Data Analytics team to design and maintain value tracking reports and dashboards.
- Monitor adoption, efficiency gains, and ROI, and proactively identify areas for improvement.
- Present value realization updates to leadership, ensuring clear visibility into the business impact of AI solutions.
Qualifications:
- At least 5 years of experience in technical product management with a minimum of 2 years in AI-related products.
- Bachelor’s or Master’s degree in Computer Science, Physics, Engineering, or a related quantitative field.
- Proven experience with and knowledge of corporate functions (Finance, HR, Legal, Marketing, etc.)
- Exceptional facilitation and communication skills—comfortable running discovery sessions, white-boarding with PMs, and demoing prototypes to senior leaders.
- Demonstrated product-management mindset: roadmap ownership, KPI definition, and budget/risk trade-off communication.
- Hands-on experience leading change initiatives and measuring adoption by teams.
- Strong analytical and problem-solving skills
- Excellent communication and collaboration skills
- Ability to articulate technical concepts to non-technical stakeholders
- Deep understanding of AI applications, tools, and methodologies
- Proven ability to apply AI/ML techniques (e.g., NLP, document intelligence, predictive modeling, generative AI) to solve business problems in corporate functions.
- Hands-on experience with modern AI/ML tools and platforms (e.g., OpenAI, Azure AI, AWS SageMaker, AWS Bedrock or similar).
- Familiarity with the latest trends in AI (e.g., agentic AI, multimodal models, RAG) and ability to evaluate their relevance for client use cases.
**CAN BE HYBRID IN DALLAS, CHICAGO, ATLANTA, NYC, OR FORT WASHINGTON, MD**
Our client is seeking a highly skilled Lead Full Stack Engineer to join their Enterprise Architecture & Engineering team. This is a high-impact role responsible for supporting and enhancing existing web applications while driving innovation through modern cloud and full-stack technologies.
You will lead the design and delivery of scalable, cloud-native solutions using Azure, C#/.NET, React, Next.js, and Node.js, partnering closely with product and senior leadership to shape the technical roadmap.
This role requires deep technical expertise, strong architectural judgment, and proven leadership experience managing distributed (onshore and offshore) engineering teams.
Responsibilities
- Lead the design, development, and ongoing maintenance of full-stack applications using Azure, C#/.NET, React, Next.js, and Node.js
- Provide production support and ensure high availability, performance, and reliability of applications
- Troubleshoot complex production issues and drive durable, long-term solutions
- Architect and implement scalable, secure, cloud-native solutions
- Drive technical evaluations and Proofs of Concept (POCs) for emerging technologies
- Partner with product managers and senior stakeholders to translate business needs into technical solutions
- Establish and enforce architecture standards, coding best practices, and documentation
- Mentor and lead distributed engineering teams to deliver high-quality software on schedule
- Facilitate cross-functional alignment across engineering, product, and business teams
- Maintain architecture diagrams and technical documentation to support knowledge sharing
Required Qualifications
- 8+ years of hands-on software development experience, including experience in a Lead or Senior Full Stack role
Strong expertise in:
- Azure
- C# / .NET
- React and Next.js
- Node.js
- SQL Server and NoSQL databases (e.g., Cosmos DB)
- Solid understanding of microservices and event-driven architectures
- Experience with containerization technologies such as Docker and Azure Service Fabric
- Hands-on experience building and maintaining CI/CD pipelines (Azure DevOps, Jenkins, or similar)
- Strong front-end and back-end fundamentals (JavaScript, HTML5, CSS, REST APIs, database design)
- Knowledge of application and cloud security principles (PKI, cryptography, SSL/TLS, secure protocols)
- Proven experience leading and mentoring onshore/offshore teams
- Excellent communication skills with the ability to influence senior stakeholders
Preferred Qualifications
- Experience deploying large-scale, cloud-native applications on Azure
- Familiarity with Kubernetes or other container orchestration platforms
- Exposure to Python and AI/ML technologies (e.g., Azure Cognitive Services, LangChain, Azure Document Intelligence, Azure OpenAI)
- Understanding of cloud infrastructure and security best practices
- Experience designing and executing technical POCs
Company Description
At Titl, we simplify the real estate process by eliminating paperwork, legal obstacles, and delays associated with buying, owning, or selling a home. Our advanced technology ensures transparency and peace of mind throughout every transaction. We provide a modern and user-friendly way to handle property—designed for today and prepared for future needs.
Role Description
We're seeking an experienced Full-Stack Engineer to join our team working on a sophisticated property data research and report generation platform. This role involves building and maintaining enterprise-grade systems that automate property data extraction from government sources, generate comprehensive property reports, and manage complex business workflows including payments, authentication, and blockchain integration.
What You'll Work On
- Backend Services: Develop and maintain NestJS microservices handling property data scraping, PDF generation, report aggregation, and enterprise account management
- Frontend Applications: Build responsive Next.js applications with complex state management and real-time updates
- Data Pipeline: Work with automated scraping systems using Puppeteer and AI-powered document processing (Google Document AI, OpenAI)
- Integration Development: Implement OAuth flows, Stripe payment processing, webhook handling, and third-party API integrations
- Queue Management: Design and maintain Bull queue systems for background job processing and async workflows
- Blockchain Integration: Work with Polymesh blockchain for property ownership verification and asset tokenization
- Database Design: Create efficient Prisma schemas and optimize PostgreSQL queries for complex property data relationships
Required Technical Skills
Core Stack (Must Have)
- Backend: Advanced proficiency in NestJS with deep understanding of dependency injection, decorators, guards, and service patterns
- Frontend: Expert-level Next.js 14 (App Router) and React with TypeScript
- Database: Strong Prisma ORM experience and PostgreSQL optimization skills
- TypeScript: Production-level TypeScript across full stack
- API Design: RESTful API design, DTOs, validation, and Swagger documentation
Infrastructure & DevOps
- Docker: Container orchestration and development environments
- Cloud Platforms: Google Cloud Platform (Cloud Storage, Cloud Run)
- Queue Systems: Bull or similar job queue systems (Redis-backed)
- Monorepo: Experience with pnpm workspaces or similar monorepo tooling
Authentication & Payments
- OAuth 2.0: Multi-provider authentication (Google, Facebook, LinkedIn)
- JWT: Token-based authentication and authorization patterns
- Stripe: Payment processing, webhooks, subscription management, and usage-based billing
Specialized Skills
- Web Scraping: Puppeteer or similar browser automation tools
- PDF Processing: PDF generation, manipulation, and data extraction
- AI/ML Integration: Experience with AI APIs (OpenAI, Google AI, etc.)
- Background Jobs: Async processing, retry logic, and error handling
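The background-job skills above (async processing, retry logic, error handling) usually combine a bounded attempt count with exponential backoff. A minimal sketch in Python for illustration; the function names and delays are made up, and a queue system like Bull provides this behavior built in:

```python
import time

def with_retries(fn, max_attempts: int = 3, base_delay: float = 0.01):
    """Call fn(); on failure, wait base_delay * 2**attempt and retry.

    Re-raises the last exception once max_attempts is exhausted.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Illustrative flaky job: fails twice with a transient error, then succeeds.
calls = {"n": 0}
def flaky_job():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "done"

print(with_retries(flaky_job))  # "done" after two retried failures
```

Real job queues add jitter to the delay and distinguish retryable (transient) from non-retryable (permanent) errors before re-enqueueing.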
Highly Desired Skills
- Blockchain: Polymesh or Ethereum blockchain integration experience
- Document Processing: OCR, document AI, or legal document processing
- Property/Real Estate Domain: Understanding of property records, deeds, liens, title commitments
- Legal Tech: Experience with legal document workflows or compliance systems
- Testing: Jest, testing-library, E2E testing frameworks
- Performance Optimization: Query optimization, caching strategies, lazy loading
- Security: OWASP best practices, rate limiting, encryption
Architecture & Design Requirements
You should be comfortable with:
- Design Patterns: Service-oriented architecture, repository pattern, factory pattern
- Dependency Injection: Understanding NestJS DI container and module system
- Database Relations: Complex multi-tenant data models with proper isolation
- State Management: React Context, server/client component patterns
- Error Handling: Comprehensive error handling, retry logic, fallback mechanisms
- API Security: Rate limiting, API key management, webhook signature verification
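Webhook signature verification, mentioned in the requirements above, generally means recomputing an HMAC over the raw request body with a shared secret and comparing it to the provider's signature in constant time. A simplified Stripe-style sketch; the secret and payload below are placeholders, and real providers add details such as timestamps in the signed string:

```python
import hashlib
import hmac

def sign_payload(secret: bytes, body: bytes) -> str:
    """Compute the hex HMAC-SHA256 signature a provider would send."""
    return hmac.new(secret, body, hashlib.sha256).hexdigest()

def verify_webhook(secret: bytes, body: bytes, signature: str) -> bool:
    """Recompute the signature and compare in constant time."""
    expected = sign_payload(secret, body)
    return hmac.compare_digest(expected, signature)

secret = b"whsec_example"          # illustrative secret, not a real key
body = b'{"event": "payment.succeeded"}'
sig = sign_payload(secret, body)   # what the provider attaches as a header

print(verify_webhook(secret, body, sig))         # True
print(verify_webhook(secret, b"tampered", sig))  # False
```

Note the constant-time comparison (`hmac.compare_digest`) rather than `==`, which avoids leaking signature bytes through timing differences.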
Experience Requirements
- 5+ years of full-stack development experience
- 3+ years with TypeScript in production environments
- 2+ years with NestJS or similar enterprise Node.js frameworks
- 2+ years with modern React and Next.js
- Experience building production SaaS applications with multi-tenant architecture
- Track record of shipping complex features end-to-end
- Experience with third-party integrations and webhook systems
Domain Knowledge (Preferred)
- Understanding of property data and real estate records
- Familiarity with government data systems and public records
- Knowledge of legal document structures (deeds, liens, mortgages, title commitments)
- Experience with regulated industries and compliance requirements
- Understanding of Miami-Dade County or similar municipal systems (bonus)
Development Practices
You should have experience with:
- Git workflows: Feature branches, pull requests, code review
- Documentation: Writing clear technical documentation and API specs
- Testing: Unit tests, integration tests, E2E tests
- CI/CD: Automated testing and deployment pipelines
- Agile: Working in iterative development cycles
- Code Quality: ESLint, Prettier, TypeScript strict mode
Problem-Solving Skills
We're looking for someone who can:
- Debug complex distributed systems across multiple services
- Optimize database queries and reduce API response times
- Design scalable architectures for high-volume data processing
- Handle edge cases in automated scraping and data extraction
- Troubleshoot integration issues with third-party services
- Implement robust error handling and monitoring
Communication & Collaboration
- Clear written communication for documentation and code reviews
- Ability to explain technical concepts to non-technical stakeholders
- Collaborative approach to problem-solving
- Proactive in identifying and addressing technical debt
- Experience mentoring junior developers (preferred)
Package Manager Note
This project uses pnpm exclusively for monorepo management. Experience with pnpm workspaces is preferred, but npm/yarn monorepo experience transfers well.
What Makes You Stand Out
- Contributions to open-source projects
- Experience with LangChain or LangGraph for AI orchestration
- FastAPI or Python experience (for AI service integration)
- Understanding of title insurance or property ownership verification
- Experience with Puppeteer clusters and browser farm optimization
- Background in fintech or regulated industries
- Experience with multi-environment deployments (local, staging, production)
Working Style
This role requires:
- Attention to detail when working with legal and financial data
- Systematic approach to debugging complex systems
- Ability to work independently on ambiguous problems
- Comfort with reading and understanding existing codebases
- Pragmatic decision-making balancing speed and quality
Tech Stack Summary
NestJS • Next.js • TypeScript • Prisma • PostgreSQL • Puppeteer • Bull • OAuth • Stripe • Google Document AI • OpenAI • Docker • GCP • Polymesh • pnpm
This role offers the opportunity to work on challenging technical problems at the intersection of PropTech, LegalTech, and AI, building systems that handle real-world property data at scale.
About GMI Cloud
GMI Cloud is a fast-growing AI infrastructure company backed by Headline VC and one of only six cloud providers worldwide to earn NVIDIA’s prestigious Reference Platform Cloud Partner designation.
We operate hundreds of megawatts of AI-ready data center capacity across North America and a growing AI Factory footprint in Asia, delivering a full spectrum of services from GPU compute to AI model inference APIs. As one of only six NVIDIA Reference Platform Cloud Partners globally, our infrastructure meets the highest standards for performance, security, and scalability in AI deployments.
We empower AI startups and enterprises to “build AI without limits,” providing everything they need to prototype, train, and deploy AI models quickly and reliably.
Role Overview
The Senior Account Director – Hyperscalers owns GMI Cloud’s relationships with:
- Microsoft Azure AI Infrastructure
- Google Global DC / 3rd-Party DC Ops
- Meta Data Center Strategy & Planning
- Anthropic Infrastructure & Capacity Delivery
- Oracle Cloud Infrastructure (OCI) DC Planning
- OpenAI Compute Infrastructure Strategy and other hyperscaler AI infrastructure teams.
Your mission is to drive multi-year, multi-MW, take-or-pay capacity agreements, enabling those organizations to scale AI compute with GMI’s GPU cloud and data center platform. This is a highly strategic role requiring deep industry knowledge, credibility, and a proven ability to work with Director–VP level infrastructure leaders.
What You Will Do
1. Build and own relationships with hyperscaler infrastructure decision-makers
- Engage Director/VP-level leaders in AI infra, DC planning, capacity engineering, and compute procurement.
- Serve as a trusted partner on AI infrastructure scaling, budget cycles, and long-term planning.
2. Drive multi-year strategic GPU capacity deals
- Structure and negotiate multi-MW GPU cluster contracts across H200, B200, B300, GB200, GB300.
- Lead offer design: capacity reservations, take-or-pay agreements, global region planning.
3. Navigate hyperscaler procurement and internal workflows
- Understand vendor onboarding, RFP/RFQ cycles, security reviews, compliance, and infra governance.
- Work closely with engineering, infra ops, DC build teams, and finance stakeholders on the customer side.
4. Partner internally with GMI SA / Infra / Ops / DC teams
- Align customer requirements with GMI’s global capacity roadmap.
- Coordinate technical validation, POC, readiness checks, and deployment schedules.
- Ensure smooth execution from contracting to delivery.
5. Drive account growth and long-term partnerships
- Develop joint roadmaps with hyperscalers for future MW expansion.
- Build multi-region execution plans across US West, Central, East and APAC/TW AI factories.
- Own quarterly business reviews, strategic planning, and long-term renewals.
What We’re Looking For
Experience & Track Record
- 10+ years in AI cloud, hyperscale infrastructure, data center, HPC, or GPU cloud industries.
- Proven success closing large-scale infrastructure or multi-year capacity deals.
- Existing relationships or prior work with hyperscaler DC / infra teams is a major plus.
Industry Background (ideal pools)
Experience at one or more of the following:
- GPU Cloud / AI Compute
- Energy-to-AI & DC Operators
- Global DC & Colo Providers
Technical & Business Skills
- Strong understanding of AI compute, GPU roadmaps, cluster architecture, IB/RoCE networks, and DC power/cooling fundamentals.
- Familiar with hyperscaler capacity planning, multi-year budgeting, procurement, vendor management, and infra governance models.
- Ability to translate complex technical requirements into clear commercial agreements.
Soft Skills
- Executive communication at Director–VP level.
- Ability to influence cross-functional stakeholders across engineering, infra ops, DC, and finance.
- High integrity, low-ego, strategic, relationship-driven mindset.
Success Metrics
- MW of contracted GPU capacity (take-or-pay / reserved capacity).
- Multi-year revenue from hyperscaler portfolio.
- Region coverage and expansion (US + APAC/TW).
- Deal velocity and execution quality.
- Strategic depth of customer partnership.
Location & Travel
- Bay Area based (preferred).
- ~10% travel to customer sites, DC partners, and industry events.
Hi,
I hope you’re doing well.
My name is Sai, and I’m an Account Manager with Astir IT Solutions. We are currently working with our client on a senior-level opportunity for an Agentic AI QA Engineer in Dallas, TX (Need Locals)!
Based on your background, I believe this role could be a strong fit.
Job Title: Agentic AI QA Engineer
Location: Dallas, TX (Need Locals)
Experience: 7+ years
Position type: Contract W2/C2C
Required Qualifications
• 7+ years in Software QA/Testing, with 2+ years in AI/ML or LLM-based systems; hands-on experience testing agentic/multi-agent architectures.
• Strong programming skills in Python or TypeScript/JavaScript; experience building test harnesses, simulators, and fixtures.
• Experience with LLM evaluation (exact/soft match, BLEU/ROUGE, BERTScore, semantic similarity via embeddings), guardrails, and prompt testing.
• Expertise in distributed systems testing: latency profiling, resiliency patterns (circuit breakers, retries), chaos engineering, and message queues.
• Familiarity with orchestration frameworks (LangChain, LangGraph, LlamaIndex, DSPy, OpenAI Assistants/Actions, Azure OpenAI orchestration, or similar).
• Proficiency with CI/CD (GitHub Actions/Azure DevOps), observability (OpenTelemetry, Prometheus/Grafana, Datadog), and feature flags/canaries.
• Solid understanding of privacy/security/compliance in AI systems (PII handling, content policies, model safety).
• Excellent communication and leadership skills; proven ability to work cross-functionally with Ops, Data, and Engineering.
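To illustrate the embedding-based semantic similarity evaluation named in the qualifications above: scoring usually reduces to cosine similarity between embedding vectors. A sketch with made-up vectors (a real test harness would obtain embeddings from a model API):

```typescript
// Cosine similarity between two embedding vectors — the core metric
// behind embedding-based semantic-match scoring for LLM outputs.
// Vectors here are illustrative; a real harness would call an embedding model.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("dimension mismatch");
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

In an evaluation pipeline, an assertion like `cosineSimilarity(embed(expected), embed(actual)) >= 0.85` replaces brittle exact-match checks, with the threshold tuned per task.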
Preferred Qualifications
• Experience with multi-agent simulators, agent graph testing, and tooling latency emulation.
• Knowledge of MLOps (model versioning, datasets, evaluation pipelines) and A/B experimentation for LLMs.
• Background in cloud (AWS), serverless, containerization, and event-driven architectures.
• Prior ownership of cost/latency/SLAs for AI workloads in production.
If you are currently open to new opportunities, I would appreciate the chance to connect and discuss this role in more detail. Please let me know a convenient time for a quick call, or feel free to share your updated resume.
Looking forward to hearing from you.
Thanks & Regards,
Sai
Sr. Account Manager
Astir IT Solutions, Inc.
ID: , Contact: 732-694-6000 * 795