**Candidate must be willing to go into office 3 days a week**
Senior Full-Stack AI & Data Engineer – Contract
RBA is an established leader and trusted partner for enterprise and mid-size organizations seeking to transform their business through technology solutions. As a Digital and Technology consultancy, we combine strategic insight with technical expertise to deliver impactful, scalable solutions that align with business goals. We take pride in working with some of the most recognized companies in our market—while fostering a culture that blends challenging career opportunities with a collaborative, fun work environment.
We are seeking a Senior Full-Stack AI & Data Engineer to join our growing Data & AI practice, supporting a high-impact client. In this role, you will lead the design and development of end-to-end AI-powered applications that drive personalization, predictive analytics, and next-generation digital experiences.
You’ll partner with business stakeholders, product teams, and engineers to build production-grade AI solutions—from data pipelines and model development to APIs and user-facing applications. The ideal candidate brings deep expertise across the full stack, modern data platforms, and generative AI technologies, with a passion for solving complex business challenges through innovative solutions.
Responsibilities
- Design and develop end-to-end AI-powered applications, including backend APIs and user-facing interfaces, to enable scalable and intuitive AI solutions.
- Build and maintain robust APIs using technologies such as Node.js, NestJS, or FastAPI, and develop modern web applications using React or similar frameworks.
- Develop, fine-tune, and deploy machine learning models using frameworks such as PyTorch and Scikit-learn.
- Implement advanced generative AI solutions, including Retrieval-Augmented Generation (RAG) pipelines and multi-modal AI applications.
- Design and build agentic AI systems using frameworks such as LangChain, enabling multi-step reasoning, tool use, and automation.
- Architect and optimize end-to-end data pipelines (ETL/ELT) using Python, SQL, and orchestration tools such as Airflow.
- Manage and integrate data workflows within Snowflake, leveraging technologies such as Snowpark or Cortex.
- Implement monitoring and observability for AI systems, including tracking model performance, drift, latency, and reliability.
- Design and deploy cloud-native solutions using Docker, Kubernetes, and CI/CD pipelines across AWS, Azure, or GCP.
- Collaborate with business stakeholders to translate data into actionable insights and intelligent applications.
- Contribute to DevOps best practices, including infrastructure-as-code (Terraform) and automated testing.
- Mentor junior engineers and promote best practices in AI ethics, data governance, and code quality.
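As a concrete, minimal illustration of the RAG responsibility above, the sketch below retrieves context and assembles a prompt using toy bag-of-words vectors. A production pipeline would use a learned embedding model, a vector store, and an LLM call; every document, query, and function name here is a hypothetical stand-in:

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; real pipelines use a learned embedding model.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Retrieved context is prepended to the question before calling an LLM.
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Snowflake stores product and order data for analytics.",
    "The mobile app is built with React Native.",
    "Airflow orchestrates the nightly ETL pipelines.",
]
print(build_prompt("What orchestrates the ETL pipelines?", docs))
```

The retrieval step is where vector databases and semantic search (listed under Preferred Qualifications) replace the naive cosine loop shown here.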
Requirements
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
- 5+ years of experience across full-stack development, including backend (Node.js/Python) and frontend frameworks (React or similar).
- Strong experience designing and building data pipelines and modern data platforms, including expertise in SQL and data modeling.
- Proven experience deploying AI/ML solutions in production environments, including MLOps and model lifecycle management.
- Hands-on experience with generative AI technologies, including LLMs, prompt engineering, and RAG architectures.
- Experience with cloud platforms such as AWS, Azure, or GCP.
- Strong understanding of DevOps practices, including CI/CD, containerization, and infrastructure-as-code (Terraform).
- Excellent communication skills and ability to work effectively in client-facing environments.
Preferred Qualifications
- Experience with Snowflake, including Snowpark, Cortex, or similar data platform capabilities.
- Experience building agent-based AI systems or working with frameworks such as LangChain.
- Familiarity with vector databases and semantic search architectures.
- Experience developing mobile applications using React Native or Flutter.
- Knowledge of mobile architecture, UI/UX principles, and API integration patterns.
- Experience deploying applications to Apple App Store or Google Play Store.
- Familiarity with security and authentication protocols, including OAuth2, biometric authentication, and secure data handling.
- Cloud or data platform certifications (AWS, Azure, GCP, Snowflake, or similar).
Leadership & Culture
- Demonstrate leadership through mentorship, technical guidance, and promoting engineering best practices.
- Balance innovation with pragmatism—able to work across cutting-edge AI solutions and foundational data engineering tasks.
- Thrive in a collaborative, fast-paced consulting environment with a strong focus on client impact and delivery excellence.
APN Consulting, Inc. is a progressive IT staffing and services company delivering innovative business solutions that drive meaningful client outcomes.
We specialize in high-impact technology areas including ServiceNow, Full Stack Development, Cloud & Data, and AI/ML.
As we continue to expand our global service offerings, we are seeking top talent to join our growing team.
Job Title: SDET Lead
Location: Fort Mill, SC / Austin, TX
Duration: 6–12 months
Job Responsibilities & Requirements
Must-Have Skills:
- Strong experience with AWS
- Hands-on experience with GitHub Copilot and Generative AI tools
- Expertise in Camunda and BPM (Business Process Management)
- Proficiency in Java
- Experience with Selenium and test automation frameworks
Nice-to-Have Skills:
- Experience with AWS Glue jobs
- Familiarity with Amazon S3
Additional Requirements:
- Strong communication and collaboration skills
- Ability to work effectively in a team environment
- Adherence to compliance standards and office schedules
Diversity & Inclusion Statement
We are committed to fostering a diverse, inclusive, and equitable workplace where individuals from all backgrounds feel valued and empowered to contribute their unique perspectives.
We strongly encourage applications from candidates of all genders, races, ethnicities, abilities, and experiences to help us build a culture of belonging.
Work Location: Open to Chicago or Tempe
Work Model: Hybrid - onsite 3 days per week
Duration: 8 Months with possibility of extension
Project Overview / Contractor's Role:
We are looking for a skilled full-stack cloud engineer with deep experience in AWS, serverless computing, data ETL pipelines, and relational databases.
Experience Level: Senior - Level 3
Qualifications (must haves):
* 5+ years of experience working on distributed architecture in AWS or another cloud provider
* 5+ years of experience with TypeScript; Go (Golang) experience is a bonus
* Comfortable working in existing systems and integrating across systems, such as between on-premises services and the cloud
* Experience with Terraform or another IaC language is preferred; this person should be comfortable working in CI/CD pipelines and with infrastructure
* 2+ years of experience in an enterprise organization
Preferred Qualifications:
* Go (Golang) experience
* Experience working on financial applications
* Experience working with data visualization applications
Required Technical Skills:
* TypeScript
* Cloud architecture
* Microservice architecture
* Serverless computing
* Relational databases
* ETL pipelines
Tasks & Responsibilities:
* End to end ownership of the application (typescript/sql) and infrastructure code (terraform).
* Attending all design and product meetings and providing technical insight into implementation.
* Navigating an enterprise organization that requires collaboration across many teams.
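The skills above (serverless computing, relational databases, ETL pipelines) are listed rather than explained; as a language-neutral illustration of the ETL shape in particular, here is a stdlib-only Python sketch. The role itself is TypeScript-first, and all table and field names below are invented:

```python
import sqlite3

def extract() -> list[dict]:
    # Stand-in for pulling records from an upstream source (S3, an API, a queue).
    return [
        {"id": "1", "amount": "19.99", "currency": "USD"},
        {"id": "2", "amount": "bad", "currency": "USD"},  # malformed row
        {"id": "3", "amount": "5.00", "currency": "EUR"},
    ]

def transform(rows: list[dict]) -> list[tuple]:
    # Coerce types and drop rows that fail validation.
    out = []
    for r in rows:
        try:
            out.append((int(r["id"]), float(r["amount"]), r["currency"]))
        except ValueError:
            continue  # in practice, route bad records to a dead-letter store
    return out

def load(rows: list[tuple], conn: sqlite3.Connection) -> int:
    # Write the cleaned rows into a relational table and report the row count.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS payments "
        "(id INTEGER PRIMARY KEY, amount REAL, currency TEXT)"
    )
    conn.executemany("INSERT OR REPLACE INTO payments VALUES (?, ?, ?)", rows)
    conn.commit()
    return conn.execute("SELECT COUNT(*) FROM payments").fetchone()[0]

conn = sqlite3.connect(":memory:")
print(load(transform(extract()), conn))  # 2 rows survive validation
```

In a serverless AWS setup, each stage above typically maps to a Lambda function or Glue job, with the Terraform-managed infrastructure wiring them together.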
Duration: 10+ Months
Location: Remote
Overview
We are seeking an experienced Solution Architect to lead the enterprise rollout of Microsoft Purview across a complex global, multi-cloud environment. The consultant will define the architecture, implement domain-based governance, and drive adoption of Purview capabilities including cataloging, lineage, classification, access governance, and compliance controls.
Key Responsibilities
- Architecture & Implementation
- Define target-state architecture for Microsoft Purview across Azure, AWS, M365, on-premises, and third-party platforms.
- Develop and drive the implementation roadmap across U.S. Businesses, PGIM, Corporate Technology, and international units.
- Establish Purview reference architecture, integration patterns, and guardrails.
- Domain-Based Governance
- Design collections, hierarchies, and RBAC aligned to domain structures and legal entity boundaries.
- Enable domain-owned stewardship while enforcing enterprise taxonomies and governance standards.
- Platform Configuration
- Configure Data Map, Catalog, Scans, Classifications, Sensitivity Labels, and Lineage.
- Optimize scan strategy (frequency, cost, performance) and extend classifiers and metadata models.
- Security & Compliance
- Integrate Purview with M365 Information Protection, Entra ID, and security baselines.
- Support PII/PCI/PHI detection, access governance, and regulatory compliance (SOX, GLBA, NYDFS, GDPR).
- Engineering & Integration
- Integrate with Synapse, Fabric, Databricks (including Unity Catalog), Snowflake, SQL Server, AWS sources, and SAP/Oracle.
- Implement IaC (Bicep/Terraform), CI/CD for Purview artifacts, and automation via APIs.
- Adoption & Stakeholder Management
- Deliver training, onboarding playbooks, and steward enablement.
- Lead workshops for new data domains and products.
- Provide executive-level reporting on progress, risks, and KPIs.
Required Qualifications
- 10+ years in data architecture/governance; 2+ years of hands-on Purview experience at enterprise scale.
- Strong expertise in metadata management, lineage, classification, scan optimization, glossary management, and domain-based operating models.
- Solid Azure ecosystem knowledge (Storage, Key Vault, Synapse, Fabric, Databricks), M365 Information Protection, and Entra ID.
- Experience with IaC (Bicep/Terraform), APIs/Atlas, and scripting (PowerShell/Python).
- Financial services or regulated industry exposure.
- Excellent communication, stakeholder leadership, and cross-domain facilitation skills.
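As a toy illustration of the classification work this role covers, the sketch below mimics a data-map scan with regex classifiers. These patterns and labels are invented and far too naive for real PII/PCI/PHI detection; Purview ships managed classifiers and sensitivity labels rather than hand-rolled regexes:

```python
import re

# Illustrative regex classifiers only; not suitable for production detection.
CLASSIFIERS = {
    "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify(value: str) -> set[str]:
    """Return the labels whose pattern matches this value."""
    return {label for label, rx in CLASSIFIERS.items() if rx.search(value)}

def scan_rows(rows, columns):
    """Aggregate labels per column, like a (very simplified) data-map scan."""
    findings = {c: set() for c in columns}
    for row in rows:
        for col, value in zip(columns, row):
            findings[col] |= classify(str(value))
    return findings

rows = [("alice@example.com", "123-45-6789"), ("bob@example.org", "n/a")]
print(scan_rows(rows, ["contact", "tax_id"]))
```

A real rollout would instead configure Purview scans, extend its built-in classifiers, and surface results through the Data Map and Catalog; this sketch only shows the classify-and-aggregate shape of that process.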
We're building safety-enhancing technology for aviation that will save lives. Automated aviation systems will enable a future where air transportation is safer, more convenient and fundamentally transformative to the way goods - and eventually people - move around the planet. We are a team of mission-driven engineers with experience across aerospace, robotics and self-driving cars working to make this future a reality.
As a Client Platform Engineer, you will be a part of our IT and Security team. Your work will have a key impact on the development of Reliable Robotics' internal infrastructure and processes. You will work as part of the IT and Security team towards building, deploying and managing tools. You will be working on building or improving existing automation to support standardization efforts, with a focus on end-user facing endpoints.
Responsibilities
In your role as a Client Platform Engineer, you will be responsible for the design and implementation of tools supporting the business's internal needs and standards, as well as for the IT-related needs of our end users. You will help design and implement tools and their supporting infrastructure to improve the daily experience of users across the entire business. This includes managing SaaS services, physical infrastructure, and cloud infrastructure to support the needs of the teams at Reliable. You will also identify additional opportunities for standardization, improve the user experience, and train various teams, including the IT/Security team you will be part of, for redundancy when new tooling is introduced.
Basic Success Criteria
5+ years of experience supporting business needs in IT
Excellent troubleshooting skills
Basic networking skills and understanding of basic network services (VLAN/routing/Wi-Fi/DHCP/DNS/etc.)
Previous management of in-office infrastructure (Wi-Fi, printing, etc.)
Experience with MDM platforms to manage Apple devices (macOS, iOS, iPadOS) and Windows 11 devices
Experience with configuration management systems such as Puppet (preferred), Chef or Ansible
Scripting skills in various languages (Python, Bash, PowerShell)
Understanding of SSL certificates and PKI infrastructure
Preferred Success Criteria
Ability to explain complex concepts to users or colleagues
Ability to troubleshoot problems remotely (over Slack, Google Meet, or others)
Familiarity with Google Workspace management (access and management of Google services, integration of Google with other services, and the security mechanisms available in those environments)
Experience with virtualization/containerization in AWS and GCP, as well as KVM, LXC, and Docker
Development tooling experience (Git, GitHub, VS Code, pip3)
Experience with AWS services (IAM, EC2, S3, Route 53, SSO, Lambda, etc.)
Application (re)packaging and management (msi/pkg/deb)
Experience in environments where data management, classification and handling is critical
Experience with end-user support for software and computing devices
Mobile device management and BYOD policies
This position is based at our facility in Mountain View, California.
Must be willing to travel up to 10% of the time.
This position requires access to information that is subject to U.S. export controls. An offer of employment will be contingent upon the applicant's capacity to perform in compliance with U.S. export control laws.
All applicants are asked to provide documentation that legally establishes status as a U.S. person or non-U.S. person (and nationalities in the case of a non-U.S. person). Where the applicant is not a U.S. person, meaning not a (i) U.S. citizen or national, (ii) U.S. lawful permanent resident, (iii) refugee under 8 U.S.C. § 1157, or (iv) asylee under 8 U.S.C. § 1158, or not otherwise permitted to access the export-controlled technology without U.S. government authorization, the Company reserves the right not to apply for an export license for such applicants whose access to export-controlled technology or software source code requires authorization and may decline to proceed with the application process and any offer of employment on that basis.
At Reliable Robotics, our goal is to be a diverse and inclusive workforce. As an Equal Opportunity Employer, we do not discriminate on the basis of race, religion, color, creed, ancestry, sex, gender (including pregnancy, childbirth, breastfeeding, or related medical conditions), gender identity, gender expression, sexual orientation, age, non-disqualifying physical or mental disability or medical conditions, national origin, military or veteran status, genetic information, marital status, or any other basis covered by applicable law. All employment and promotion is decided on the basis of qualifications, merit, and business need.
If you require reasonable accommodation in completing an application, interviewing, completing any pre-employment testing, or otherwise participating in the employee selection process, please direct your inquiries to
Compensation Range: $125K - $185K
Apply for this Job
Business Area: Engineering
Seniority Level: Associate
Job Description:
At Cloudera, we empower people to transform complex data into clear and actionable insights. With as much data under management as the hyperscalers, we're the preferred data partner for the top companies in almost every industry. Powered by the relentless innovation of the open source community, Cloudera advances digital transformation for the world's largest enterprises.
At Cloudera, our Data Services Pillar is the heart of data innovation. We don't just work with technology; we build it. Our mission is to empower data practitioners by creating seamless, enterprise-grade experiences for data engineering, warehousing, streaming, operational databases, and AI.
You will be a key member of the NFQE (Non-Functional QE) team that drives the performance and reliability of Cloudera's Kubernetes-hosted data services. The role blends deep technical knowledge of performance testing, distributed data workloads, and container orchestration with a data-driven mindset. You'll design, automate, run, and analyze performance tests for Cloudera's flagship services, ensuring they meet or exceed customer-defined SLOs/SLAs at scale.
As a Performance Engineer, you will:
Work with internal development teams and the open source community to proactively drive performance improvements/optimizations across our data warehouse and Data Engineering stack.
Work with product managers, developers and the field team to understand performance and scale requirements, and develop benchmarks based on these requirements.
Develop automation to execute benchmarks, collect and aggregate metrics and profiles, and report results, trends, and regressions.
Analyze performance and scalability characteristics to identify bottlenecks in large-scale distributed systems.
Perform root cause analysis of performance issues identified by internal testing and from customers and suggest corrective actions.
Evaluate performance of systems and provide related guidance to the team.
We are excited about you if you have:
3+ years of industry experience in performance-related work, ideally on large-scale distributed systems
Understanding of DBMS algorithms and data structure fundamentals.
Understanding of hardware trends and full-stack systems performance: CPU, RAM, storage, network, Linux kernel, JVM, and distributed systems performance.
Understanding of performance analysis tools and techniques.
Strong design, coding skills, and test automation skills (Java/C++/Golang/Python preferred)
Knowledge of relevant frameworks, cloud providers, Kubernetes (K8s), etc.
Ability to work in a distributed setting with team members spread in multiple geographies
Demonstrated ability to work on large cross-functional projects, including strong written communication skills and a collaborative mindset, as you will be working with many teams inside and outside of Cloudera.
Experience with benchmark and performance test design. You should understand basic concepts of performance testing, including different types of performance tests (microbenchmarks, end-to-end benchmarks, concurrency and scale testing), how to reduce (or deal with) noise in test results, etc.
Experience designing performance tests that provide useful insights into specific aspects of performance.
Solid understanding of basic performance theory - in particular a very good understanding of latency, throughput, and concurrency and how they relate to each other.
Strong understanding of the types of workloads you'll be testing; ideally, specific experience creating performance tests for the product area you'll be working on (SQL, ML, etc.).
B.S. or M.S. in Computer Science or equivalent experience.
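To ground the latency/throughput/concurrency relationship mentioned above, here is a minimal stdlib-only Python harness. The workload is a stand-in sleep, not a real Cloudera service, and the shape of the result dictionary is an invented example:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def timed(op):
    """Run one operation and return its wall-clock latency in seconds."""
    start = time.perf_counter()
    op()
    return time.perf_counter() - start

def benchmark(op, requests: int = 200, concurrency: int = 8) -> dict:
    # Issue `requests` operations across `concurrency` workers and
    # summarize throughput plus latency percentiles.
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = list(pool.map(lambda _: timed(op), range(requests)))
    elapsed = time.perf_counter() - start
    latencies.sort()
    return {
        "throughput_rps": requests / elapsed,
        "p50_s": statistics.median(latencies),
        # quantiles(n=100) yields percentile cut points; index 98 is p99.
        "p99_s": statistics.quantiles(latencies, n=100)[98],
    }

# Stand-in workload; a real harness would issue SQL queries or API calls.
result = benchmark(lambda: time.sleep(0.001))
print(result)
```

By Little's law, steady-state concurrency is roughly throughput times mean latency, which is why the three metrics above should be read together rather than in isolation.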
You might also have:
Experience with the Hadoop ecosystem (e.g., Hive, Impala, Spark), in particular prior work on large-scale data lakehouse or data warehouse performance
Hands-on experience with containerization, Kubernetes, public cloud infrastructure (AWS, Azure and/or GCP) and mesh-networks
Certifications: CKA/CKAD, AWS Solutions Architect, GCP Cloud Architect, Azure Solutions Architect, or equivalent.
Security & Compliance: Experience writing performance tests that also verify data privacy and audit compliance (e.g., GDPR, HIPAA).
Why this role matters:
This is your opportunity to build cloud-native solutions that are deployable anywhere whether in massive clusters on any cloud provider or in private data centers. You'll work with cutting-edge technologies like Trino, Spark, Airflow, and advanced AI inferencing systems to shape the future of analytics. Your code will directly influence how data engineers, analysts, and developers worldwide find value in their data.
We believe in the power of open source. You'll collaborate with project committers, contributing upstream to keep technologies like Apache Hive and Impala evolving. You'll harden these engines for rock-solid security, optimize them for peak performance, and make them effortlessly run across all environments. Join us and help build the trusted, cloud-native platform that powers insights for the most data-intensive companies on the planet.
This position is not eligible for sponsorship.
The expected base salary range for this role in California is $124,000 - $155,000.
The salary will vary depending on your job-related skills, experience and location.
What you can expect from us:
Generous PTO Policy
Support work life balance with Unplugged Days
Flexible WFH Policy
Mental & Physical Wellness programs
Phone and Internet Reimbursement program
Access to Continued Career Development
Comprehensive Benefits and Competitive Packages
Paid Volunteer Time
Employee Resource Groups
EEO/VEVRAA
Location: Newark, NJ (Hybrid)
Duration: 12 Months
Role Overview
Client is seeking a Senior Business Analyst to support the Product Enablement and Contract Automation initiative. This role is focused on enabling automated contract generation by establishing accurate, validated, and structured product data that serves as a single source of truth for Group Insurance product offerings.
The Senior Business Analyst works closely with business, product, and technology partners to translate contract and product intent into clear data, mapping, and process requirements that support integration between AWS Cloud, APIs, and SharePoint and OpenText Content Web Document Services (CWDS).
Key Responsibilities
- Partner with Group Insurance business, product, and technology stakeholders to understand contract automation objectives
- Identify, document, and validate field-level data elements required for automated contract generation
- Create data mapping specifications including transformation rules, validation criteria, and business logic
- Leverage AI-assisted tooling to accelerate data discovery, mapping analysis, and documentation
- Facilitate working sessions with business partners to validate data definitions, mappings, and contract logic
- Document end-to-end document generation workflows, including system interactions and exception handling
- Translate validated requirements into consumable artifacts for engineering and quality teams
- Support User Acceptance Testing (UAT) and implementation readiness activities
- Communicate risks, dependencies, and decisions across cross-functional teams
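The data-mapping responsibilities above can be sketched as a small, executable mapping spec with transformation rules and validation criteria. All field names, transforms, and rules here are invented for illustration; the real Group Insurance specification would be far richer:

```python
# Each entry: target field -> (source field, transform, validation rule).
# Hypothetical fields and rules, not the client's actual contract data.
MAPPING_SPEC = {
    "plan_code":   ("PlanCd",  str.strip, lambda v: len(v) > 0),
    "benefit_pct": ("BenPct",  float,     lambda v: 0 <= v <= 100),
    "state":       ("IssueSt", str.upper, lambda v: len(v) == 2),
}

def map_record(source: dict) -> tuple[dict, list[str]]:
    """Apply the mapping spec to one source record, collecting validation errors."""
    target, errors = {}, []
    for field, (src, transform, valid) in MAPPING_SPEC.items():
        try:
            value = transform(source[src])
        except (KeyError, ValueError):
            errors.append(f"{field}: missing or malformed source '{src}'")
            continue
        if not valid(value):
            errors.append(f"{field}: failed validation")
        else:
            target[field] = value
    return target, errors

record, errs = map_record({"PlanCd": " GI-01 ", "BenPct": "60", "IssueSt": "nj"})
print(record, errs)
```

A spec in this shape doubles as a consumable artifact for engineering and QA: each row states the source field, the transformation rule, and the validation criterion in one place.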
Required Qualifications
- 5+ years of experience as a Business Analyst or Business Systems Analyst
- Strong experience with data mapping, data validation, and integration-driven solutions
- Proven ability to validate requirements and outcomes with business partners
- Strong analytical, facilitation, and communication skills
Preferred Qualifications
- Experience supporting contract automation or document generation initiatives
- Familiarity with AWS Cloud, APIs, and SharePoint or other document management and content services platforms
- Experience leveraging AI tools to support analysis and requirements documentation
Software Engineer – Entry to Mid-Level (R&D Systems)
Novateur stands for Innovation. We value creativity, vision, collaboration, and above all, ambition to innovate. Novateur Research Solutions is an R&D firm located in Northern Virginia, developing intelligent systems that push the boundaries of computer vision, AI, and large-scale learning.
We are looking for Software Engineers eager to build scalable systems and deploy machine learning models in real-world environments. You will work closely with our researchers and engineers to develop software for real-time perception, geospatial analytics, and distributed systems.
Responsibilities:
• Develop and deploy production-grade software in Python and C++.
• Build APIs, data pipelines, and visualization tools to support machine learning workflows.
• Collaborate with researchers to translate algorithms into efficient implementations.
• Contribute to system design, cloud deployment (AWS), and automation.
Requirements:
• BS or MS in Computer Science, Engineering, or a related field.
• Proficiency in modern programming and software engineering practices.
• Familiarity with Docker, Kubernetes, or AWS.
• Enthusiasm for learning and applying machine learning or computer vision methods.
• U.S. Citizen or Permanent Resident.
Why Novateur?
Join a team that values creativity and initiative. Our engineers have freedom to innovate, collaborate with top researchers, publish research in major scientific conferences, and see their ideas deployed in impactful applications.
Company Benefits:
Novateur offers competitive pay and benefits comparable to Fortune 500 companies that include a wide choice of healthcare options with generous company subsidy, 401(k) with generous employer match, paid holidays and paid time off increasing with tenure, and company paid short-term disability, long-term disability, and life insurance.
We offer a work environment which fosters individual thinking along with collaboration opportunities within and beyond Novateur. In return, we expect a high level of performance and passion to deliver enduring results for our clients.
Position - Network Architect
Location: Denver, CO (Hybrid)
Long Term Contract
Unable to provide sponsorship for this role.
Job Description:
As a Network Architect, you will be responsible for the strategic design, planning, and security of our organization's network infrastructure. You will oversee the integration of advanced technologies to support the demands of both our on-premise data centers and cloud services, ensuring a resilient, scalable, and highly secure network.
Responsibilities
- Lead the design and development of network architectures, roadmaps, and technical specifications for enterprise-level networks, encompassing on-premise data centers and hybrid cloud environments.
- Serve as the subject matter expert for all network-related technologies, providing technical leadership and guidance to engineering and operations teams.
- Evaluate, recommend, and integrate new technologies such as Software-Defined Networking (SDN) and Software-Defined Wide Area Networking (SD-WAN) to improve network agility and efficiency.
- Design and implement robust network security solutions, including the configuration of firewalls, intrusion detection/prevention systems (IDS/IPS), and other security fabric components.
- Manage network segmentation and isolation strategies to protect sensitive data and critical systems, ensuring compliance with security standards.
- Design and manage network load-balancing solutions to ensure high availability, optimal performance, and efficient traffic distribution across the network.
- Oversee the implementation of routing and switching protocols (e.g., OSPF, BGP) to ensure network stability, performance, and scalability across multi-site and global environments.
- Ensure the design of secure interconnectivity between on-premise infrastructure, cloud services (e.g., AWS, Azure), and remote access points.
- Develop comprehensive network documentation, including diagrams, topologies, and implementation plans, and provide ongoing support and troubleshooting for complex network issues.
- Collaborate with cross-functional teams, including cybersecurity, cloud engineering, and application development, to align network architecture with business goals and security requirements.
- Stay current on emerging networking and security trends, technologies, and best practices to drive continuous improvement and innovation.
Required skills and qualifications
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- 10+ years of progressive experience in network engineering, with at least 5 years in a network architect role designing enterprise-level networks.
- Expert-level knowledge and hands-on experience with routing and switching protocols and configuration (e.g., Cisco, Juniper).
- Extensive experience with network security technologies, including firewalls (e.g., Palo Alto, Check Point), IDS/IPS, and VPNs.
- Proven experience in architecting and implementing secure solutions for on-premise and cloud (IaaS/PaaS) environments.
- Strong knowledge of Software-Defined Networking (SDN) principles and practical experience with SDN technologies.
- Expertise in designing and managing load-balancing systems for high-traffic applications.
- Experience with network modeling, capacity planning, and performance analysis.
- Excellent analytical, problem-solving, and communication skills.
- Relevant industry certifications (e.g., CCNP, CCIE, AWS Certified Advanced Networking, Azure Network Engineer Associate) are highly desirable.
Are you an experienced Back End Developer with a desire to excel? If so, then Talent Software Services may have the job for you! Our client is seeking an experienced Back End Developer to work at their company in Richfield, MN.
Position Summary: We are seeking a DevOps Engineer to join our Enterprise API Management team. The successful candidate will be responsible for building, deploying, and operating our platform and services, with an emphasis on automation, reliability, and secure-by-default delivery. This role partners closely with engineers through collaboration and pair programming to improve CI/CD pipelines, troubleshooting practices, and operational readiness, and it also requires some software development experience (e.g., ability to read/debug code and contribute when needed). Technologies involved include Kubernetes, Helm charts, Java/Spring Boot, AWS offerings (certification preferred), and API platform capabilities such as API Gateway and security.
Qualifications:
- Kubernetes 5+ Years
- Helm Charts 4+ Years
- AWS (core offerings; certification preferred) 5+ Years
- Java and Spring Boot 5+ Years
- API Security 5+ Years
Preferred:
- CI/CD and GitOps
- Infrastructure-as-Code experience (e.g., Terraform/CloudFormation)
- API Gateway/API Management experience
- Observability tooling (logging/metrics/tracing) and on-call readiness
- Lua Programming
- Collaborative delivery practices (pair programming, code reviews)