Legacybox Cloud Jobs in USA
A fast-growing robotics company is building autonomous systems designed to automate repetitive tasks within the global infrastructure and construction sector. The company’s mission is to help address workforce shortages while accelerating the development of critical infrastructure projects.
Their robotics platforms combine advanced autonomous navigation, cloud software, and intelligent field tools to improve precision and efficiency across construction environments. With a growing fleet of robots already deployed across multiple project sites globally, the company is developing the next generation of automation tools used in areas such as infrastructure development, surveying, energy projects, and large-scale construction.
The company is seeking a Full-stack Developer to design and maintain the software systems that power its robotics ecosystem — including operator dashboards, cloud infrastructure, and applications that interface with robots operating in the field.
This role will contribute to building reliable tools for mission planning, fleet monitoring, and communication between cloud systems and robotic platforms. The position involves close collaboration with robotics engineers, field operations teams, and product stakeholders to ensure the software delivers measurable impact in real-world environments.
Key Responsibilities
- Design and develop full-stack applications for device management, mission control, and fleet coordination.
- Build and maintain mobile applications used by field operators.
- Develop desktop applications used to interface with robotic systems.
- Create web dashboards and APIs for mission planning, telemetry visualization, and operational data analysis.
- Integrate cloud infrastructure for data storage, monitoring, and deployment.
- Ensure reliable communication between cloud services and deployed robotic systems.
- Collaborate with robotics, product, and field teams to deliver integrated software functionality.
- Write technical documentation for APIs, system architecture, and software modules.
- Optimize systems for scalability, reliability, and performance in field environments.
Requirements
Education
- Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related field.
Experience
- 4+ years of professional full-stack development experience.
- Strong experience with JavaScript / TypeScript using modern frameworks such as React, Next.js, and Node.js.
- Strong UI/UX development experience using modern CSS frameworks.
- Experience building mobile applications using React Native.
- Experience developing cross-platform desktop applications.
- Production experience with cloud platforms such as AWS including APIs, storage, and deployment pipelines.
- Strong understanding of software architecture, testing methodologies, and performance optimization.
- Proficiency with development tools including Git, issue tracking systems, and CI/CD pipelines.
Soft Skills
- Strong analytical and problem-solving ability.
- Ability to collaborate effectively in a fast-paced engineering environment.
- Strong ownership mindset and communication skills.
Preferred Qualifications
- Experience working with robotics systems or robotics middleware.
- Familiarity with real-time communication protocols such as WebSockets or MQTT.
- Experience working with IoT devices, connected hardware, or industrial systems.
- Experience working with performance-sensitive or multi-threaded applications.
- Experience with containerization technologies such as Docker or Kubernetes.
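The qualifications above mention real-time protocols such as WebSockets or MQTT for communication between cloud services and deployed robots. As a minimal sketch of the receiving side, the Python below validates one telemetry payload before it reaches a dashboard; the message schema (`robot_id`, `battery_pct`, etc.) is purely illustrative, not taken from the posting:

```python
import json
from dataclasses import dataclass

@dataclass
class Telemetry:
    robot_id: str
    lat: float
    lon: float
    battery_pct: float

def parse_telemetry(payload: bytes) -> Telemetry:
    """Decode one JSON telemetry message, e.g. from an MQTT topic payload."""
    doc = json.loads(payload)
    msg = Telemetry(
        robot_id=str(doc["robot_id"]),
        lat=float(doc["lat"]),
        lon=float(doc["lon"]),
        battery_pct=float(doc["battery_pct"]),
    )
    # Reject obviously corrupt values before they reach mission dashboards.
    if not (0.0 <= msg.battery_pct <= 100.0):
        raise ValueError(f"battery_pct out of range: {msg.battery_pct}")
    return msg

raw = b'{"robot_id": "rv-07", "lat": 39.1, "lon": -84.5, "battery_pct": 82.5}'
print(parse_telemetry(raw).robot_id)  # rv-07
```

In a real fleet system this parser would sit behind an MQTT or WebSocket subscriber; validating at the edge of the cloud boundary keeps bad field data out of telemetry visualization.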
Our client, a large, complex enterprise organization, is looking for a Principal AI Engineer to serve as the lead architect and hands-on builder of a unified AI Platform as a Service (PaaS): a secure, multi-tenant foundation that helps internal teams build and operate semantic discovery, conversational experiences, and autonomous agent workflows at scale.
Summary:
The Principal AI Engineer will design and build an enterprise-grade “AI operating layer” that turns modern foundation model capabilities into a governed, reusable platform used across multiple business domains. This role balances approximately 40% hands-on development with 60% platform strategy, personally building core orchestration services, standardized capability interfaces, and trust/safety guardrails. The platform will enable teams to deploy specialized agents that can collaborate via defined protocols, securely access grounded knowledge sources, and execute autonomous tasks within a controlled, high-availability runtime.
Location: Remote (U.S.) / Hybrid options may be available based on client needs.
Compensation Range: $145,000 - $250,000 per year plus RSUs & Bonus
Benefits: Very competitive benefits
Responsibilities
- Architect and deliver a self-service AI platform that provides reusable patterns, reference implementations, and standardized building blocks for internal engineering teams.
- Define and communicate a multi-year platform roadmap, ensuring technical priorities map to enterprise outcomes and adoption goals.
- Design and implement stateful orchestration (state graphs/state machines) to handle planning edge cases, recovery, and self-correction in autonomous workflows.
- Build and operate secure remote tool gateways (e.g., MCP-style servers) and implement controlled function-calling interfaces for connecting agents to sensitive enterprise systems.
- Establish interoperability standards for agent-to-agent collaboration, enabling autonomous discovery and reliable task handoffs across independently built agent solutions.
- Design an agent identity and authorization layer that supports fine-grained permissions, auditable actions, and strong accountability for autonomous behaviors.
- Implement a unified knowledge layer using semantic retrieval and multimodal grounding to support accurate, “source-aligned” responses and decisions.
- Build long-term context persistence (“memory”) using retrieval and graph-based storage to preserve institutional knowledge and improve continuity over time.
- Create a trust and evaluation layer with automated testing pipelines to measure quality, safety, cost, latency, and reliability of agent behavior across tenants.
- Own runtime lifecycle management for agent sessions, ensuring high availability, persistence, scalability, and controlled rollout patterns.
- Lead deep code reviews focused on agent-specific failure modes (runaway loops, tool misuse, state growth, unreliable calling patterns) and implement mitigations.
- Optimize inference performance and spend through techniques such as prompt caching, model routing, and workload-aware runtime strategies.
- Act as a technical multiplier by mentoring senior/staff engineers on advanced agentic patterns, evaluation methods, and production hardening.
- Partner closely with Cloud and Infrastructure teams to influence enabling services and platform primitives needed for enterprise AI delivery.
- Raise the bar on engineering quality via documentation, profiling, reliability improvements, and ongoing performance tuning.
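The orchestration responsibilities above (state graphs, recovery and self-correction, guards against runaway loops) can be sketched in miniature. Assuming a hypothetical three-state workflow (the `plan`/`act`/`check` names are illustrative, not from the role description), a bounded state-machine loop might look like:

```python
from typing import Callable

def run_workflow(states: dict[str, Callable[[dict], str]], start: str,
                 ctx: dict, max_steps: int = 20) -> dict:
    """Advance through named states until 'done'; max_steps guards runaway loops."""
    state = start
    for _ in range(max_steps):
        if state == "done":
            return ctx
        state = states[state](ctx)  # each state mutates ctx, returns next state
    raise RuntimeError("workflow exceeded max_steps (possible runaway loop)")

def plan(ctx):
    ctx["plan"] = "fetch-then-summarize"
    return "act"

def act(ctx):
    ctx["attempts"] = ctx.get("attempts", 0) + 1
    ctx["ok"] = ctx["attempts"] >= 2   # simulate: first call fails, second succeeds
    return "check"

def check(ctx):
    # Self-correction: on failure, loop back to 'act' instead of terminating.
    return "done" if ctx["ok"] else "act"

result = run_workflow({"plan": plan, "act": act, "check": check}, "plan", {})
print(result["attempts"])  # 2
```

Production frameworks such as LangGraph add persistence, streaming, and tool-call routing on top of essentially this shape; the `max_steps` bound is the toy version of the runaway-loop mitigation named in the responsibilities.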
QUALIFICATIONS:
Required:
- 10+ years of software engineering experience, including 4+ years operating at a Principal/Architect level.
- 2+ years architecting and shipping LLM-based systems, with demonstrated experience taking agentic solutions into production at scale.
- 5+ years working in agile delivery environments.
- Google Cloud Professional Cloud Architect certification.
- Proven ability to lead technical workstreams and translate business needs into durable platform architectures.
- Strong expertise in asynchronous orchestration (e.g., Python) plus proficiency in a statically typed language (Java, Go, or Rust) for high-concurrency platform services.
- Hands-on experience with stateful graph orchestration patterns and frameworks (e.g., LangGraph/ADK-style approaches) to power robust reasoning workflows.
- Strong cloud-native experience with CLI tooling and Infrastructure-as-Code; proven ability to deploy and scale containerized workloads using container orchestration and serverless platforms.
- Experience designing and operating distributed architectures at scale, including vector stores, graph databases, and structured data pipelines.
- Deep knowledge of multi-agent design patterns and the ability to extend or replace off-the-shelf orchestration when scaling, safety, or reliability requires it.
- Working understanding of modern agent reasoning approaches (Chain-of-Thought, ReAct, Tree-of-Thoughts, Self-Reflection) and when to apply them.
- Experience supporting very high request volumes and/or extremely large datasets in production environments.
- Expertise building semantic retrieval layers, attribute-aware discovery, and stateful persistence for long-lived agent context.
- Strong understanding of MCP-style tool protocols, agent-to-agent interaction patterns, REST/gRPC APIs, OAuth2, and function-calling mechanics.
- Familiarity with microservices architecture patterns and distributed systems best practices.
- Ability to implement observability for agentic systems, including traceability/telemetry and debugging methods for multi-step reasoning and handoffs.
- Awareness of global AI regulations (e.g., EU AI Act) and ability to translate requirements into technical controls and platform governance.
- Strong written and verbal communication skills across technical and business audiences.
- Calm, decisive execution in high-pressure or incident scenarios.
- Highly organized, self-directed, detail-oriented, and effective with limited supervision.
- Ability to support off-hours work as needed, including rotational on-call, weekends, and holidays.
Preferred:
- Bachelor’s or Master’s in Computer Science, AI, or a related field.
- PhD in AI, Distributed Systems, or Cognitive Science.
- Google Cloud Professional Machine Learning Engineer certification and/or specialized certifications in multi-agent systems/autonomous reasoning.
- 3+ years working with distributed caching technologies.
- Experience provisioning and configuring cloud platform resources at scale.
- Experience supporting consumer-facing digital or commerce environments.
- Proficiency with Google Workspace (Sheets, Docs, Slides, Gmail).
About Hansell Tierney:
Hansell Tierney is one of the premier staffing and recruiting companies in the Pacific Northwest. Launched in 2001, we are a woman-owned business that serves and staffs Northwest organizations by doing things the right way, not just the easiest way. Hansell Tierney partners with candidates and clients to match the best candidates with interesting local opportunities. We navigate every relationship with the highest level of discretion and service while holding ourselves accountable to our promises. Our business thrives on our deep understanding of the job market and our ability to skillfully tailor our recruitment process to meet our clients’ unique needs.
Advanced Software Engineering Manager – DevOps / CI/CD
Location: Onsite – Cincinnati, Ohio (Blue Ash)
Employment Type: Direct Hire
Compensation: $145,000–$160,000/year
Benefits: Health, Dental, Vision, 401(k) with Match, PTO, Paid Holidays, and additional enterprise benefits
Travel: None
Start: ASAP
Open Role Due To: Technology modernization and enterprise platform expansion
About the Role
Our client, a global leader in enterprise technology and innovation, is seeking an Advanced Software Engineering Manager – DevOps / CI/CD to architect, operate, and continuously enhance a modern CI/CD ecosystem that enables fast, secure, and reliable software delivery across the organization.
This role will drive pipeline scalability, automation maturity, platform modernization, and operational excellence. You will partner closely with development, QA, architecture, IT, security, and product teams to accelerate engineering productivity while ensuring enterprise-grade security, compliance, and governance standards are met.
This is a highly visible leadership role requiring both strategic vision and strong hands-on technical expertise.
What You’ll Do
- Lead the design, development, and delivery of scalable, high-quality DevOps and CI/CD solutions aligned to enterprise architecture standards
- Architect and modernize CI/CD pipelines using Azure DevOps, GitHub Actions, Jenkins, and/or GitLab CI
- Develop Infrastructure as Code solutions using Terraform, Ansible, ARM/Bicep, and related tools
- Translate technology strategy into actionable roadmaps supporting enterprise project portfolios
- Drive modernization initiatives across cloud, infrastructure, automation, and security domains
- Lead migration strategies from current-state to future-state architecture aligned with capital planning cycles
- Present architectural recommendations to executive leadership, including cost/benefit and risk analysis
- Mentor engineers on software engineering best practices, reusable design patterns, and architectural principles
- Promote reusable frameworks, shared libraries, and automation standards
- Author and review architectural artifacts including system diagrams, interface specifications, and technical design documentation
Why This Role Stands Out
- Enterprise-wide impact with high visibility to senior leadership
- Opportunity to modernize and scale CI/CD platforms in a large, complex environment
- Strong influence over cloud architecture, automation strategy, and operational excellence
- Competitive compensation and comprehensive benefits package
What We’re Looking For
- Bachelor’s Degree in Computer Science or STEM-related field
- 8+ years of experience in DevOps, CI/CD, Infrastructure Engineering, or Site Reliability Engineering
- 5+ years leading the design and delivery of complex, large-scale, high-quality software or automation systems
- Strong hands-on expertise in software or infrastructure engineering, including design patterns and code structure
- Expertise with CI/CD platforms (Azure DevOps, GitHub Actions, Jenkins, GitLab CI)
- Strong proficiency in Infrastructure as Code tools (Terraform, Ansible, ARM/Bicep)
- Deep knowledge in at least two of the following: cloud platforms, APIs, application development, infrastructure/network design, middleware, servers & storage, database management, or operations
- Solid understanding of network and security architecture principles
- Experience with Linux and/or Windows systems, virtualization, and containerization
- Experience with enterprise observability and monitoring platforms
- Proven ability to operate strategically while delivering hands-on technical solutions
- Strong written, verbal, and presentation communication skills
Preferred Skills
- Experience building and mentoring high-performing engineering teams
- Experience designing and implementing elastic architectures in Azure and/or Google Cloud Platform
- Experience working in large-scale or regulated enterprise environments
- Experience with Google Cloud Platform (GCP) or Google Distributed Cloud (GDCE)
- Awareness of industry trends and competitive DevOps landscape
Job Title – Lead Data Engineer
Please note: this role cannot offer visa transfer or sponsorship now or in the future.
About the role
As a Lead Data Engineer, you will make an impact by designing, building, and operating scalable, cloud‐native data platforms supporting batch and streaming use cases, with strong focus on governance, performance, and reliability. You will be a valued member of the Data Engineering team and work collaboratively with cross‐functional engineering, cloud, and architecture stakeholders.
In this role, you will:
- Design, build, and operate scalable cloud‐native data platforms supporting batch and streaming workloads with strong governance, performance, and reliability.
- Develop and operate data systems on AWS, Azure, and GCP, designing cloud‐native, scalable, and cost‐efficient data solutions.
- Build modern data architectures including data lakes, data lakehouses, and data hubs, with strong understanding of ingestion patterns, data governance, data modeling, observability, and platform best practices.
- Develop data ingestion and collection pipelines using Kafka and AWS Glue; work with modern storage formats such as Apache Iceberg and Parquet.
- Design and develop real‐time streaming pipelines using Kafka, Flink, or similar streaming frameworks, with understanding of event‐driven architectures and low‐latency data processing.
- Perform data transformation and modeling using SQL‐based frameworks and orchestration tools such as dbt, AWS Glue, and Airflow, including Slowly Changing Dimensions (SCD) and schema evolution.
- Use Apache Spark extensively for large‐scale data transformations across batch and streaming workloads.
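The Slowly Changing Dimension (SCD) modeling mentioned above can be illustrated with a Type 2 upsert in plain Python; the column names (`customer_id`, `city`) are hypothetical, and production pipelines would express this in dbt snapshots or Spark as the role describes:

```python
from datetime import date

def scd2_upsert(dim: list[dict], key: str, incoming: dict, today: date) -> list[dict]:
    """SCD Type 2: expire the current row for `key` and append a new current
    row whenever any tracked attribute changed; unchanged input is a no-op."""
    out, changed = [], False
    for row in dim:
        if row[key] == incoming[key] and row["is_current"]:
            if {k: row[k] for k in incoming} != incoming:
                # Attribute changed: close out the old version of the row.
                out.append({**row, "is_current": False, "end_date": today})
                changed = True
                continue
        out.append(row)
    if changed or not any(r[key] == incoming[key] for r in dim):
        out.append({**incoming, "is_current": True,
                    "start_date": today, "end_date": None})
    return out

dim = [{"customer_id": 1, "city": "Austin", "is_current": True,
        "start_date": date(2024, 1, 1), "end_date": None}]
dim = scd2_upsert(dim, "customer_id",
                  {"customer_id": 1, "city": "Dallas"}, date(2025, 3, 1))
print([r["city"] for r in dim if r["is_current"]])  # ['Dallas']
```

The same close-and-append pattern is what table formats like Apache Iceberg make efficient at scale via row-level merges and schema evolution.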
Work model
We believe hybrid work is the way forward as we strive to provide flexibility wherever possible. Based on this role's business requirements, this is a hybrid position requiring 4 days a week in a client or Cognizant office in Atlanta, GA. Regardless of your working arrangement, we are here to support a healthy work-life balance through our various wellbeing programs.
The working arrangements for this role are accurate as of the date of posting. This may change based on the project you're engaged in, as well as business and client requirements. Rest assured, we will always be clear about role expectations.
What you need to have to be considered
- Hands‐on experience developing and operating data systems on AWS, Azure, and GCP.
- Proven ability to design cloud‐native, scalable, and cost‐efficient data solutions.
- Experience building data lakes, data lakehouses, and data hubs with strong understanding of ingestion patterns, governance, modeling, observability, and platform best practices.
- Expertise in data ingestion and collection using Kafka and AWS Glue, with experience in Apache Iceberg and Parquet.
- Strong experience designing and developing real‐time streaming pipelines using Kafka, Flink, or similar streaming frameworks.
- Deep expertise in data transformation and modeling using SQL‐based frameworks and orchestration tools including dbt, AWS Glue, and Airflow, with knowledge of SCD and schema evolution.
- Extensive experience using Apache Spark for large‐scale batch and streaming data transformations.
These will help you stand out
- Experience with event‐driven architectures and low‐latency data processing.
- Strong understanding of schema evolution, SCD modeling, and modern data modeling concepts.
- Experience with Apache Iceberg, Parquet, and modern ingestion/storage patterns.
- Strong knowledge of observability, governance, and platform best practices.
- Ability to partner effectively with cloud, architecture, and engineering teams.
Salary and Other Compensation:
Applications will be accepted until March 17, 2025.
The annual salary for this position is between $81,000 and $135,000, depending on experience and other qualifications of the successful candidate.
This position is also eligible for Cognizant's discretionary annual incentive program, based on performance and subject to the terms of Cognizant's applicable plans.
Benefits: Cognizant offers the following benefits for this position, subject to applicable eligibility requirements:
- Medical/Dental/Vision/Life Insurance
- Paid holidays plus Paid Time Off
- 401(k) plan and contributions
- Long‐term/Short‐term Disability
- Paid Parental Leave
- Employee Stock Purchase Plan
Disclaimer: The salary, other compensation, and benefits information is accurate as of the date of this posting. Cognizant reserves the right to modify this information at any time, subject to applicable law.
**CAN BE HYBRID IN DALLAS, CHICAGO, ATLANTA, NYC, OR FORT WASHINGTON, MD**
Our client is seeking a highly skilled Lead Full Stack Engineer to join our Enterprise Architecture & Engineering team. This is a high-impact role responsible for supporting and enhancing existing web applications while driving innovation through modern cloud and full-stack technologies.
You will lead the design and delivery of scalable, cloud-native solutions using Azure, C#/.NET, React, Next.js, and Node.js, partnering closely with product and senior leadership to shape the technical roadmap.
This role requires deep technical expertise, strong architectural judgment, and proven leadership experience managing distributed (onshore and offshore) engineering teams.
Responsibilities
- Lead the design, development, and ongoing maintenance of full-stack applications using Azure, C#/.NET, React, Next.js, and Node.js
- Provide production support and ensure high availability, performance, and reliability of applications
- Troubleshoot complex production issues and drive durable, long-term solutions
- Architect and implement scalable, secure, cloud-native solutions
- Drive technical evaluations and Proofs of Concept (POCs) for emerging technologies
- Partner with product managers and senior stakeholders to translate business needs into technical solutions
- Establish and enforce architecture standards, coding best practices, and documentation
- Mentor and lead distributed engineering teams to deliver high-quality software on schedule
- Facilitate cross-functional alignment across engineering, product, and business teams
- Maintain architecture diagrams and technical documentation to support knowledge sharing
Required Qualifications
- 8+ years of hands-on software development experience, including experience in a Lead or Senior Full Stack role
Strong expertise in:
- Azure
- C# / .NET
- React and Next.js
- Node.js
- SQL Server and NoSQL databases (e.g., Cosmos DB)
- Solid understanding of microservices and event-driven architectures
- Experience with containerization technologies such as Docker and Azure Service Fabric
- Hands-on experience building and maintaining CI/CD pipelines (Azure DevOps, Jenkins, or similar)
- Strong front-end and back-end fundamentals (JavaScript, HTML5, CSS, REST APIs, database design)
- Knowledge of application and cloud security principles (PKI, cryptography, SSL/TLS, secure protocols)
- Proven experience leading and mentoring onshore/offshore teams
- Excellent communication skills with the ability to influence senior stakeholders
Preferred Qualifications
- Experience deploying large-scale, cloud-native applications on Azure
- Familiarity with Kubernetes or other container orchestration platforms
- Exposure to Python and AI/ML technologies (e.g., Azure Cognitive Services, LangChain, Azure Document Intelligence, Azure OpenAI)
- Understanding of cloud infrastructure and security best practices
- Experience designing and executing technical POCs
The role involves translating business requirements into technical architecture, leading implementations, and providing guidance across Sales Cloud, Service Cloud, and Experience Cloud environments.
Must-Have Skills: Salesforce, Salesforce integration, Salesforce security, Service Cloud
Key Responsibilities:
- Design scalable Salesforce architecture, including data models, integrations, security, and user experience.
- Collaborate with stakeholders to gather requirements and translate them into functional designs and user stories.
- Provide technical leadership on declarative solutions (Flows, configurations) and programmatic development (Apex, LWC).
- Lead solution implementation, including integration, data migration, testing, and deployment.
- Act as a trusted advisor to business and technical teams, ensuring best practices and platform optimization.
- Stay current with Salesforce releases and innovations to recommend improvements.
Required Skills:
- 10+ years of Salesforce experience.
- Strong expertise in Salesforce Clouds (Sales, Service, Experience).
- Hands-on experience with Apex, Lightning Web Components (LWC), APIs, and integrations.
- Experience with data modeling, data migration, and enterprise architecture.
- Excellent communication and stakeholder management skills.
Remote working/work at home options are available for this role.
Analyze business and technical requirements, design integration flows, and ensure operational excellence of integration platforms.
Mentor junior engineers and promote best practices in integration architecture and support operations.
MAJOR RESPONSIBILITIES
- Administer and operate SAP BTP environments, including connectivity, user roles, and API provisioning.
- Design, develop, and deploy complex integration flows using SAP CPI, SAP Integration Suite, and Advanced Event Mesh.
- Manage day-to-day operations, including monitoring, incident resolution, and performance tuning of integration platforms.
- Configure and maintain Solace/Event Mesh messaging infrastructure (topics, queues, subscriptions) and ensure operational stability.
- Lead migration projects from legacy platforms (e.g., SAP PI/PO) to SAP Integration Suite.
- Implement and support API-first integration designs, ensuring secure and scalable interfaces using REST/SOAP.
- Utilize SAP Solution Manager, CPI Monitoring Dashboards, and SAP BTP Admin tools for proactive system monitoring.
- Collaborate with cross-functional teams to gather requirements and translate them into technical solutions.
- Guide junior developers and lead technical knowledge transfer sessions.
- Optimize existing integrations for performance, maintainability, and error handling.
- Participate in Agile events and drive continuous improvement in integration operations.
- Interface with tools such as Elasticsearch and Splunk for logging, diagnostics, and operational insights.
- Adhere to and enforce all company standards related to integration development, including security, performance, and resiliency standards.
- Provide on-call production support as needed.
MINIMUM JOB REQUIREMENTS
Education
- Bachelor’s degree in Computer Science, IT, or a related discipline.
Work Experience
- 8+ years of experience in SAP integration development and operations.
- 2+ years of hands-on experience in SAP BTP, SAP Integration Suite, and SAP CPI.
- Experience with ABAP (IDocs, BAPIs, RFCs), REST/SOAP services, XML, JSON, and transformation logic.
- Hands-on experience with Advanced Event Mesh, APIs, and Cloud Console.
- Experience with SAP Fiori and S/4HANA integration scenarios.
- Experience applying project management methodologies.
Knowledge / Skills / Abilities
- Strong background in platform administration, operational support, and incident management.
- Proficiency in Java, Groovy, or Python.
- Knowledge of authentication protocols (OAuth, SAML, SSL/TLS) and relational databases (SAP HANA, Oracle).
- SAP certifications in Integration Suite, BTP, or Cloud Platform Integration (preferred).
- Experience with DevOps tools (Git, Jenkins, Docker, Kubernetes, CI/CD).
- Exposure to cloud platforms (AWS, Azure, GCP).
- Knowledge of event-driven architectures and microservices.
- Familiarity with Agile practices and tools (JIRA, Confluence).
- Experience with SAP Open Connectors, Graph API, and CAPM.
- Broad knowledge of integration protocols and tools.
- Strong communication and documentation skills.
- Analytical mindset with a focus on operational excellence.
- Proven ability to mentor and lead junior engineers.
- Collaborative and proactive in cross-functional environments.
- Detail-oriented and highly organized.
- Passionate about innovation and continuous learning.
Medline Industries, LP, and its subsidiaries, offer a competitive total rewards package, continuing education & training, and tremendous potential with a growing worldwide organization.
The anticipated salary range for this position is $101,000.00 - $152,000.00 annually. The actual salary will vary based on the applicant’s location, education, experience, skills, and abilities.
This role is bonus and/or incentive eligible.
Medline will not pay less than the applicable minimum wage or salary threshold.
Our benefit package includes health insurance, life and disability, 401(k) contributions, paid time off, etc., for employees working 30 or more hours per week on average.
For a more comprehensive list of our benefits, please click here.
For roles where employees work less than 30 hours per week, benefits include 401(k) contributions as well as access to the Employee Assistance Program, Employee Resource Groups and the Employee Service Corp.
We’re dedicated to creating a Medline where everyone feels they belong and can grow their career.
We strive to do this by seeking diversity in all forms, acting inclusively, and ensuring that people have tools and resources to perform at their best.
Explore our Belonging page here.
Medline Industries, LP is an equal opportunity employer.
Medline evaluates qualified individuals without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, age, disability, neurodivergence, protected veteran status, marital or family status, caregiver responsibilities, genetic information, or any other characteristic protected by applicable federal, state, or local laws.
Amazon Web Services (AWS) is the pioneer and recognized leader in cloud computing. AWS customers transform and reinvent their businesses through the cloud and the AWS Partner Network (APN) is helping to dramatically accelerate that innovation, with more than 140k partners in more than 150 countries. More than 90% of Fortune 100 companies and the majority of Fortune 500 companies utilize AWS Partner solutions and services.
Would you like to help drive go-to-market excellence with consulting partners and system integrators through the Small Business Acceleration Initiative (SBAI)? The APN Customer and Partner Engagements team is seeking an experienced candidate to lead the GTM System Integrator Strategy & Expansion for SBAI. As the GTM System Integrator Strategy & Expansion Lead, you will establish scalable processes and best practices that accelerate customer acquisition and launches through consulting partners and system integrators, while leading geographic and business unit expansion of the SBAI motion, including future indirect selling scenarios beyond current scope.
The ideal candidate is highly strategic, operationally excellent, and partner-focused: one who can design and implement repeatable GTM frameworks that enable system integrators to drive SMB customer acquisition at scale. You have relentlessly high standards and obsess over creating mechanisms that work across diverse geographies and business units. You are equally comfortable developing global strategy as you are rolling up your sleeves to establish best practices with individual SI partners. This role has global responsibility, and you will influence and collaborate with a wide variety of AWS leaders, including SBAI program leaders, executives across AWS Global Sales (AGS) and AWS Specialists and Partners (ASP), as well as regional leaders, system integrator executives, and operations teams. You are passionate about building the foundation for indirect selling expansion, leveraging partner capabilities, and creating scalable frameworks that enable revenue growth through the SI ecosystem.
Position available and relocation provided for candidates in Seattle, San Francisco, Los Angeles, Chicago, Dallas, Austin, Atlanta, DC, New York, and Boston.
Key job responsibilities
- Develop and execute comprehensive GTM strategy for system integrator engagement within SBAI, establishing scalable processes, playbooks, and best practices that accelerate customer acquisition and opportunity launches across consulting partners and SIs
- Lead geographic expansion of SBAI motion into new territories and regions, working with regional leaders to adapt the partner-led model while maintaining program consistency and effectiveness
- Drive business unit expansion strategy, identifying opportunities to extend SBAI frameworks beyond SMB-Small into ISV, Startup, Public Sector, and other customer cohorts, including designing future indirect selling scenarios
- Establish and optimize SI partner engagement models, including capacity and capability frameworks specific to different customer segments, partner types, and geographic markets
- Build strong relationships with system integrator executives and practice leaders, understanding their business models and co-developing solutions that align AWS growth objectives with SI strategic priorities
- Create and maintain comprehensive GTM toolkits, including partner playbooks, enablement materials, success metrics, and operational frameworks that can be replicated across geographies and business units
- Work closely with SBAI program team, Partner Core leaders, and field teams (PTMs, PDMs, PSMs) to ensure successful implementation of SI strategies and gather feedback for continuous improvement
- Drive cross-organizational alignment across AGS, ASP, Marketing, and Operations to ensure SI expansion initiatives are supported with appropriate resources, systems, and incentives
- Develop business cases and ROI models that demonstrate the value of SI-led customer acquisition, securing executive support and investment for expansion initiatives
- Monitor and analyze SI performance metrics, identifying trends, opportunities, and areas for optimization to continuously improve partner effectiveness and program outcomes
About the team
Diverse Experiences
AWS values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying.
Why AWS?
Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.
Inclusive Team Culture
AWS values curiosity and connection. Our employee-led and company-sponsored affinity groups promote inclusion and empower our people to take pride in what makes us unique. Our inclusion events foster stronger, more collaborative teams. Our continual innovation is fueled by the bold ideas, fresh perspectives, and passionate voices our teams bring to everything we do.
Mentorship & Career Growth
We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.
Work/Life Balance
We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there's nothing we can't achieve.
- 3+ years of Go-To-Market, Business Development, Sales, or Consulting experience
- 3+ years of program or project management experience
- Experience in stakeholder management, dealing with multiple stakeholders at varied levels of the organization
- Experience using data and metrics to determine and drive improvements
- Proven track record of designing and scaling GTM programs across multiple geographies or business units
- Experience leading change in multiple-site environments and influencing those who are not direct reports or within your organization
- Experience in analyzing data to drive decisions
- Master of Business Administration, or Associate's degree or above
- Experience in partner strategy, alliances, business development, or GTM program management for a large technology firm
- Deep understanding of cloud-based technologies and partner ecosystems, particularly system integrator business models
- Experience with AWS Partner Network (APN) programs or similar partner programs at scale
- Track record of successfully launching and scaling partner programs across multiple geographies
Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status.
Los Angeles County applicants: Job duties for this position include: work safely and cooperatively with other employees, supervisors, and staff; adhere to standards of excellence despite stressful conditions; communicate effectively and respectfully with employees, supervisors, and staff to ensure exceptional customer service; and follow all federal, state, and local laws and Company policies. Criminal history may have a direct, adverse, and negative relationship with some of the material job duties of this position. These include the duties and responsibilities listed above, as well as the abilities to adhere to company policies, exercise sound judgment, effectively manage stress and work safely and respectfully with others, exhibit trustworthiness and professionalism, and safeguard business operations and the Company’s reputation. Pursuant to the Los Angeles County Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.
Pursuant to the San Francisco Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
The base salary range for this position is listed below. Your Amazon package will include sign-on payments and restricted stock units (RSUs). Final compensation will be determined based on factors including experience, qualifications, and location. Amazon also offers comprehensive benefits including health insurance (medical, dental, vision, prescription, Basic Life & AD&D insurance and the option for Supplemental Life plans, EAP, Mental Health Support, Medical Advice Line, Flexible Spending Accounts, and Adoption and Surrogacy Reimbursement coverage), 401(k) matching, paid time off, and parental leave. Learn more about our benefits at .
USA, CA, San Francisco - 101,6 ,800.00 USD annually
USA, CA, Santa Clara - 101,6 ,800.00 USD annually
USA, GA, Atlanta - 92,4 ,000.00 USD annually
USA, IL, Chicago - 92,4 ,000.00 USD annually
USA, MA, Boston - 92,4 ,000.00 USD annually
USA, NY, New York - 101,6 ,800.00 USD annually
USA, TX, Dallas - 92,4 ,000.00 USD annually
USA, VA, Arlington - 92,4 ,000.00 USD annually
USA, WA, Seattle - 92,4 ,000.00 USD annually
HCLTech is looking for a highly talented and self-motivated Database Architect to join us in advancing the technological world through innovation and creativity.
Job Title: Database Architect
Job ID: 72275
Position Type: Full Time
Location: Onsite
Mandatory Skills
Strong expertise in SQL, NoSQL, and cloud database platforms (e.g., AWS, Azure, Google Cloud).
Proficiency in data modeling, ETL processes, and database performance tuning.
Knowledge of data security protocols and compliance standards.
Experience with enterprise-level database systems (Oracle, PostgreSQL, MongoDB, etc.).
Analytical mindset with problem-solving skills.
Excellent communication and collaboration abilities.
Years of Experience Required
5+ years of experience in database administration, design, or architecture.
Detailed JD
Role Overview
A Database Architect is responsible for designing, developing, and maintaining large-scale database systems that support an organization’s data needs. They ensure databases are secure, scalable, efficient, and aligned with business objectives.
Key Responsibilities
Database Design & Modeling
Develop logical and physical data models.
Define database architecture standards and best practices.
Ensure scalability and performance optimization.
Implementation & Maintenance
Oversee installation, configuration, and upgrades of database systems.
Create and maintain documentation for database structures and processes.
Monitor database performance and troubleshoot issues.
Data Security & Integrity
Implement robust security measures to protect sensitive data.
Ensure compliance with data governance and regulatory requirements.
Establish backup and recovery strategies.
Collaboration
Work with software engineers, data analysts, and IT teams to integrate databases with applications.
Translate business requirements into technical database solutions.
Innovation & Optimization
Evaluate emerging database technologies (SQL, NoSQL, cloud-based solutions).
Recommend improvements for efficiency and cost-effectiveness.
Required Skills & Qualifications
Strong expertise in SQL, NoSQL, and cloud database platforms (e.g., AWS, Azure, Google Cloud).
Proficiency in data modeling, ETL processes, and database performance tuning.
Knowledge of data security protocols and compliance standards.
Experience with enterprise-level database systems (Oracle, PostgreSQL, MongoDB, etc.).
Analytical mindset with problem-solving skills.
Excellent communication and collaboration abilities.
Typical Background
Bachelor’s or Master’s degree in Computer Science, Information Technology, or related field.
5+ years of experience in database administration, design, or architecture.
Certifications (optional but valuable): Oracle Certified Master, AWS Certified Database Specialty, Microsoft Certified: Azure Database Administrator Associate.
Pay and Benefits
Pay Range Minimum: $136,000 per year
Pay Range Maximum: $204,000 per year
HCLTech is an equal opportunity employer, committed to providing equal employment opportunities to all applicants and employees regardless of race, religion, sex, color, age, national origin, pregnancy, sexual orientation, physical disability or genetic information, military or veteran status, or any other protected classification, in accordance with federal, state, and/or local law. Should any applicant have concerns about discrimination in the hiring process, they should provide a detailed report of those concerns to for investigation.
A candidate’s pay within the range will depend on their skills, experience, education, and other factors permitted by law. This role may also be eligible for performance-based bonuses subject to company policies. In addition, this role is eligible for the following benefits subject to company policies: medical, dental, vision, pharmacy, life, accidental death & dismemberment, and disability insurance; employee assistance program; 401(k) retirement plan; 10 days of paid time off per year (some positions are eligible for need-based leave with no designated number of leave days per year); and 10 paid holidays per year.
How You’ll Grow
At HCLTech, we offer continuous opportunities for you to find your spark and grow with us. We want you to be happy and satisfied with your role and to really learn what type of work sparks your brilliance the best. Throughout your time with us, we offer transparent communication with senior-level employees, learning and career development programs at every level, and opportunities to experiment in different roles or even pivot industries. We believe that you should be in control of your career with unlimited opportunities to find the role that fits you best.
A little about us...
Role: AWS DevOps Engineer
Location: Charlotte, NC
Salary: Market Rate
Job Description:
We are seeking a highly skilled Senior DevOps Engineer with strong expertise in AWS cloud infrastructure, automation, databases, and modern containerized environments. The ideal candidate will have experience designing, implementing, and maintaining scalable, secure, and reliable systems while enabling fast and efficient development workflows. You will work closely with development, architecture, and operations teams to build robust CI/CD pipelines, automate infrastructure provisioning, and ensure high availability of business-critical applications.
Key Responsibilities:
- Design, implement, and manage AWS cloud infrastructure (EC2, S3, Lambda, ECS/EKS, etc.) with scalability and security in mind
- Develop and maintain Infrastructure as Code (IaC) using Terraform
- Build, manage, and optimize Docker base images and containerized application stacks
- Orchestrate and maintain Kubernetes (EKS) clusters for production and staging environments
- Set up, manage, and optimize CI/CD pipelines in GitLab to support fast, reliable deployments
- Manage MCP servers and ensure reliable operations for critical services
- Automate operational tasks and workflows using Python and JavaScript
- Support full-stack teams (React, Node.js) by providing containerized environments and deployment strategies
- Manage and optimize databases (SQL, PostgreSQL) for performance, security, and scalability
- Integrate and manage AWS streaming services (Kinesis, MSK, Kafka, or similar) for real-time data pipelines
- Implement container image security scanning, governance, and lifecycle management
- Monitor system performance, availability, and cost, implementing proactive improvements
- Ensure compliance with security and governance standards across cloud infrastructure and database layers
- Collaborate with developers and architects to improve application delivery, scalability, and resilience
Required Skills & Qualifications:
- 8 years of experience in DevOps and cloud infrastructure
- Strong hands-on experience with AWS services (EC2, S3, ECS/EKS, Lambda, VPC, IAM, CloudWatch, Kinesis, MSK)
- Proficiency in Terraform for infrastructure automation
- Expertise with Docker, including base image creation, and Kubernetes orchestration
- Strong scripting/programming skills in Python and JavaScript
- Experience with GitLab CI/CD for pipelines, automation, and environment management
- Strong database experience with SQL and PostgreSQL (setup, scaling, replication, performance tuning)
- Exposure to streaming architectures (AWS Kinesis, Kafka, MSK, or similar)
- Experience supporting React-based applications from a DevOps perspective
- Familiarity with MCP servers and containerized service deployments
- Knowledge of cloud cost optimization and security best practices
- Strong problem-solving, troubleshooting, and communication skills
Preferred Qualifications:
- AWS certifications (e.g., AWS Certified Solutions Architect, AWS Certified DevOps Engineer Professional)
- Experience with monitoring/observability tools (Prometheus, Grafana, ELK, Datadog)
- Knowledge of networking, load balancing, and distributed system design
- Familiarity with Agile/Scrum methodologies
Skills
- Mandatory Skills : AWS Lambda, Docker, Python
- Good to Have Skills : Ansible, Git, Kubernetes
LTIMindtree is an equal opportunity employer that is committed to diversity in the workplace. Our employment decisions are made without regard to race, color, creed, religion, sex (including pregnancy, childbirth or related medical conditions), gender identity or expression, national origin, ancestry, age, family-care status, veteran status, marital status, civil union status, domestic partnership status, military service, handicap or disability or history of handicap or disability, genetic information, atypical hereditary cellular or blood trait, union affiliation, affectional or sexual orientation or preference, or any other characteristic protected by applicable federal, state, or local law, except where such considerations are bona fide occupational qualifications permitted by law.