Legacybox Cloud Jobs in USA

1,902 positions found — Page 8

Sr Platform Architect
Salary not disclosed
Dunwoody, GA 2 days ago

Senior Platform Architect

Reports To: Director of Engineering

Department: Engineering

Location: Hybrid - Atlanta, GA


What makes MTech different:


Purpose-Driven Work – Build technology that solves real problems for the world

Casual & Collaborative – No corporate bureaucracy, direct access to senior leadership

Innovation-Focused – Healthy innovation pipeline expanding into new segments and technologies

Transparent & Data-Driven – Clear metrics, objectives, and visibility into company performance

Modern Development – Robust development tools, training programs, and technical excellence

Flexibility & Balance – Flexible work environment that values results over presenteeism



Job Summary

The Senior Platform Architect will lead the technical architecture, design, and modernization of large-scale, multi-tenant enterprise SaaS platforms built on Azure and the .NET stack. This role requires mastery of distributed systems, cloud-native design, and advanced engineering practices to deliver highly available, performant, and secure solutions for global consumer-facing SaaS and Agentic AI products.


Responsibilities and Duties


Architectural Design & Transformation

  • Lead migration from monolithic systems to modular monolith and microservices architectures using domain-driven design, bounded contexts, and decomposition strategies.
  • Design multi-tenant SaaS platforms with advanced tenant isolation, resource partitioning, and elastic scaling using Azure services.
  • Define and enforce architectural standards for .NET (C#), TypeScript, Angular, SQL Server, and Azure, including dependency injection, SOLID principles, asynchronous programming, and reactive patterns.
  • Design and implement distributed systems: service orchestration, API gateway management, IoT, edge computing, distributed transactions, eventual consistency, CQRS, and event sourcing.
  • Architect for cloud-native resiliency: circuit breakers, bulkheads, retries, failover, geo-redundancy, and disaster recovery using Azure App Services, Azure Functions, Service Bus, Cosmos DB, and Azure SQL.
  • Develop and maintain architecture documentation, reference models, and decision records using industry frameworks (TOGAF, Zachman, C4 Model).
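To make the resiliency patterns named above concrete, here is a minimal retry-free circuit-breaker sketch in Python. This is purely illustrative; in the Azure/.NET stack these patterns usually come from a resilience library such as Polly rather than hand-rolled code, and all class and parameter names here are made up for the example.

```python
import time

class CircuitOpenError(Exception):
    """Raised when the breaker rejects a call without attempting it."""

class CircuitBreaker:
    """Minimal circuit breaker: opens after `max_failures` consecutive
    failures, then fails fast until `reset_after` seconds have passed,
    at which point one trial ("half-open") call is allowed through."""

    def __init__(self, max_failures=3, reset_after=30.0, clock=time.monotonic):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.clock = clock
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.reset_after:
                raise CircuitOpenError("circuit is open; failing fast")
            # Half-open: reset and allow one trial call.
            self.opened_at = None
            self.failures = 0
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = self.clock()
            raise
        self.failures = 0
        return result
```

The fail-fast branch is what distinguishes a breaker from a plain retry loop: once the threshold is crossed, downstream calls are rejected immediately instead of piling load onto a struggling dependency.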


Performance Engineering & Observability

  • Establish and monitor platform SLOs (latency, throughput, error rates, availability) mapped to customer SLAs.
  • Architect and implement advanced caching strategies, indexing, and query optimization for SQL Server and NoSQL stores in coordination with Senior Data Architect, Data Engineers, and Database Admins.
  • Design and implement telemetry pipelines: distributed tracing (OpenTelemetry), structured logging, metrics collection, and real-time dashboards for system health and diagnostics.
  • Conduct performance profiling, load testing, and capacity planning for backend services and frontend applications.
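As an illustration of mapping raw telemetry to the SLOs mentioned above, a nearest-rank percentile over latency samples can be computed in a few lines. This is a standalone Python sketch; the field names and the p99 target are illustrative, and a real pipeline would pull samples from a telemetry backend rather than a list.

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile (p in [0, 100]) over a list of samples."""
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

def slo_report(latencies_ms, p99_target_ms):
    """Summarize latency samples against a hypothetical p99 latency SLO."""
    p50 = percentile(latencies_ms, 50)
    p99 = percentile(latencies_ms, 99)
    return {"p50_ms": p50, "p99_ms": p99, "p99_met": p99 <= p99_target_ms}
```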


Automation, Quality, and DevOps

  • Architect and implement CI/CD pipelines with automated build, test, security scanning, and deployment workflows.
  • Integrate static code analysis, code coverage, and quality gates into the development lifecycle.
  • Design and enforce automated testing strategies: unit, integration, contract, and end-to-end tests for backend and frontend components.
  • Develop infrastructure as code (IaC) solutions for repeatable, scalable cloud provisioning.
  • Create incident response playbooks for rollback, failover, and recovery; drive down MTTR and automate remediation where possible.
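The quality-gate idea above reduces to a pure check over aggregated pipeline metrics. The Python sketch below shows one plausible shape; the metric names and thresholds are illustrative, not the output format of any particular CI tool.

```python
def quality_gate(metrics, min_coverage=80.0, max_critical_vulns=0):
    """Return (passed, reasons) for a build, given metrics a CI job might
    aggregate from coverage and security-scanner reports.

    `metrics` keys (illustrative): coverage_pct, critical_vulns, tests_failed.
    """
    reasons = []
    if metrics.get("coverage_pct", 0.0) < min_coverage:
        reasons.append("coverage below threshold")
    if metrics.get("critical_vulns", 0) > max_critical_vulns:
        reasons.append("critical vulnerabilities present")
    if metrics.get("tests_failed", 0) > 0:
        reasons.append("failing tests")
    return (not reasons, reasons)
```

Keeping the gate as a pure function makes it trivial to unit-test the policy itself, independent of the pipeline that invokes it.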


Security, Compliance, and Governance

  • Architect for multi-tenant security: authentication/authorization (OAuth2, OpenID Connect), encryption at rest and in transit, secrets management, and compliance with SOC 1, SOC 2, GDPR, and other regulatory standards.
  • Implement secure software development lifecycle (SSDLC) practices, threat modeling, and vulnerability management, including ZDR, DLP, No Model Training policies with AI Models.
  • Ensure architectural governance and alignment with enterprise frameworks (TOGAF, Zachman), maintain architecture decision records, and participate in architecture review boards.


Technical Leadership & Collaboration

  • Mentor engineering teams in advanced architectural concepts, distributed systems, cloud-native development, and best practices.
  • Collaborate with Data Architect, DevOps, IT Services, Engineering and Product Management teams to ensure platform extensibility, integration, and support for complex business requirements.
  • Evaluate and integrate AI/ML services, advanced analytics, and developer productivity tools to enhance platform capabilities.
  • Champion a culture of technical excellence, continuous improvement, and innovation.


Required Experience & Skills

  • 10+ years in software/platform engineering, with at least 8 years in platform architecture for enterprise SaaS on the Azure and .NET tech stack.
  • Proven experience architecting and delivering large-scale, multi-tenant SaaS platforms for global consumer-facing products.
  • Deep expertise in .NET (C#), Azure cloud services (App Services, Functions, Service Bus, Cosmos DB, SQL Server), Azure OpenAI, Microsoft Agent Framework, TypeScript, Angular, CI/CD, automated testing, and observability.
  • Mastery of distributed systems, cloud-native patterns, event-driven architectures, and microservices.
  • Demonstrated success in technical debt reduction, performance engineering, and architectural modernization.
  • Experience with architectural frameworks (TOGAF, Zachman, C4 Model), architectural governance, and compliance.
  • Strong understanding of platform security, regulatory compliance, and multi-tenant SaaS challenges.


Success Metrics (First 12 Months)

  • Reduction in platform-related incidents/support tickets.
  • Improvement in deployment speed and release velocity.
  • Reduction in MTTR for platform incidents.
  • Achievement of modularization milestones (monolith decomposition, service rollout, platform observability in production).
  • Increase in automated test coverage, code quality, and system performance metrics.


Preferred Skills & Certifications

  • TOGAF, Zachman, or similar architecture certification.
  • Advanced knowledge of event sourcing, CQRS, service mesh, and cloud-native security.
  • Familiarity with semantic technologies, knowledge graphs, and AI/ML integration.
  • Hands-on experience with infrastructure as code, automated testing tools, and modern DevOps practices.
  • Strong background in platform security, compliance, and multi-tenant SaaS challenges.


EEO Statement

Integrated into our shared values is MTech’s commitment to diversity and equal employment opportunity. All qualified applicants will receive consideration for employment without regard to sex, age, race, color, creed, religion, national origin, disability, sexual orientation, gender identity, veteran status, military service, genetic information, or any other characteristic or conduct protected by law. MTech aims to maintain a global inclusive workplace where every person is regarded fairly, appreciated for their uniqueness, advanced according to their accomplishments, and encouraged to fulfill their highest potential.

Not Specified
Salesforce Product Owner/Manager
✦ New
Salary not disclosed
Jacksonville, FL 1 day ago

Salesforce Product Owner/Manager
Location: Remote from US
Department: Enterprise Applications
Employment Type: Contract/Contract to Hire

Overview
The organization is seeking a Salesforce Product Owner or Product Manager to lead enhancements, governance, and the long-term roadmap for the Salesforce platform. This role focuses on closing the gap between business expectations and current system capabilities while also shaping the future direction of Salesforce, including exploration of Service Cloud, Agent Cloud, and emerging AI-driven features. This position requires strong local partnership with Jacksonville-based stakeholders and the ability to navigate a complex, multi-system environment.

Key Responsibilities

Product Ownership and Roadmap
• Own and refine the Salesforce roadmap, including near-term improvements to data quality, integration, and reporting, as well as longer-term initiatives such as Agent Cloud and AI-assisted capabilities.
• Prioritize work based on business value, complexity, and cross-functional impact.
• Ensure business expectations are aligned with realistic delivery timelines and technical feasibility.

Requirements Gathering and Backlog Management
• Lead discovery sessions across Sales, Finance, HR, Operations, and Contracts teams to gather detailed requirements.
• Document clear user stories, acceptance criteria, and functional requirements.
• Evaluate opportunities for AI-assisted workflows, agent productivity tools, and automated recommendations within Salesforce.

Data Quality and Governance
• Establish data governance standards to reduce duplicate accounts and inconsistent information.
• Define validation rules that support accurate opportunity management and prevent incorrect or duplicate entries.
• Improve data alignment across revenue structures, people attributes, and account hierarchies.
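Duplicate-account reduction, as described above, usually hinges on a normalization key. The Python sketch below shows one common approach: prefer the web domain when present, otherwise strip punctuation and legal suffixes from the account name. Everything here (field names, suffix list) is illustrative; in Salesforce itself this would typically be matching/duplicate rules rather than custom code.

```python
import re

# Illustrative list of legal suffixes to ignore when matching names.
LEGAL_SUFFIXES = {"inc", "llc", "ltd", "corp", "co", "company", "incorporated"}

def account_match_key(name, website=""):
    """Build a normalization key for duplicate detection."""
    domain = re.sub(r"^(https?://)?(www\.)?", "", website.strip().lower()).split("/")[0]
    if domain:
        return domain  # the domain is usually the strongest signal
    tokens = re.sub(r"[^a-z0-9 ]", "", name.lower()).split()
    return " ".join(t for t in tokens if t not in LEGAL_SUFFIXES)

def find_duplicates(accounts):
    """Group account dicts (with 'name' and optional 'website') by match key
    and return only the groups that contain more than one account."""
    groups = {}
    for acct in accounts:
        key = account_match_key(acct["name"], acct.get("website", ""))
        groups.setdefault(key, []).append(acct)
    return {k: v for k, v in groups.items() if len(v) > 1}
```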

Integration and Automation
• Identify integration needs across Salesforce, Oracle Fusion, Mosaic, HR systems, Finance systems, and other downstream applications.
• Evaluate automation opportunities such as eliminating manual uploads of financial hierarchies and improving synchronization of HR and Finance attributes.
• Work with technical teams to prepare the platform for future AI or Agent Cloud capabilities that rely on strong upstream and downstream data integrity.

Revenue and Reporting Alignment
• Partner with Finance teams to resolve gaps between estimated and actual revenue and ensure reports reflect accurate information at profit-level structures.
• Improve the flow of win or loss information and reduce the need for duplicate entry across CRM and contract related objects.
• Strengthen reporting visibility across retailers, revenue breakdowns, and opportunity lifecycle stages.

User Experience and Adoption
• Lead user acceptance testing and ensure enhancements meet the required standards.
• Define requirements for alerts, reminders, and user guidance, including notifications tied to financial mismatches or incomplete opportunity steps.
• Support communication, training, and adoption activities for new features and process changes.

Qualifications
• Five or more years of experience as a Product Owner, Product Manager, or Salesforce-focused Business Analyst.
• Strong understanding of Salesforce Sales Cloud and familiarity with Service Cloud or concepts related to agent workflows and AI capabilities.
• Experience working with financial and HR systems, preferably Oracle Fusion.
• Skilled in opportunity lifecycle management, revenue workflows, data quality, and Salesforce reporting.
• Effective communicator with the ability to work closely with senior business stakeholders.
• Must be local to Jacksonville, Florida or willing to relocate.

Ideal Candidate
The ideal candidate is proactive and detail-oriented, capable of driving both immediate system improvements and long-term platform evolution. This person brings structure to complex business needs, aligns teams around priorities, and focuses on delivering enhancements that improve data accuracy, reporting, opportunity management, and cross-system consistency. They are comfortable working in a hybrid environment, influencing stakeholders, and preparing the organization for future capabilities such as Agent Cloud and AI-assisted features.



Welcome to ConsultNet, SaltClick, and Omni. As a premier national provider of technology talent and solutions, our expertise spans across project services, contract-to-hire, direct placement, and managed services, both onshore and nearshore.

Celebrating more than 25 years of partnership with a diverse client base, we've crafted rewarding opportunities for our consultants, fostering high-performing teams that deliver impactful results.

Over the last few years, thousands of consultants have found their calling with us in roles that have made a meaningful impact on their lives, enhanced their career, challenged them, and propelled them towards achieving their personal and professional goals. At ConsultNet, we believe effective communication is crucial in aligning the right job with your unique skills and professional aspirations. To us, it's all about the personal approach we take and the values we uphold.

Our comprehensive service offerings cover a wide range of technology positions across key markets nationwide. Learn more on our website.

We champion equality and inclusivity, proudly supporting an Equal Opportunity Employer policy. We welcome applicants regardless of Race, Color, Religion, Sex, Sexual Orientation, Gender Identity, National Origin, Age, Genetic Information, Disability, Protected Veteran Status, or any other status protected by law.

Not Specified
Data Architect
✦ New
Salary not disclosed
  • At least 3 years' experience as an architect on large-scale cloud data projects involving a minimum of 3 of the technology tracks listed below, on hyperscaler platforms.
  • Minimum 6 years' expertise in the data and analytics area.
  • Deep understanding of databases and analytical technologies in the industry, including MPP and NoSQL databases, Data Lake and Data Warehouse design, BI reporting, and dashboard development.
  • Experience with data architecture, data governance, data quality standards, and data security practices in at least 2 implementations.
  • Experience with customer data models and developing KPIs from customer data.
  • Experience in customer-facing roles providing solutions for data use cases.
  • Certification in a cloud-based data stack.
  • Experience deploying a large distributed Big Data application.
  • Track record of thought leadership and innovation around Big Data, with a solid understanding of the data landscape and related emerging technology.

Technical skills needed:

  • Languages – Java, Python, Scala
  • AWS – S3, EMR, Glue, Redshift, Athena, Lambda
  • Azure – Blob, ADLS, ADF, Synapse, Power BI
  • Google Cloud – BigQuery, Dataproc, Looker
  • Snowflake
  • Databricks
  • CDH – Hive, Spark, HDFS, Kafka, etc.
  • ETL – Informatica, dbt, Matillion

Roles & Responsibilities:

  • Architect and design Sales and Marketing data initiatives, demonstrating data architecture knowledge, customer management, and innovation.
  • Deliver customer cloud data strategies for Marketing, aligned with the customer's business objectives and with a focus on cloud migrations.
  • Provide leadership in platform migration methodologies and techniques, including governance frameworks, guidelines, and best practices.
  • Build points of view, thought leadership, and solutions for proposals; support competency development and mentoring.
  • Bring solution design experience to Data Lake, Data Warehouse, BI, Data Mart, and analytics systems.
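The "KPIs out of customer data" requirement above can be made concrete with a small sketch. This Python example computes two common customer KPIs (average order value and repeat-purchase rate) from raw order rows; the field names are illustrative, and in practice this logic would live in SQL or a warehouse transformation rather than application code.

```python
from collections import defaultdict

def customer_kpis(orders):
    """Compute average order value and repeat-purchase rate from order
    rows, each a dict with 'customer_id' and 'amount' (illustrative names)."""
    by_customer = defaultdict(list)
    for order in orders:
        by_customer[order["customer_id"]].append(order["amount"])
    total_orders = sum(len(v) for v in by_customer.values())
    total_revenue = sum(sum(v) for v in by_customer.values())
    repeaters = sum(1 for v in by_customer.values() if len(v) > 1)
    return {
        "avg_order_value": total_revenue / total_orders if total_orders else 0.0,
        "repeat_purchase_rate": repeaters / len(by_customer) if by_customer else 0.0,
    }
```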
Not Specified
Lead Software Engineer–DevSecOps
✦ New
🏢 Boeing
Salary not disclosed
Berkeley, MO 1 day ago

Job Description

At Boeing, we innovate and collaborate to make the world a better place. We’re committed to fostering an environment for every teammate that’s welcoming, respectful and inclusive, with great opportunity for professional growth. Find your future with us.

The Boeing Company is currently seeking a Lead Software Engineer – DevSecOps to support our Phantom Works Virtual Warfare Center team located in Berkeley, MO. This position will focus on supporting the Boeing Defense, Space & Security (BDS) business organization.

The DevSecOps Lead Engineer will architect and implement secure development and execution environments for the rapid prototyping and experimentation we use to answer our customers’ toughest questions about future technologies and capabilities. The Virtual Warfare Center executes far-reaching analysis to address military capability gaps in and across multiple warfighting domains in the face of accelerating adversary capabilities. In DevSecOps you will be part of a team modernizing our approach to software development and enhancing our security posture.

As the Virtual Warfare Center’s DevSecOps Team Lead you will lead a team of engineers designing, implementing, and monitoring software development infrastructure across multiple networks and physical locations across the United States. You will build and maintain cross-functional relationships with multiple teams to coordinate the selection, approval, deployment, and maintenance of a consistent set of software tools in all locations. Your work will ensure our development and deployment infrastructure and processes are reliable, efficient, consistent, and secure.

Your team will partner with relevant stakeholders to create processes, design cloud-based solutions, support deploying applications in cloud environments, evaluate solution performance, and implement enhancements. You will guide the team through the update of a legacy software development infrastructure to use modern technologies including containers, cloud, high performance computing, AI/ML, and automation. This position requires mentoring early-career employees on DevSecOps design, implementation, maintenance, communication, and leadership skills.

Your team will track required software updates and drive the process to eliminate known vulnerabilities, including monitoring systems, tools, and software packages for security vulnerabilities. You will contribute to a collaborative, cross-functional team managing software security approvals and automate the integration of security into all phases of the software development lifecycle. Your work with an array of software development, IT, and cybersecurity teams will address emergent issues while improving the efficiency and usability of our systems and software products.

Position Responsibilities:

  • Lead a team of engineers responsible for designing, installing, configuring, and maintaining a consistent, secure software development toolchain across multiple networks and physical locations.
  • Spearhead the approval and implementation of continuous integration and continuous deployment pipelines into collateral secret and program spaces.
  • Coordinate between software development, IT, and security teams on vulnerability tracking and mitigation, driving efforts forward.
  • Architect and implement the transition of a multi-site, multi-network software development environment into a cloud-based approach.
  • Lead trade studies and tool selection to upgrade and modernize software development processes and operational infrastructure.
  • Lead implementation of best practices and methodologies for provisioning, platform scaling, configuration management, monitoring and troubleshooting
  • Maintain the DevSecOps vision and roadmap, track status, and communicate progress to stakeholders.
  • Mentor and coach the team, provide technical leadership, foster a culture of knowledge sharing and continuous learning, and grow their skills.

Basic Qualifications (Required Skills/ Experience):

  • Bachelor’s Degree in an engineering discipline or 17+ years equivalent related experience
  • 10+ years’ experience with software engineering
  • 3+ years’ experience with scripting languages such as Bash or Python
  • 3+ years’ experience with containerized software development
  • 3+ years’ experience supporting DevSecOps lifecycle
  • Experience with Agile development practices using continuous integration and deployment
  • 3+ years of experience performing automation, implementation and deployments in both Windows and Linux systems
  • Active Secret clearance

Preferred Qualifications (Desired Skills/Experience):

  • Active Top Secret SCI clearance
  • Experience with GitLab
  • Experience with Jenkins
  • Experience with JIRA
  • 3+ years’ experience supporting cloud development environments
  • Experience with cloud computing in classified environments
  • CompTIA Security+
  • Bachelor of Science degree from an accredited course of study in engineering, engineering technology (includes manufacturing engineering technology), chemistry, physics, mathematics, data science, or computer science.

Travel: 10%

Drug Free Workplace:

Boeing is a Drug Free Workplace (DFW) where post offer applicants and employees are subject to testing for marijuana, cocaine, opioids, amphetamines, PCP, and alcohol when criteria are met as outlined in our policies.

CodeVue Coding Challenge:

To be considered for this position you will be required to complete a technical assessment as part of the selection process. Failure to complete the assessment will remove you from consideration.

Pay & Benefits:

At Boeing, we strive to deliver a Total Rewards package that will attract, engage and retain the top talent. Elements of the Total Rewards package include competitive base pay and variable compensation opportunities.

The Boeing Company also provides eligible employees with an opportunity to enroll in a variety of benefit programs, generally including health insurance, flexible spending accounts, health savings accounts, retirement savings plans, life and disability insurance programs, and a number of programs that provide for both paid and unpaid time away from work.

The specific programs and options available to any given employee may vary depending on eligibility factors such as geographic location, date of hire, and the applicability of collective bargaining agreements.

Pay is based upon candidate experience and qualifications, as well as market and business considerations.

Summary Pay Range for Lead: $136,850 - $185,150


Applications for this position will be accepted until Mar. 25, 2026


Export Control Requirements:

This position must meet U.S. export control compliance requirements. To meet U.S. export control compliance requirements, a “U.S. Person” as defined by 22 C.F.R. §120.62 is required. “U.S. Person” includes U.S. Citizen, U.S. National, lawful permanent resident, refugee, or asylee.

Export Control Details:

US based job, US Person required

Relocation

This position offers relocation based on candidate eligibility.

Security Clearance

This position requires an active U.S. Secret Security Clearance (U.S. Citizenship Required). (A U.S. Security Clearance that has been active in the past 24 months is considered active)

Visa Sponsorship

Employer will not sponsor applicants for employment visa status.

Shift

This position is for 1st shift


Equal Opportunity Employer:

Boeing is an Equal Opportunity Employer. Employment decisions are made without regard to race, color, religion, national origin, gender, sexual orientation, gender identity, age, physical or mental disability, genetic factors, military/veteran status or other characteristics protected by law.

Not Specified
temporary
Performance Engineer -- Non Functional QE
Salary not disclosed
San Jose, CA 3 days ago

Business Area:

Engineering

Seniority Level:

Associate

Job Description:

At Cloudera, we empower people to transform complex data into clear and actionable insights. With as much data under management as the hyperscalers, we're the preferred data partner for the top companies in almost every industry. Powered by the relentless innovation of the open source community, Cloudera advances digital transformation for the world's largest enterprises.

At Cloudera, our Data Services Pillar is the heart of data innovation. We don't just work with technology; we build it. Our mission is to empower data practitioners by creating seamless, enterprise-grade experiences for data engineering, warehousing, streaming, operational databases, and AI.

You will be a key member of the NFQE (Non-Functional QE) team that drives the performance and reliability of Cloudera's Kubernetes-hosted data services. The role blends deep technical knowledge of performance testing, distributed data workloads, and container orchestration with a data-driven mindset. You'll design, automate, run, and analyze performance tests for Cloudera's flagship services, ensuring they meet or exceed customer-defined SLOs/SLAs at scale.

As a Performance Engineer, you will:

  • Work with internal development teams and the open source community to proactively drive performance improvements/optimizations across our data warehouse and Data Engineering stack.

  • Work with product managers, developers and the field team to understand performance and scale requirements, and develop benchmarks based on these requirements.

  • Develop automation to execute benchmarks, collect and aggregate metrics and profiles, and report results, trends, and regressions.

  • Analyze performance and scalability characteristics to identify bottlenecks in large-scale distributed systems.

  • Perform root cause analysis of performance issues identified by internal testing and from customers and suggest corrective actions.

  • Evaluate performance of systems and provide related guidance to the team.

We are excited about you if you have:

  • 3+ years of industry experience in performance-related work, ideally on large-scale distributed systems

  • Understanding of DBMS algorithms and data structure fundamentals.

  • Understanding of hardware trends and full-stack systems performance: CPU, RAM, storage, network, Linux kernel, JVM, and distributed systems performance.

  • Understanding of performance analysis tools and techniques.

  • Strong design, coding skills, and test automation skills (Java/C++/Golang/Python preferred)

  • Knowledge of relevant frameworks, cloud providers, K8s, etc.

  • Ability to work in a distributed setting with team members spread across multiple geographies

  • Demonstrated ability to work on large cross-functional projects, including strong written communication skills and a collaborative mindset, as you will be working with many teams inside and outside of Cloudera.

  • Experience with benchmark and performance test design. You should understand basic concepts of performance testing, including different types of performance tests (microbenchmarks, end-to-end benchmarks, concurrency and scale testing), how to reduce (or deal with) noise in test results, etc.

  • Experience designing performance tests that provide useful insights into specific aspects of performance.

  • Solid understanding of basic performance theory, in particular a very good understanding of latency, throughput, and concurrency and how they relate to each other.

  • Strong understanding of the types of workloads you'll be testing. Ideally, you should have specific experience creating performance tests for the product area you'll be working on (SQL, ML, etc.).

  • B.S. or M.S. in Computer Science or equivalent experience.
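
The latency/throughput/concurrency relationship called out in the requirements above is usually reasoned about with Little's Law (concurrency = throughput × latency). A minimal sketch, of the kind used when sizing a load generator for a benchmark:

```python
# Little's Law: average in-flight requests (L) = throughput (X) * mean latency (W).
# Handy for sizing load generators and sanity-checking benchmark results.

def required_concurrency(throughput_rps: float, latency_s: float) -> float:
    """Average number of in-flight requests needed to sustain a target
    throughput at a given mean latency (Little's Law: L = X * W)."""
    return throughput_rps * latency_s

# To drive 400 req/s against a service with 250 ms mean latency,
# the load generator must keep ~100 requests in flight:
print(required_concurrency(400, 0.25))  # 100.0
```

The same identity works in reverse: measured concurrency divided by measured throughput gives mean latency, which makes it a quick cross-check on reported benchmark numbers.
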

You might also have:

  • Experience with the Hadoop ecosystem (e.g., Hive, Impala, Spark), specifically prior work on large-scale data lakehouse or data warehouse performance

  • Hands-on experience with containerization, Kubernetes, public cloud infrastructure (AWS, Azure and/or GCP) and mesh-networks

  • Certifications: CKA/CKAD, AWS Solutions Architect, GCP Cloud Architect, Azure Solutions Architect, or equivalent.

  • Security & Compliance: Experience writing performance tests that also verify data privacy and audit compliance (e.g., GDPR, HIPAA).

Why this role matters:

This is your opportunity to build cloud-native solutions that are deployable anywhere, whether in massive clusters on any cloud provider or in private data centers. You'll work with cutting-edge technologies like Trino, Spark, Airflow, and advanced AI inferencing systems to shape the future of analytics. Your code will directly influence how data engineers, analysts, and developers worldwide find value in their data.

We believe in the power of open source. You'll collaborate with project committers, contributing upstream to keep technologies like Apache Hive and Impala evolving. You'll harden these engines for rock-solid security, optimize them for peak performance, and make them effortlessly run across all environments. Join us and help build the trusted, cloud-native platform that powers insights for the most data-intensive companies on the planet.

This position is not eligible for sponsorship.

The expected base salary range for this role in:

  • California is $124,000 - $155,000

The salary will vary depending on your job-related skills, experience and location.


What you can expect from us:

  • Generous PTO Policy

  • Support work life balance with Unplugged Days

  • Flexible WFH Policy

  • Mental & Physical Wellness programs

  • Phone and Internet Reimbursement program

  • Access to Continued Career Development

  • Comprehensive Benefits and Competitive Packages

  • Paid Volunteer Time

  • Employee Resource Groups

EEO/VEVRAA

#LI-SZ1

#LI-HYBRID

Not Specified
Netcool Developer
Salary not disclosed
Basking Ridge, NJ 2 days ago

Netcool Developer with AIOps Cloud Pak Expertise


• Responsible for integrating and migrating traditional IBM Netcool Operations Insight (NOI) environments into IBM Cloud Pak for AIOps

• Connects on‑prem Netcool/OMNIbus and Netcool/Impact systems with Cloud Pak for AIOps using native connectors

• Migrates existing event filters, automations, and runbook policies into the AIOps platform

• Ensures seamless bidirectional synchronization of event data between Netcool and Cloud Pak for AIOps

• Configures event and alert data mapping and transformation rules (e.g., JSONata) for consistent processing

• Develops automation policies and runbooks using Netcool/Impact, and potentially Python or Bash scripting

• Supports the AIOps platform by supplying and validating high‑quality data for ML models (event grouping, log anomaly detection, metric anomaly detection, change risk assessment)

• Leverages Cloud Pak for AIOps topology and resource management features to build application‑centric infrastructure views

• Collaborates with DevOps, SRE, and operations teams to integrate third‑party tools such as Splunk, ServiceNow, Slack, and others

• Troubleshoots and resolves complex hybrid‑cloud issues arising during integration and ongoing operations

  • Possesses deep expertise in the IBM Netcool suite, including Netcool/OMNIbus, Netcool/Impact, probes, gateways, and Web GUI
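
Event mapping and transformation rules of the kind described above are typically written in JSONata inside Cloud Pak for AIOps. As a rough Python illustration only (the field names here are hypothetical, not the actual Netcool schema), such a rule has this shape:

```python
# Illustrative only: maps a hypothetical Netcool/OMNIbus event dict onto a
# normalized alert record, the kind of transformation a JSONata rule performs
# inside Cloud Pak for AIOps. All field names are invented for this sketch.

SEVERITY_MAP = {5: "critical", 4: "major", 3: "minor", 2: "warning"}

def map_event(omnibus_event: dict) -> dict:
    """Translate a raw event into a normalized alert record."""
    return {
        "resource": omnibus_event.get("Node", "unknown"),
        "summary": omnibus_event.get("Summary", ""),
        "severity": SEVERITY_MAP.get(omnibus_event.get("Severity"), "indeterminate"),
        # A stable dedup key keeps bidirectional sync from re-creating alerts.
        "dedup_key": f'{omnibus_event.get("Node")}:{omnibus_event.get("AlertGroup")}',
    }

alert = map_event({"Node": "router1", "Summary": "Link down",
                   "Severity": 5, "AlertGroup": "Link"})
print(alert["severity"])   # critical
print(alert["dedup_key"])  # router1:Link
```
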
Not Specified
Databricks Architect/ Senior Data Engineer
🏢 OZ
Salary not disclosed
Boca Raton, FL 2 days ago

OZ – Databricks Architect/ Senior Data Engineer


Note: Only applications from U.S. citizens or lawful permanent residents (Green Card Holders) will be considered.


We believe work should be innately rewarding and a team-building venture. Working with our teammates and clients should be an enjoyable journey where we can learn, grow as professionals, and achieve amazing results. Our core values revolve around this philosophy. We are relentlessly committed to helping our clients achieve their business goals, leapfrog the competition, and become leaders in their industry. What drives us forward is the culture of creativity combined with a disciplined approach, passion for learning & innovation, and a ‘can-do’ attitude!


What We're Looking For:

We are seeking a highly experienced Databricks professional with deep expertise in data engineering, distributed computing, and cloud-based data platforms. The ideal candidate is both an architect and a hands-on engineer who can design scalable data solutions while actively contributing to development, optimization, and deployment.


This role requires strong technical leadership, a deep understanding of modern data architectures, and the ability to implement best practices in DataOps, performance optimization, and data governance.


Experience with modern AI/GenAI-enabled data platforms and real-time data processing environments is highly desirable.


Position Overview:

The Databricks Senior Data Engineer will play a critical role in designing, implementing, and optimizing enterprise-scale data platforms using the Databricks Lakehouse architecture. This role combines architecture leadership with hands-on engineering, focusing on building scalable, secure, and high-performance data pipelines and platforms. The ideal candidate will establish coding standards, define data architecture frameworks such as the Medallion Architecture, and guide the end-to-end development lifecycle of modern data solutions.


This individual will collaborate with cross-functional stakeholders, including data engineers, BI developers, analysts, and business leaders, to deliver robust data platforms that enable advanced analytics, reporting, and AI-driven decision-making.


Key Responsibilities:

  • Architecture & Design: Architect and design scalable, reliable data platforms and complex ETL/ELT and streaming workflows for the Databricks Lakehouse Platform (Delta Lake, Spark).
  • Hands-On Development: Write, test, and optimize code in Python, PySpark, and SQL for data ingestion, transformation, and processing.
  • DataOps & Automation: Implement CI/CD, monitoring, and automation (e.g., with Azure DevOps, DBX) for data pipelines.
  • Stakeholder Collaboration: Work with BI developers, analysts, and business users to define requirements and deliver data-driven solutions.
  • Performance Optimization: Tune delta tables, Spark jobs, and SQL queries for maximum efficiency and scalability.
  • GenAI Application Development: Experience building GenAI applications is a strong plus.
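
The Medallion Architecture referenced in the overview layers data as bronze (raw), silver (cleansed and conformed), and gold (business-level aggregates). A minimal sketch in plain Python, with toy records standing in for Delta tables (a real pipeline would use PySpark and Delta Lake):

```python
# Medallion pattern sketch: bronze -> silver -> gold, with Python lists
# standing in for Delta tables purely to show the layering idea.

def to_silver(bronze_rows):
    """Silver layer: cleanse and conform raw bronze records."""
    return [
        {"order_id": r["order_id"], "amount": float(r["amount"])}
        for r in bronze_rows
        if r.get("order_id") and r.get("amount") is not None
    ]

def to_gold(silver_rows):
    """Gold layer: business-level aggregate (total revenue)."""
    return {"total_revenue": sum(r["amount"] for r in silver_rows)}

bronze = [
    {"order_id": "A1", "amount": "19.50"},
    {"order_id": None, "amount": "5.00"},   # malformed row, dropped in silver
    {"order_id": "A2", "amount": "30.50"},
]
print(to_gold(to_silver(bronze)))  # {'total_revenue': 50.0}
```

Each layer is persisted separately in practice, so downstream consumers read curated silver/gold tables rather than raw ingested data.
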


Requirements:

  • 8+ years of experience in data engineering, with strong hands-on expertise in Databricks and Apache Spark.
  • Proven experience designing and implementing scalable ETL/ELT pipelines in cloud environments.
  • Strong programming skills in Python and SQL; experience with PySpark required.
  • Hands-on experience with Databricks Lakehouse, Delta Lake, and distributed data processing.
  • Experience working with cloud platforms such as Microsoft Azure, AWS, or GCP (Azure preferred).
  • Experience with CI/CD pipelines, Git, and DevOps practices for data engineering.
  • Strong understanding of data architecture, data modeling, and performance optimization.
  • Experience working with cross-functional teams to deliver enterprise data solutions.
  • Ability to tackle complex data challenges while ensuring data quality and reliable delivery.


Qualifications:

  • Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related field.
  • Experience designing enterprise-scale data platforms and modern data architectures.
  • Experience with data integration tools such as Azure Data Factory or similar platforms.
  • Familiarity with cloud data warehouses such as Databricks, Snowflake, or Azure Fabric.
  • Experience supporting analytics, reporting, or AI/ML workloads is highly desirable.
  • Databricks, Azure, or cloud certifications are preferred.
  • Strong problem-solving, communication, and technical leadership skills.


Technical Proficiency in:

  • Databricks, Apache Spark, PySpark, Delta Lake
  • Python, SQL, Scala (preferred)
  • Cloud platforms: Azure (preferred), AWS, or GCP
  • Azure Data Factory, Kafka, and modern data integration tools
  • Data warehousing: Databricks, Snowflake, or Azure Fabric
  • DevOps tools: Git, Azure DevOps, CI/CD pipelines
  • Data architecture, ETL/ELT design, and performance optimization


What You’re Looking For:

Join a fast-growing organization that thrives on innovation and collaboration. You’ll work alongside talented, motivated colleagues in a global environment, helping clients solve their most critical business challenges. At OZ, your contributions matter – you’ll have the chance to be a key player in our growth and success. If you’re driven, bold, and eager to push boundaries, we invite you to join a company where you can truly make a difference.


About Us:

OZ is a 28-year-old global technology consulting, services, and solutions leader specializing in creating business-focused solutions for our clients by leveraging disruptive digital technologies and innovation.


OZ is committed to creating a continuum between work and life by allowing people to work remotely. We offer competitive compensation and a comprehensive benefits package. You’ll enjoy our work style within an incredible culture. We’ll give you the tools you need to succeed so you can grow and develop with us and become part of a team that lives by its core values.

Not Specified
Data Integration & AI Engineer
Salary not disclosed
Edison, NJ 2 days ago

About Wakefern

Wakefern Food Corp. is the largest retailer-owned cooperative in the United States and supports its co-operative members' retail operations, trading under the ShopRite®, Price Rite®, The Fresh Grocer®, Dearborn Markets®, and Gourmet Garage® banners.


Employing an innovative approach to wholesale business services, Wakefern focuses on helping the independent retailer compete in a big business world. Providing the tools entrepreneurs need to stay a step ahead of the competition, Wakefern’s co-operative members benefit from the company’s extensive portfolio of services, including innovative technology, private label development, and best-in-class procurement practices.


The ideal candidate will have a strong background in designing, developing, and implementing complex projects, with focus on automating data processes and driving efficiency within the organization. This role requires a close collaboration with application developers, data engineers, data analysts, data scientists to ensure seamless data integration and automation across various platforms. The Data Integration & AI Engineer is responsible for identifying opportunities to automate repetitive data processes, reduce manual intervention, and improve overall data accessibility.


Essential Functions

  • Participate in the development life cycle (requirements definition, project approval, design, development, and implementation) and maintenance of the systems.
  • Implement and enforce data quality and governance standards to ensure the accuracy and consistency.
  • Provide input for project plans and timelines to align with business objectives.
  • Monitor project progress, identify risks, and implement mitigation strategies.
  • Work with cross-functional teams and ensure effective communication and collaboration.
  • Provide regular updates to the management team.
  • Follow the standards and procedures according to Architecture Review Board best practices, revising them as requirements change and technological advancements are incorporated into the technology infrastructure.
  • Communicate and promote the code of ethics and business conduct.
  • Ensure completion of required company compliance training programs.
  • Be trained, either through formal education or through experience, in software/hardware technologies and development methodologies.
  • Stay current through personal development and professional and industry organizations.

Responsibilities

  • Design, build, and maintain automated data pipelines and ETL processes to ensure scalability, efficiency, and reliability across data operations.
  • Develop and implement robust data integration solutions to streamline data flow between diverse systems and databases.
  • Continuously optimize data workflows and automation processes to enhance performance, scalability, and maintainability.
  • Design and develop end-to-end data solutions utilizing modern technologies, including scripting languages, databases, APIs, and cloud platforms.
  • Ensure data solutions and data sources meet quality, security, and compliance standards.
  • Monitor and troubleshoot automation workflows, proactively identifying and resolving issues to minimize downtime.
  • Provide technical training, documentation, and ongoing support to end users of data automation systems.
  • Prepare and maintain comprehensive technical documentation, including solution designs, specifications, and operational procedures.


Qualifications

  • A bachelor's degree or higher in computer science, information systems, or a related field.
  • Hands-on experience with cloud data platforms (e.g., GCP, Azure, etc.)
  • Strong knowledge and skills in data automation technologies, such as Python, SQL, ETL/ELT tools, Kafka, APIs, cloud data pipelines, etc.
  • Experience in GCP BigQuery, Dataflow, Pub/Sub, and Cloud storage.
  • Experience with workflow orchestration tools such as Cloud Composer or Airflow
  • Proficiency in iPaaS (Integration Platform as a Service) platforms, such as Boomi, SAP BTP, etc.
  • Develop and manage data integrations for AI agents, connecting them to internal and external APIs, databases, and knowledge sources to expand their capabilities.
  • Build and maintain scalable Retrieval-Augmented Generation (RAG) pipelines, including the curation and indexing of knowledge bases in vector databases (e.g., Pinecone, Vertex AI Vector Search).
  • Leverage cloud-based AI/ML platforms (e.g., Vertex AI, Azure ML) to build, train, and deploy machine learning models on a scale.
  • Establish and enforce data quality and governance standards for AI/ML datasets, ensuring the accuracy, completeness, and integrity of data used for model training and validation.
  • Collaborate closely with data scientists and machine learning engineers to understand data requirements and deliver optimized data solutions that support the entire machine learning lifecycle.
  • Hands-on experience with IBM DataStage and Alteryx is a plus.
  • Strong understanding of database design principles, including normalization, indexing, partitioning, and query optimization.
  • Ability to design and maintain efficient, scalable, and well-structured database schemas to support both analytical and transactional workloads.
  • Familiarity with BI visualization tools such as MicroStrategy, Power BI, Looker, or similar.
  • Familiarity with data modeling tools.
  • Familiarity with DevOps practices for data (CI/CD pipelines)
  • Proficiency in project management software (e.g., JIRA, Clarizen, etc.)
  • Strong knowledge and skills in data management, data quality, and data governance.
  • Strong communication, collaboration, and problem-solving skills.
  • Ability to work on multiple projects and prioritize tasks effectively.
  • Ability to work independently and in a team environment.
  • Ability to learn new technologies and tools quickly.
  • The ability to handle stressful situations.
  • Highly developed business acuity and acumen.
  • Strong critical thinking and decision-making skills.
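
The RAG pipelines mentioned above hinge on a retrieval step: ranking indexed documents by vector similarity to a query embedding. A toy sketch with made-up two-dimensional vectors (production systems use a vector database such as Pinecone or Vertex AI Vector Search, and learned embeddings with hundreds of dimensions):

```python
# Toy RAG retrieval: rank documents by cosine similarity to a query vector.
# Vectors and document ids are invented for illustration.

import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def retrieve(query_vec, index, top_k=1):
    """Return the top_k document ids most similar to the query."""
    ranked = sorted(index.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in ranked[:top_k]]

index = {
    "doc_returns_policy": [0.9, 0.1],
    "doc_store_hours":    [0.1, 0.9],
}
print(retrieve([0.8, 0.2], index))  # ['doc_returns_policy']
```

The retrieved documents are then injected into the model's prompt as context; the curation and indexing work described in the bullet list is what determines the quality of this step.
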


Working Conditions & Physical Demands

This position requires in-person office presence at least 4x a week.


Compensation and Benefits

The salary range for this position is $75,868 - $150,644. Placement in the range depends on several factors, including experience, skills, education, geography, and budget considerations.

Wakefern is proud to offer a comprehensive benefits package designed to support the health, well-being, and professional development of our Associates. Benefits include medical, dental, and vision coverage, life and disability insurance, a 401(k) retirement plan with company match & annual company contribution, paid time off, holidays, and parental leave.


Associates also enjoy access to wellness and family support programs, fitness reimbursement, educational and training opportunities through our corporate university, and a collaborative, team-oriented work environment. Many of these benefits are fully or partially funded by the company, with some subject to eligibility requirements.

Not Specified
Senior Java Architect (Banking Domain)
Salary not disclosed
Dallas, TX 2 days ago

Position: Java Solution Architect

Location: TX/NJ

Duration: Long Term



As a Solution Architect, you will be an integral part of shaping the future of technology. This role requires deep technical expertise to translate complex business requirements into scalable, secure, and compliant technical solutions. You will serve as a bridge between business stakeholders and development teams, ensuring the delivery of high-quality, resilient systems that drive significant business impact.

Key Responsibilities

  • Solution Design & Architecture: Lead the design and development of end-to-end enterprise solutions, including high-level and low-level design documents and architecture diagrams.
  • Technology & Platform Selection: Select the appropriate technology stack, leveraging expertise in Java, Spring Boot, microservices architecture, and cloud platforms (AWS and Azure) to build robust, scalable, and cost-efficient applications.
  • Cloud Migration & Integration: Drive cloud transformation initiatives, including migrating on-premises applications to the cloud and integrating complex systems.
  • Security & Compliance: Ensure all solutions comply with the regulatory requirements (e.g., data privacy, security standards) and implement robust security measures, including identity and access management, encryption, and network security.
  • Technical Leadership & Collaboration: Provide technical guidance and mentorship to development teams, conducting code and architecture reviews to ensure alignment with architectural principles and best practices. Collaborate with cross-functional teams, including business analysts and project managers, to align technical solutions with business goals.
  • Innovation & Problem Solving: Evaluate new and emerging technologies, conducting proofs-of-concept (PoCs) to validate assumptions and drive continuous improvement in products, processes, and tools.

Qualifications and Skills

  • Experience:
  • 5+ years of relevant experience in a solution architecture or a lead engineering role within financial services or a related regulated industry.
  • Proven experience in designing and delivering large-scale IT projects with hands-on experience in Java-based systems.
  • Demonstrated experience running production applications in public cloud environments (AWS and/or Azure).
  • Technical Skills:
  • Proficiency in Java and Java frameworks (Spring, Spring Boot).
  • Strong database design (RDBMS, NoSQL) skills.
  • Strong knowledge of microservices, event-driven architecture (e.g., Kafka), and RESTful API design.
  • Experience with cloud services (compute, networking, databases, security) and containerization technologies (Docker, Kubernetes).
  • Familiarity with DevOps practices and CI/CD pipelines.
  • Soft Skills:
  • Excellent communication, presentation, and stakeholder management skills, with the ability to translate complex technical concepts for non-technical audiences.
  • Strong analytical, problem-solving, and decision-making abilities.
  • A proactive, self-motivated mindset with the ability to work through ambiguous requirements in an agile environment.

Preferred Certifications

  • AWS Certified Solutions Architect (Associate or Professional)
  • Microsoft Certified: Azure Solutions Architect Expert




Best Regards,

Deepak Gulia

Sr. Talent Acquisition-USA


100 Campus Drive, Suite 420, Florham Park, NJ 07932


Not Specified