Terraform Cloud Jobs in USA
AGE Solutions is a premier technology and professional services company, providing in-depth consulting, advanced technology solutions, and essential services throughout the U.S. government, defense, and intelligence sectors. Prioritizing innovation and client-focused solutions, we assist major agencies in addressing intricate issues and ensuring a more secure future.
We are looking for an Enterprise Architect to join our team in support of a DoD customer in Alexandria, VA. The EA will provide high-level architectural expertise to managers and technical staff; develop architectural products and deliverables for the enterprise and operational business lines; advise on the selection of technology purchases with regard to processing, data storage, data access, networks, systems, and applications development; and advise management on the feasibility of potential future projects.
Responsibilities Include:
- Provide actionable recommendations to resolve performance issues, enhance system capabilities, and proactively identify risks within production network equipment.
- Lead requirements development efforts by gathering, documenting, reviewing, consolidating, and refining functional and technical requirements in coordination with stakeholders.
- Facilitate and lead requirements review sessions and meetings to define networking requirements for assigned projects, including analysis of collected data.
- Provide input to the development of Technology Assessment Plans (TAPs), including scope, objectives, configurations, technical approach, schedules, roles, and independent cost estimates, and produce After Action Reports (AARs) documenting methodologies, results, and recommendations.
- Produce formal recommendations and executive briefings outlining findings, strategy, and capability roadmaps, while coordinating overall technology planning and tracking implementation activities.
- Analyze industry and Government data on technology changes impacting the DISN and commercial telecommunications sector and provide preliminary design and implementation guidance to support strategic planning and decision-making. Areas include wireless technologies, IPv6 transition (NIPRNet/SIPRNet), NetOps and cyber defense (JTF-GNO), VoIP/DSN, VTC and streaming video, core system refreshes, and emerging standards aligned with DoD mandates.
- Provide SME engineering services to design, configure, test, implement, and sustain STIG-compliant network and security architectures, including NAC (802.1x), reverse proxy and load balancing, web filtering, DNS/DHCP/IPAM, VPN, wireless/IDS, VDI support, VoIP/VTC/streaming, Check Point firewalls and MDM, NMS tool suites, MPLS architecture, SATCOM support, and DR/COOP planning; perform Tier IV troubleshooting and produce required technical documentation (ECRs, ITRs, NSOs, AARs, test plans/reports, and as-built diagrams).
- Provide Cloud-certified SME engineering support for on-premises to Cloud migrations, coordinating with CSPs (e.g., Azure, AWS, Oracle, Google Cloud, MilCloud2) and facilitating integration with Secure Cloud Computing Architecture (SCCA) BCAP in multi-cloud environments.
- Develop and maintain Cloud migration documentation to ensure accurate knowledge management and configuration records.
- Engineer, implement, and sustain the Enterprise F5 Application Delivery Controllers (ADCs) supporting DoD DMZ, data center, and Cloud-hosted applications, including Tier 3 support and configuration of GTM (DNS), LTM, and APM modules in compliance with DISA STIGs and industry best practices.
- Implement F5 break-and-inspect capabilities to meet DISA Cloud security requirements and coordinate with CSPs and developers to establish Layer 7 health monitoring.
- Provide subject matter expertise to operations teams to perform root cause analysis and to assess the effectiveness of the network solutions currently under consideration.
- Provide one-on-one consultation to team members, offering information, insight, and advice tailored to the unique and changing needs of the business, including opportunities in the growing information technology business. This consulting also includes briefings to personnel on industry trends.
- Address ongoing questions about published service deliverables, identify partnerships and industry networking opportunities, and interpret forecasts and research to inform the government's business decisions.
Required Skills, Qualifications and Experience:
- Minimum Requirement:
- Eight years of relevant experience
- Certification Requirements:
- DoD Approved 8570 Baseline Certification: Category IAT Level III (CCNP Security, CASP+ CE, CISSP, CISA, GCED, GCIH)
- Computing Environment Certifications:
- Cisco Certified Internetwork Expert (CCIE) Enterprise Infrastructure; experience with firewalls is also required.
- Preferred additional certifications: Check Point Certified Security Administrator (CCSA) or higher-level certification, Check Point Certified Security Expert (CCSE)
- Clearance Requirement:
- This position requires a SECRET clearance with a Tier 5 investigation
Compensation: $150,000+
At AGE Solutions, we reward performance, invest in growth, and share success. Our benefits support the whole person, professionally, financially, and personally.
- 26 Days Paid Leave: Includes vacation, sick, personal time, and holidays. You choose how to use it.
- Performance Bonuses: Performance bonuses are awarded based on individual contributions and company-wide results, aligning recognition with impact.
- 401(k) with Match: We match 3% of your contributions with immediate vesting.
- Financial Protection: Company-paid life insurance up to $300K and options for additional coverage for you and your dependents.
- Health Benefits: Multiple medical plans, dental, vision, FSA and HSA options to fit your needs.
- Parental Leave: 15 days of fully paid leave for new parents, because family matters.
- Military Differential Pay: We bridge the gap for employees on active duty, so they don't take a financial hit while serving.
- Professional Growth: Paid training and certifications, tuition reimbursement, and the tools and tech to get the job done right.
- Shared Success: In the event of a company sale, our CEO has committed to returning 80% of net proceeds to employees, ensuring our team shares in the long-term value they help create.
At AGE, you'll do work that matters, supported by a company that delivers for its people.
Translate complex data systems into clear architecture diagrams and documentation, and contribute directly to implementation.
Responsibilities: Design and implement secure, scalable cloud-based data pipelines, data lakes, and data warehouses.
Evaluate, select, and integrate cloud data services including storage, databases, and analytics tools.
Develop and maintain cloud data architecture strategies aligned with business and technical goals.
Create clear data flow diagrams, system architecture diagrams, and entity-relationship diagrams.
Document data architecture designs, technical decisions, and system processes.
Maintain architecture documentation to support development and operational teams.
Participate directly in the implementation of cloud data solutions and data pipelines.
Identify and implement performance optimization strategies for cloud-based data systems.
Troubleshoot and resolve issues related to data pipelines, data quality, and data accessibility.
Requirements: Bachelor’s Degree in Computer Science Engineering (Mandatory).
Minimum 5 years of hands-on experience in data engineering using distributed computing frameworks such as Spark, MapReduce, or Databricks.
Proven experience designing and implementing Azure-based cloud data solutions.
Required Skills: Strong knowledge of data modeling techniques and best practices.
Proficiency with relational and non-relational database systems.
Strong experience creating architecture diagrams using tools such as Visio, Lucidchart, or similar visualization tools.
Preferred Skills: Advanced experience with Azure data services such as Databricks and Azure Data Lake.
Expertise in big data technologies including Hadoop and Spark.
Knowledge of data governance, security frameworks, and compliance practices.
Experience with Python and SQL scripting.
Optomi, in partnership with a leading logistics company, is seeking a Senior Full Stack Java Developer (Java / Kafka / Spring Boot / AWS) to support and modernize critical systems within the Mechanical organization. This team maintains locomotives, railcars, and detector systems that capture millions of operational and safety data points across the network.
About the Position: The role focuses on refactoring legacy microservices, scaling event-driven architectures, re-platforming rules engines, and building cloud-native, resilient, high-volume IoT data pipelines. This is a high-impact engineering role where your work directly influences safety, reliability, and operational efficiency across one of the largest transport networks in the US.
Apply Today if your Background Includes:
- 6+ years of Full Stack experience with a backend emphasis in Java Spring Boot development.
- Strong event-driven architecture + Kafka experience
- Proven experience modernizing legacy microservices & distributed systems
- AWS Cloud experience
- Hands-on Python experience for backend/data workflows
- DevSecOps mindset: automated testing, CI/CD, secure coding
- Experience supporting both greenfield and legacy systems
- Strong relational DB experience (Postgres preferred)
- Experience handling IoT or high-volume sensor data pipelines
- Familiarity with open-source tooling and cloud-agnostic architectures
- Ability to mentor junior engineers and provide technical leadership
What the Right Professional Will Enjoy!
- Fully remote work opportunity with up to 20% travel.
- Opportunity to work with a fast-growing team focused on modernization, cloud adoption, and automation.
- Direct impact on safety, rail operations, and national freight logistics.
- Work on high-volume IoT, event-driven architectures, cloud-native systems.
- Exposure to AI/GenAI, automation, and open-source tooling
- Leadership opportunities with junior developers
Responsibilities:
- Modernize legacy microservices and distributed systems supporting mechanical operations and detector networks.
- Design and implement backend services using Java, Spring Boot, event-driven patterns, and Kafka.
- Scale and optimize high-volume IoT data pipelines (30M+ incoming data points from sensors/detectors).
- Lead architecture, design, and deployment efforts for new and existing services.
- Refactor and support large rules-engine frameworks (600+ rulesets).
- Contribute to cloud-native development (AWS preferred; Azure acceptable; cloud agnostic mindset encouraged).
- Use Python for backend workflows, automation, and data processing tasks.
- Build automated CI/CD and testing frameworks following DevSecOps best practices.
- Work with Postgres and relational databases to tune, model, and integrate data.
- Mentor junior developers and support a strong engineering culture focused on speed, clarity, and automation.
- Collaborate across teams to build scalable, modern systems.
- Support both new development and the existing application footprint.
Remote working/work at home options are available for this role.
Senior Platform Architect
Reports To: Director of Engineering
Department: Engineering
Location: Hybrid - Atlanta, GA
What makes MTech different:
Purpose-Driven Work – Build technology that solves real problems for the world
Casual & Collaborative – No corporate bureaucracy, direct access to senior leadership
Innovation-Focused – Healthy innovation pipeline expanding into new segments and technologies
Transparent & Data-Driven – Clear metrics, objectives, and visibility into company performance
Modern Development – Robust development tools, training programs, and technical excellence
Flexibility & Balance – Flexible work environment that values results over presenteeism
Job Summary
The Senior Platform Architect will lead the technical architecture, design, and modernization of large-scale, multi-tenant enterprise SaaS platforms built on Azure and the .NET stack. This role requires mastery of distributed systems, cloud-native design, and advanced engineering practices to deliver highly available, performant, and secure solutions for global consumer-facing SaaS and Agentic AI products.
Responsibilities and Duties
Architectural Design & Transformation
- Lead migration from monolithic systems to modular monolith and microservices architectures using domain-driven design, bounded contexts, and decomposition strategies.
- Design multi-tenant SaaS platforms with advanced tenant isolation, resource partitioning, and elastic scaling using Azure services.
- Define and enforce architectural standards for .NET (C#), TypeScript, Angular, SQL Server, and Azure, including dependency injection, SOLID principles, asynchronous programming, and reactive patterns.
- Design and implement distributed systems: service orchestration, API gateway management, IoT, edge computing, distributed transactions, eventual consistency, CQRS, and event sourcing.
- Architect for cloud-native resiliency: circuit breakers, bulkheads, retries, failover, geo-redundancy, and disaster recovery using Azure App Services, Azure Functions, Service Bus, Cosmos DB, and Azure SQL.
- Develop and maintain architecture documentation, reference models, and decision records using industry frameworks (TOGAF, Zachman, C4 Model).
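The resiliency patterns listed above (circuit breakers, bulkheads, retries, failover) follow well-known shapes. As a language-neutral illustration only, here is a minimal circuit breaker sketch in Python; on the .NET stack this role targets, a library such as Polly would normally provide this, and the thresholds here are arbitrary example values:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker sketch: after `max_failures` consecutive
    failures the circuit opens and rejects calls until `reset_timeout`
    seconds elapse, at which point one trial (half-open) call is allowed."""

    def __init__(self, max_failures: int = 3, reset_timeout: float = 30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # monotonic timestamp when the circuit opened

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: call rejected")
            self.opened_at = None  # half-open: let one trial call through
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # a success closes the circuit again
        return result
```

The key design point is that the breaker fails fast while open, shedding load from a struggling downstream dependency instead of piling up timed-out requests.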
Performance Engineering & Observability
- Establish and monitor platform SLOs (latency, throughput, error rates, availability) mapped to customer SLAs.
- Architect and implement advanced caching strategies, indexing, and query optimization for SQL Server and NoSQL stores in coordination with Senior Data Architect, Data Engineers, and Database Admins.
- Design and implement telemetry pipelines: distributed tracing (OpenTelemetry), structured logging, metrics collection, and real-time dashboards for system health and diagnostics.
- Conduct performance profiling, load testing, and capacity planning for backend services and frontend applications.
Automation, Quality, and DevOps
- Architect and implement CI/CD pipelines with automated build, test, security scanning, and deployment workflows.
- Integrate static code analysis, code coverage, and quality gates into the development lifecycle.
- Design and enforce automated testing strategies: unit, integration, contract, and end-to-end tests for backend and frontend components.
- Develop infrastructure as code (IaC) solutions for repeatable, scalable cloud provisioning.
- Create incident response playbooks for rollback, failover, and recovery; drive down MTTR and automate remediation where possible.
Security, Compliance, and Governance
- Architect for multi-tenant security: authentication/authorization (OAuth2, OpenID Connect), encryption at rest and in transit, secrets management, and compliance with SOC 1, SOC 2, GDPR, and other regulatory standards.
- Implement secure software development lifecycle (SSDLC) practices, threat modeling, and vulnerability management, including ZDR, DLP, and no-model-training policies for AI models.
- Ensure architectural governance and alignment with enterprise frameworks (TOGAF, Zachman), maintain architecture decision records, and participate in architecture review boards.
Technical Leadership & Collaboration
- Mentor engineering teams in advanced architectural concepts, distributed systems, cloud-native development, and best practices.
- Collaborate with Data Architect, DevOps, IT Services, Engineering and Product Management teams to ensure platform extensibility, integration, and support for complex business requirements.
- Evaluate and integrate AI/ML services, advanced analytics, and developer productivity tools to enhance platform capabilities.
- Champion a culture of technical excellence, continuous improvement, and innovation.
Required Experience & Skills
- 10+ years in software/platform engineering, with at least 8 years in platform architecture for enterprise SaaS on the Azure and .NET tech stack.
- Proven experience architecting and delivering large-scale, multi-tenant SaaS platforms for global consumer-facing products.
- Deep expertise in .NET (C#), Azure cloud services (App Services, Functions, Service Bus, Cosmos DB, SQL Server), Azure OpenAI, Microsoft Agent Framework, TypeScript, Angular, CI/CD, automated testing, and observability.
- Mastery of distributed systems, cloud-native patterns, event-driven architectures, and microservices.
- Demonstrated success in technical debt reduction, performance engineering, and architectural modernization.
- Experience with architectural frameworks (TOGAF, Zachman, C4 Model), architectural governance, and compliance.
- Strong understanding of platform security, regulatory compliance, and multi-tenant SaaS challenges.
Success Metrics (First 12 Months)
- Reduction in platform-related incidents/support tickets.
- Improvement in deployment speed and release velocity.
- Reduction in MTTR for platform incidents.
- Achievement of modularization milestones (monolith decomposition, service rollout, platform observability in production).
- Increase in automated test coverage, code quality, and system performance metrics.
Preferred Skills & Certifications
- TOGAF, Zachman, or similar architecture certification.
- Advanced knowledge of event sourcing, CQRS, service mesh, and cloud-native security.
- Familiarity with semantic technologies, knowledge graphs, and AI/ML integration.
- Hands-on experience with infrastructure as code, automated testing tools, and modern DevOps practices.
- Strong background in platform security, compliance, and multi-tenant SaaS challenges.
EEO Statement
Integrated into our shared values is MTech’s commitment to diversity and equal employment opportunity. All qualified applicants will receive consideration for employment without regard to sex, age, race, color, creed, religion, national origin, disability, sexual orientation, gender identity, veteran status, military service, genetic information, or any other characteristic or conduct protected by law. MTech aims to maintain a globally inclusive workplace where every person is regarded fairly, appreciated for their uniqueness, advanced according to their accomplishments, and encouraged to fulfill their highest potential.
Salesforce Product Owner/Manager
Location: Remote from US
Department: Enterprise Applications
Employment Type: Contract/Contract to Hire
Overview
The organization is seeking a Salesforce Product Owner or Product Manager to lead enhancements, governance, and the long-term roadmap for the Salesforce platform. This role focuses on closing the gap between business expectations and current system capabilities while also shaping the future direction of Salesforce, including exploration of Service Cloud, Agent Cloud, and emerging AI-driven features. This position requires strong local partnership with Jacksonville-based stakeholders and the ability to navigate a complex, multi-system environment.
Key Responsibilities
Product Ownership and Roadmap
• Own and refine the Salesforce roadmap, including near-term improvements to data quality, integration, and reporting, as well as longer-term initiatives such as Agent Cloud and AI-assisted capabilities.
• Prioritize work based on business value, complexity, and cross-functional impact.
• Ensure business expectations are aligned with realistic delivery timelines and technical feasibility.
Requirements Gathering and Backlog Management
• Lead discovery sessions across Sales, Finance, HR, Operations, and Contracts teams to gather detailed requirements.
• Document clear user stories, acceptance criteria, and functional requirements.
• Evaluate opportunities for AI-assisted workflows, agent productivity tools, and automated recommendations within Salesforce.
Data Quality and Governance
• Establish data governance standards to reduce duplicate accounts and inconsistent information.
• Define validation rules that support accurate opportunity management and prevent incorrect or duplicate entries.
• Improve data alignment across revenue structures, people attributes, and account hierarchies.
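The kind of validation rule described above can be sketched as a simple predicate over a record. The field names below are hypothetical, and in Salesforce this logic would normally be expressed declaratively (validation rules, duplicate rules) rather than in code; this is only an illustration of the logic:

```python
def validate_opportunity(opp: dict) -> list[str]:
    """Return validation errors for an opportunity record.

    Field names are hypothetical; in Salesforce this logic would live in
    declarative validation rules and duplicate rules, not application code.
    """
    errors = []
    # Block a "Closed Won" stage without a positive amount.
    if opp.get("stage") == "Closed Won" and not opp.get("amount"):
        errors.append("Closed Won opportunities must have an amount.")
    # Require a close date on every opportunity.
    if not opp.get("close_date"):
        errors.append("Close date is required.")
    return errors

print(validate_opportunity(
    {"stage": "Closed Won", "amount": None, "close_date": "2025-01-31"}
))
```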
Integration and Automation
• Identify integration needs across Salesforce, Oracle Fusion, Mosaic, HR systems, Finance systems, and other downstream applications.
• Evaluate automation opportunities such as eliminating manual uploads of financial hierarchies and improving synchronization of HR and Finance attributes.
• Work with technical teams to prepare the platform for future AI or Agent Cloud capabilities that rely on strong upstream and downstream data integrity.
Revenue and Reporting Alignment
• Partner with Finance teams to resolve gaps between estimated and actual revenue and ensure reports reflect accurate information at profit-level structures.
• Improve the flow of win or loss information and reduce the need for duplicate entry across CRM and contract related objects.
• Strengthen reporting visibility across retailers, revenue breakdowns, and opportunity lifecycle stages.
User Experience and Adoption
• Lead user acceptance testing and ensure enhancements meet the required standards.
• Define requirements for alerts, reminders, and user guidance, including notifications tied to financial mismatches or incomplete opportunity steps.
• Support communication, training, and adoption activities for new features and process changes.
Qualifications
• Five or more years of experience as a Product Owner, Product Manager, or Salesforce-focused Business Analyst.
• Strong understanding of Salesforce Sales Cloud and familiarity with Service Cloud or concepts related to agent workflows and AI capabilities.
• Experience working with financial and HR systems, preferably Oracle Fusion.
• Skilled in opportunity lifecycle management, revenue workflows, data quality, and Salesforce reporting.
• Effective communicator with the ability to work closely with senior business stakeholders.
• Must be local to Jacksonville, Florida or willing to relocate.
Ideal Candidate
The ideal candidate is proactive and detail oriented, capable of driving both immediate system improvements and long-term platform evolution. This person brings structure to complex business needs, aligns teams around priorities, and focuses on delivering enhancements that improve data accuracy, reporting, opportunity management, and cross-system consistency. They are comfortable working in a hybrid environment, influencing stakeholders, and preparing the organization for future capabilities such as Agent Cloud and AI-assisted features.
Welcome to ConsultNet, SaltClick, and Omni. As a premier national provider of technology talent and solutions, our expertise spans across project services, contract-to-hire, direct placement, and managed services, both onshore and nearshore.
Celebrating more than 25 years of partnership with a diverse client base, we've crafted rewarding opportunities for our consultants, fostering high-performing teams that deliver impactful results.
Over the last few years, thousands of consultants have found their calling with us in roles that have made a meaningful impact on their lives, enhanced their career, challenged them, and propelled them towards achieving their personal and professional goals. At ConsultNet, we believe effective communication is crucial in aligning the right job with your unique skills and professional aspirations. To us, it's all about the personal approach we take and the values we uphold.
Our comprehensive service offerings cover a wide range of technology positions across key markets nationwide.
We champion equality and inclusivity, proudly supporting an Equal Opportunity Employer policy. We welcome applicants regardless of Race, Color, Religion, Sex, Sexual Orientation, Gender Identity, National Origin, Age, Genetic Information, Disability, Protected Veteran Status, or any other status protected by law.
Business Area: Engineering
Seniority Level: Associate
Job Description:
At Cloudera, we empower people to transform complex data into clear and actionable insights. With as much data under management as the hyperscalers, we're the preferred data partner for the top companies in almost every industry. Powered by the relentless innovation of the open source community, Cloudera advances digital transformation for the world's largest enterprises.
At Cloudera, our Data Services Pillar is the heart of data innovation. We don't just work with technology; we build it. Our mission is to empower data practitioners by creating seamless, enterprise-grade experiences for data engineering, warehousing, streaming, operational databases, and AI.
You will be a key member of the NFQE (Non-Functional QE) team that drives the performance and reliability of Cloudera's Kubernetes-hosted data services. The role blends deep technical knowledge of performance testing, distributed data workloads, and container orchestration with a data-driven mindset. You'll design, automate, run, and analyze performance tests for Cloudera's flagship services, ensuring they meet or exceed customer-defined SLOs/SLAs at scale.
As a Performance Engineer, you will:
Work with internal development teams and the open source community to proactively drive performance improvements/optimizations across our data warehouse and Data Engineering stack.
Work with product managers, developers and the field team to understand performance and scale requirements, and develop benchmarks based on these requirements.
Develop automation to execute benchmarks, collect and aggregate metrics and profiles, and report results, trends, and regressions.
Analyze performance and scalability characteristics to identify bottlenecks in large-scale distributed systems.
Perform root cause analysis of performance issues identified by internal testing and from customers and suggest corrective actions.
Evaluate performance of systems and provide related guidance to the team.
We are excited about you if you have:
3+ years of industry experience in performance-related work, ideally on large-scale distributed systems
Understanding of DBMS algorithms and data structure fundamentals.
Understanding of hardware trends and full-stack systems performance: CPU, RAM, storage, network, Linux kernel, JVM, and distributed systems performance.
Understanding of performance analysis tools and techniques.
Strong design, coding, and test automation skills (Java/C++/Golang/Python preferred)
Knowledge of relevant frameworks, cloud providers, Kubernetes, etc.
Ability to work in a distributed setting with team members spread across multiple geographies
Demonstrated ability to work on large cross-functional projects, including strong written communication skills and a collaborative mindset, as you will be working with many teams inside and outside of Cloudera.
Experience with benchmark and performance test design. You should understand basic concepts of performance testing, including the different types of performance tests (microbenchmarks, end-to-end benchmarks, concurrency and scale testing) and how to reduce (or deal with) noise in test results.
Experience designing performance tests that provide useful insights into specific aspects of performance.
Solid understanding of basic performance theory - in particular a very good understanding of latency, throughput, and concurrency and how they relate to each other.
Strong understanding of the types of workloads you'll be testing. Ideally, you have specific experience creating performance tests for the product area you'll be working on (SQL, ML, etc.).
B.S. or M.S. in Computer Science or equivalent experience.
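One relationship worth internalizing for the latency/throughput/concurrency point above is Little's Law: average concurrency equals throughput times mean latency. A tiny sketch (the numbers are illustrative, not from any Cloudera benchmark):

```python
def littles_law_concurrency(throughput_rps: float, mean_latency_s: float) -> float:
    """Little's Law: the average number of in-flight requests equals
    arrival rate (req/s) times mean time in system (s)."""
    return throughput_rps * mean_latency_s

# A service sustaining 500 req/s at 40 ms mean latency holds roughly
# 20 requests in flight; adding concurrency beyond that without
# lowering latency cannot raise throughput.
print(littles_law_concurrency(500, 0.040))
```

This is why a benchmark that fixes client concurrency implicitly couples the latency and throughput it can observe.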
You might also have:
Experience with the Hadoop ecosystem (e.g., Hive, Impala, Spark), in particular prior work on large-scale data lakehouse or data warehouse performance
Hands-on experience with containerization, Kubernetes, public cloud infrastructure (AWS, Azure, and/or GCP), and mesh networks
Certifications: CKA/CKAD, AWS Solutions Architect, GCP Cloud Architect, Azure Solutions Architect, or equivalent.
Security & Compliance: Experience writing performance tests that also verify data privacy and audit compliance (e.g., GDPR, HIPAA).
Why this role matters:
This is your opportunity to build cloud-native solutions that are deployable anywhere, whether in massive clusters on any cloud provider or in private data centers. You'll work with cutting-edge technologies like Trino, Spark, Airflow, and advanced AI inferencing systems to shape the future of analytics. Your code will directly influence how data engineers, analysts, and developers worldwide find value in their data.
We believe in the power of open source. You'll collaborate with project committers, contributing upstream to keep technologies like Apache Hive and Impala evolving. You'll harden these engines for rock-solid security, optimize them for peak performance, and make them effortlessly run across all environments. Join us and help build the trusted, cloud-native platform that powers insights for the most data-intensive companies on the planet.
This position is not eligible for sponsorship.
The expected base salary range for this role in California is $124,000 - $155,000.
The salary will vary depending on your job-related skills, experience and location.
What you can expect from us:
Generous PTO Policy
Support work-life balance with Unplugged Days
Flexible WFH Policy
Mental & Physical Wellness programs
Phone and Internet Reimbursement program
Access to Continued Career Development
Comprehensive Benefits and Competitive Packages
Paid Volunteer Time
Employee Resource Groups
EEO/VEVRAA
Netcool Developer with AIOps Cloud Pak Expertise
• Responsible for integrating and migrating traditional IBM Netcool Operations Insight (NOI) environments into IBM Cloud Pak for AIOps
• Connects on‑prem Netcool/OMNIbus and Netcool/Impact systems with Cloud Pak for AIOps using native connectors
• Migrates existing event filters, automations, and runbook policies into the AIOps platform
• Ensures seamless bidirectional synchronization of event data between Netcool and Cloud Pak for AIOps
• Configures event and alert data mapping and transformation rules (e.g., JSONata) for consistent processing
• Develops automation policies and runbooks using Netcool/Impact, and potentially Python or Bash scripting
• Supports the AIOps platform by supplying and validating high‑quality data for ML models (event grouping, log anomaly detection, metric anomaly detection, change risk assessment)
• Leverages Cloud Pak for AIOps topology and resource management features to build application‑centric infrastructure views
• Collaborates with DevOps, SRE, and operations teams to integrate third‑party tools such as Splunk, ServiceNow, Slack, and others
• Troubleshoots and resolves complex hybrid‑cloud issues arising during integration and ongoing operations
• Possesses deep expertise in the IBM Netcool suite, including Netcool/OMNIbus, Netcool/Impact, probes, gateways, and Web GUI
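The mapping and transformation rules mentioned above reshape raw Netcool event fields into the alert schema the AIOps platform expects (in Cloud Pak for AIOps this is typically written in JSONata). As a language-neutral sketch of the same idea, here is a Python version; the target field names and severity scale are illustrative, not the actual Cloud Pak schema:

```python
def map_netcool_event(raw: dict) -> dict:
    """Map a raw Netcool/OMNIbus-style event into a normalized alert shape.

    The output field names are illustrative only; a real integration would
    follow the Cloud Pak for AIOps connector schema, usually via JSONata.
    """
    severity_map = {5: "critical", 4: "major", 3: "minor", 2: "warning", 1: "info"}
    return {
        "resource": raw.get("Node", "unknown"),
        "summary": raw.get("Summary", ""),
        "severity": severity_map.get(raw.get("Severity", 1), "info"),
        # A stable key lets the platform deduplicate repeated events.
        "deduplication_key": f'{raw.get("Node")}:{raw.get("AlertKey")}',
    }

event = {"Node": "router-01", "Summary": "Link down", "Severity": 5, "AlertKey": "ifDown.3"}
print(map_netcool_event(event))
```

Consistent, lossless mapping like this is what makes the bidirectional synchronization and downstream ML features (event grouping, anomaly detection) workable.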
OZ – Databricks Architect / Senior Data Engineer
Note: Only applications from U.S. citizens or lawful permanent residents (Green Card Holders) will be considered.
We believe work should be innately rewarding and a team-building venture. Working with our teammates and clients should be an enjoyable journey where we can learn, grow as professionals, and achieve amazing results. Our core values revolve around this philosophy. We are relentlessly committed to helping our clients achieve their business goals, leapfrog the competition, and become leaders in their industry. What drives us forward is the culture of creativity combined with a disciplined approach, passion for learning & innovation, and a ‘can-do’ attitude!
What We're Looking For:
We are seeking a highly experienced Databricks professional with deep expertise in data engineering, distributed computing, and cloud-based data platforms. The ideal candidate is both an architect and a hands-on engineer who can design scalable data solutions while actively contributing to development, optimization, and deployment.
This role requires strong technical leadership, a deep understanding of modern data architectures, and the ability to implement best practices in DataOps, performance optimization, and data governance.
Experience with modern AI/GenAI-enabled data platforms and real-time data processing environments is highly desirable.
Position Overview:
The Databricks Senior Data Engineer will play a critical role in designing, implementing, and optimizing enterprise-scale data platforms using the Databricks Lakehouse architecture. This role combines architecture leadership with hands-on engineering, focusing on building scalable, secure, and high-performance data pipelines and platforms. The ideal candidate will establish coding standards, define data architecture frameworks such as the Medallion Architecture, and guide the end-to-end development lifecycle of modern data solutions.
This individual will collaborate with cross-functional stakeholders, including data engineers, BI developers, analysts, and business leaders, to deliver robust data platforms that enable advanced analytics, reporting, and AI-driven decision-making.
Key Responsibilities:
- Architecture & Design: Architect and design scalable, reliable data platforms and complex ETL/ELT and streaming workflows for the Databricks Lakehouse Platform (Delta Lake, Spark).
- Hands-On Development: Write, test, and optimize code in Python, PySpark, and SQL for data ingestion, transformation, and processing.
- DataOps & Automation: Implement CI/CD, monitoring, and automation (e.g., with Azure DevOps, DBX) for data pipelines.
- Stakeholder Collaboration: Work with BI developers, analysts, and business users to define requirements and deliver data-driven solutions.
- Performance Optimization: Tune delta tables, Spark jobs, and SQL queries for maximum efficiency and scalability.
- GenAI Applications Development: Experience building GenAI applications is a significant plus.
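The Medallion Architecture named in the position overview can be sketched conceptually as follows. This uses plain Python rather than PySpark so the bronze/silver/gold layering is easy to see; in Databricks each layer would be a Delta table written with Spark, and the record shapes here are invented for illustration.

```python
# Conceptual Medallion Architecture sketch: bronze (raw) -> silver
# (cleansed) -> gold (business aggregates). Record fields are assumptions.

bronze = [  # raw ingested records, kept exactly as received
    {"order_id": "1", "amount": "19.99", "region": "east"},
    {"order_id": "1", "amount": "19.99", "region": "east"},   # duplicate
    {"order_id": "2", "amount": "bad",   "region": "west"},   # malformed
    {"order_id": "3", "amount": "5.00",  "region": "east"},
]

def to_silver(rows):
    """Cleanse and deduplicate: cast types, drop malformed rows."""
    seen, out = set(), []
    for r in rows:
        try:
            amount = float(r["amount"])
        except ValueError:
            continue  # a real pipeline would quarantine these rows
        if r["order_id"] in seen:
            continue
        seen.add(r["order_id"])
        out.append({"order_id": r["order_id"], "amount": amount,
                    "region": r["region"]})
    return out

def to_gold(rows):
    """Aggregate into a business-level view: revenue per region."""
    totals = {}
    for r in rows:
        totals[r["region"]] = totals.get(r["region"], 0.0) + r["amount"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
```

The value of the pattern is that each layer has a single responsibility, so tuning (partitioning, Z-ordering, caching in Delta) can be applied per layer without entangling ingestion with business logic.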
Requirements:
- 8+ years of experience in data engineering, with strong hands-on expertise in Databricks and Apache Spark.
- Proven experience designing and implementing scalable ETL/ELT pipelines in cloud environments.
- Strong programming skills in Python and SQL; experience with PySpark required.
- Hands-on experience with Databricks Lakehouse, Delta Lake, and distributed data processing.
- Experience working with cloud platforms such as Microsoft Azure, AWS, or GCP (Azure preferred).
- Experience with CI/CD pipelines, Git, and DevOps practices for data engineering.
- Strong understanding of data architecture, data modeling, and performance optimization.
- Experience working with cross-functional teams to deliver enterprise data solutions.
- Ability to tackle complex data challenges while ensuring data quality and reliable delivery.
Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related field.
- Experience designing enterprise-scale data platforms and modern data architectures.
- Experience with data integration tools such as Azure Data Factory or similar platforms.
- Familiarity with cloud data warehouses such as Databricks, Snowflake, or Microsoft Fabric.
- Experience supporting analytics, reporting, or AI/ML workloads is highly desirable.
- Databricks, Azure, or cloud certifications are preferred.
- Strong problem-solving, communication, and technical leadership skills.
Technical Proficiency in:
- Databricks, Apache Spark, PySpark, Delta Lake
- Python, SQL, Scala (preferred)
- Cloud platforms: Azure (preferred), AWS, or GCP
- Azure Data Factory, Kafka, and modern data integration tools
- Data warehousing: Databricks, Snowflake, or Microsoft Fabric
- DevOps tools: Git, Azure DevOps, CI/CD pipelines
- Data architecture, ETL/ELT design, and performance optimization
What You’re Looking For:
Join a fast-growing organization that thrives on innovation and collaboration. You’ll work alongside talented, motivated colleagues in a global environment, helping clients solve their most critical business challenges. At OZ, your contributions matter – you’ll have the chance to be a key player in our growth and success. If you’re driven, bold, and eager to push boundaries, we invite you to join a company where you can truly make a difference.
About Us:
OZ is a 28-year-old global technology consulting, services, and solutions leader specializing in creating business-focused solutions for our clients by leveraging disruptive digital technologies and innovation.
OZ is committed to creating a continuum between work and life by allowing people to work remotely. We offer competitive compensation and a comprehensive benefits package. You’ll enjoy our work style within an incredible culture. We’ll give you the tools you need to succeed so you can grow and develop with us and become part of a team that lives by its core values.
About Wakefern
Wakefern Food Corp. is the largest retailer-owned cooperative in the United States and supports its co-operative members' retail operations, trading under the ShopRite®, Price Rite®, The Fresh Grocer®, Dearborn Markets®, and Gourmet Garage® banners.
Employing an innovative approach to wholesale business services, Wakefern focuses on helping the independent retailer compete in a big business world. Providing the tools entrepreneurs need to stay a step ahead of the competition, Wakefern’s co-operative members benefit from the company’s extensive portfolio of services, including innovative technology, private label development, and best-in-class procurement practices.
The ideal candidate will have a strong background in designing, developing, and implementing complex projects, with a focus on automating data processes and driving efficiency within the organization. This role requires close collaboration with application developers, data engineers, data analysts, and data scientists to ensure seamless data integration and automation across various platforms. The Data Integration & AI Engineer is responsible for identifying opportunities to automate repetitive data processes, reduce manual intervention, and improve overall data accessibility.
Essential Functions
- Participate in the development life cycle (requirements definition, project approval, design, development, and implementation) and maintenance of the systems.
- Implement and enforce data quality and governance standards to ensure the accuracy and consistency of data.
- Provide input for project plans and timelines to align with business objectives.
- Monitor project progress, identify risks, and implement mitigation strategies.
- Work with cross-functional teams and ensure effective communication and collaboration.
- Provide regular updates to the management team.
- Follow the standards and procedures according to Architecture Review Board best practices, revising standards and procedures as requirements change and technological advancements are incorporated into the technology infrastructure.
- Communicate and promote the code of ethics and business conduct.
- Ensure completion of required company compliance training programs.
- Be trained, through formal education or experience, in software/hardware technologies and development methodologies.
- Stay current through personal development and professional and industry organizations.
Responsibilities
- Design, build, and maintain automated data pipelines and ETL processes to ensure scalability, efficiency, and reliability across data operations.
- Develop and implement robust data integration solutions to streamline data flow between diverse systems and databases.
- Continuously optimize data workflows and automation processes to enhance performance, scalability, and maintainability.
- Design and develop end-to-end data solutions utilizing modern technologies, including scripting languages, databases, APIs, and cloud platforms.
- Ensure data solutions and data sources meet quality, security, and compliance standards.
- Monitor and troubleshoot automation workflows, proactively identifying and resolving issues to minimize downtime.
- Provide technical training, documentation, and ongoing support to end users of data automation systems.
- Prepare and maintain comprehensive technical documentation, including solution designs, specifications, and operational procedures.
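The monitor-and-troubleshoot responsibility above usually starts with a retry-then-alert discipline around each workflow step. The sketch below is a minimal illustration of that pattern; the step name, payload shape, and retry policy are all assumptions for the example.

```python
# Minimal retry-then-alert pattern for an automated pipeline step.
# Step names, payload, and policy are hypothetical illustrations.
import time

def run_with_retry(step, payload, retries=3, delay=0.0):
    """Run one pipeline step, retrying transient failures before alerting."""
    for attempt in range(1, retries + 1):
        try:
            return step(payload)
        except Exception as exc:
            if attempt == retries:
                # In production: emit a metric / page on-call here.
                raise RuntimeError(f"step failed after {retries} attempts") from exc
            time.sleep(delay)  # back off before the next attempt

calls = {"n": 0}
def flaky_extract(payload):
    """Simulated source that fails once, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 2:
        raise ConnectionError("transient source outage")
    return [{"id": 1, "value": payload["value"]}]

rows = run_with_retry(flaky_extract, {"value": 42})
```

Orchestration tools such as Airflow or Cloud Composer provide this retry/alerting machinery declaratively per task, which is why they appear in the qualifications below.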
Qualifications
- A bachelor's degree or higher in computer science, information systems, or a related field.
- Hands-on experience with cloud data platforms (e.g., GCP, Azure, etc.)
- Strong knowledge and skills in data automation technologies, such as Python, SQL, ETL/ELT tools, Kafka, APIs, cloud data pipelines, etc.
- Experience in GCP BigQuery, Dataflow, Pub/Sub, and Cloud storage.
- Experience with workflow orchestration tools such as Cloud Composer or Airflow
- Proficiency in iPaaS (Integration Platform as a Service) platforms, such as Boomi, SAP BTP, etc.
- Experience developing and managing data integrations for AI agents, connecting them to internal and external APIs, databases, and knowledge sources to expand their capabilities.
- Experience building and maintaining scalable Retrieval-Augmented Generation (RAG) pipelines, including the curation and indexing of knowledge bases in vector databases (e.g., Pinecone, Vertex AI Vector Search).
- Experience leveraging cloud-based AI/ML platforms (e.g., Vertex AI, Azure ML) to build, train, and deploy machine learning models at scale.
- Ability to establish and enforce data quality and governance standards for AI/ML datasets, ensuring the accuracy, completeness, and integrity of data used for model training and validation.
- Ability to collaborate closely with data scientists and machine learning engineers to understand data requirements and deliver optimized data solutions that support the entire machine learning lifecycle.
- Hands-on experience with IBM DataStage and Alteryx is a plus.
- Strong understanding of database design principles, including normalization, indexing, partitioning, and query optimization.
- Ability to design and maintain efficient, scalable, and well-structured database schemas to support both analytical and transactional workloads.
- Familiarity with BI visualization tools such as MicroStrategy, Power BI, Looker, or similar.
- Familiarity with data modeling tools.
- Familiarity with DevOps practices for data (CI/CD pipelines)
- Proficiency in project management software (e.g., JIRA, Clarizen, etc.)
- Strong knowledge and skills in data management, data quality, and data governance.
- Strong communication, collaboration, and problem-solving skills.
- Ability to work on multiple projects and prioritize tasks effectively.
- Ability to work independently and in a team environment.
- Ability to learn new technologies and tools quickly.
- Ability to handle stressful situations.
- Highly developed business acumen.
- Strong critical thinking and decision-making skills.
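The RAG pipelines mentioned in the qualifications above center on a retrieval step over a vector index. The toy sketch below uses an in-memory dictionary with cosine similarity as a stand-in for a managed store such as Pinecone or Vertex AI Vector Search; the document names and embedding vectors are invented for illustration.

```python
# Toy retrieval step of a RAG pipeline: rank indexed documents by
# cosine similarity to a query vector. The "embeddings" are invented;
# a real pipeline would call an embedding model and a vector database.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

index = {
    "return policy doc": [0.9, 0.1, 0.0],
    "store hours doc":   [0.1, 0.8, 0.2],
    "recipe doc":        [0.0, 0.2, 0.9],
}

def retrieve(query_vec, k=1):
    """Return the k documents most similar to the query, to ground the LLM prompt."""
    ranked = sorted(index, key=lambda d: cosine(index[d], query_vec), reverse=True)
    return ranked[:k]

top = retrieve([0.85, 0.15, 0.05])  # a query about returns
```

Curating what goes into the index, and keeping it fresh, is the data-engineering half of RAG work that the qualification list emphasizes.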
Working Conditions & Physical Demands
This position requires in-person office presence at least 4x a week.
Compensation and Benefits
The salary range for this position is $75,868 - $150,644. Placement in the range depends on several factors, including experience, skills, education, geography, and budget considerations.
Wakefern is proud to offer a comprehensive benefits package designed to support the health, well-being, and professional development of our Associates. Benefits include medical, dental, and vision coverage, life and disability insurance, a 401(k) retirement plan with company match & annual company contribution, paid time off, holidays, and parental leave.
Associates also enjoy access to wellness and family support programs, fitness reimbursement, educational and training opportunities through our corporate university, and a collaborative, team-oriented work environment. Many of these benefits are fully or partially funded by the company, with some subject to eligibility requirements.
Position: Java Solution Architect
Location: TX/NJ
Duration: Long Term
As a Solution Architect, you will be an integral part of shaping the future of technology. This role requires deep technical expertise to translate complex business requirements into scalable, secure, and compliant technical solutions. You will serve as a bridge between business stakeholders and development teams, ensuring the delivery of high-quality, resilient systems that drive significant business impact.
Key Responsibilities
- Solution Design & Architecture: Lead the design and development of end-to-end enterprise solutions, including high-level and low-level design documents and architecture diagrams.
- Technology & Platform Selection: Select the appropriate technology stack, leveraging expertise in Java, Spring Boot, microservices architecture, and cloud platforms (AWS and Azure) to build robust, scalable, and cost-efficient applications.
- Cloud Migration & Integration: Drive cloud transformation initiatives, including migrating on-premises applications to the cloud and integrating complex systems.
- Security & Compliance: Ensure all solutions comply with regulatory requirements (e.g., data privacy, security standards) and implement robust security measures, including identity and access management, encryption, and network security.
- Technical Leadership & Collaboration: Provide technical guidance and mentorship to development teams, conducting code and architecture reviews to ensure alignment with architectural principles and best practices. Collaborate with cross-functional teams, including business analysts and project managers, to align technical solutions with business goals.
- Innovation & Problem Solving: Evaluate new and emerging technologies, conducting proofs-of-concept (PoCs) to validate assumptions and drive continuous improvement in products, processes, and tools.
Qualifications and Skills
- Experience:
- 5+ years of relevant experience in a solution architecture or a lead engineering role within financial services or a related regulated industry.
- Proven experience in designing and delivering large-scale IT projects with hands-on experience in Java-based systems.
- Demonstrated experience running production applications in public cloud environments (AWS and/or Azure).
- Technical Skills:
- Proficiency in Java and Java frameworks (Spring, Spring Boot).
- Strong database design skills (RDBMS, NoSQL).
- Strong knowledge of microservices, event-driven architecture (e.g., Kafka), and RESTful API design.
- Experience with cloud services (compute, networking, databases, security) and containerization technologies (Docker, Kubernetes).
- Familiarity with DevOps practices and CI/CD pipelines.
- Soft Skills:
- Excellent communication, presentation, and stakeholder management skills, with the ability to translate complex technical concepts for non-technical audiences.
- Strong analytical, problem-solving, and decision-making abilities.
- A proactive, self-motivated mindset with the ability to work through ambiguous requirements in an agile environment.
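The event-driven architectures named in the technical skills above hinge on handlers that tolerate redelivery, since brokers like Kafka deliver at-least-once. The sketch below shows the idempotent-consumer pattern in Python for brevity (the role itself is Java/Spring); event shape and stores are hypothetical stand-ins.

```python
# Idempotent event handler: the core discipline behind Kafka-style
# at-least-once delivery. Event shape and stores are hypothetical.

processed_ids = set()          # in production: a durable store, not memory
balances = {"acct-1": 100}

def handle(event):
    """Apply a payment event exactly once, even if the broker redelivers it."""
    if event["event_id"] in processed_ids:
        return False           # duplicate delivery: skip all side effects
    balances[event["account"]] = balances.get(event["account"], 0) + event["amount"]
    processed_ids.add(event["event_id"])
    return True

evt = {"event_id": "e-42", "account": "acct-1", "amount": 25}
handle(evt)
handle(evt)   # redelivered by the broker; must be a no-op
```

In a Spring Boot service the same idea typically lives in a Kafka listener that checks a processed-event table inside the same transaction as the business write.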
Preferred Certifications
- AWS Certified Solutions Architect (Associate or Professional)
- Microsoft Certified: Azure Solutions Architect Expert
Best Regards,
Deepak Gulia
Sr. Talent Acquisition-USA
100 Campus Drive, Suite 420, Florham Park, NJ 07932