Cloud Buddy Pipe Jobs in USA

2,200 positions found — Page 14

Performance Engineer - Non-Functional QE
Salary not disclosed
San Jose, CA 3 days ago

Business Area:

Engineering

Seniority Level:

Associate

Job Description:

At Cloudera, we empower people to transform complex data into clear and actionable insights. With as much data under management as the hyperscalers, we're the preferred data partner for the top companies in almost every industry. Powered by the relentless innovation of the open source community, Cloudera advances digital transformation for the world's largest enterprises.

At Cloudera, our Data Services Pillar is the heart of data innovation. We don't just work with technology; we build it. Our mission is to empower data practitioners by creating seamless, enterprise-grade experiences for data engineering, warehousing, streaming, operational databases, and AI.

You will be a key member of the NFQE (Non-Functional QE) team that drives the performance reliability of Cloudera's Kubernetes-hosted data services. The role blends deep technical knowledge of performance testing, distributed data workloads, and container orchestration with a data-driven mindset. You'll design, automate, run, and analyze performance tests for Cloudera's flagship services, ensuring they meet or exceed customer-defined SLOs/SLAs at scale.

As a Performance Engineer, you will:

  • Work with internal development teams and the open source community to proactively drive performance improvements/optimizations across our data warehouse and Data Engineering stack.

  • Work with product managers, developers and the field team to understand performance and scale requirements, and develop benchmarks based on these requirements.

  • Develop automation to execute benchmarks, collect and aggregate metrics and profiles, and report results, trends, and regressions.

  • Analyze performance and scalability characteristics to identify bottlenecks in large-scale distributed systems.

  • Perform root cause analysis of performance issues identified by internal testing and from customers and suggest corrective actions.

  • Evaluate performance of systems and provide related guidance to the team.

We are excited about you if you have:

  • 3+ years of industry experience in performance-related work, ideally on large-scale distributed systems

  • Understanding of DBMS algorithms and data structure fundamentals.

  • Understanding of hardware trends and full-stack systems performance: CPU, RAM, storage, network, Linux kernel, JVM, and distributed systems performance.

  • Understanding of performance analysis tools and techniques.

  • Strong design, coding, and test automation skills (Java/C++/Golang/Python preferred)

  • Knowledge of relevant frameworks and cloud providers, Kubernetes, etc.

  • Ability to work in a distributed setting with team members spread across multiple geographies

  • Demonstrated ability to work on large cross-functional projects, including strong written communication skills and a collaborative mindset, as you will be working with many teams inside and outside of Cloudera.

  • Experience with benchmark and performance test design. You should understand basic concepts of performance testing, including different types of performance tests (microbenchmarks, end-to-end benchmarks, concurrency and scale testing) and how to reduce (or deal with) noise in test results.

  • Experience designing performance tests that provide useful insights into specific aspects of performance.

  • Solid understanding of basic performance theory - in particular a very good understanding of latency, throughput, and concurrency and how they relate to each other.

  • Strong understanding of the types of workloads you'll be testing. Ideally, you have specific experience creating performance tests for the product area you'll be working on (SQL, ML, etc.).

  • B.S. or M.S. in Computer Science or equivalent experience.
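The benchmark-design points above (warmup, repeated runs, reporting a robust statistic to damp noise) can be sketched in a few lines. This is a minimal illustration, not Cloudera's harness; the workload is a hypothetical stand-in for a real query or job.

```python
import statistics
import time

def run_benchmark(workload, warmup=3, iterations=10):
    """Time a workload repeatedly; report the median to damp run-to-run noise."""
    for _ in range(warmup):
        workload()  # warm caches and runtime before measuring
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        workload()
        samples.append(time.perf_counter() - start)
    return {
        "median_s": statistics.median(samples),  # robust central tendency
        "stdev_s": statistics.stdev(samples),    # crude noise indicator
        "samples": len(samples),
    }

# Hypothetical workload: summing a range stands in for a real benchmark kernel.
result = run_benchmark(lambda: sum(range(100_000)))
print(result["samples"])  # 10
```

Reporting the median rather than the mean, and discarding warmup iterations, are two of the simplest noise-reduction techniques the bullet above alludes to.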

You might also have:

  • Experience with the Hadoop ecosystem (e.g., Hive, Impala, Spark), in particular prior work on large-scale data lakehouse or data warehouse performance

  • Hands-on experience with containerization, Kubernetes, public cloud infrastructure (AWS, Azure and/or GCP) and mesh-networks

  • Certifications: CKA/CKAD, AWS Solutions Architect, GCP Cloud Architect, Azure Solutions Architect, or equivalent.

  • Security & Compliance: Experience writing performance tests that also verify data privacy and audit compliance (e.g., GDPR, HIPAA).

Why this role matters:

This is your opportunity to build cloud-native solutions that are deployable anywhere whether in massive clusters on any cloud provider or in private data centers. You'll work with cutting-edge technologies like Trino, Spark, Airflow, and advanced AI inferencing systems to shape the future of analytics. Your code will directly influence how data engineers, analysts, and developers worldwide find value in their data.

We believe in the power of open source. You'll collaborate with project committers, contributing upstream to keep technologies like Apache Hive and Impala evolving. You'll harden these engines for rock-solid security, optimize them for peak performance, and make them effortlessly run across all environments. Join us and help build the trusted, cloud-native platform that powers insights for the most data-intensive companies on the planet.

This position is not eligible for sponsorship.

The expected base salary range for this role in:

  • California is $124,000 - $155,000

The salary will vary depending on your job-related skills, experience and location.


What you can expect from us:

  • Generous PTO Policy

  • Support for work-life balance with Unplugged Days

  • Flexible WFH Policy

  • Mental & Physical Wellness programs

  • Phone and Internet Reimbursement program

  • Access to Continued Career Development

  • Comprehensive Benefits and Competitive Packages

  • Paid Volunteer Time

  • Employee Resource Groups

EEO/VEVRAA

#LI-SZ1

#LI-HYBRID

Surgical Tech Extern, Surgical Tech and First Assist
Salary not disclosed
Decatur, IL 2 days ago
Pay Range:

$26.80 - $40.20

A successful candidate's actual pay rate will be based on several factors including relevant experience, skills, training, certifications and education.

Hospital Sisters Health System (HSHS) is seeking Surgical Tech Externs, Surgical Techs, and First Assists to join our Surgery Procedures-General unit. Ideal candidates are patient focused, mission driven caregivers looking for an opportunity to apply clinical knowledge in a fast-paced environment.

Assists surgical team members in providing intraoperative care to the perioperative patient by preparing and monitoring equipment, passing instruments, maintaining the sterile field, assisting with patient transfer/positioning and performing a variety of surgical skills and procedures.

Position Specifics:

- Opening January 2026: Brand new, state of the art Operating Rooms!
- Surgical Tech Externs, Surgical Techs and First Assists
- Department: Surgery
- Core Function: Surgical Tech
- Schedule: Full-Time Days (40 hours per week). On call/weekend rotation, currently on a 5-day rotation (so every 5th day you would be on call at the time the shift ends or if it’s a weekend day it would be for 24hrs). Buddy call would start midway through orientation and then independent responsibilities would start once off orientation
- Facility: St Mary’s Hospital
- Clinic Location: Decatur, IL
- Sign-On Bonus: $20,000 for Full-Time Surgical Techs and First Assists
- Compensation that aligns with experience. Shift differentials for night and weekend on top of base rate as well as call pay!

Education Qualifications

- High School Diploma or GED is required.
- Associates degree in surgical technology is preferred.
- Graduation from a recognized surgical technology program, or successful completion of military training in surgical technology, is required.
- Military documentation to establish education includes a DD214 demonstrating graduation from a military training program.
- Illinois: High School Diploma or GED PLUS 3 years working as a Surgical Technologist working in healthcare setting may be considered in lieu of education.

Experience Qualifications

- 1 year surgery experience is preferred.

Certifications, Licenses and Registrations

- One of the following certifications is required: Certified Surgical Technologist (CST) issued by National Board of Surgical Technology and Surgical Assisting (NBSTSA) OR Tech in Surgery (TS-C) issued by the National Center for Competency Testing (NCCT).
- Basic Life Support (BLS) is required.

Scheduled Weekly Hours:

Across 13 hospitals and numerous community-based health centers and clinics in communities throughout Illinois and Wisconsin, our 13,000+ colleagues have built a culture based on our solid core values of respect, care, competence, and joy. These are the ideals we believe in, work by, and live each day.

Built upon more than 145 years of service to the communities we serve, we now look to the future and our place in it as a health care system that strives to continually improve processes, procedures, and outcomes with the latest and most advanced technologies and treatments.

Regardless of how far our passion for excellence carries us, our focus will always remain on the most important person in our entire organization: The patient.

Benefits: HSHS provides a benefits package designed to support the overall well-being of our colleagues including their physical, emotional, financial, spiritual, and work health. Colleagues budgeted to work at least 32 hours per pay period are eligible for HSHS benefits.

- Comprehensive and affordable health coverage includes medical, prescription, dental and vision coverage for full-time and part-time colleagues.

- Paid Time Off (PTO) combines vacation, sick, and personal days into one balance to allow you the flexibility to use your time off as you need.

- Retirement benefits including HSHS
contributions.

- Education Assistance benefits include up to $4,000 of educational assistance each calendar year and tuition discounts to select colleges with no waiting period.

- Adoption Assistance provides financial support up to $7,500 for colleagues growing their families through adoption to reimburse application and legal fees, transportation, and more!

- Other benefits include: Wellness program with incentives, employer-paid life insurance and short-term and long-term disability coverage, flexible spending accounts, employee assistance program, ID theft coverage, colleague rewards and recognition program, discount program, and more!

HSHS is proud to be an equal opportunity workplace dedicated to pursuing and hiring a diverse workforce.

Powered by SonicJobs (an advertiser on Veritone). By applying, you consent to share your data with SonicJobs and the employer. Veritone or SonicJobs does not store or use your application data beyond facilitating the application.
Travel Cardiac Cath Lab Interventional Technologist
Salary not disclosed
Omaha, NE 16 hours ago
Job Description

TheraEX Staffing Services is seeking a travel Cath Lab Technologist for a travel job in Omaha, Nebraska.

Job Description & Requirements

- Specialty: Cath Lab Technologist
- Discipline: Allied Health Professional
- Start Date: 03/30/2026
- Duration: 13 weeks
- 40 hours per week
- Shift: 10 hours, days
- Employment Type: Travel

Shift: 7:00 AM – 5:30 PM | 4 days/week

Guaranteed Hours: 40/week

Orientation: 12 hours (approx. 1 week with on-call buddy)

Charting: EPIC / Apollo cardiac charting

Requirements:

2+ years experience (3+ years hospital Cath Lab/IR preferred)

Certifications: RCIS, ACLS (AHA), BLS (AHA)

Driver’s license required at submission

Call Requirements:

Minimum 10 weeknight call shifts

Minimum 2 weekend call shifts (1 IR + 1 Cardiac)

30-minute response time

Floating & Coverage:

Float between 3 campuses: Immanuel, Lakeside, Mercy CB (6–15 miles apart)

Daily floating and on-call coverage possible

Additional Details:

Radius rule: 50 miles (cannot be employed by CommonSpirit/CHI/Dignity facilities)

TheraEX Staffing Services Job ID #1060556. Pay package is based on 10 hour shifts and 40 hours per week (subject to confirmation) with tax-free stipend amount to be determined.

About TheraEX Staffing Services

TheraEx Staffing Services is a leading name in healthcare staffing solutions. We enroll talented professionals to provide temporary staff to fill the needs of healthcare facilities across the nation.

Benefits

- Dental benefits
- Vision benefits
- 401k retirement plan
- Health Care FSA
- Life insurance
- Sick pay
- Holiday Pay
- Medical benefits
Travel PACU RN
Salary not disclosed
Roseburg, OR 16 hours ago
Job Description

American Traveler is seeking a travel nurse RN PACU - Post Anesthesia Care for a travel nursing job in Roseburg, Oregon.

Job Description & Requirements

- Specialty: PACU - Post Anesthesia Care
- Discipline: RN
- Start Date: 04/06/2026
- Duration: 13 weeks
- 40 hours per week
- Shift: 8 hours, days
- Employment Type: Travel

Assignment Overview

- Shift: Days, 5x8hrs
- Hours: 40 hrs/wk
- Start Date: Apr 6, 2026
- Length: 13 weeks
- Openings: 2

Description

American Traveler is hiring an experienced RN for a PACU position requiring Phase 1 and Phase 2 recovery experience, with rotation between pre-op and PACU across two sites.

Details

- Acute care hospital setting with rotation between the main medical center (MMC) and an outpatient surgery center (ORSC)
- Unit covers Pre-op and PACU at MMC, as well as recovery at ORSC
- MMC has 8 pre-op bays and 4 PACU bays; ORSC has 10 recovery bays, 6 ORs, and 4 Endo suites
- Daily case volume ranges from 5–10 cases at MMC and 15–30 at ORSC
- Common cases include General, GYN, Ortho & Sports Medicine, Hand Specialist, Podiatry, Eyes, ENT, and Endo
- Nurse-to-patient ratios: Pre-op 1:2 | PACU 1:1 to 1:3 | Recovery 1:2 to 1:3
- Phase 1 and Phase 2 recovery experience required
- RNs are responsible for reading and interpreting cardiac rhythms and responding appropriately to life-threatening arrhythmias — no monitor techs on unit
- EMR: MEDITECH at MMC; mix of MEDITECH and paper charting at ORSC
- Schedule consists of five 8-hour day shifts with varied start times; unit operates Monday–Friday 0600–1800
- No weeknight, weekend, or holiday call requirements
- Travelers will be asked to float between MMC and ORSC

Requirements

- Active RN license (state or compact/NLC) required; pending licenses not accepted
- An Oregon RN license is required for this position; the facility is willing to request a 90-Day Emergency License if needed
- Current BLS and ACLS certifications required; PALS preferred
- Minimum 2 years of PACU nursing experience required
- Experience with MEDITECH EMR required
- Ability to read and interpret cardiac rhythms independently
- A copy of a valid driver's license is required for consideration
- Two professional references required: one supervisor from within the last 12 months and one peer or supervisor from within the last 3 years, each including dates of employment and eligibility for rehire

Additional Information

- Nurses rotate between pre-op, PACU, and recovery responsibilities across both sites, and are expected to be flexible in filling staffing gaps
- Medical assistants support nursing staff with patient admission and discharge
- Scheduling is managed by the unit manager/scheduler; travelers are expected to be flexible
- Orientation includes a general orientation, scavenger hunt, 1–2 days with a preceptor, and an assigned buddy until comfortable
- Scrubs are provided by the facility
- Candidates may not have been directly employed (full-time, part-time, or PRN) by any CommonSpirit or CHI/Dignity facility within the past year; candidates employed PRN through outside firms will be considered

American Traveler Job ID #P-684717. Pay package is based on 8 hour shifts and 40 hours per week (subject to confirmation) with tax-free stipend amount to be determined. Posted job title: Travel RN - PACU

About American Traveler

With over 25 years of experience, American Traveler has established a reputation for outstanding customer service. Our team ensures a smooth, worry-free experience for those starting on or expanding their travel nursing and allied careers.

With thousands of travel nursing and allied jobs nationwide, our attentive and approachable recruiters find positions that align perfectly with your career aspirations and personal requirements.

American Traveler offers exceptional benefits, including premium medical, dental, vision and life insurance beginning day one of your assignment, generous 401(k) match, substantial housing stipends, and more. Additionally, with 24/7 support and access to our in-house clinicians, you are assured confidence and comfort throughout your assignment.

With our team behind you, you can relax and enjoy a rewarding travel career.
Netcool Developer
Salary not disclosed
Basking Ridge, NJ 2 days ago

Netcool Developer with AIOps Cloud Pak Expertise


• Responsible for integrating and migrating traditional IBM Netcool Operations Insight (NOI) environments into IBM Cloud Pak for AIOps

• Connects on‑prem Netcool/OMNIbus and Netcool/Impact systems with Cloud Pak for AIOps using native connectors

• Migrates existing event filters, automations, and runbook policies into the AIOps platform

• Ensures seamless bidirectional synchronization of event data between Netcool and Cloud Pak for AIOps

• Configures event and alert data mapping and transformation rules (e.g., JSONata) for consistent processing

• Develops automation policies and runbooks using Netcool/Impact, and potentially Python or Bash scripting

• Supports the AIOps platform by supplying and validating high‑quality data for ML models (event grouping, log anomaly detection, metric anomaly detection, change risk assessment)

• Leverages Cloud Pak for AIOps topology and resource management features to build application‑centric infrastructure views

• Collaborates with DevOps, SRE, and operations teams to integrate third‑party tools such as Splunk, ServiceNow, Slack, and others

• Troubleshoots and resolves complex hybrid‑cloud issues arising during integration and ongoing operations

  • Possesses deep expertise in the IBM Netcool suite, including Netcool/OMNIbus, Netcool/Impact, probes, gateways, and Web GUI
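The event-mapping work described above (JSONata-style transformation rules) amounts to declarative field renames and derivations applied to each incoming event. A minimal Python sketch of the same idea; the field names (Node, Summary, Severity) are illustrative, not a specific probe's schema.

```python
# Hypothetical Netcool-style event and a declarative field mapping,
# mimicking the kind of transformation a JSONata rule would express.
MAPPING = {
    "resource": lambda e: e["Node"],
    "summary":  lambda e: e["Summary"],
    # Derive a normalized severity label from the numeric Netcool severity.
    "severity": lambda e: {0: "clear", 3: "minor", 5: "critical"}.get(e["Severity"], "unknown"),
}

def transform(event: dict) -> dict:
    """Apply each mapping rule to produce a normalized alert record."""
    return {field: rule(event) for field, rule in MAPPING.items()}

alert = transform({"Node": "db01", "Summary": "CPU high", "Severity": 5})
print(alert["severity"])  # critical
```

Keeping the mapping declarative, as JSONata does, is what makes the rules easy to review and migrate between platforms.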
Databricks Architect/ Senior Data Engineer
🏢 OZ
Salary not disclosed
Boca Raton, FL 2 days ago

OZ – Databricks Architect/ Senior Data Engineer


Note: Only applications from U.S. citizens or lawful permanent residents (Green Card Holders) will be considered.


We believe work should be innately rewarding and a team-building venture. Working with our teammates and clients should be an enjoyable journey where we can learn, grow as professionals, and achieve amazing results. Our core values revolve around this philosophy. We are relentlessly committed to helping our clients achieve their business goals, leapfrog the competition, and become leaders in their industry. What drives us forward is the culture of creativity combined with a disciplined approach, passion for learning & innovation, and a ‘can-do’ attitude!


What We're Looking For:

We are seeking a highly experienced Databricks professional with deep expertise in data engineering, distributed computing, and cloud-based data platforms. The ideal candidate is both an architect and a hands-on engineer who can design scalable data solutions while actively contributing to development, optimization, and deployment.


This role requires strong technical leadership, a deep understanding of modern data architectures, and the ability to implement best practices in DataOps, performance optimization, and data governance.


Experience with modern AI/GenAI-enabled data platforms and real-time data processing environments is highly desirable.


Position Overview:

The Databricks Senior Data Engineer will play a critical role in designing, implementing, and optimizing enterprise-scale data platforms using the Databricks Lakehouse architecture. This role combines architecture leadership with hands-on engineering, focusing on building scalable, secure, and high-performance data pipelines and platforms. The ideal candidate will establish coding standards, define data architecture frameworks such as the Medallion Architecture, and guide the end-to-end development lifecycle of modern data solutions.


This individual will collaborate with cross-functional stakeholders, including data engineers, BI developers, analysts, and business leaders, to deliver robust data platforms that enable advanced analytics, reporting, and AI-driven decision-making.


Key Responsibilities:

  • Architecture & Design: Architect and design scalable, reliable data platforms and complex ETL/ELT and streaming workflows for the Databricks Lakehouse Platform (Delta Lake, Spark).
  • Hands-On Development: Write, test, and optimize code in Python, PySpark, and SQL for data ingestion, transformation, and processing.
  • DataOps & Automation: Implement CI/CD, monitoring, and automation (e.g., with Azure DevOps, DBX) for data pipelines.
  • Stakeholder Collaboration: Work with BI developers, analysts, and business users to define requirements and deliver data-driven solutions.
  • Performance Optimization: Tune delta tables, Spark jobs, and SQL queries for maximum efficiency and scalability.
  • GenAI Applications Development: Experience building GenAI applications is a strong plus.
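The Medallion Architecture named in the overview layers data as bronze (raw), silver (validated), and gold (aggregated). A toy sketch of that flow, using plain Python lists in place of Delta tables and PySpark; the columns and rows are invented for illustration.

```python
# Toy Medallion flow: bronze (raw) -> silver (validated) -> gold (aggregated).
# A real pipeline would use PySpark DataFrames backed by Delta Lake tables.
bronze = [
    {"order_id": 1, "amount": "19.99", "region": "east"},
    {"order_id": 2, "amount": "bad",   "region": "west"},  # malformed row
    {"order_id": 3, "amount": "5.00",  "region": "east"},
]

def to_silver(rows):
    """Clean and type raw rows, dropping records that fail validation."""
    silver = []
    for row in rows:
        try:
            silver.append({**row, "amount": float(row["amount"])})
        except ValueError:
            continue  # a real pipeline would quarantine malformed rows
    return silver

def to_gold(rows):
    """Aggregate cleaned rows into business-level totals per region."""
    totals = {}
    for row in rows:
        totals[row["region"]] = totals.get(row["region"], 0.0) + row["amount"]
    return totals

gold = to_gold(to_silver(bronze))
print(round(gold["east"], 2))  # 24.99
```

Each layer has one job: bronze preserves the raw feed, silver enforces schema and quality, gold serves analytics, which is why the pattern pairs naturally with the data-quality and governance responsibilities listed above.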


Requirements:

  • 8+ years of experience in data engineering, with strong hands-on expertise in Databricks and Apache Spark.
  • Proven experience designing and implementing scalable ETL/ELT pipelines in cloud environments.
  • Strong programming skills in Python and SQL; experience with PySpark required.
  • Hands-on experience with Databricks Lakehouse, Delta Lake, and distributed data processing.
  • Experience working with cloud platforms such as Microsoft Azure, AWS, or GCP (Azure preferred).
  • Experience with CI/CD pipelines, Git, and DevOps practices for data engineering.
  • Strong understanding of data architecture, data modeling, and performance optimization.
  • Experience working with cross-functional teams to deliver enterprise data solutions.
  • Ability to tackle complex data challenges, ensuring data quality and reliable delivery.


Qualifications:

  • Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related field.
  • Experience designing enterprise-scale data platforms and modern data architectures.
  • Experience with data integration tools such as Azure Data Factory or similar platforms.
  • Familiarity with cloud data warehouses such as Databricks, Snowflake, or Azure Fabric.
  • Experience supporting analytics, reporting, or AI/ML workloads is highly desirable.
  • Databricks, Azure, or cloud certifications are preferred.
  • Strong problem-solving, communication, and technical leadership skills.


Technical Proficiency in:

  • Databricks, Apache Spark, PySpark, Delta Lake
  • Python, SQL, Scala (preferred)
  • Cloud platforms: Azure (preferred), AWS, or GCP
  • Azure Data Factory, Kafka, and modern data integration tools
  • Data warehousing: Databricks, Snowflake, or Azure Fabric
  • DevOps tools: Git, Azure DevOps, CI/CD pipelines
  • Data architecture, ETL/ELT design, and performance optimization


What You’re Looking For:

Join a fast-growing organization that thrives on innovation and collaboration. You’ll work alongside talented, motivated colleagues in a global environment, helping clients solve their most critical business challenges. At OZ, your contributions matter – you’ll have the chance to be a key player in our growth and success. If you’re driven, bold, and eager to push boundaries, we invite you to join a company where you can truly make a difference.


About Us:

OZ is a 28-year-old global technology consulting, services, and solutions leader specializing in creating business-focused solutions for our clients by leveraging disruptive digital technologies and innovation.


OZ is committed to creating a continuum between work and life by allowing people to work remotely. We offer competitive compensation and a comprehensive benefits package. You’ll enjoy our work style within an incredible culture. We’ll give you the tools you need to succeed so you can grow and develop with us and become part of a team that lives by its core values.

Data Integration & AI Engineer
Salary not disclosed
Edison, NJ 2 days ago

About Wakefern

Wakefern Food Corp. is the largest retailer-owned cooperative in the United States and supports its co-operative members' retail operations, trading under the ShopRite®, Price Rite®, The Fresh Grocer®, Dearborn Markets®, and Gourmet Garage® banners.


Employing an innovative approach to wholesale business services, Wakefern focuses on helping the independent retailer compete in a big business world. Providing the tools entrepreneurs need to stay a step ahead of the competition, Wakefern’s co-operative members benefit from the company’s extensive portfolio of services, including innovative technology, private label development, and best-in-class procurement practices.


The ideal candidate will have a strong background in designing, developing, and implementing complex projects, with focus on automating data processes and driving efficiency within the organization. This role requires a close collaboration with application developers, data engineers, data analysts, data scientists to ensure seamless data integration and automation across various platforms. The Data Integration & AI Engineer is responsible for identifying opportunities to automate repetitive data processes, reduce manual intervention, and improve overall data accessibility.


Essential Functions

  • Participate in the development life cycle (requirements definition, project approval, design, development, and implementation) and maintenance of the systems.
  • Implement and enforce data quality and governance standards to ensure accuracy and consistency.
  • Provide input for project plans and timelines to align with business objectives.
  • Monitor project progress, identify risks, and implement mitigation strategies.
  • Work with cross-functional teams and ensure effective communication and collaboration.
  • Provide regular updates to the management team.
  • Follow the standards and procedures according to Architecture Review Board best practices, revising standards and procedures as requirements change and technological advancements are incorporated into the technology infrastructure.
  • Communicates and promotes the code of ethics and business conduct.
  • Ensures completion of required company compliance training programs.
  • Is trained – either through formal education or through experience – in software / hardware technologies and development methodologies.
  • Stays current through personal development and professional and industry organizations.

Responsibilities

  • Design, build, and maintain automated data pipelines and ETL processes to ensure scalability, efficiency, and reliability across data operations.
  • Develop and implement robust data integration solutions to streamline data flow between diverse systems and databases.
  • Continuously optimize data workflows and automation processes to enhance performance, scalability, and maintainability.
  • Design and develop end-to-end data solutions utilizing modern technologies, including scripting languages, databases, APIs, and cloud platforms.
  • Ensure data solutions and data sources meet quality, security, and compliance standards.
  • Monitor and troubleshoot automation workflows, proactively identifying and resolving issues to minimize downtime.
  • Provide technical training, documentation, and ongoing support to end users of data automation systems.
  • Prepare and maintain comprehensive technical documentation, including solution designs, specifications, and operational procedures.


Qualifications

  • A bachelor's degree or higher in computer science, information systems, or a related field.
  • Hands-on experience with cloud data platforms (e.g., GCP, Azure, etc.)
  • Strong knowledge and skills in data automation technologies, such as Python, SQL, ETL/ELT tools, Kafka, APIs, cloud data pipelines, etc.
  • Experience in GCP BigQuery, Dataflow, Pub/Sub, and Cloud storage.
  • Experience with workflow orchestration tools such as Cloud Composer or Airflow
  • Proficiency in iPaaS (Integration Platform as a Service) platforms, such as Boomi, SAP BTP, etc.
  • Experience developing and managing data integrations for AI agents, connecting them to internal and external APIs, databases, and knowledge sources to expand their capabilities.
  • Experience building and maintaining scalable Retrieval-Augmented Generation (RAG) pipelines, including the curation and indexing of knowledge bases in vector databases (e.g., Pinecone, Vertex AI Vector Search).
  • Experience leveraging cloud-based AI/ML platforms (e.g., Vertex AI, Azure ML) to build, train, and deploy machine learning models at scale.
  • Experience establishing and enforcing data quality and governance standards for AI/ML datasets, ensuring the accuracy, completeness, and integrity of data used for model training and validation.
  • Experience collaborating closely with data scientists and machine learning engineers to understand data requirements and deliver optimized data solutions that support the entire machine learning lifecycle.
  • Hands-on experience with IBM DataStage and Alteryx is a plus.
  • Strong understanding of database design principles, including normalization, indexing, partitioning, and query optimization.
  • Ability to design and maintain efficient, scalable, and well-structured database schemas to support both analytical and transactional workloads.
  • Familiarity with BI visualization tools such as MicroStrategy, Power BI, Looker, or similar.
  • Familiarity with data modeling tools.
  • Familiarity with DevOps practices for data (CI/CD pipelines)
  • Proficiency in project management software (e.g., JIRA, Clarizen, etc.)
  • Strong knowledge and skills in data management, data quality, and data governance.
  • Strong communication, collaboration, and problem-solving skills.
  • Ability to work on multiple projects and prioritize tasks effectively.
  • Ability to work independently and in a team environment.
  • Ability to learn new technologies and tools quickly.
  • The ability to handle stressful situations.
  • Highly developed business acuity and acumen.
  • Strong critical thinking and decision-making skills.
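The RAG pipeline work described above hinges on vector retrieval: embed documents, embed the query, return the nearest neighbors. A toy sketch with hand-made vectors in place of a real embedding model and vector database (such as the Pinecone or Vertex AI Vector Search services mentioned above).

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hand-made 3-d "embeddings" for illustration; a real pipeline would store
# model-produced vectors in a managed vector database.
INDEX = {
    "returns policy":  [0.9, 0.1, 0.0],
    "store hours":     [0.1, 0.9, 0.1],
    "loyalty program": [0.2, 0.2, 0.9],
}

def retrieve(query_vec, k=1):
    """Return the k documents most similar to the query embedding."""
    ranked = sorted(INDEX, key=lambda doc: cosine(query_vec, INDEX[doc]), reverse=True)
    return ranked[:k]

print(retrieve([0.85, 0.15, 0.05]))  # ['returns policy']
```

The retrieved documents would then be passed to the generation step as grounding context; curation and indexing quality, as the qualification above notes, largely determine how useful that context is.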


Working Conditions & Physical Demands

This position requires in-person office presence at least 4x a week.


Compensation and Benefits

The salary range for this position is $75,868 - $150,644. Placement in the range depends on several factors, including experience, skills, education, geography, and budget considerations.

Wakefern is proud to offer a comprehensive benefits package designed to support the health, well-being, and professional development of our Associates. Benefits include medical, dental, and vision coverage, life and disability insurance, a 401(k) retirement plan with company match & annual company contribution, paid time off, holidays, and parental leave.


Associates also enjoy access to wellness and family support programs, fitness reimbursement, educational and training opportunities through our corporate university, and a collaborative, team-oriented work environment. Many of these benefits are fully or partially funded by the company, with some subject to eligibility requirements.

Not Specified
Senior Java Architect (Banking Domain)
Salary not disclosed
Dallas, TX 2 days ago

Position: Java Solution Architect

Location: TX/NJ

Duration: Long Term



As a Solution Architect, you will be an integral part of shaping the future of technology. This role requires deep technical expertise to translate complex business requirements into scalable, secure, and compliant technical solutions. You will serve as a bridge between business stakeholders and development teams, ensuring the delivery of high-quality, resilient systems that drive significant business impact.

Key Responsibilities

  • Solution Design & Architecture: Lead the design and development of end-to-end enterprise solutions, including high-level and low-level design documents and architecture diagrams.
  • Technology & Platform Selection: Select the appropriate technology stack, leveraging expertise in Java, Spring Boot, microservices architecture, and cloud platforms (AWS and Azure) to build robust, scalable, and cost-efficient applications.
  • Cloud Migration & Integration: Drive cloud transformation initiatives, including migrating on-premises applications to the cloud and integrating complex systems.
  • Security & Compliance: Ensure all solutions comply with regulatory requirements (e.g., data privacy, security standards) and implement robust security measures, including identity and access management, encryption, and network security.
  • Technical Leadership & Collaboration: Provide technical guidance and mentorship to development teams, conducting code and architecture reviews to ensure alignment with architectural principles and best practices. Collaborate with cross-functional teams, including business analysts and project managers, to align technical solutions with business goals.
  • Innovation & Problem Solving: Evaluate new and emerging technologies, conducting proofs-of-concept (PoCs) to validate assumptions and drive continuous improvement in products, processes, and tools.

Qualifications and Skills

  • Experience:
  • 5+ years of relevant experience in a solution architecture or a lead engineering role within financial services or a related regulated industry.
  • Proven experience in designing and delivering large-scale IT projects with hands-on experience in Java-based systems.
  • Demonstrated experience running production applications in public cloud environments (AWS and/or Azure).
  • Technical Skills:
  • Proficiency in Java and Java frameworks (Spring, Spring Boot).
  • Strong database design skills (RDBMS, NoSQL).
  • Strong knowledge of microservices, event-driven architecture (e.g., Kafka), and RESTful API design.
  • Experience with cloud services (compute, networking, databases, security) and containerization technologies (Docker, Kubernetes).
  • Familiarity with DevOps practices and CI/CD pipelines.
  • Soft Skills:
  • Excellent communication, presentation, and stakeholder management skills, with the ability to translate complex technical concepts for non-technical audiences.
  • Strong analytical, problem-solving, and decision-making abilities.
  • A proactive, self-motivated mindset with the ability to work through ambiguous requirements in an agile environment.

Preferred Certifications

  • AWS Certified Solutions Architect (Associate or Professional)
  • Microsoft Certified: Azure Solutions Architect Expert




Best Regards,

Deepak Gulia

Sr. Talent Acquisition-USA


100 Campus Drive, Suite 420, Florham Park, NJ 07932


Infrastructure Project Manager
Salary not disclosed
Sacramento, CA 2 days ago

** Infrastructure PM Role **

** Candidate should have public sector experience **


Mandatory Qualifications:

  1. Bachelor’s degree in Information Technology, Computer Science, Engineering, Business Administration, or a related field.
  2. Seven (7) years of experience managing IT infrastructure projects, including planning, execution, monitoring, and successful delivery.
  3. Three (3) years of experience working with State of California departments, agencies, or public sector organizations.
  4. Five (5) years of experience managing infrastructure modernization projects, including one or more of the following:
  • Cloud migration initiatives
  • Data center modernization
  • Server and storage infrastructure upgrades
  • Network infrastructure projects
  • Disaster recovery and business continuity planning

  5. Three (3) years of experience managing AWS cloud infrastructure projects, including:
  • Cloud migration strategy
  • AWS architecture coordination
  • Infrastructure automation
  • Cloud security and compliance
  6. Five (5) years of experience using formal project management methodologies, such as:
  • PMBOK
  • Agile / Scrum


Desirable Qualifications

  1. PMP (Project Management Professional) Certification.
  2. AWS Certifications, such as:
  • AWS Certified Solutions Architect
  • AWS Certified Cloud Practitioner
  • AWS Certified DevOps Engineer
  3. Experience working with California Department of Technology (CDT) Project Approval Lifecycle (PAL).
  4. Experience preparing and managing State project documentation, including:
  • Feasibility Study Report (FSR)
  • Special Project Report (SPR)
  • Project Approval Lifecycle (PAL) documentation
  • Statement of Work (SOW)
  • Request for Offer (RFO)
  5. Experience with IT Service Management frameworks (ITIL).
  6. Experience managing large-scale cloud transformation programs within government environments.
Diver
Salary not disclosed
Dunnellon, FL 1 day ago

Job Overview

You’re the person who goes where others can’t — below the surface, into low-visibility environments, and into the real conditions where restoration work happens. This is not recreational diving. This is working diving.

At Sea & Shoreline, our divers support dredging, restoration, and marine construction efforts in active field environments. The work can be physical, repetitive, and at times uncomfortable — but it is essential to the success of our projects. You help crews move work forward safely and effectively by executing underwater tasks, supporting dredging operations, and ensuring work below the surface is done right the first time.


What You Do

Diving & Underwater Operations

  • Perform underwater work in support of dredging, restoration, and marine construction projects
  • Operate in low-visibility, high-sediment environments
  • Assist with:
  • Dredging operations and material movement
  • Installation and removal of underwater equipment
  • Inspection of underwater conditions, structures, and work areas
  • Follow dive plans, safety protocols, and communication procedures at all times
  • Maintain awareness of surroundings, hazards, and changing conditions

Dredging & Field Support

  • Support dredging crews with underwater execution and troubleshooting
  • Assist topside teams with setup, breakdown, and equipment movement
  • Contribute to keeping projects on track by completing work efficiently and safely
  • Adapt to changing site conditions, timelines, and priorities

Safety & Dive Readiness

  • Follow all dive safety standards, including pre-dive checks, buddy protocols, and emergency procedures
  • Inspect and maintain dive gear and equipment
  • Participate in safety briefings and post-dive debriefs
  • Flag risks early — underwater and topside
  • Maintain required certifications and dive logs

Equipment & Maintenance

  • Assist with care, inspection, and basic maintenance of:
  • Dive gear and air systems
  • Dredging equipment and hoses
  • Boats and support equipment
  • Ensure equipment is properly cleaned, stored, and ready for use
  • Communicate equipment issues clearly and quickly

Culture & Team Contribution

  • Work closely with crews to support overall project success
  • Communicate clearly and directly — especially in high-risk or time-sensitive situations
  • Bring a team-first mindset to physically demanding work
  • Support a culture of safety, accountability, and reliability in the field


Who You Are

  • Certified Diver – You hold a commercial, scientific, or equivalent diving certification and are comfortable working in non-ideal conditions (low visibility, confined spaces, varying currents)
  • Comfortable with Real Work – You understand this is not recreational diving — it’s physical, hands-on work tied to dredging and restoration
  • Safety-Focused – You treat dive safety as non-negotiable and follow protocols consistently
  • Calm Under Pressure – You stay focused and steady in challenging underwater environments
  • Physically Capable – You can handle the demands of diving, lifting, working in water, and long field days
  • Team-Oriented – You work well with crews and understand your role in the bigger operation
  • Adaptable – You adjust to changing field conditions, schedules, and project needs
  • Digitally Aware – You’re comfortable logging dive activity, tracking certifications, and using mobile tools as needed
  • Aligned with the S&S Way – You lead with humility, grit, and reliability — doing the work without needing recognition


Physical Demands & Work Environment

This is a field-based role requiring regular work in and around water, including active dive operations. The position involves:

  • Working in water for extended periods in varying conditions
  • Diving in low-visibility, sediment-heavy environments
  • Lifting and carrying equipment weighing 50 lbs or more
  • Climbing in and out of boats and watercraft
  • Working outdoors in heat, humidity, rain, and uneven terrain
  • Long, physically demanding workdays based on project needs

This role requires maintaining dive certification, physical fitness for diving, and the ability to safely perform all required underwater tasks.

Sea & Shoreline is committed to providing reasonable accommodations in accordance with applicable laws to enable individuals to perform the essential functions of this position.


What We Offer

  • Comprehensive benefits, including medical, dental, and vision coverage.
  • 401(k) with company match.
  • Paid Time Off (PTO), holidays, and disability insurance.
  • Professional development and training opportunities.

At Sea & Shoreline, we value the contributions of every team member and are dedicated to supporting your professional and personal growth.
