Cloudera Jobs in USA

9 positions found

Staff Software Engineer - Airflow
✦ New
🏒 Cloudera
Salary not disclosed
Austin, TX 9 hours ago

Business Area:

Engineering

Seniority Level:

Mid-Senior level

Job Description:

At Cloudera, we empower people to transform complex data into clear and actionable insights. With as much data under management as the hyperscalers, we're the preferred data partner for the top companies in almost every industry. Powered by the relentless innovation of the open source community, Cloudera advances digital transformation for the world's largest enterprises.

The Data Platform Pillar is the bedrock of Cloudera's technology, where we design and build the core components that let our customers store, manage, and process data with unmatched scalability, security, and performance.

As a Staff Software Engineer, Airflow, you will be a key technical leader, responsible for the architecture, technical vision, and strategic evolution of the Apache Airflow-based workflow orchestration platform within the company's data ecosystem. You will be expected to solve the most complex, ambiguous, and high-impact technical problems that span multiple teams or organizational boundaries.

As a Staff Software Engineer you will:

  • Technical & Architectural Leadership: Drive the multi-quarter technical roadmap and architecture for the Airflow platform, ensuring it is secure, highly scalable, reliable, and cost-efficient for enterprise-grade workloads.

  • Complex Problem Solving: Design and implement solutions for the most challenging technical issues, such as extreme scale, multi-tenancy isolation, complex scheduling, and hybrid/multi-cloud deployment models.

  • Cross-Functional Impact: Collaborate closely with product management, principal engineers, and other platform teams (e.g., Spark, Kubernetes) to define and deliver core orchestration capabilities that influence the entire data platform.

  • Engineering Excellence: Define and champion best practices, performance optimization, and quality standards (observability, testing, and fault tolerance) for the Airflow service and its integrations.

  • Mentorship: Mentor senior and junior engineers on complex technical design, best practices, and execution, elevating the overall technical capacity of the team and organization.

  • Open Source: Maintain significant contributions and influence within the Apache Airflow open-source community, aligning the project's roadmap with product strategy.

We are excited if you have (Required Qualifications):

  • Bachelor's degree in Computer Science or equivalent, and 6+ years of experience

  • Deep Airflow Expertise: Deep, hands-on knowledge of Apache Airflow internals (scheduler, executor, serialization, REST APIs) and complex DAG authoring/optimization.

  • Programming: Mastery of Python, some Java experience, and extensive experience with core data platform technologies and cloud-native deployments (e.g., Kubernetes, Cloud Composer, AWS/GCP/Azure).

  • Systems Thinking: Demonstrated ability to drive design and architectural decisions with a focus on non-functional requirements (security, performance, high availability, fault tolerance).

  • Leadership: Proven ability to lead and drive technical projects across multiple teams without direct reporting authority.
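The dependency-ordered execution model at the heart of Airflow's scheduler can be illustrated with a minimal pure-Python sketch (an illustration of the general DAG-scheduling idea, not Airflow's actual API):

```python
# Minimal sketch of dependency-ordered task execution, the core idea
# behind Airflow-style DAG scheduling. Illustrative only: not Airflow's API.

def run_dag(tasks, deps):
    """tasks: {name: callable}; deps: {name: set of upstream task names}."""
    done, order = set(), []
    while len(done) < len(tasks):
        # A task is ready when all of its upstream dependencies have run.
        ready = [t for t in tasks if t not in done and deps.get(t, set()) <= done]
        if not ready:
            raise ValueError("cycle detected in DAG")
        for t in sorted(ready):  # deterministic ordering for ties
            tasks[t]()
            done.add(t)
            order.append(t)
    return order

log = []
order = run_dag(
    {"extract": lambda: log.append("extract"),
     "transform": lambda: log.append("transform"),
     "load": lambda: log.append("load")},
    {"transform": {"extract"}, "load": {"transform"}},
)
print(order)  # extract runs before transform, which runs before load
```

Airflow's real scheduler layers persistence, retries, pools, and distributed executors on top of this basic topological ordering.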

You may also have:

  • Experience defining the architecture for multi-tenant, service-oriented data platforms.

  • Significant contributions to Apache Airflow or related open-source projects.

  • Background in performance tuning and profiling large-scale Python and distributed applications.

  • Familiarity with data governance and security frameworks (e.g., Ranger, Kerberos) and their integration with workflow orchestration.

Why this role matters:

You will tackle complex distributed systems challenges, crafting the foundational software for the control and data planes that powers CDP and keeps it running at massive scale. Working at the forefront of hybrid and multi-cloud technology, you will empower data scientists, engineers, and analysts with the tools and infrastructure they need for advanced analytics and modeling.

Collaboration is key: you will work alongside brilliant minds across product, data science, and engineering to drive innovation, standardize best practices, and shape the future of enterprise AI and data platforms. This is your chance to build the future of data and see your work make a global impact.

This role is not eligible for immigration sponsorship.

What you can expect from us:

  • Generous PTO Policy

  • Support work-life balance with Unplugged Days

  • Flexible WFH Policy

  • Mental & Physical Wellness programs

  • Phone and Internet Reimbursement program

  • Access to Continued Career Development

  • Comprehensive Benefits and Competitive Packages

  • Paid Volunteer Time

  • Employee Resource Groups

EEO/VEVRAA

#LI-BV1

#LI-REMOTE

Not Specified
View & Apply
Performance Engineer -- Non Functional QE
✦ New
🏒 Cloudera
Salary not disclosed
San Jose, CA 1 day ago

Business Area:

Engineering

Seniority Level:

Associate

Job Description:

At Cloudera, we empower people to transform complex data into clear and actionable insights. With as much data under management as the hyperscalers, we're the preferred data partner for the top companies in almost every industry. Powered by the relentless innovation of the open source community, Cloudera advances digital transformation for the world's largest enterprises.

At Cloudera, our Data Services Pillar is the heart of data innovation. We don't just work with technology; we build it. Our mission is to empower data practitioners by creating seamless, enterprise-grade experiences for data engineering, warehousing, streaming, operational databases, and AI.

You will be a key member of the NFQE (Non-Functional QE) team that drives the performance and reliability of Cloudera's Kubernetes-hosted data services. The role blends deep technical knowledge of performance testing, distributed data workloads, and container orchestration with a data-driven mindset. You'll design, automate, run, and analyze performance tests for Cloudera's flagship services, ensuring they meet or exceed customer-defined SLOs/SLAs at scale.

As a Performance Engineer, you will:

  • Work with internal development teams and the open source community to proactively drive performance improvements/optimizations across our data warehouse and Data Engineering stack.

  • Work with product managers, developers and the field team to understand performance and scale requirements, and develop benchmarks based on these requirements.

  • Develop automation to execute benchmarks, collect and aggregate metrics and profiles, and report results, trends, and regressions.

  • Analyze performance and scalability characteristics to identify bottlenecks in large-scale distributed systems.

  • Perform root cause analysis of performance issues identified by internal testing and from customers and suggest corrective actions.

  • Evaluate performance of systems and provide related guidance to the team.

We are excited about you if you have:

  • 3+ years of industry experience in performance-related work, ideally on large-scale distributed systems

  • Understanding of DBMS algorithms and data structure fundamentals.

  • Understanding of hardware trends and full-stack systems performance: CPU, RAM, storage, network, Linux kernel, JVM, and distributed systems performance.

  • Understanding of performance analysis tools and techniques.

  • Strong design, coding, and test automation skills (Java/C++/Golang/Python preferred)

  • Knowledge of relevant frameworks, cloud providers, Kubernetes (K8s), etc.

  • Ability to work in a distributed setting with team members spread in multiple geographies

  • Demonstrated ability to work on large cross-functional projects, including strong written communication skills and a collaborative mindset, as you will be working with many teams inside and outside of Cloudera.

  • Experience with benchmark and performance test design. You should understand basic concepts of performance testing, including different types of performance tests (microbenchmarks, end-to-end benchmarks, concurrency and scale testing), how to reduce (or deal with) noise in test results, etc.

  • Experience designing performance tests that provide useful insights into specific aspects of performance.

  • Solid understanding of basic performance theory - in particular a very good understanding of latency, throughput, and concurrency and how they relate to each other.

  • Strong understanding of the types of workloads you'll be testing. Ideally, you should have specific experience creating performance tests for the product area you'll be working on (SQL, ML, etc.).

  • B.S. or M.S. in Computer Science or equivalent experience.
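One way to make the latency/throughput/concurrency relationship concrete is Little's law: at steady state, the number of in-flight requests equals throughput times mean latency. A small sketch (generic capacity-planning arithmetic, not tied to any Cloudera tool):

```python
# Little's law: concurrency = throughput * mean latency (at steady state).
# Example: if each request takes 50 ms and the target is 2,000 req/s,
# the load generator must sustain ~100 in-flight requests.

def required_concurrency(throughput_rps, mean_latency_s):
    """In-flight requests needed to sustain a target throughput."""
    return throughput_rps * mean_latency_s

def achievable_throughput(concurrency, mean_latency_s):
    """Upper bound on throughput for a fixed concurrency level."""
    return concurrency / mean_latency_s

print(required_concurrency(2000, 0.050))   # 100.0 in-flight requests
print(achievable_throughput(100, 0.050))   # 2000.0 req/s
```

This is a useful sanity check when sizing load tests: a benchmark that cannot sustain the required concurrency will under-report the achievable throughput of the system under test.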

You might also have:

  • Experience with the Hadoop ecosystem (e.g., Hive, Impala, Spark), specifically prior work on large-scale data lakehouse or data warehouse performance

  • Hands-on experience with containerization, Kubernetes, public cloud infrastructure (AWS, Azure, and/or GCP), and mesh networks

  • Certifications: CKA/CKAD, AWS Solutions Architect, GCP Cloud Architect, Azure Solutions Architect, or equivalent.

  • Security & Compliance: Experience writing performance tests that also verify data privacy and audit compliance (e.g., GDPR, HIPAA).

Why this role matters:

This is your opportunity to build cloud-native solutions that are deployable anywhere whether in massive clusters on any cloud provider or in private data centers. You'll work with cutting-edge technologies like Trino, Spark, Airflow, and advanced AI inferencing systems to shape the future of analytics. Your code will directly influence how data engineers, analysts, and developers worldwide find value in their data.

We believe in the power of open source. You'll collaborate with project committers, contributing upstream to keep technologies like Apache Hive and Impala evolving. You'll harden these engines for rock-solid security, optimize them for peak performance, and make them effortlessly run across all environments. Join us and help build the trusted, cloud-native platform that powers insights for the most data-intensive companies on the planet.

This position is not eligible for sponsorship.

The expected base salary range for this role in:

  • California is $124,000 - $155,000

The salary will vary depending on your job-related skills, experience and location.


What you can expect from us:

  • Generous PTO Policy

  • Support work-life balance with Unplugged Days

  • Flexible WFH Policy

  • Mental & Physical Wellness programs

  • Phone and Internet Reimbursement program

  • Access to Continued Career Development

  • Comprehensive Benefits and Competitive Packages

  • Paid Volunteer Time

  • Employee Resource Groups

EEO/VEVRAA

#LI-SZ1

#LI-HYBRID

Not Specified
View & Apply
Sr. Engineering Manager - Storage Engineering
✦ New
🏒 Cloudera
Salary not disclosed
San Jose, CA 1 day ago

Business Area:

Engineering

Seniority Level:

Mid-Senior level

Job Description:

At Cloudera, we empower people to transform complex data into clear and actionable insights. With as much data under management as the hyperscalers, we're the preferred data partner for the top companies in almost every industry. Powered by the relentless innovation of the open source community, Cloudera advances digital transformation for the world's largest enterprises.

The Data Platform Pillar is the bedrock of Cloudera's technology, where we design and build the core components that let our customers store, manage, and process data with unmatched scalability, security, and performance.

Cloudera is looking for a strong engineering leader with a distributed systems background to lead a team within the Storage Engineering group, focused on building Apache Ozone and Apache HDFS. The Storage team is responsible for primary storage and storage access layers, which are core to the Cloudera Data Platform.

Apache Ozone is an open source, massively scalable, distributed object store with a distributed file system interface. Ozone is designed to scale to tens of billions of files and blocks and to overcome the limitations of the Hadoop Distributed File System (HDFS), namely handling millions of small files and managing a huge number of data nodes.

Ozone is one of the fastest-growing products inside CDP in terms of customer adoption and expansion revenue. This is an opportunity to lead a team that created and wrote most of the Ozone code and make a huge impact on the big data storage industry.

**This is an onsite role for our HQ in Santa Clara, CA**

As a Sr. Manager, Engineering you will:

  • Manage and lead a team of talented engineers and senior individual contributors based in North America.

  • Develop and execute on a technical roadmap and strategy for your team, aligning with the department's vision and the company's business goals.

  • Lead and mentor a team of software engineers, including senior and principal-level contributors, fostering a culture of technical excellence and innovation.

  • Partner with Engineering leaders, product managers, and partner teams to understand requirements, develop solid designs and implementations, and facilitate integration and adoption.

  • Drive and enforce best practices for the software development lifecycle, including coding standards, testing, deployment, system scalability, reliability, and security, tracking key performance indicators for engineering quality and efficiency.

  • Communicate team progress, successes, challenges, and strategic plans clearly and transparently to engineering leadership and other business stakeholders.

  • Oversee team resources, staffing, and mentoring, building a best-in-class engineering team.

  • Work closely with customers in various geographies and partner teams (like PS and support) to ensure successful adoption of Ozone and provide technical guidance for enterprise customers running big data analytics and ML/AI pipelines at hundreds-of-petabytes scale.

  • Guide the team in contributing to the Apache open-source community.

We are excited if you have (Required Qualifications):

  • Experience: 8+ years of experience in software engineering, with 2+ years in an engineering management role.

  • Domain Expertise: Demonstrable experience with the design, implementation, and operation of large-scale distributed systems, particularly in storage, file systems, databases, or cloud infrastructure.

  • Technical Depth: Strong understanding of fundamental storage concepts (e.g., consistency, replication, erasure coding, caching).

  • Management Skills: Proven track record of leading and managing high-performing engineering teams, demonstrating excellent communication and organizational skills.

  • Communication: Excellent written and verbal communication skills. If you can point to publicly available papers, technical articles, or blog posts, that is a huge plus.

  • Education: Bachelor's or Master's degree in Computer Science, Computer Engineering, or a related technical field.
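The erasure-coding concept listed above can be sketched with the simplest possible code, single-parity XOR. Production systems such as Apache Ozone use Reed-Solomon codes, which tolerate multiple simultaneous failures, but the recovery principle (rebuild a lost block from the surviving blocks plus parity) is the same:

```python
# Single-parity erasure coding sketch: XOR-ing the data blocks yields a
# parity block; any one missing block is recoverable by XOR-ing the rest.
# (Illustration only; real systems like Apache Ozone use Reed-Solomon.)

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# Simulate losing block 1 and rebuilding it from the survivors + parity.
recovered = xor_blocks([data[0], data[2], parity])
print(recovered)  # b'BBBB'
```

The trade-off versus plain 3x replication is storage efficiency: here, three data blocks are protected with one parity block (1.33x overhead) instead of three full copies (3x), at the cost of reconstruction work on failure.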

You may also have:

  • Prior experience contributing to or leading large-scale open-source projects.

  • Familiarity with the Apache Hadoop big data ecosystem (HDFS, YARN, Hive, Impala, Spark) or related distributed data frameworks.

  • Experience with specific commercial or open-source distributed storage technologies (e.g., Ceph, Gluster, ZFS, S3-compatible systems).

  • Experience managing remote or hybrid engineers.

Why this role matters:

You will tackle complex distributed systems challenges, crafting the foundational software for the control and data planes that powers CDP and keeps it running at massive scale. Working at the forefront of hybrid and multi-cloud technology, you will empower data scientists, engineers, and analysts with the tools and infrastructure they need for advanced analytics and modeling.

Collaboration is key: you will work alongside brilliant minds across product, data science, and engineering to drive innovation, standardize best practices, and shape the future of enterprise AI and data platforms. This is your chance to build the future of data and see your work make a global impact.

This role is not eligible for immigration sponsorship.

The expected base salary range for this role in:

  • California is $203,000 - $254,000

The salary will vary depending on your job-related skills, experience and location.

What you can expect from us:

  • Generous PTO Policy

  • Support work-life balance with Unplugged Days

  • Flexible WFH Policy

  • Mental & Physical Wellness programs

  • Phone and Internet Reimbursement program

  • Access to Continued Career Development

  • Comprehensive Benefits and Competitive Packages

  • Paid Volunteer Time

  • Employee Resource Groups

EEO/VEVRAA

#LI-SZ1

#LI-REMOTE

Not Specified
View & Apply
Staff Software Engineer - Product Security
🏒 Cloudera
Salary not disclosed
Austin, TX 2 days ago

Business Area:

Engineering

Seniority Level:

Mid-Senior level

Job Description:

At Cloudera, we empower people to transform complex data into clear and actionable insights. With as much data under management as the hyperscalers, we're the preferred data partner for the top companies in almost every industry. Powered by the relentless innovation of the open source community, Cloudera advances digital transformation for the world's largest enterprises.

The Product Security group ensures our platforms are secure by design and compliant with the world's most rigorous industry and government standards. As a Staff Product Security Engineer, you will serve as a technical architect of trust and the primary connective tissue between Security, Product, and Engineering teams. You will be responsible for translating complex global security requirements into actionable, automated engineering solutions, acting as the "go-to" expert for the Security Features team.

As a senior technical member of the team, you will exercise significant latitude in defining technical objectives and architectural approaches to complex challenges. Leveraging a deep understanding of distributed systems and cloud-native platforms, you will lead high-impact, security-driven initiatives across the entire Cloudera product suite.

As a Staff Software Engineer, you will:

  • Architect and maintain advanced build tooling to automate and accelerate vulnerability remediation across all engineering pillars.

  • Lead Proof of Concepts (POCs) and evaluate third-party security tools to enhance our security posture without compromising developer velocity.

  • Design and develop core security features, including FIPS compliance, TLS/Encryption, Secrets Rotation, Identity & Access Management (IAM), and Certificate Management.

  • Drive root-cause analysis and triage for complex, product-wide stability issues related to security infrastructure.

  • Engineer specialized observability tools, such as encryption inventories, to audit and measure security standards during feature delivery.

  • Author comprehensive design specifications and test plans for cross-component security features, providing technical clarity in the face of ambiguity.

  • Elevate the team's technical bar through high-quality code reviews, documentation standards, and active mentorship of engineering talent.

  • Partner across organizational lines, collaborating with internal stakeholders and senior management to resolve customer escalations and align with long-term objectives.
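As a small illustration of the TLS hardening work described above (a generic sketch using Python's standard ssl module, not Cloudera's implementation), a client-side context can be given a TLS 1.3 floor so that older protocol versions are refused:

```python
import ssl

# Generic TLS-hardening sketch: build a client context with secure
# defaults, then raise the protocol floor to TLS 1.3.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3  # refuse TLS < 1.3

# create_default_context() already enables certificate and hostname
# verification; these checks document that posture explicitly.
assert ctx.check_hostname
assert ctx.verify_mode == ssl.CERT_REQUIRED
print(ctx.minimum_version)
```

In a compliance setting (e.g., FIPS), the approved cipher suites and the underlying crypto module matter as much as the protocol floor; this sketch covers only the protocol-version aspect.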

We're excited about you if you have (Required Qualifications):

  • Bachelor's degree in Computer Science or a related field (or equivalent experience) with 6+ years of professional software engineering experience.

  • Deep technical expertise in containerized environments, specifically Kubernetes (EKS) and Docker.

  • Strong command of general-purpose and scripting languages, including Java, Python, Go, and Bash.

  • Proven experience with Infrastructure-as-Code (IaC) tools such as Terraform and Helm to automate secure infrastructure rollouts.

  • Expert-level experience automating complex CI/CD pipelines using platforms such as GitLab CI/CD, Jenkins, or GitHub Actions.

  • Exceptional troubleshooting skills with a track record of identifying root causes for site outages and resolving P1 escalations.

You may also have (Preferred Qualifications):

  • Experience with Post-Quantum Cryptography to support upcoming product transitions.

  • Practical experience with FIPS 140-3, TLS 1.3, and modern encryption standards.

  • Proven ability to automate CVE remediation and integrate SAST/DAST scanning tools (such as Trivy, Aquasec, Tenable, or Fortify) into developer workflows.

  • Familiarity with government compliance frameworks and industry standards including FedRAMP, ISO 27001, and SOC 2.

  • Deep understanding of secure coding practices and common vulnerabilities as outlined in the OWASP Top 10.

  • Experience working with Identity and Access Management (IAM) or Identity Governance platforms.

  • Strong management skills with a demonstrated ability to influence cross-functional teams and drive results in a remote environment.

This role is not eligible for immigration sponsorship.

What you can expect from us:

  • Generous PTO Policy

  • Support work-life balance with Unplugged Days

  • Flexible WFH Policy

  • Mental & Physical Wellness programs

  • Phone and Internet Reimbursement program

  • Access to Continued Career Development

  • Comprehensive Benefits and Competitive Packages

  • Paid Volunteer Time

  • Employee Resource Groups

EEO/VEVRAA

#LI-BV1

#LI-REMOTE

Not Specified
View & Apply
Sr. Staff Software Engineer - Hue
🏒 Cloudera
Salary not disclosed
Austin, TX 2 days ago

Business Area:

Engineering

Seniority Level:

Mid-Senior level

Job Description:

At Cloudera, we empower people to transform complex data into clear and actionable insights. With as much data under management as the hyperscalers, we're the preferred data partner for the top companies in almost every industry. Powered by the relentless innovation of the open source community, Cloudera advances digital transformation for the world's largest enterprises.

The Data Platform Pillar is the bedrock of Cloudera's technology, where we design and build the core components that let our customers store, manage, and process data with unmatched scalability, security, and performance.

One of our products, Hue, is a mature open-source SQL Assistant for querying databases and data warehouses, and for collaboration. Many companies and organizations use Hue to quickly answer questions via self-service querying.

The Data Warehouse Experience team is seeking passionate developers to join our distributed engineering team. Our mission is to make data warehousing simple and innovative for end users, working on software development, testing, user experience, distributed systems, and scalability. You will architect and implement applications within Hue, part of Cloudera Enterprise, which includes CDP Data Platform (CDPD), the world's leading open-source data platform for mission-critical environments, as well as CDP/Data Warehouse on the public cloud. We have already released the next generation of SQL experience using container-native Hadoop services to work in a Kubernetes cluster.

As a Sr. Staff Software Engineer, you will:

  • Drive Hue's architecture and technical strategy to deliver a secure, scalable, and extensible data productivity interface for enterprise analytics.

  • Lead core feature development across SQL editing, data catalog exploration, visualization, workflow orchestration, and practitioner productivity tooling.

  • Define and execute the AI roadmap for Hue, incorporating Cloudera SQL AI, natural-language query experiences, contextual copilots, and retrieval-augmented analytics.

  • Advance intelligent SQL assistance and automation by designing foundational capabilities for query generation, optimization suggestions, and semantic data understanding.

  • Champion engineering excellence through robust testing, performance optimization, observability, and enterprise-grade reliability.

  • Mentor engineers and lead cross-functional collaboration with product, UX, data platform, and other teams to influence long-term strategic direction.

We are excited about you if you have:

  • Bachelor's degree in Computer Science or equivalent, and 7+ years of experience; OR Master's degree and 5+ years of experience; OR PhD and 3+ years of experience.

  • Attention to detail and the ability to build reliable and scalable software systems.

  • 4+ years of experience in development and test.

  • Effective communication and collaboration skills.

  • Strong development and system skills.

  • Strong critical & analytical skills.

  • Experience developing in Python.

  • Web development framework experience (React, Vue, Angular etc.).

  • Comfortable with HTML and CSS.

  • Comfortable with security.

You may also have:

  • Comfortable with the Web/RPC stacks in a Cloud world

  • Working knowledge of Kubernetes and microservices-based application design

  • Experience in developing continuous integration pipelines

Why this role matters:

You will tackle complex distributed systems challenges, crafting the foundational software for the control and data planes that powers CDP and keeps it running at massive scale. Working at the forefront of hybrid and multi-cloud technology, you will empower data scientists, engineers, and analysts with the tools and infrastructure they need for advanced analytics and modeling.

Collaboration is key: you will work alongside brilliant minds across product, data science, and engineering to drive innovation, standardize best practices, and shape the future of enterprise AI and data platforms. This is your chance to build the future of data and see your work make a global impact.

This role is not eligible for immigration sponsorship.

What you can expect from us:

  • Generous PTO Policy

  • Support work-life balance with Unplugged Days

  • Flexible WFH Policy

  • Mental & Physical Wellness programs

  • Phone and Internet Reimbursement program

  • Access to Continued Career Development

  • Comprehensive Benefits and Competitive Packages

  • Paid Volunteer Time

  • Employee Resource Groups

EEO/VEVRAA

#LI-SZ1

#LI-REMOTE

Not Specified
View & Apply
Data Engineer
Salary not disclosed
Nashville, TN 5 days ago

****We are not in a position to sponsor candidates for employment for this position nor can we work with anyone in a corp-to-corp arrangement. W2 only! No exceptions!****


Summary of Duties

The Analytics Engineer is responsible for supporting data integration, data management, analytics, and reporting initiatives across the organization. This role partners closely with project teams, business leaders, and technical teams to gather requirements, design data solutions, and deliver high-quality analytics products.


The position plays a key role in designing and developing reports, datasets, data cubes, and fact tables to support enterprise reporting and reference data management initiatives. The Analytics Engineer will also collaborate with analysts and technical teams to ensure successful delivery of analytics solutions that support business decision-making.


The successful candidate will possess strong analytical and communication skills and the ability to build effective working relationships across teams. This role requires the ability to manage multiple priorities while collaborating with project stakeholders to define requirements, identify risks, and ensure that project objectives are met.


This position also provides hands-on expertise with BI tools, helping guide the development of dashboards, reports, and supporting documentation. The role requires the ability to work independently while maintaining consistent communication with project teams, technical resources, external vendors, and end users.


A strong working knowledge of SQL, BI reporting tools, ETL data mapping, and reference data management concepts is required.


Key Responsibilities

Responsibilities include, but are not limited to:

  1. Support the design, development, and requirements gathering for analytics solutions in collaboration with healthcare service line leaders.
  2. Develop and maintain standardized enterprise reports and dashboards.
  3. Work with analytics platforms and tools including SQL, Cloudera, Teradata, and BI tools.
  4. Partner with service line leaders to ensure effective adoption and optimization of reports and analytics solutions.
  5. Collaborate with the Data Management team to develop queries, datasets, data tables, and data cubes.
  6. Manage multiple assignments and assist leadership in prioritizing analytics work.
  7. Work independently and collaboratively with cross-functional teams to achieve project goals.
  8. Participate throughout the product delivery lifecycle to ensure project milestones and analyst deliverables are completed on time.

Knowledge, Skills, and Abilities

  • Professional demeanor with a strong customer service mindset
  • Excellent written and verbal communication skills
  • Strong interpersonal and collaboration skills
  • Ability to quickly learn and apply new technologies and processes
  • Working knowledge of Agile and Scrum delivery methodologies
  • Strong analytical and problem-solving abilities

Technical Skills (Preferred)

  • SQL
  • Cloudera
  • Teradata
  • Business Intelligence tools (MicroStrategy, Power BI)
  • Reference Data Management tools (Ataccama)
  • Data modeling and ETL concepts

Education

Bachelor’s degree in Information Technology, Data Analytics, Computer Science, or a related field preferred.

Experience

  • 2–4 years of experience working on increasingly complex data analytics or business intelligence projects
  • Demonstrated success delivering analytics or reporting solutions
  • Experience working in a fast-paced, results-oriented environment
  • Experience supporting enterprise reporting or analytics initiatives preferred
Not Specified
View & Apply
Enterprise Account Executive (Series B)
Salary not disclosed
Fremont, CA 1 week ago

One of our well-funded, Series C startups in the AI deployment and inference space is looking to hire an Account Executive.


Key Qualifications:

  • 5+ years enterprise sales experience
  • Priority given to early sales hires from successful/growing startups who cold-called and built their pipelines from scratch
  • Work closely with customers to understand their needs and pain points. Synthesize learnings into effective messaging that can be used across sales & marketing





About Us:

Greylock is an early-stage investor in hundreds of remarkable companies including Airbnb, LinkedIn, Dropbox, Workday, Cloudera, Facebook, Instagram, Roblox, Coinbase, and Palo Alto Networks, among others. More can be found about us here.

We are full-time, salaried employees of Greylock and provide candidate referrals/introductions to our active investments. We will contact anyone who looks like a potential match, requesting to schedule a call with you immediately.


Due to the selective nature of this service and the volume of applicants we typically receive from our job postings, a follow-up email will not be sent until a match is identified with one of our investments.


Please note: We are not recruiting for any roles within Greylock at this time. This job posting is for direct employment with a startup in our portfolio.


Summary:

We recently invested in a team that has the desire to build AI agents for compliance teams. Our ideal candidate will be able to work directly with founding teams to build sales pipeline, close mid-market to enterprise-level deals, and develop the go-to-market motion.

Not Specified
View & Apply
EDW Architect, II
🏒 Jobot
Salary not disclosed
Los Angeles 2 weeks ago
A proven EDW architect with 15+ years of experience, ready to design and implement a health system's data governance processes and protocols to secure and provision data at rest and in motion.

This Jobot Job is hosted by: Brett Walker. Are you a fit? Easy Apply now by clicking the "Apply Now" button and sending us your resume.

Salary: $130,000 - $150,000 per year

A bit about us: A leading healthcare system in the Southern California area.

This role is 100% REMOTE.

Please apply today to learn more about our client's EDW Architect II role.

Must have at least 15 years of related experience across various technical solutions to be considered.

Why join us? Leading EDW team in a healthcare system setting.

Partner with high-level leaders on a Modern Data Warehouse Framework! Apply today to learn more about this REMOTE role on a PST hours schedule.

Job Details: The EDW Architect II will create, implement, and maintain the client's Enterprise Data Platform & Information Architecture Framework.

The key responsibility of this role is to create the architecture, processes, procedures, and protocols that support the full data life cycle of the client's analytics platform, leveraging appropriate technologies.

Design and implement the client's data governance processes and protocols to secure and provision data at rest and in motion.

Collaborate with Clinical, Research, and Administrative system owners, external vendors, community partners, contractors, and other Health Science Campus leadership to understand their data needs and design, develop, and implement the client's data analytics platform.

Minimum Education: Bachelor's Degree in Computer Science, Information Systems, Computer Engineering, or a related field, OR combined work experience and education as an equivalent. In lieu of a bachelor's degree, a minimum of 20 years of relevant business support and/or information technology support experience.

Minimum Experience: Minimum 16 years of relevant experience, including programming in data modeling, OLAP, Hadoop, Cloudera, Talend, RDBMS, NoSQL, and enterprise data warehouse projects.

Minimum 5 years' experience with detailed knowledge of Enterprise Information architectures and Data Governance implementation.

Minimum 5 years’ experience in designing data infrastructure components for the complete data life cycle.

Minimum 3 years’ experience with Structured Query Language (MS SQL Server, Oracle).

Demonstrated experience with RDBMS, NoSQL, Hadoop.

Hands-on expertise in programming with database services.

Accountabilities: EDW Framework: Provides technology thought leadership in a Modern Data Warehouse Framework for the client's Enterprise Data Analytics platform.

End-to-End Technology Architecture: Collaborates with the Information Management & Information Delivery teams to provide end-to-end technology architecture supporting the complete data life cycle, including core functions like information management and information delivery.

Architecture Design: Understands the client's strategic plan, data strategy, and data governance, and conceptualizes and architects an Enterprise Data/Information Architecture framework that supports successful execution of the strategic plan.

Training: Trains new/current staff members on applicable systems/applications.

Responsible for working with customers and/or vendors on training for new systems being implemented and rolled out for use in the departments.

Priority Management: Must handle several assignments at one time and follow and meet priorities, deadlines, and timelines.

The work is highly technical and requires collaboration across multiple disciplines and groups.

The ability to work independently is also required.

Customer Service: Addresses customer questions, concerns, and enhancement requests; communicates with customers; handles service problems and tickets politely and efficiently; remains available for customers; follows procedures; applies problem-solving skills; and maintains a pleasant and professional image.

Customers may include internal department users, vendors, and peers within IS.

Teamwork and Project Management: Helps the team leader/manager/director establish project goals, milestones, and procedures.

Leads projects and team members, facilitates team and cross-functional meetings, and uses planning skills to manage and complete project work efforts on time and on budget.

Documentation: Creates, publishes, and maintains all documents covering the architecture, design, roadmap, and framework assets in the appropriate collaboration tools.

Interested in hearing more? Easy Apply now by clicking the "Apply Now" button.

Jobot is an Equal Opportunity Employer.

We provide an inclusive work environment that celebrates diversity and all qualified candidates receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity, religion, national origin, age (40 and over), disability, military status, genetic information or any other basis protected by applicable federal, state, or local laws.

Jobot also prohibits harassment of applicants or employees based on any of these protected categories.

It is Jobot’s policy to comply with all applicable federal, state and local laws respecting consideration of unemployment status in making hiring decisions.

Sometimes Jobot is required to perform background checks with your authorization.

Jobot will consider qualified candidates with criminal histories in a manner consistent with any applicable federal, state, or local law regarding criminal backgrounds, including but not limited to the Los Angeles Fair Chance Initiative for Hiring and the San Francisco Fair Chance Ordinance.

Information collected and processed as part of your Jobot candidate profile, and any job applications, resumes, or other information you choose to submit is subject to Jobot's Privacy Policy, as well as the Jobot California Worker Privacy Notice and Jobot Notice Regarding Automated Employment Decision Tools which are available at /legal.

By applying for this job, you agree to receive calls, AI-generated calls, text messages, or emails from Jobot, and/or its agents and contracted partners.

Frequency varies for text messages.

Message and data rates may apply.

Carriers are not liable for delayed or undelivered messages.

You can reply STOP to cancel and HELP for help.

You can access our privacy policy here: /privacy-policy
Not Specified
View & Apply
EDW Architect II
🏒 Jobot
Salary not disclosed
Los Angeles 2 weeks ago
EDW Architect in Los Angeles, CA.

This Jobot Job is hosted by: Robert Reyes. Are you a fit? Easy Apply now by clicking the "Apply Now" button and sending us your resume.

Salary: $120,000 - $165,000 per year

A bit about us: A prestigious hospital system ranked #1 in California in a broad assessment of excellence in hospital-based patient care.

Apply today to learn more! Why join us? Competitive salary and a variety of benefits and perks designed to support your well-being and professional growth.

Here are some of the key benefits: Health and Medical Benefits: Comprehensive health plan options, including medical, dental, and vision coverage, as well as flexible spending accounts to offset medical costs.

Retirement Benefits: Retirement plans to help you secure your financial future.

Tuition Benefits: Free tuition for yourself or an immediate family member after two years of employment.

Time Off: Paid and unpaid time off for vacation, personal health, and family care.

Well-being Programs: Resources to support your physical, mental, and spiritual health.

Employee Discounts: Discounts on sports tickets, gym memberships, event tickets, and more.

Professional Development: Opportunities for growth and development through various training programs and resources.

If you are passionate, thrive in a fast-paced environment and are ready to take your career to the next level, we would love to hear from you.

Job Details: We are seeking a passionate and experienced EDW (Enterprise Data Warehouse) Architect to join our dynamic Tech Services team.

This permanent position offers an opportunity to work on challenging projects, engaging with various departments across the organization.

The successful candidate will be responsible for the overall architecture, design, implementation, and management of our enterprise data warehouse and data integration strategy.

The candidate will work closely with business stakeholders, data analysts, and IT teams to ensure that the data architecture supports and enhances information delivery and data insights for the organization.

Summary: The EDW Architect II will create, implement, and maintain the hospital's Enterprise Data Platform & Information Architecture Framework.

The key responsibility of this role is to create the architecture, processes, procedures, and protocols that support the full data life cycle of the hospital's analytics platform, leveraging appropriate technologies.

Design and implement the hospital's data governance processes and protocols to secure and provision data at rest and in motion.

Collaborate with Clinical, Research, and Administrative system owners, external vendors, community partners, contractors, and other Health Science Campus leadership to understand their data needs and design, develop, and implement the hospital's data analytics platform.

Minimum Education: Bachelor's Degree in Computer Science, Information Systems, Computer Engineering, or a related field, OR combined work experience and education as an equivalent. In lieu of a bachelor's degree, a minimum of 20 years of relevant business support and/or information technology support experience.

Minimum Experience: Minimum 16 years of relevant experience, including programming in data modeling, OLAP, Hadoop, Cloudera, Talend, RDBMS, NoSQL, and enterprise data warehouse projects.

Minimum 5 years' experience with detailed knowledge of Enterprise Information architectures and Data Governance implementation.

Minimum 5 years’ experience in designing data infrastructure components for the complete data life cycle.

Minimum 3 years’ experience with Structured Query Language (MS SQL Server, Oracle).

Demonstrated experience with RDBMS, NoSQL, Hadoop.

Hands-on expertise in programming with database services.

Accountabilities: EDW Framework: Provides technology thought leadership in a Modern Data Warehouse Framework for the hospital's Enterprise Data Analytics platform.

End-to-End Technology Architecture: Collaborates with the Information Management & Information Delivery teams to provide end-to-end technology architecture supporting the complete data life cycle, including core functions like information management and information delivery.

Architecture Design: Understands the hospital's strategic plan, data strategy, and data governance, and conceptualizes and architects an Enterprise Data/Information Architecture framework that supports successful execution of the strategic plan.

Training: Trains new/current staff members on applicable systems/applications.

Responsible for working with customers and/or vendors on training for new systems being implemented and rolled out for use in the departments.

Priority Management: Must handle several assignments at one time and follow and meet priorities, deadlines, and timelines.

The work is highly technical and requires collaboration across multiple disciplines and groups.

The ability to work independently is also required.

Customer Service: Addresses customer questions, concerns, and enhancement requests; communicates with customers; handles service problems and tickets politely and efficiently; remains available for customers; follows procedures; applies problem-solving skills; and maintains a pleasant and professional image.

Customers may include internal department users, vendors, and peers within IS.

Teamwork and Project Management: Helps the team leader/manager/director establish project goals, milestones, and procedures.

Leads projects and team members, facilitates team and cross-functional meetings, and uses planning skills to manage and complete project work efforts on time and on budget.

Documentation: Creates, publishes, and maintains all documents covering the architecture, design, roadmap, and framework assets in the appropriate collaboration tools.

Interested in hearing more? Easy Apply now by clicking the "Apply Now" button.

Not Specified
View & Apply
jobs by JobLookup