Nuclei Data Jobs in USA

9,743 positions found — Page 3

Data Integration & AI Engineer
Salary not disclosed
Edison, NJ 2 days ago

About Wakefern

Wakefern Food Corp. is the largest retailer-owned co-operative in the United States and supports its co-operative members' retail operations, trading under the ShopRite®, Price Rite®, The Fresh Grocer®, Dearborn Markets®, and Gourmet Garage® banners.


Employing an innovative approach to wholesale business services, Wakefern focuses on helping the independent retailer compete in a big business world. Providing the tools entrepreneurs need to stay a step ahead of the competition, Wakefern’s co-operative members benefit from the company’s extensive portfolio of services, including innovative technology, private label development, and best-in-class procurement practices.


The ideal candidate will have a strong background in designing, developing, and implementing complex projects, with a focus on automating data processes and driving efficiency within the organization. This role requires close collaboration with application developers, data engineers, data analysts, and data scientists to ensure seamless data integration and automation across various platforms. The Data Integration & AI Engineer is responsible for identifying opportunities to automate repetitive data processes, reduce manual intervention, and improve overall data accessibility.


Essential Functions

  • Participate in the development life cycle (requirements definition, project approval, design, development, and implementation) and maintenance of the systems.
  • Implement and enforce data quality and governance standards to ensure data accuracy and consistency.
  • Provide input for project plans and timelines to align with business objectives.
  • Monitor project progress, identify risks, and implement mitigation strategies.
  • Work with cross-functional teams and ensure effective communication and collaboration.
  • Provide regular updates to the management team.
  • Follow the standards and procedures according to Architecture Review Board best practices, revising standards and procedures as requirements change and technological advancements are incorporated into the technology structure.
  • Communicate and promote the code of ethics and business conduct.
  • Ensure completion of required company compliance training programs.
  • Maintain expertise, gained through formal education or experience, in software/hardware technologies and development methodologies.
  • Stay current through personal development and professional and industry organizations.

Responsibilities

  • Design, build, and maintain automated data pipelines and ETL processes to ensure scalability, efficiency, and reliability across data operations.
  • Develop and implement robust data integration solutions to streamline data flow between diverse systems and databases.
  • Continuously optimize data workflows and automation processes to enhance performance, scalability, and maintainability.
  • Design and develop end-to-end data solutions utilizing modern technologies, including scripting languages, databases, APIs, and cloud platforms.
  • Ensure data solutions and data sources meet quality, security, and compliance standards.
  • Monitor and troubleshoot automation workflows, proactively identifying and resolving issues to minimize downtime.
  • Provide technical training, documentation, and ongoing support to end users of data automation systems.
  • Prepare and maintain comprehensive technical documentation, including solution designs, specifications, and operational procedures.


Qualifications

  • A bachelor's degree or higher in computer science, information systems, or a related field.
  • Hands-on experience with cloud data platforms (e.g., GCP, Azure).
  • Strong knowledge and skills in data automation technologies such as Python, SQL, ETL/ELT tools, Kafka, APIs, and cloud data pipelines.
  • Experience with GCP BigQuery, Dataflow, Pub/Sub, and Cloud Storage.
  • Experience with workflow orchestration tools such as Cloud Composer or Airflow.
  • Proficiency with iPaaS (Integration Platform as a Service) platforms such as Boomi or SAP BTP.
  • Experience developing and managing data integrations for AI agents, connecting them to internal and external APIs, databases, and knowledge sources to expand their capabilities.
  • Experience building and maintaining scalable Retrieval-Augmented Generation (RAG) pipelines, including the curation and indexing of knowledge bases in vector databases (e.g., Pinecone, Vertex AI Vector Search).
  • Experience leveraging cloud-based AI/ML platforms (e.g., Vertex AI, Azure ML) to build, train, and deploy machine learning models at scale.
  • Experience establishing and enforcing data quality and governance standards for AI/ML datasets, ensuring the accuracy, completeness, and integrity of data used for model training and validation.
  • Ability to collaborate closely with data scientists and machine learning engineers to understand data requirements and deliver optimized data solutions that support the entire machine learning lifecycle.
  • Hands-on experience with IBM DataStage and Alteryx is a plus.
  • Strong understanding of database design principles, including normalization, indexing, partitioning, and query optimization.
  • Ability to design and maintain efficient, scalable, and well-structured database schemas to support both analytical and transactional workloads.
  • Familiarity with BI visualization tools such as MicroStrategy, Power BI, Looker, or similar.
  • Familiarity with data modeling tools.
  • Familiarity with DevOps practices for data (CI/CD pipelines)
  • Proficiency in project management software (e.g., JIRA, Clarizen, etc.)
  • Strong knowledge and skills in data management, data quality, and data governance.
  • Strong communication, collaboration, and problem-solving skills.
  • Ability to work on multiple projects and prioritize tasks effectively.
  • Ability to work independently and in a team environment.
  • Ability to learn new technologies and tools quickly.
  • Ability to handle stressful situations.
  • Highly developed business acumen.
  • Strong critical thinking and decision-making skills.
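The RAG pipeline work called out in the qualifications above can be illustrated with a minimal retrieval sketch. This is a hypothetical, dependency-free toy: a real deployment would use learned embeddings and a managed vector database such as Pinecone or Vertex AI Vector Search, not the bag-of-words vectors and linear scan shown here.

```python
import math
from collections import Counter

# Toy "embedding": bag-of-words term counts. Production RAG pipelines
# use learned embedding models and an indexed vector store instead.
def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank the knowledge base by similarity to the query; keep top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

# Hypothetical knowledge base entries for illustration only.
docs = [
    "Store hours are 8am to 9pm on weekdays.",
    "Private label products are developed with co-op members.",
    "Procurement teams negotiate wholesale pricing.",
]
context = retrieve("when are the store hours", docs, k=1)
# The retrieved context is then prepended to the LLM prompt.
prompt = "Context:\n" + "\n".join(context) + "\n\nQuestion: when are the store hours"
```

The retrieval step is the "R" in RAG; curating and indexing the `docs` collection is the knowledge-base work the bullet describes.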


Working Conditions & Physical Demands

This position requires in-person office presence at least 4x a week.


Compensation and Benefits

The salary range for this position is $75,868 - $150,644. Placement in the range depends on several factors, including experience, skills, education, geography, and budget considerations.

Wakefern is proud to offer a comprehensive benefits package designed to support the health, well-being, and professional development of our Associates. Benefits include medical, dental, and vision coverage, life and disability insurance, a 401(k) retirement plan with company match & annual company contribution, paid time off, holidays, and parental leave.


Associates also enjoy access to wellness and family support programs, fitness reimbursement, educational and training opportunities through our corporate university, and a collaborative, team-oriented work environment. Many of these benefits are fully or partially funded by the company, with some subject to eligibility requirements.

Data Analyst Manager
✦ New
Salary not disclosed
Hickory, NC 1 day ago

Who We Are

At Feetures, movement is our business. And we believe that a meaningful business begins with authentic values—and our values were forged by the bonds of family.

What started as a bold idea around a kitchen table has grown into a fast-moving, purpose-driven brand redefining performance. As a family-owned company in North Carolina, we’re fueled by the belief that better is always possible—and that energy drives both our products and our culture.

Movement is at the heart of everything we do. From our socks to our team and to our communities, we are always pushing forward. If you are ready to grow, challenge the status quo, and help shape the next chapter of a brand that is always in stride, come move with us. Feetures is Meant to Move. Are you?


Role Summary:

The Data Analytics Manager is responsible for owning and optimizing the organization’s end-to-end data ecosystem, ensuring that data infrastructure, governance, and analytics processes effectively support business operations. This role leads the design and management of the data stack—from source system integrations and NetSuite Analytics Warehouse to reporting and business intelligence tools—while establishing strong data governance standards, quality monitoring, and documentation practices. The manager also oversees and mentors analytics team members, prioritizes analytics requests, and coordinates cross-functional data workflows. Acting as the central authority for data reliability and insights, the role ensures consistent metric definitions, scalable data models, and accurate reporting while translating complex data into clear, actionable insights for business stakeholders.


Responsibilities:

Data Architecture & Tooling

  • Own the end-to-end data stack — from source system integrations and the NetSuite Analytics Warehouse to downstream reporting layers
  • Evaluate, select, and implement tools that improve data accessibility, reliability, and performance
  • Ensure alignment between data infrastructure and evolving business needs across distribution operations
  • Design and maintain scalable data models, SuiteQL queries, and saved searches within NetSuite

Data Governance & Quality

  • Define and enforce data standards, metric definitions, and naming conventions across all business domains
  • Establish data ownership, lineage documentation, and access governance policies
  • Implement monitoring and alerting for data quality issues across source systems and the warehouse
  • Build and maintain a data dictionary that serves as the single source of truth for the organization

Orchestration of Analysts & Systems

  • Manage and mentor the Data Analyst and Business Analyst — prioritizing requests, unblocking work, and validating outputs
  • Triage and prioritize the analytics request queue in alignment with business stakeholders and IT leadership
  • Coordinate cross-functional data workflows and ensure handoffs between systems and analysts are clean and documented
  • Serve as the escalation point for data discrepancies, report failures, and analytical questions from the business


Qualifications:

Required

  • 3-5 years of experience in data analytics, business intelligence, or data engineering
  • 2+ years in a lead or management role overseeing analysts or data team members
  • Strong proficiency in SQL; experience with SuiteQL or similar ERP query languages
  • Hands-on experience with NetSuite, including Analytics Warehouse, saved searches, and reporting
  • Proven track record establishing data governance standards and documentation practices
  • Experience integrating and managing multiple data sources across SaaS and ERP platforms
  • Demonstrated ability to translate complex data into clear, actionable insights for non-technical stakeholders

Preferred

  • Experience in distribution, wholesale, or supply chain environments
  • Familiarity with SaaS BI platforms (e.g., Tableau, Power BI, Looker, or embedded analytics)
  • Exposure to scripting or automation (JavaScript, Python, or similar) for data workflows
  • Background working within IT-led or hybrid IT/Analytics teams


Benefits:

  • Health insurance
  • Dental insurance
  • Vision insurance
  • Life & Disability insurance
  • 401(K) with company match


Company Paid holidays and PTO:

  • Feetures offers 20 PTO days, available on day one of employment to all employees, no matter your role. After 5 years at Feetures, PTO increases to 25 days. Days can be used for vacations, appointments, and sick days.
  • We offer 10 company paid holidays and 1 floating holiday per year.


Perks:

  • Parking provided (Charlotte office and onsite at Hickory office)
  • Employee Engagement team
  • Monthly stipend to pursue an active lifestyle


Feetures is an Equal Opportunity Employer that welcomes and encourages all applicants to apply regardless of age, race, sex, religion, color, national origin, disability, veteran status, sexual orientation, gender identity and/or expression, marital or parental status, ancestry, citizenship status, pregnancy or other reasons protected by law.

Senior Data Architect
✦ New
Salary not disclosed
Princeton, NJ 1 day ago

About Cygnus Professionals, Inc.

Cygnus is a Princeton, NJ-headquartered global business IT consulting and software services firm with offices in the USA and Asia. Cygnus enables innovation and helps our clients accelerate time to market and grow their business. For over 15 years, we have taken great pride in maintaining deep relationships with our clients.


For further information about Cygnus, please visit our website.

Title: Data Architect

Location: Princeton, New Jersey – Onsite

W2 Contract


Job Summary

We are seeking an experienced Data Architect to design, build, and maintain scalable data architecture solutions supporting enterprise analytics, data integration, and digital transformation initiatives. The ideal candidate will work closely with business stakeholders, data engineers, and application teams to design robust data models, data pipelines, and enterprise data platforms that support advanced analytics and reporting.

Key Responsibilities

  • Design and implement enterprise data architecture frameworks and best practices.
  • Develop logical and physical data models for enterprise data platforms.
  • Architect data lakes, data warehouses, and data integration solutions across cloud and on-prem environments.
  • Collaborate with data engineers and application teams to build scalable data pipelines and ETL/ELT processes.
  • Ensure data governance, data quality, security, and compliance standards are implemented across the data ecosystem.
  • Evaluate and recommend data technologies, tools, and frameworks aligned with enterprise strategy.
  • Provide architectural guidance for cloud-based data platforms (AWS/Azure/GCP).
  • Optimize performance for large-scale data processing and analytics workloads.
  • Support business intelligence, reporting, and advanced analytics initiatives.

Required Qualifications

  • 10+ years of experience in data architecture, data engineering, or enterprise data management.
  • Strong experience with data modeling (conceptual, logical, physical).
  • Expertise with data warehouse and data lake architectures.
  • Hands-on experience with ETL/ELT tools and data integration platforms.
  • Experience with SQL and large-scale data platforms (Snowflake, Redshift, BigQuery, etc.).
  • Experience working with cloud data platforms (AWS, Azure, or GCP).
  • Strong understanding of data governance, data quality, and metadata management.
  • Experience with big data technologies (Spark, Hadoop, Kafka) is a plus.

Preferred Skills

  • Experience in Healthcare, Pharmaceutical, or Life Sciences domain.
  • Knowledge of Master Data Management (MDM) and data catalog tools.
  • Familiarity with BI tools such as Tableau, Power BI, or Looker.
  • Strong communication skills to interact with business and technical teams.

Education

  • Bachelor’s or Master’s degree in Computer Science, Information Systems, Data Science, or related field.


Cygnus Belief

We believe in our commitment to diversity & inclusion.


Equal Employment Opportunity Statement

Cygnus is an Equal Opportunity Employer. We ensure that no one should be discriminated against because of their differences, such as age, disability, ethnicity, gender, gender identity and expression, religion, or sexual orientation.


All our employment decisions are taken without looking into age, race, creed, color, religion, sex, nationality, disability status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status, or any other aspects of employment protected by federal, state, or local law. Applicants for employment in the US must have work authorization.

Data Steward Senior Analyst (Record Retention & Deletion policy and processes )
Salary not disclosed
Phoenix, AZ 3 days ago

As a Data Steward Senior Analyst, you are part of a team responsible for enabling and supporting compliance with data-related enterprise policies within your domains/business units. You and your team are responsible for identifying critical data and associated risks, maintaining data definitions, classifying data, supporting data sourcing/usage requests, measuring data risk controls, and confirming data issues are remediated. You will have the opportunity to partner across business units, technology teams, and product/platform teams to define and implement the data governance strategy, overseeing data quality, resolving data/platform issues, and driving consistency, usability, and governance of specific product data across the enterprise.


In addition, this role will play a key part in effectively communicating new and updated data-related policies to the teams responsible for compliance. The individual must be skilled in preparing clear, engaging presentations that translate formal policy language into practical, easy-to-understand guidance and “tell the story” behind the policy requirements. The role will also support the delivery of training sessions, facilitate policy office hours, and serve as a go-to resource for questions related to data governance and retention compliance.


Your Primary Responsibilities may include:

• Assist in identifying data-related risks and associated controls for key business processes. Risks relate to Record Retention (primary), Data Quality, Data Movement, Data Stewardship, Data Protection, Data Sharing, among others.

• Develop training materials and educate organization on Record Retention and Deletion processes and procedures.

• Develop deep understanding of key enterprise data-related policies and serve as the policy expert for the business unit, providing education to teams regarding policy implications for business.

• Collaborate with and influence product managers to ensure all new use cases are managed according to policies.

• Influence and contribute to strategic improvements to data assessment processes and analytical tools.

• Support current regulatory reporting needs via existing platforms, working with upstream data providers, downstream business partners, as well as technology teams.

• Provide subject matter expertise on multiple platforms.

• Partner with the Data Steward Manager to develop and manage the data compliance roadmap.


Qualifications include:

• 5+ years of experience in a similar role ensuring compliance with Record Retention and Deletion policies.

• Strong communication skills and ability to influence and engage at multiple levels and cross functionally.

• Intermediate understanding of, and prior experience with, Data Management and Data Governance concepts (metadata, lineage, data quality, etc.).

• 5+ years of Data Quality Management experience.

• Strong familiarity with data architecture and/or data modeling concepts.

• 5+ years of experience with Agile or SAFe project methodologies

• Bachelor’s degree in Finance, Engineering, Mathematics, Statistics, Computer Science or other similar fields.

• Preferred: Experience in Travel Industry.

• Preferred: Knowledge of RCSA (Risk Control Self-Assessment) methodology


Leadership Skills may include:

• Makes Decisions Quickly and Effectively: Drives effective outcomes through decision-making authority. Displays judgment and discretion to ensure deliverables align with American Express policy and overall compliance.

• Drives Innovation & Change: Provides systematic and rational analysis to identify the root cause of problems. Is prepared to challenge the status quo and drive innovation. Makes informed judgments, recommends tailored solutions.

• Leverages Team - Collaboration: Coordinates efforts within and across teams to deliver goals; accountable for bringing in ideas, information, suggestions, and expertise from others inside and outside the immediate team.

• Communication: Influences and holds others accountable, with the ability to convince others. Identifies specific data governance requirements and communicates them clearly and compellingly.

Product Data Analyst
Salary not disclosed
Dallas, TX 2 days ago

Loloi Rugs is a leading textile brand that designs and crafts rugs, pillows, and throws for the thoughtfully layered home. Family-owned and led since 2004, Loloi is growing more quickly than ever. To date, we’ve expanded our diverse team to hundreds of employees, invested in multiple distribution facilities, introduced thousands of products, and earned the respect and business of retailers and designers worldwide. A testament to our products and our team, Loloi has earned the ARTS Award for “Best Rug Manufacturer” in 2010, 2011, 2015, 2016, 2018, 2023, and 2025.


Security Advisory: Beware of Frauds

Protect yourself from potential fraud and verify the authenticity of any job offer you receive from Loloi. Rest assured that we never request payment or demand any sensitive personal information, such as bank details or social security numbers, at any stage of the recruiting process. To ensure genuine communication, our recruiters will solely reach out to applicants using an @ email address. Your security is of paramount importance to us at Loloi, and we are committed to maintaining a safe and trustworthy hiring experience for all candidates.


We are building a Business Operations Center of Excellence, and we need a Product Data Analyst to serve as the "Guardian of the Golden Record." In this role, you are the absolute owner of product data integrity as it relates to the digital customer experience. You ensure that every item we sell is accurately represented across every touchpoint—from our ERP and PIM to our website storefront and marketing feeds. This is not a data entry role; it is a high-impact technical logic and investigation role. You will work directly with our Data Platform and Software Engineering teams to define business rules, audit data health via complex SQL, and troubleshoot data transmission errors before they impact the customer.


Responsibilities

  • Storefront Governance: Serve as the absolute owner of product data integrity within the PIM. Ensure that all storefront-critical attributes (pricing, dimensions, weights, image links) are accurate and standardized for a seamless customer experience.
  • Technical Data Auditing: Write and run complex SQL queries against our centralized database to identify anomalies, "orphan" records, and data hygiene issues that need resolution. You will be expected to query across multiple schemas to validate data consistency between systems.
  • Feed Logic & Mapping: You will manage the logic of how data translates from our PIM to external endpoints. You will ensure that our products appear correctly on Google Shopping, Meta, Amazon, and other marketplaces by managing feed rules and mapping definitions.
  • API Payload Analysis: You will act as the first line of defense for data transmission errors. If a product isn't showing up on the site, you will review the JSON/XML response bodies to determine if it is a data payload error or a software code bug.
  • Cross-Functional Impact Analysis: You will act as the gatekeeper for data changes, predicting downstream impacts (e.g., "If Merchandising changes this Category Name, it will break the Finance reporting filter").
  • Hygiene Logic Definition: You will partner with our IT/Database team to define automated health checks. You identify the "rot" (bad data patterns), and they implement the database constraints to stop it.
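The "orphan record" audits described above reduce to anti-join queries run against the warehouse. A minimal, self-contained sketch using SQLite in place of the production database (the table names, columns, and SKUs are hypothetical, for illustration only):

```python
import sqlite3

# In-memory stand-in for the centralized database the role queries.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE products (sku TEXT PRIMARY KEY, name TEXT);
    CREATE TABLE storefront_listings (sku TEXT, price REAL);
    INSERT INTO products VALUES ('RUG-001', 'Loren Rug'), ('RUG-002', 'Skye Rug');
    INSERT INTO storefront_listings VALUES ('RUG-001', 199.0), ('RUG-999', 49.0);
""")

# Orphan audit: storefront listings whose SKU has no matching master
# product record. LEFT JOIN + IS NULL is the classic anti-join pattern.
orphans = conn.execute("""
    SELECT l.sku
    FROM storefront_listings AS l
    LEFT JOIN products AS p ON p.sku = l.sku
    WHERE p.sku IS NULL
""").fetchall()
# → [('RUG-999',)]
```

Validating consistency between systems, as the bullet describes, is the same query shape repeated across schemas: each anti-join surfaces records present on one side of an integration but missing on the other.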


What You Will NOT Do (The Boundaries)

  • No Web Development: You are not a Front-End Developer. You do not write HTML, CSS, or React code. You ensure the data powering those components is 100% accurate.
  • No Manual Data Entry: Your job is not to copy-paste descriptions. You build the systems, bulk processes, and logic that ensure data quality at scale.
  • No Database Administration: You do not manage server uptime or schema changes (IT owns this). You own the quality of the records inside the database.


Intersection with Technical Teams

  • With IT (Database Mgmt): IT owns the infrastructure and schema; you own the quality of the data within it. When you identify a systemic issue (e.g., "5,000 orphan records"), you partner with IT to implement the technical fix (scripts/constraints).
  • With Software Engineering (Commerce): If a product is missing from the site, you check the data payload. If the data is correct, you hand off to Engineering, confirming it is a code/caching bug rather than a data error.


Experience, Skills, & Ability Requirements

  • 5-8 years of experience in Data Management, PIM Administration, or technical eCommerce Operations.
  • SQL Proficiency: You are comfortable writing queries beyond simple SELECT *. You should be proficient with CTEs (Common Table Expressions), Window Functions (e.g., Rank, Lead/Lag), Subqueries, and complex Joins to act as a forensic data investigator.
  • API Fluency: You can read and understand JSON and XML. You know what a valid payload looks like and can spot formatting errors or missing keys.
  • Data Manipulation: You are an expert at handling large datasets (CSVs, Excel) and understand data types, formatting standards, and normalization concepts.
  • You love hunting down the root cause of an error. You don't just fix the wrong price; you find out why the price was wrong and build a rule to stop it from happening again.
  • You have high standards for accuracy. You understand that a wrong weight in the system means a financial loss on shipping for the business.
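The payload checks implied by the API Fluency requirement can start as simply as validating required keys and types before escalating to Engineering. A hedged sketch (the attribute names in `REQUIRED_KEYS` are illustrative, not a real feed spec):

```python
import json

REQUIRED_KEYS = {"sku", "price", "weight"}  # illustrative attribute set

def audit_payload(raw: str) -> list[str]:
    """Return a list of problems found in one product payload."""
    problems = []
    try:
        payload = json.loads(raw)
    except json.JSONDecodeError as e:
        # Malformed body: likely a transmission/code issue, not a data issue.
        return [f"malformed JSON: {e.msg}"]
    # Data-side checks: missing keys and wrong types point at the payload.
    missing = REQUIRED_KEYS - payload.keys()
    problems += [f"missing key: {k}" for k in sorted(missing)]
    if "price" in payload and not isinstance(payload["price"], (int, float)):
        problems.append("price is not numeric")
    return problems

print(audit_payload('{"sku": "RUG-001", "price": "199"}'))
# → ['missing key: weight', 'price is not numeric']
```

An empty result from a check like this is the signal, per the handoff rule above, that the data is correct and the fault lies in code or caching.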


Bonus Points (Nice-to-Haves)

  • Familiarity with Visio/Lucidchart to visualize data flows.
  • Ability to build simple dashboards in Tableau to track data health scores.
  • Basic familiarity with Python or R for data manipulation.


What We Offer

  • Health, dental, and vision benefits
  • Paid parental leave
  • 401(k) with employer match
  • A culture of meritocracy that fosters ongoing growth opportunities
  • A stable, growing family-owned company that looks after its employees


Loloi Rugs does not discriminate on the basis of race, sex, color, religion, age, national origin, marital status, disability, veteran status, genetic information, sexual orientation, gender identity or any other reason prohibited by law in provision of employment opportunities and benefits. We seek a diverse pool of applicants and consider all qualified candidates regardless of race, ancestry, color, gender identity or expression, sexual orientation, religion, national origin, citizenship, disability, Veteran status, marital status, or any other protected status. If you have a special need or disability that requires accommodation, please let us know.

Sr Data Analyst
✦ New
Salary not disclosed
Dallas, TX 1 day ago

Title: Senior Data Analyst

Duration: Long term

Location: Dallas, TX



Job Description:

Primary responsibilities of the Senior Data Analyst include supporting and analyzing data anomalies across multiple environments, including but not limited to the Data Warehouse, ODS, and Data Replication/ETL Data Management initiatives. The candidate will be in a supporting role and will work closely with the Business, DBA, ETL, and Data Management teams, providing analysis and support for complex data-related initiatives. This individual will also assist with initial setup and ongoing documentation/configuration related to Data Governance and Master Data Management solutions. The candidate must have a passion for data, along with strong SQL, analytical, and communication skills.

Responsibilities

  • Investigate and Analyze data anomalies and data issues reported by Business
  • Work with ETL, Replication and DBA teams to determine data transformations, data movement and derivations and document accordingly
  • Work with support teams to ensure consistent and pro-active support methodologies are adhered to for all aspects of data movements and data transformations
  • Assist in break fix and production validation as it relates to data derivations, replication and structures
  • Assist in configuration and on-going setup of Data Virtualization and Master Data Management tools
  • Assist in keeping documentation up to date as it relates to Data Standardization definitions, Data Dictionary and Data Lineage
  • Gather information from various Sources and interpret Patterns and Trends
  • Ability to work in a team-oriented, fast-paced agile environment managing multiple priorities


Qualifications

  • 4+ years of SQL experience working in OLTP, Data Warehouse and Big Data databases
  • 4+ years of experience working with Exadata and SQL Server databases
  • 4+ years in a Data Analyst role
  • Strong attention to Detail
  • 2+ years writing medium to complex stored procedures a plus
  • Ability to collaborate effectively and work as part of a team
  • Extensive background in writing complex queries
  • Extensive working knowledge of all aspects of Data Movement and Processing, including ETL, API, OLAP and best practices for data tracking
  • Good Communication skills
  • Self-Motivated
  • Works well in a team environment
  • Denodo Experience a plus
  • Master Data Management a plus
  • Big Data Experience a plus (Hadoop, MongoDB)
  • Postgres and Cloud Experience a plus
Associate Partner, Data and Technology Transformation
$250 +
Chicago, IL 2 days ago
Introduction
Your role and responsibilities
About the Opportunity

IBM Consulting is seeking an accomplished Data & Analytics Associate Partner to accelerate our growth within the Industrial & Communications sectors. This executive role is responsible for shaping client vision, cultivating senior executive relationships, and developing data-driven solutions that enable clients to successfully navigate complex transformation programs.


You will bring together deep industry expertise and IBM’s portfolio of data, analytics, and AI capabilities to help organizations modernize their data ecosystems—migrating from legacy platforms to modern hybrid cloud architectures—while adopting next-generation analytics, GenAI, and agentic AI to strengthen decision-making and deliver measurable business and financial outcomes.


This role is ideal for a seasoned leader who integrates industry depth, consulting excellence, and technical thought leadership, has a strong understanding of competitive market dynamics, and consistently delivers high-impact transformation at scale.


Key Responsibilities
Market Leadership & Growth

  • Expand IBM’s Data & Analytics presence by identifying new market opportunities, developing differentiated solutions, and building a strong pipeline.
  • Engage senior client executives to understand strategic priorities and shape data transformation roadmaps aligned to their business and financial goals.
  • Lead end-to-end sales cycles, including solution definition, proposal leadership, financial structuring, and contract negotiation.



Strategic Advisory & Transformation Delivery

  • Advise C-suite leaders on strategies for data estate modernization, advanced analytics, GenAI, and agentic AI to drive business performance.
  • Architect integrated solutions that include:
      • Migration from legacy data platforms to modern cloud-based architectures
      • Data engineering and information governance
      • Business intelligence and advanced analytics
      • GenAI-powered and agentic AI-driven automation and decisioning
  • Lead complex transformation programs from discovery through delivery, ensuring measurable outcomes and client satisfaction.



Engagement Excellence & Financial Stewardship

  • Oversee multi-disciplinary delivery teams to ensure high-quality, consistent execution across all program phases.


  • Manage engagement financials, including forecasting, margin performance, and overall portfolio profitability.


  • Align the right client technologies, industry expertise, and global delivery capabilities to maximize client value.



Practice Building & Talent Development

  • Recruit, mentor, and grow top-tier consultants, architects, and data specialists.


  • Build and scale capabilities in data modernization, cloud data engineering, analytics, GenAI, and emerging agentic AI techniques.


  • Contribute to practice strategy, offering development, and capability growth across the global Data & Analytics team.



Thought Leadership & Market Presence

  • Stay ahead of sector and technology trends, including cloud modernization, GenAI, agentic system design, regulatory changes, and evolving competitive dynamics.


  • Represent IBM at industry conferences, client events, webinars, and executive roundtables.


  • Create original thought leadership—articles, perspectives, points of view—that positions IBM as a leading advisor in data and AI-driven transformation.



This position can be performed anywhere in the US.


"Leaders are expected to spend time with their teams and clients and therefore are generally expected to be in the workplace a minimum of three days a week, subject to business needs."


Required technical and professional expertise
Qualifications

  • 12+ years of experience in consulting, data strategy, analytics, or digital transformation, with strong exposure to the Industrial or Communications sectors.


  • Hands-on experience modernizing data ecosystems, including migrating from legacy on-premise platforms to modern cloud-native or hybrid cloud architectures.


  • Deep expertise with major cloud platforms and their data/analytics stacks, including implementation experience with:


    • AWS (e.g., Redshift, S3, Glue, EMR, Athena, Lake Formation, Bedrock, SageMaker)


    • Microsoft Azure (e.g., Azure Data Lake, Synapse, Data Factory, Databricks on Azure, Fabric, Cognitive Services)


    • Google Cloud Platform (e.g., BigQuery, Cloud Storage, Dataflow, Dataproc, Vertex AI)


  • Experience designing and implementing end-to-end data pipelines, governance frameworks, and analytics solutions on one or more of these platforms.


  • Strong understanding of GenAI architectures, LLM integration patterns, vector databases, retrieval-augmented generation (RAG), and emerging agentic AI frameworks.


  • Proven track record of selling, structuring, and delivering large-scale data and AI transformation programs.


  • Robust technical and functional expertise in data engineering, cloud data platforms, analytics, AI/ML, information management, and governance.


  • Executive-level communication and presence, with demonstrated ability to influence senior stakeholders and convey complex topics through compelling narratives.


  • Financial management experience, including engagement economics, forecasting, margin optimization, and portfolio profitability.


  • Demonstrated leadership in building, scaling, and developing high-performing consulting and technical teams.



Preferred technical and professional experience

IBM is committed to creating a diverse environment and is proud to be an equal-opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, gender, gender identity or expression, sexual orientation, national origin, caste, genetics, pregnancy, disability, neurodivergence, age, veteran status, or other characteristics. IBM is also committed to compliance with all fair employment practices regarding citizenship and immigration status.


Data Product Engineer
Salary not disclosed
Newark, NJ 3 days ago
Job Title: Marketplace Data Product Engineer

Duration: 6+ months

Location: 100% Remote

Job Overview

The Marketplace Data Product Engineer serves as the primary technical facilitator and adoption champion for the Marketplace platform. This role bridges engineering, product, and business domains - leading workshops, demos, onboarding sessions, and cross-domain engagements to accelerate Marketplace adoption. You will configure demo environments, support development, translate complex technical concepts for business audiences, gather product feedback, and partner closely with product and engineering teams to shape the Marketplace roadmap. You will also guide domains through the process of understanding, showcasing, and maturing their data products within the ecosystem.

Key Responsibilities


  • Facilitate workshops, demos, onboarding sessions, and cross-domain engagements to drive Marketplace adoption.
  • Serve as the primary technical presenter of the Marketplace for domain teams and stakeholders.
  • Engage with domain owners to understand their data products, help refine their articulation, and showcase how they integrate into the Marketplace ecosystem.
  • Configure and maintain demo environments for Marketplace capabilities, data products, and new features.
  • Support light development, proof-of-concept configurations, and sample integrations to demonstrate platform capabilities.
  • Translate technical Marketplace concepts into clear, business-friendly language for non-technical audiences.
  • Collect structured feedback from domain teams, synthesize insights, and partner with product and engineering to influence the roadmap.
  • Develop and refine training materials, demos, playbooks, and onboarding assets to support continuous adoption.
  • Act as an advocate for domains, ensuring their data product needs and challenges are well represented in Marketplace planning.
  • Support ongoing adoption initiatives, including community sessions, office hours, and cross?domain knowledge sharing.


Required Skills & Qualifications


  • 4-7+ years of experience in data engineering, platform engineering, solution engineering, technical consulting, or similar roles.
  • Strong understanding of data products, data modeling concepts, data APIs, enterprise integrations, and metadata-driven architectures.
  • Ability to configure and demonstrate platform features, build light proofs-of-concept, and support technical onboarding.
  • Excellent communication and presentation skills, with experience translating technical concepts for business partners.
  • Experience facilitating workshops, leading demos, or driving customer/product adoption initiatives.
  • Ability to engage domain teams, understand their data product needs, and help articulate value within a larger ecosystem.
  • Strong collaboration and stakeholder management skills across engineering, product, and business teams.
  • Comfortable working in fast-moving environments and driving clarity through ambiguity.


Preferred Qualifications


  • Experience with data product and governance frameworks, data marketplaces, data mesh concepts, or platform adoption roles.
  • Hands-on experience with cloud data platforms (Azure, AWS, or GCP), data pipelines, or integration tooling.
  • Familiarity with REST/GraphQL APIs, event-driven patterns, and data ingestion workflows.
  • Background in solution architecture, customer engineering, or sales engineering.
  • Experience developing demo environments, sample apps, or repeatable platform enablement assets.
  • Strong storytelling ability when explaining data product value, domain capabilities, and Marketplace patterns.


Sr. Data Engineer, tvScientific
Salary not disclosed
San Francisco, CA 3 days ago

About Pinterest:


Millions of people around the world come to our platform to find creative ideas, dream about new possibilities and plan for memories that will last a lifetime. At Pinterest, we're on a mission to bring everyone the inspiration to create a life they love, and that starts with the people behind the product.


Discover a career where you ignite innovation for millions, transform passion into growth opportunities, celebrate each other's unique experiences and embrace the flexibility to do your best work. Creating a career you love? It's Possible.


At Pinterest, AI isn't just a feature, it's a powerful partner that augments our creativity and amplifies our impact, and we're looking for candidates who are excited to be a part of that. To get a complete picture of your experience and abilities, we'll explore your foundational skills and how you collaborate with AI.


Through our interview process, what matters most is that you can always explain your approach, showing us not just what you know, but how you think. You can read more about our AI interview philosophy and how we use AI in our recruiting process here.

About tvScientific


tvScientific is the first and only CTV advertising platform purpose-built for performance marketers. We leverage massive data and cutting-edge science to automate and optimize TV advertising to drive business outcomes. Our solution combines media buying, optimization, measurement, and attribution in one efficient platform. Our platform is built by industry leaders with a long history in programmatic advertising, digital media, and ad verification who have now purpose-built a CTV performance platform advertisers can trust to grow their business.



As a Senior Data Engineer at tvScientific, you will be a key player in implementing the robust data infrastructure to power our data-heavy company. You will collaborate with our cross-functional teams to evolve our core data pipelines, design for efficiency as we scale, and store data in optimal engines and formats. This is an individual contributor role, where you will work to define and implement a strategic vision for data engineering within the organization.



What you'll do:



  • Implement robust data infrastructure in AWS, using Spark with Scala
  • Evolve our core data pipelines to efficiently scale for our massive growth
  • Store data in optimal engines and formats
  • Collaborate with our cross-functional teams to design data solutions that meet business needs
  • Build out fault-tolerant batch and streaming pipelines
  • Leverage and optimize AWS resources while designing for scale
  • Collaborate closely with our Data Science and Product teams
How we'll define success:

  • Successful implementation of scalable and efficient data infrastructure
  • Timely delivery and optimization of data assets and APIs
  • High attention to detail in implementation of automated data quality checks
  • Effective collaboration with cross-functional teams




What we're looking for:



  • Production data engineering experience
  • Proficiency in Spark and Scala, with proven experience building data infrastructure in Spark using Scala
  • Familiarity with data lakes, cloud warehouses, and storage formats
  • Strong proficiency in AWS services
  • Expertise in SQL for data manipulation and extraction
  • Excellent written and verbal communication skills
  • Bachelor's degree in Computer Science or a related field
Nice-to-Haves

  • Experience in adtech
  • Experience implementing data governance practices, including data quality, metadata management, and access controls
  • Strong understanding of privacy-by-design principles and handling of sensitive or regulated data
  • Familiarity with data table formats like Apache Iceberg, Delta




In-Office Requirement Statement:



  • We recognize that the ideal environment for work is situational and may differ across departments. What this looks like day-to-day can vary based on the needs of each organization or role.


Relocation Statement:



  • This position is not eligible for relocation assistance. Visit our PinFlex page to learn more about our working model.





At Pinterest we believe the workplace should be equitable, inclusive, and inspiring for every employee. In an effort to provide greater transparency, we are sharing the base salary range for this position. The position is also eligible for equity. Final salary is based on a number of factors including location, travel, relevant prior experience, or particular skills and expertise.


Information regarding the culture at Pinterest and benefits available for this position can be found here.

US based applicants only: $123,696–$254,667 USD

Our Commitment to Inclusion:


Pinterest is an equal opportunity employer and makes employment decisions on the basis of merit. We want to have the best qualified people in every job. All qualified applicants will receive consideration for employment without regard to race, color, ancestry, national origin, religion or religious creed, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, age, marital status, status as a protected veteran, physical or mental disability, medical condition, genetic information or characteristics (or those of a family member) or any other consideration made unlawful by applicable federal, state or local laws. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. If you require a medical or religious accommodation during the job application process, please complete this form for support.

Lecturer - Data Science Undergraduate Studies - College of Computing, Data Science, and Society
Salary not disclosed
Berkeley, CA 3 days ago
Position overview

Salary range:
The UC academic salary scales set the minimum pay at appointment. The current full-time salary range for this position is $70,977-$199,722. Placement on the scale is commensurate with college teaching experience.

Percent time:
15% to 100%

Anticipated start:
Positions usually start in July or August for Fall, January for Spring and June for Summer.

Review timeline:
Applications will be accepted and reviewed for unit needs through January 2027. Applications are typically considered in April and May for fall course needs, in September and October for spring course needs, and in February and March for summer course needs. The pool will close in January 2027; applicants wishing to remain in the pool after that time will need to submit a new application.

Application Window


Open date: June 9, 2025




Most recent review date: Tuesday, Jun 24, 2025 at 11:59pm (Pacific Time)

Applications received after this date will be reviewed by the search committee if the position has not yet been filled.




Final date: Tuesday, Jan 12, 2027 at 11:59pm (Pacific Time)

Applications will continue to be accepted until this date, but those received after the review date will only be considered if the position has not yet been filled.



Position description

Data Science Undergraduate Studies (DSUS) at the University of California, Berkeley invites applications for a pool of qualified temporary lecturers to teach DSUS courses should an opening arise. Screening of applicants is ongoing and will continue as needed. The number of positions varies from semester to semester (fall, spring and summer sessions), depending on the needs of the unit.



About DSUS



Data Science Undergraduate Studies (DSUS) offers a range of academic, co-curricular, and enrichment programs-including the Data Science major and minor-with a wide-reaching impact both across UC Berkeley and beyond.



Designed in collaboration with faculty from across Berkeley, Data Science invests students with deep technical knowledge, expertise in how to apply that knowledge in a field of their choosing, and an understanding of the social and human contexts and ethical implications of how data are collected, analyzed, and used. This combination positions graduates to help inform and develop solutions to a range of pressing challenges, from adapting industry to a new world of data to amplifying learning in education to helping communities recover from disaster.



DSUS is part of the College of Computing, Data Science, and Society (CDSS), which strives to develop, implement, and share high-quality, ethics-oriented, and accessible curricula, educating a diverse student body in data science, computing, and statistics. Core to the college is an understanding of how computing and data science affect equality, equity, and opportunity-and the capacity to respond to social challenges.



DSUS is committed to hiring and developing staff who want to work in a high performing culture that reflects the outstanding work of our faculty and students. DSUS seeks candidates who can support the success of all students through inclusive curriculum, classroom environment, and pedagogy.



Responsibilities



DSUS is seeking outstanding instructors to be appointed in the non-Senate Lecturer title series who can teach small and large courses in several areas. We are particularly interested in instructors who can combine computational and inferential thinking in a way that reflects the new field of Data Science Education.



Core courses include:

Foundations of Data Science

Principles and Techniques of Data Science

Human Contexts and Ethics of Data

Data and Justice

Data, Inference, and Decisions

Honors Thesis Seminar



Connector Courses: Instructors may be hired to teach Connector Courses that connect Foundations of Data Science with other disciplines, such as neuroscience, legal studies, public health, demography, English or others. Connector courses allow students to apply theoretical concepts from data science to a particular area of interest. Course design and syllabus will leverage the sequence of computational and statistical techniques that students learn in the Foundations course.



Teaching a Data Science course may include holding office hours, assigning grades, advising students, preparing course materials (e.g., syllabus), providing clear and prompt feedback on student work, and maintaining the course website.



Please note: The use of a lecturer pool does not guarantee that an open position exists. See the review date specified in AP Recruit to learn whether the unit is currently reviewing applications for a specific position. If there is no future review date specified, your application may not be considered at this time.



Department: DSUS

Division:



Qualifications

Basic qualifications (required at time of application)

Must have an advanced degree or be enrolled in an advanced degree program at the time of application.



Additional qualifications (required at time of start)

Advanced degree. Candidates must already be authorized to work in the United States.



Preferred qualifications

A Ph.D. or equivalent international degree in computer science, statistics, information, applied mathematics, engineering, or the social sciences is preferred.



Ability to support the success of all students through inclusive curriculum, classroom environment, and pedagogy.



Application Requirements

Document requirements

  • Curriculum Vitae - Your most recently updated C.V.


  • Cover Letter


  • Statement of Teaching - Please discuss prior teaching experience, teaching approach, and future teaching interests. This can include, for example, specific efforts, accomplishments, and future plans to support the success of all students through inclusive curriculum, classroom environment, and pedagogy.




Reference requirements
  • 3-4 required (contact information only)


Apply link:
JPF04958

Help contact:



About UC Berkeley

UC Berkeley is committed to diversity, equity, inclusion, and belonging in our public mission of research, teaching, and service, consistent with UC Regents Policy 4400 and University of California Academic Personnel policy (APM 210 1-d). These values are embedded in our Principles of Community, which reflect our passion for critical inquiry, debate, discovery and innovation, and our deep commitment to contributing to a better world. Every member of the UC Berkeley community has a role in sustaining a safe, caring and humane environment in which these values can thrive.



The University of California, Berkeley is an Equal Opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, age, or protected veteran status.



For more information, please refer to the University of California's Affirmative Action and Nondiscrimination in Employment Policy and the University of California's Anti-Discrimination Policy.



In searches when letters of reference are required all letters will be treated as confidential per University of California policy and California state law. Please refer potential referees, including when letters are provided via a third party (i.e., dossier service or career center), to the UC Berkeley statement of confidentiality prior to submitting their letter.



As a University employee, you will be required to comply with all applicable University policies and/or collective bargaining agreements, as may be amended from time to time. Federal, state, or local government directives may impose additional requirements.


Unless stated otherwise, unambiguously, in the position description, this position does not include sponsorship of a new consular H-1B visa petition that would require payment of the $100,000 supplemental fee.



As a condition of employment, the finalist will be required to disclose if they are subject to any final administrative or judicial decisions within the last seven years determining that they committed any misconduct.




  • "Misconduct" means any violation of the policies or laws governing conduct at the applicant's previous place of employment, including, but not limited to, violations of policies or laws prohibiting sexual harassment, sexual assault, or other forms of harassment or discrimination, as defined by the employer.
  • UC Sexual Violence and Sexual Harassment Policy
  • UC Anti-Discrimination Policy
  • APM - 035: Affirmative Action and Nondiscrimination in Employment


Job location
Berkeley, CA