Data Solutions Analyst Jobs in the USA

10,535 positions found — Page 3

Data Integration Co-op
✦ New
Salary not disclosed
Quincy, MA 9 hours ago

Get an insider view of the fast-changing grocery retail industry while developing relevant business, technical and leadership skills geared towards enhancing your career. This paid Co-op experience is an opportunity to help drive business results in an environment designed to promote and reward diversity, innovation and leadership. Applicants must be currently enrolled in a bachelor's or master's degree program.


**Applicants must be currently authorized to work in the United States on a full-time basis and be available from July 13, 2026, through December 4, 2026. We have a hybrid work environment that requires a minimum of three days a week in the office. Please submit your resume, including your cumulative GPA. Transcripts may be requested at a future date.**


  • Approximate 6-month Co-op session with competitive pay
  • Impactful project work to develop your skills/knowledge
  • Leadership speaker sessions and development activities
  • One-on-one mentoring
  • Involvement in group community service events
  • Networking and professional engagement opportunities
  • Access to online career development tools and resources
  • Opportunity to present project work to company leaders

Duties & Responsibilities

The Data Integration team sits between ADUSA's Analytics business and IT teams. We bridge the gap by analyzing business requirements, creating tech intake forms, and facilitating deliverables within the IT team. The team also acts as Product Managers for Agile squads made up of developers and QA team members. As a Co-op, you will sharpen your SQL skills by working with data solutions that span grocery banners, implementing and optimizing views and creating custom tables, data connections, and functions. You will learn how to evaluate business processes, anticipate requirements, uncover areas for improvement, and develop and implement solutions. You will gain exposure to business thinking by working closely with other analysts, managers, and executives across our banners, and you will partner with stakeholders throughout the organization to identify opportunities for leveraging company data to drive business solutions, from reporting to advanced data modeling. You will also communicate your insights and plans to cross-functional team members and management, conducting meetings and presentations to share ideas and findings.
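The view and custom-table work described above can be sketched in a few lines. This is a hypothetical illustration using Python's built-in sqlite3 module; the table, column, and banner names are invented for the example, not taken from the posting.

```python
import sqlite3

# Illustrative sketch: create a custom table and a reusable view that
# rolls up data across grocery banners. All names here are hypothetical.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("""
    CREATE TABLE sales (
        banner TEXT,        -- grocery banner name
        store_id INTEGER,
        week TEXT,
        revenue REAL
    )
""")
cur.executemany(
    "INSERT INTO sales VALUES (?, ?, ?, ?)",
    [
        ("BannerA", 1, "2026-W01", 1200.0),
        ("BannerA", 2, "2026-W01", 800.0),
        ("BannerB", 9, "2026-W01", 1500.0),
    ],
)

# A view packages the aggregation once, so analysts query a single
# object instead of repeating the GROUP BY in every report.
cur.execute("""
    CREATE VIEW banner_weekly_revenue AS
    SELECT banner, week, SUM(revenue) AS total_revenue
    FROM sales
    GROUP BY banner, week
""")

rows = cur.execute(
    "SELECT banner, total_revenue FROM banner_weekly_revenue ORDER BY banner"
).fetchall()
print(rows)  # [('BannerA', 2000.0), ('BannerB', 1500.0)]
```

In practice the same pattern applies in whatever warehouse the team uses; the view is the reusable "data solution" that downstream reporting builds on.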

Qualifications

  • Must be enrolled in a BA/BS, MS or PhD program or recent graduate in a related field
  • Team player with great interpersonal and communication skills
  • Good time-management skills
  • Documentation skills
  • Sense of ownership and pride in your performance and its impact on the company's success
  • Exposure to cloud technologies like Microsoft Azure Data Lake and Databricks concepts like Unity Catalog
  • Experience with Python & SQL
  • Good understanding of the Atlassian suite, specifically JIRA and Confluence


Skills:

  • Python
  • SQL
  • Atlassian Suite
  • JIRA
  • Confluence



Individual cohort pay rates vary based on location, academic year, and position.


ME/NC/PA/SC Salary Range: $20.90 - $35.70


IL/MA/MD Salary Range: $22.80 - $37.30

Lead Data Analyst
Salary not disclosed
Denver, CO 2 days ago

Role - Lead Data Analyst

Location: Denver, CO (local candidates only); in-person client interview


Job Summary

This role is responsible for extracting meaningful information and providing the business with actionable recommendations to drive outcomes. Responsible for leveraging existing data sources and creating new analysis methods.


Major Duties And Responsibilities

  • Actively and consistently supports all efforts to simplify and enhance the customer experience.
  • Lead client teams to define clear business requirements for data analysis projects.
  • Provide metrics definition, data visualizations, and ETL requirements.
  • Extract, clean and engineer data to be ready for analysis.
  • Interpret data, formulate hypotheses and develop an analytical approach to meet business requirements
  • Create customer-readable reports using advanced visualization tools such as Tableau, PowerBI, Excel, etc.
  • Work to obtain and ingest new reference data sources required to deliver on business need.
  • Communicate results and make recommendations using data visualization and presentations.
  • Create analyses and dashboards that are usable, elegant and industry leading.


Required Qualifications

  • Ability to read, write, speak and understand English
  • Demonstrated in-depth ability to analyze, interpret and present data
  • Demonstrated in-depth ability to make decisions and solve problems while working under pressure
  • Demonstrated in-depth ability to prioritize and organize effectively
  • Demonstrated mastery of advanced analytics processes and reporting design principles
  • Demonstrated mastery in SQL, Python, or R
  • Demonstrated in-depth proficiency of design and implementation practices within data visualization tools
  • Effective communication skills, verbal and written, for internal and external customers
  • Ability to communicate complex technical concepts to all levels of an organization to aid in decision-making


Required Education

  • Bachelor's degree in Computer Science, Engineering or related field; or equivalent experience


Required Related Work Experience and Number of Years

  • 7+ years’ experience working within a data platform/data analysis environment
  • 7+ years’ experience in a customer facing products/services environment
  • Technical lead experience, with exposure to multiple industries and deeper expertise than a Data Insight Analyst
Senior Data Scientist
✦ New
🏢 REVOLVE
Salary not disclosed
Cerritos, CA 1 day ago

Meet REVOLVE:

REVOLVE is the next-generation fashion retailer for Millennial and Generation Z consumers. As a trusted, premium lifestyle brand and a go-to online source for discovery and inspiration, we deliver an engaging customer experience from a vast yet curated offering totaling over 45,000 apparel, footwear, accessories, and beauty styles. Our dynamic platform connects a deeply engaged community of millions of consumers, thousands of global fashion influencers, and more than 500 emerging, established, and owned brands. Through 16 years of continued investment in technology, data analytics, and innovative marketing and merchandising strategies, we have built a powerful platform and brand that we believe is connecting with the next generation of consumers and redefining fashion retail for the 21st century. For more information, please visit REVOLVE's website.

Our most successful team members have the drive and creativity to make this the top e-commerce brand in the world. With a team of 1,000+ based out of Cerritos, California, we are a dynamic bunch motivated by getting the company to the next level. It's our goal to hire high-energy, diverse, bright, creative, and flexible individuals who thrive in a fast-paced work environment. In return, we promise to keep REVOLVE a company where inspired people will always thrive.

To take a behind-the-scenes look at the REVOLVE "corporate" lifestyle, check out our Instagram @REVOLVEcareers or #lifeatrevolve.

Are you ready to set the standard for Premium apparel?

Main purpose of the Senior Data Science Analyst role:

Use a diverse skill set spanning math and computer science to solve complex and analytically challenging problems at Revolve.


Major Responsibilities:

Essential Duties and Responsibilities include the following. Other duties may be assigned.

  • Partner closely with business leaders in the Marketing, Product, Operations, and Buying teams to plan out valuable data science projects
  • Conduct complex analysis and build models to uncover key learnings from data, leading to appropriate strategy recommendations
  • Work closely with the DBA to improve BI's infrastructure, architect the reporting system, and invest time in technical proofs of concept
  • Work closely with the business intelligence and tech teams to define, automate, and validate the extraction of new metrics from various data sources for use in future analysis
  • Work alongside business stakeholders to apply our findings and models to website personalization, product recommendations, marketing optimization, fraud detection, demand forecasting, and CLV prediction


Required Competencies:

To perform the job successfully, an individual should demonstrate the following competencies:

  • Outstanding analytical skills, with strong academic background in statistics, math, science or technology.
  • High comfort level with programming, ability to learn and adopt new technology with short turn-around time.
  • Knowledge of quantitative methods in statistics and machine learning
  • Intense intellectual curiosity – strong desire to always be learning
  • Proven business acumen and results orientation
  • Ability to demonstrate logical thinking and problem-solving skills
  • Strong attention to detail


Minimum Qualifications:

  • Master's degree required
  • 3+ years of DS and ML experience in a strong analytical environment.
  • Proficient in Python, NumPy and other packages
  • Familiar with statistical and ML methodology: causal inference, logistic regression, tree-based models, clustering, model validation and interpretations.
  • Experience with A/B testing and pseudo-A/B test setup and evaluation
  • Advanced SQL experience, query optimization, data extract
  • Ability to build, validate, and productionize models


Preferred Qualifications:

  • Strong business acumen
  • Experience in deploying end to end Machine Learning models
  • 5+ years of DS and ML experience preferred
  • Advanced SQL and Python, with query and coding optimization experience
  • Experience with E-commerce marketing and product analytics is a plus


A successful candidate works well in a dynamic environment with minimal supervision. At REVOLVE we all roll up our sleeves to pitch-in and do whatever it takes to get the job done. Each day is a little different, it’s what keeps us on our toes and excited to come to work every day.


A reasonable estimate of the current base salary range is $120,000 to $150,000 per year.

Data Engineering Manager
Salary not disclosed
Green Bay, WI 3 days ago
At Nicolet National Bank, our culture is based on the principles of community banking, putting the needs of our customers at the forefront of our decision-making. Our Core Values drive everything we do, and we are committed to serving our customers with excellence. We believe that every job in our organization is critical to our success, and we are dedicated to creating a work environment where our employees feel valued, respected, and supported. With locations in Wisconsin, Michigan, Minnesota, Iowa, Colorado, and Florida, we are proud to serve our local communities and make a positive impact on the lives of our customers. At Nicolet National Bank, we believe that our people are our most valuable asset, and we are committed to investing in their growth and development.

The Data Engineering Manager is responsible for leading and developing a team of Data Architects and Data Solutions Engineers while actively contributing to hands-on technical projects. This role will manage the data warehouse in Snowflake, engineering automations in Alteryx and/or other solutions, while ensuring efficient project intake and prioritization. The ideal candidate combines strong technical expertise with proven technical leadership skills to drive innovation and operational excellence across the data engineering function.

As a Data Engineering Manager, you will:


  • Set the technical strategy for data engineering solutions and data architecture which includes end to end data pipeline strategy, consumption management, project scoping, and data automation.
  • Design, develop, and optimize data engineering solutions using Snowflake, DBT, Azure Data Factory, and Alteryx.
  • Continuously assess and optimize the data engineering technology stack to ensure scalability, performance, and alignment with industry best practices.
  • Implement best practices for data modeling, ETL/ELT processes, and automation.
  • Own and maintain the Snowflake data warehouse roadmap and engineering standards.
  • Lead data project scoping, prioritization, and resource allocation to ensure timely delivery of data engineering solutions.
  • Ensure data integrity, security, and compliance across all engineering solutions.
  • Collaborate with IT and the rest of the data teams to align solutions with enterprise architecture.
  • Establish documentation and governance standards for data engineering workflows ensuring completeness, audit readiness, and traceability in alignment with enterprise architecture.
  • Directly supervise the Data Architecture & Data Engineering team in accordance with Nicolet's policies and applicable laws. Responsibilities include interviewing, hiring, and training employees; planning, assigning, and directing work; appraising performance; coaching, mentoring and development planning; rewarding and disciplining employees; addressing complaints and resolving problems.


Qualifications:


  • Bachelor's degree in Computer Science, Data Engineering, Data Analytics or related field.
  • 7+ years in data engineering or related data roles required.
  • 3+ years in leadership or management positions required.
  • Strong technical expertise in Snowflake, DBT, Azure Data Factory and SQL or like systems.
  • Familiarity with Alteryx, UiPath, Tableau, Power BI and Salesforce is preferred.
  • Ability to design and implement scalable data solutions.
  • Excellent leadership, communication, and organizational skills
  • Ability to balance hands-on development with team development.
  • Must be able to work fully in-office. This position does not allow for remote work.


Benefits:


  • Medical, Dental, Vision, & Life Insurance
  • 401(k) with a company match
  • PTO & 11 1/2 Paid Holidays


The above statements are intended to describe the general nature and level of work being performed. They are not intended to be construed as an exhaustive list of all responsibilities and skills required for the position.

Equal Opportunity Employer/Veterans/Disabled
Data Steward
Salary not disclosed
Creve Coeur, MO 2 days ago

Job Summary:

Our client is seeking a Data Steward to join their team! This position is hybrid in Creve Coeur, Missouri.

Duties:

  • Understand business capability needs and processes as they relate to IT solutions through partnering with Product Managers and business and functional IT stakeholders
  • Participate in data scraping, data curation and data compilation efforts
  • Ensure high quality of the data to end users
  • Ensure high quality of the inhouse data via data stewardship
  • Implement and utilize data solutions for data analysis and profiling using a variety of tools such as SQL, Postman, R, or Python and following the team’s established processes and methodologies
  • Collaborate with other data stewards and engineers within the team and across teams on aligning delivery dates and integration efforts
  • Define data quality rules and implement automated monitoring, reporting, and remediation solutions
  • Coordinate intake and resolution of data support tickets
  • Support data migration from legacy systems, data inserts and updates not supported by applications
  • Partner with the Data Governance organization to ensure data is secured and access is being managed appropriately
  • Identify gaps within existing processes and create new documentation templates to improve existing processes and procedures
  • Create mapping documents and templates to improve existing manual processes
  • Perform data discoveries to understand data formats, source systems, etc. and engage with business partners in this discovery process
  • Help answer questions from the end-users and coordinate with technical resources as needed
  • Build prototype SQL and continuously engage with end consumers with enhancements
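One duty above is defining data quality rules with automated monitoring and reporting. A minimal sketch of that pattern, with rule names, field names, and records all invented for illustration:

```python
# Hypothetical rule-based data quality check. Each rule is a named
# predicate; the report maps rule names to the indexes of failing rows,
# which an automated monitor could log, alert on, or ticket.

def not_null(field):
    return lambda row: row.get(field) is not None

def in_range(field, lo, hi):
    return lambda row: row.get(field) is not None and lo <= row[field] <= hi

RULES = {
    "customer_id is populated": not_null("customer_id"),
    "order_total between 0 and 100000": in_range("order_total", 0, 100_000),
}

def run_quality_checks(rows):
    """Return {rule_name: [indexes of failing rows]} for reporting."""
    failures = {name: [] for name in RULES}
    for i, row in enumerate(rows):
        for name, rule in RULES.items():
            if not rule(row):
                failures[name].append(i)
    return failures

records = [
    {"customer_id": 101, "order_total": 250.0},
    {"customer_id": None, "order_total": 80.0},      # fails the null check
    {"customer_id": 102, "order_total": 150_000.0},  # fails the range check
]
report = run_quality_checks(records)
print(report)
```

In a real stewardship workflow the same rules would typically run as SQL or profiling-tool checks against the warehouse, feeding a remediation queue rather than a print statement.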


Desired Skills/Experience:

  • Bachelor's Degree in Computer Science, Engineering, Science, or other related field
  • Applied experience with modern engineering technologies and data principles, for instance: big data cloud compute, NoSQL, etc.
  • Applied experience querying SQL and/or NoSQL databases
  • Experience in designing data catalogs, including data design, metadata structures, object relations, catalog population, etc.
  • Data Warehousing experience
  • Strong written and verbal communication skills
  • Comfortable balancing demands across multiple projects / initiatives
  • Ability to identify gaps in requirements based on business subject matter domain expertise
  • Ability to deliver detailed technical documentation
  • Expert level experience in relevant business domain
  • Experience managing data within SAP
  • Experience managing data using APIs
  • BigQuery experience

Benefits:

  • Medical, Dental, & Vision Insurance Plans
  • Employee-Owned Profit Sharing (ESOP)
  • 401K offered


The approximate pay range for this position starts at $104,000 - $115,000+. Please note that the pay range provided is a good-faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.

At KellyMitchell, our culture is world class. We’re movers and shakers! We don’t mind a bit of friendly competition, and we reward hard work with unlimited potential for growth. This is an exciting opportunity to join a company known for innovative solutions and unsurpassed customer service. We're passionate about helping companies solve their biggest IT staffing & project solutions challenges. As an employee-owned, women-led organization serving Fortune 500 companies nationwide, we deliver expert service at a moment's notice.

By applying for this job, you agree to receive calls, AI-generated calls, text messages, or emails from KellyMitchell and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy at

Senior Data Analytics Engineer (Customer Data)
🏢 KellyMitchell Group
Salary not disclosed
Irving, TX 2 days ago

Job Summary:

Our client is seeking a Senior Data Analytics Engineer (Customer Data) to join their team! This position is located in Irving, Texas.

Duties:

  • Support cross-functional teams including Marketing, Data Science, Product, and Digital
  • Build datasets that power customer segmentation, personalization workflows, campaign and lifecycle analytics, BI dashboards and KPIs, and real-time and ML-driven customer experiences
  • Build, optimize, and maintain customer data pipelines using PySpark/Databricks
  • Transform raw customer data into analytics‑ready datasets for reporting, segmentation, personalization, and AI/ML applications
  • Develop customer behavior metrics, campaign insights, and lifecycle reporting layers
  • Design datasets used by Power BI/Tableau; dashboard creation is a plus, not required
  • Optimize Databricks performance, addressing issues such as skewed joins, partitioning, sorting, and caching/persist strategy
  • Work across AWS/Azure/GCP and integrate pipelines with CDPs
  • Participate in ingestion and digestion phases to shape MarTech and BI analytical layers
  • Document and uphold data engineering standards, governance, and best practices across teams
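The skewed-join optimization mentioned above is often handled with key salting. In Databricks this would be done on PySpark DataFrames; the following is a dependency-free Python sketch of the idea only, with all table and column names invented:

```python
import random

# Key salting: split one "hot" join key on the large side into several
# sub-keys, and replicate the small side once per sub-key, so the work
# for the hot key spreads across partitions instead of one executor.
SALTS = 4  # hypothetical number of sub-keys per hot key

def salt_fact_row(row):
    """Append a random salt to the join key on the large, skewed side."""
    return {**row, "join_key": (row["customer_id"], random.randrange(SALTS))}

def explode_dim_row(row):
    """Replicate the small side once per salt so every salted key matches."""
    return [{**row, "join_key": (row["customer_id"], s)} for s in range(SALTS)]

facts = [{"customer_id": 1, "amount": a} for a in (10, 20, 30, 40)]  # hot key
dims = [{"customer_id": 1, "segment": "loyal"}]

salted_facts = [salt_fact_row(r) for r in facts]
exploded_dims = [d for r in dims for d in explode_dim_row(r)]

# Join on the salted key; no fact rows are lost because the dimension
# side carries every possible salt value.
dim_index = {d["join_key"]: d for d in exploded_dims}
joined = [
    {**f, "segment": dim_index[f["join_key"]]["segment"]}
    for f in salted_facts if f["join_key"] in dim_index
]
print(len(joined))  # 4 (every fact row still finds its dimension row)
```

In PySpark the same shape appears as adding a salt column with a random literal, exploding the dimension side, and joining on (key, salt); newer Spark versions can also mitigate skew automatically via adaptive query execution.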


Desired Skills/Experience:

  • 6+ years in Data Engineering or Analytics Engineering
  • Strong hands-on experience with: Databricks, PySpark, Python and SQL
  • Proven experience with customer/marketing data: segmentation, personalization, campaign analytics, retention, behavioral metrics
  • Ability to design performance‑optimized pipelines; batch or near real-time
  • Experience building datasets consumed by Power BI/Tableau
  • Understanding of CDP workflows, customer identity data, traits/feature modeling, and activation
  • Strong communication skills, translating marketing needs into technical data solutions
  • Power BI expertise is a major plus
  • Experience with Delta Lake, orchestration, or feature engineering for ML
  • Background as an Analytics Engineer, BI/Data Modeling Engineer, or Data Engineer with strong analytics orientation


Benefits:

  • Medical, Dental, & Vision Insurance Plans
  • Employee-Owned Profit Sharing (ESOP)
  • 401K offered


The approximate pay range for this position starts at $140,000. Please note that the pay range provided is a good-faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.



Sr. Data Engineer, tvScientific
Salary not disclosed
San Francisco, CA 3 days ago

About Pinterest:


Millions of people around the world come to our platform to find creative ideas, dream about new possibilities and plan for memories that will last a lifetime. At Pinterest, we're on a mission to bring everyone the inspiration to create a life they love, and that starts with the people behind the product.


Discover a career where you ignite innovation for millions, transform passion into growth opportunities, celebrate each other's unique experiences, and embrace the flexibility to do your best work. Creating a career you love? It's possible.


At Pinterest, AI isn't just a feature, it's a powerful partner that augments our creativity and amplifies our impact, and we're looking for candidates who are excited to be a part of that. To get a complete picture of your experience and abilities, we'll explore your foundational skills and how you collaborate with AI.


Through our interview process, what matters most is that you can always explain your approach, showing us not just what you know, but how you think. You can read more about our AI interview philosophy and how we use AI in our recruiting process here.

About tvScientific


tvScientific is the first and only CTV advertising platform purpose-built for performance marketers. We leverage massive data and cutting-edge science to automate and optimize TV advertising to drive business outcomes. Our solution combines media buying, optimization, measurement, and attribution in one, efficient platform. Our platform is built by industry leaders with a long history in programmatic advertising, digital media, and ad verification who have now purpose-built a CTV performance platform advertisers can trust to grow their business.



As a Senior Data Engineer at tvScientific, you will be a key player in implementing robust data infrastructure to power our data-heavy company. You will collaborate with our cross-functional teams to evolve our core data pipelines, design for efficiency as we scale, and store data in optimal engines and formats. This is an individual contributor role, where you will work to define and implement a strategic vision for data engineering within the organization.



What you'll do:



  • Implement robust data infrastructure in AWS, using Spark with Scala
  • Evolve our core data pipelines to efficiently scale for our massive growth
  • Store data in optimal engines and formats
  • Collaborate with our cross-functional teams to design data solutions that meet business needs
  • Build out fault-tolerant batch and streaming pipelines
  • Leverage and optimize AWS resources while designing for scale
  • Collaborate closely with our Data Science and Product teams
  • How we'll define success:

    • Successful implementation of scalable and efficient data infrastructure
    • Timely delivery and optimization of data assets and APIs
    • High attention to detail in implementation of automated data quality checks
    • Effective collaboration with cross-functional teams




What we're looking for:



  • Production data engineering experience
  • Proficiency in Spark and Scala, with proven experience building data infrastructure in Spark using Scala
  • Familiarity with data lakes, cloud warehouses, and storage formats
  • Strong proficiency in AWS services
  • Expertise in SQL for data manipulation and extraction
  • Excellent written and verbal communication skills
  • Bachelor's degree in Computer Science or a related field
  • Nice-to-Haves

    • Experience in adtech
    • Experience implementing data governance practices, including data quality, metadata management, and access controls
    • Strong understanding of privacy-by-design principles and handling of sensitive or regulated data
    • Familiarity with data table formats like Apache Iceberg, Delta




In-Office Requirement Statement:



  • We recognize that the ideal environment for work is situational and may differ across departments. What this looks like day-to-day can vary based on the needs of each organization or role.


Relocation Statement:



  • This position is not eligible for relocation assistance. Visit our PinFlex page to learn more about our working model.



At Pinterest we believe the workplace should be equitable, inclusive, and inspiring for every employee. In an effort to provide greater transparency, we are sharing the base salary range for this position. The position is also eligible for equity. Final salary is based on a number of factors including location, travel, relevant prior experience, or particular skills and expertise.


Information regarding the culture at Pinterest and benefits available for this position can be found here.

US-based applicants only: $123,696 - $254,667 USD

Our Commitment to Inclusion:


Pinterest is an equal opportunity employer and makes employment decisions on the basis of merit. We want to have the best qualified people in every job. All qualified applicants will receive consideration for employment without regard to race, color, ancestry, national origin, religion or religious creed, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, age, marital status, status as a protected veteran, physical or mental disability, medical condition, genetic information or characteristics (or those of a family member) or any other consideration made unlawful by applicable federal, state or local laws. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. If you require a medical or religious accommodation during the job application process, please complete this form for support.

Data Engineer
Salary not disclosed
Newark, NJ 2 days ago
Title: Data Engineer

Location: Newark, NJ (Hybrid)

Duration: 6 months


Job Description


  • Build and maintain data pipelines that collect, store, and transform data to support analytics use cases and business outcomes.
  • Implement data ingestion and transformation workflows in Microsoft Fabric, using Fabric-native capabilities such as notebooks, pipelines, and lakehouse patterns.
  • Develop and operationalize data solutions across lakehouse layers (e.g., landing and standardized "Bronze" data through curated "Silver/Gold" outputs) aligned to the platform's workspace architecture and OneLake design.
  • Ensure data solutions are reliable and supportable by incorporating monitoring, issue resolution, and ongoing enhancements to pipelines and datasets.
  • Collaborate across teams (engineering, analytics, product, and stakeholders) to translate data needs into scalable, reusable solutions and improved workflow efficiency.
  • Support secure and appropriate use of Fabric assets by following established access and workspace practices.
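The Bronze/Silver/Gold lakehouse layering described above can be summarized in a small, dependency-free sketch. In Microsoft Fabric these layers would be lakehouse tables written from notebooks or pipelines; the field names and records here are purely illustrative:

```python
# Bronze: raw data landed as-is, including duplicates and bad records.
bronze = [
    {"id": "1", "amount": "100.5", "region": "NE "},
    {"id": "1", "amount": "100.5", "region": "NE "},   # duplicate row
    {"id": "2", "amount": "bad",   "region": "SW"},    # unparseable amount
    {"id": "3", "amount": "42.0",  "region": "sw"},
]

def to_silver(rows):
    """Silver: standardize types, normalize fields, drop duplicates."""
    seen, silver = set(), []
    for row in rows:
        try:
            rec = {
                "id": int(row["id"]),
                "amount": float(row["amount"]),
                "region": row["region"].strip().upper(),
            }
        except ValueError:
            continue  # bad record; a real pipeline might quarantine it
        if rec["id"] not in seen:
            seen.add(rec["id"])
            silver.append(rec)
    return silver

def to_gold(rows):
    """Gold: curated business-level aggregate (total amount per region)."""
    totals = {}
    for rec in rows:
        totals[rec["region"]] = totals.get(rec["region"], 0.0) + rec["amount"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'NE': 100.5, 'SW': 42.0}
```

The point of the layering is that each stage has one job: Bronze preserves the raw feed for auditability, Silver makes it trustworthy, and Gold shapes it for consumption, which matches the monitoring and supportability duties listed above.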
Staff Data Engineer, tvScientific
🏢 Pinterest
Salary not disclosed
San Francisco, CA 2 days ago



As a Staff Data Engineer at tvScientific, you will be a key player in implementing robust data infrastructure to power our data-heavy company. You will collaborate with our cross-functional teams to evolve our core data pipelines, design for efficiency as we scale, and store data in optimal engines and formats. This is an individual contributor role, where you will work to define and implement a strategic vision for data engineering within the organization.



What you'll do:



  • Design and implement robust data infrastructure in AWS, using Spark with Scala
  • Evolve our core data pipelines to efficiently scale for our massive growth
  • Store data in optimal engines and formats, matching your designs to our performance needs and cost factors
  • Collaborate with our cross-functional teams to design data solutions that meet business needs
  • Design and implement knowledge graphs, exposing their functionality both via Batch Processing and APIs
  • Leverage and optimize AWS resources while designing for scale
  • Collaborate closely with our Data Science and Product teams
How we'll define success:

  • Successful design and implementation of scalable and efficient data infrastructure
  • Timely delivery and optimization of data assets and APIs
  • High attention to detail in the implementation of automated data quality checks
  • Effective collaboration with cross-functional teams




What we're looking for:



  • Production data engineering experience
  • Proficiency in Spark and Scala, with proven experience building data infrastructure in Spark using Scala
  • Experience delivering significant technical initiatives and building reliable, large-scale services
  • Experience in delivering APIs backed by relationship-heavy datasets
  • Familiarity with data lakes, cloud warehouses, and storage formats
  • Strong proficiency in AWS services
  • Expertise in SQL for data manipulation and extraction
  • Excellent written and verbal communication skills
  • Bachelor's degree in Computer Science or a related field
Nice-to-haves:

  • Experience in adtech
  • Experience implementing data governance practices, including data quality, metadata management, and access controls
  • Strong understanding of privacy-by-design principles and handling of sensitive or regulated data
  • Familiarity with data table formats such as Apache Iceberg and Delta Lake
  • Previous experience building out a Data Engineering function
  • Proven experience working closely with Data Science teams on machine learning pipelines




In-Office Requirement Statement:



  • We recognize that the ideal environment for work is situational and may differ across departments. What this looks like day-to-day can vary based on the needs of each organization or role.


Relocation Statement:



  • This position is not eligible for relocation assistance. Visit our PinFlex page to learn more about our working model.



At Pinterest we believe the workplace should be equitable, inclusive, and inspiring for every employee. In an effort to provide greater transparency, we are sharing the base salary range for this position. The position is also eligible for equity. Final salary is based on a number of factors including location, travel, relevant prior experience, or particular skills and expertise.


Information regarding the culture at Pinterest and benefits available for this position can be found here.

US-based applicants only: $155,584 - $320,320 USD

Our Commitment to Inclusion:


Pinterest is an equal opportunity employer and makes employment decisions on the basis of merit. We want to have the best qualified people in every job. All qualified applicants will receive consideration for employment without regard to race, color, ancestry, national origin, religion or religious creed, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, age, marital status, status as a protected veteran, physical or mental disability, medical condition, genetic information or characteristics (or those of a family member) or any other consideration made unlawful by applicable federal, state or local laws. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. If you require a medical or religious accommodation during the job application process, please complete this form for support.

Data Architect - Consumer Platform
Salary not disclosed
Manhattan Beach, CA 2 days ago

The pay range for this role is $150,000 - $200,000/yr USD.


WHO WE ARE:


Headquartered in Southern California, Skechers—the Comfort Technology Company®—has spent over 30 years helping men, women, and kids everywhere look and feel good. Comfort innovation is at the core of everything we do, driving the development of stylish, high-quality products at a great value. From our diverse footwear collections to our expanding range of apparel and accessories, Skechers is a complete lifestyle brand.


ABOUT THE ROLE:


The Skechers Digital Team is seeking a Digital Data Architect reporting to the Director, Digital Architecture, Consumer Domain. This role is responsible for designing and governing Skechers’ Consumer Data 360 ecosystem, enabling identity resolution, high-quality data foundations, personalization, loyalty intelligence, and machine learning capabilities across digital and retail channels.


The ideal candidate will be a strong technical leader with hands-on, full-stack technical knowledge of the enterprise technologies in Skechers’ consumer domain and the ability to work in a fast-paced agile environment. You should understand consumer programs from an architecture and industry perspective, and you should have strong hands-on experience designing solutions on the Salesforce Core Platform (including configuration, integration, and data model best practices).


You will work cross-functionally with Digital Engineering, Data Engineering, Data Science, Loyalty, and Marketing teams to architect scalable, secure, and high-performance data platforms that support advanced personalization and recommender systems.


WHAT YOU’LL DO:


  • Responsible for the full technical life cycle of consumer platform capabilities, which includes:
    • Capability roadmap and technical architecture aligned to the consumer experience
    • Technical planning, design, and execution
    • Operations, analytics/reporting, and adoption
  • Define and evolve Skechers’ Consumer Data 360 architecture, including identity resolution (deterministic and probabilistic matching) and unified customer profiles.
  • Architect scalable data models and pipelines across CDP, CRM, e-commerce, marketing automation, data lake, and warehouse platforms.
  • Establish enterprise data quality frameworks including validation, deduplication, anomaly detection, and observability.
  • Optimize SQL workloads and large-scale distributed queries through performance tuning, partitioning, indexing, and workload management strategies.
  • Design and oversee ML pipelines supporting personalization, churn modeling, and recommender systems.
  • Partner with Data Science teams to productionize models using distributed platforms such as Databricks (Spark, Delta Lake, MLflow preferred).
  • Ensure secure data governance, access control (RBAC/ABAC), and compliance with GDPR, CCPA, and related privacy regulations.
  • Provide architectural oversight ensuring performance, scalability, resilience, and maintainability.
  • Collaborate with stakeholders to translate business objectives (LTV growth, personalization lift, engagement) into scalable data solutions.
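For context, the deterministic and probabilistic matching mentioned under identity resolution can be sketched roughly as follows. This is a minimal, hypothetical illustration only: the field names (`email`, `name`, `zip`), similarity weights, and threshold are invented for the example and are not Skechers’ actual schema or match rules.

```python
# Hypothetical sketch of two identity-resolution styles:
# - deterministic: exact match on a normalized join key (here, email)
# - probabilistic: weighted fuzzy similarity over softer attributes
# All fields, weights, and thresholds are illustrative assumptions.
from difflib import SequenceMatcher


def normalize_email(email: str) -> str:
    return email.strip().lower()


def deterministic_match(a: dict, b: dict) -> bool:
    # Two records sharing the same normalized email resolve to one profile.
    return normalize_email(a["email"]) == normalize_email(b["email"])


def probabilistic_score(a: dict, b: dict) -> float:
    # Weighted fuzzy similarity; the 0.7/0.3 weights are arbitrary here.
    name_sim = SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio()
    zip_sim = 1.0 if a["zip"] == b["zip"] else 0.0
    return 0.7 * name_sim + 0.3 * zip_sim


def same_person(a: dict, b: dict, threshold: float = 0.85) -> bool:
    # Deterministic rules win outright; otherwise fall back to the score.
    return deterministic_match(a, b) or probabilistic_score(a, b) >= threshold


online = {"email": "Jane.Doe@example.com", "name": "Jane Doe", "zip": "90266"}
retail = {"email": "jane.doe@example.com", "name": "JANE DOE", "zip": "90266"}
print(same_person(online, retail))  # deterministic hit on normalized email
```

Production systems typically run deterministic rules first (they are cheap and exact) and reserve probabilistic scoring for records that lack a shared hard key, which is the trade-off the bullet above alludes to.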


REQUIREMENTS:


  • Degree in Computer Science, Data Engineering, or a related field, or equivalent experience.
  • 12+ years of experience architecting enterprise data platforms in cloud environments.
  • 9+ years of data engineering experience with a focus on consumer data.
  • 6+ years of experience working with Salesforce platforms, including data models and enterprise integrations.
  • Strong experience with Data 360 and identity resolution architectures.
  • Proven expertise in SQL performance tuning and large-scale data modeling.
  • Hands-on experience implementing ML pipelines and recommender systems in production environments.
  • Experience with cloud technologies (AWS, GCP, or Azure).
  • Experience with integration patterns (API, ETL, event streaming).
  • Experience providing technical leadership and guidance across multiple projects and development teams.
  • Experience translating business requirements into detailed technical specifications and working with development teams through implementation, including issue resolution and stakeholder communication.
  • Strong project management skills including scope assessment, estimation, and clear technical communication with both business users and technical teams.
  • Must hold at least one of the following Salesforce certifications: Platform App Builder, Platform Developer I, or JavaScript Developer I.
  • Experience with Databricks or similar distributed data/ML platforms preferred.