Data Entry Job Description Sample Jobs in USA


Senior Data Modeler
🏢 Harnham
Salary not disclosed
Phoenix, AZ 3 days ago

Senior Data Modeler

Hybrid 3-4 days onsite

Location: Phoenix, Arizona

Salary: $130,000 - $150,000 base


A large, operationally complex organization is undergoing a major modernization of its data platform and is building a new, cloud-native analytics foundation from the ground up. This is a greenfield opportunity for a senior-level data modeler to establish best practices, influence architecture, and help shape how data is organized and used across the business.

This role sits at the center of a multi-year transformation focused on modern analytics, scalable data products, and strong collaboration between data and business teams.


What You’ll Be Working On

  • Designing and implementing enterprise data models across conceptual, logical, and physical layers
  • Establishing Medallion architecture patterns and reusable modeling assets
  • Building dimensional and semantic models that support analytics and reporting
  • Partnering closely with domain experts and functional leaders to translate business needs into data structures
  • Collaborating with data engineers to align models with ELT pipelines and analytics frameworks
  • Helping define modeling standards and upskilling senior engineers in modern data modeling practices
  • Contributing hands-on to data engineering work where needed (SQL, transformations, optimization)
  • Proactively identifying analytics opportunities and recommending data structures to support them

This role is roughly 40% data modeling, 30% hands-on engineering, and 30% cross-functional collaboration.
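
To make the modeling work concrete: below is a minimal, illustrative PySpark sketch of the bronze/silver/gold (Medallion) layering this role would formalize. The paths, table names, and columns are hypothetical, and a real gold layer would carry far richer dimensional design.

```python
# Minimal Medallion-style layering sketch (hypothetical names throughout).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: land raw data as-is.
bronze = spark.read.json("s3://bucket/raw/orders/")  # hypothetical path

# Silver: deduplicated, typed, and filtered records.
silver = (bronze
          .dropDuplicates(["order_id"])
          .withColumn("order_ts", F.to_timestamp("order_ts"))
          .filter(F.col("customer_id").isNotNull()))

# Gold: a simple dimensional model (one dimension, one fact) for analytics.
dim_customer = (silver.select("customer_id", "customer_name", "region")
                      .dropDuplicates(["customer_id"]))
fct_orders = silver.select("order_id", "customer_id", "order_ts", "amount")

dim_customer.write.mode("overwrite").parquet("s3://bucket/gold/dim_customer/")
fct_orders.write.mode("overwrite").parquet("s3://bucket/gold/fct_orders/")
```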


Must-Have Experience

  • Strong, hands-on experience with data modeling (dimensional, canonical, semantic)
  • Deep understanding of Medallion architecture
  • Advanced SQL and experience working with a modern cloud data warehouse
  • Experience with dbt for transformations and modeling
  • Hands-on experience in cloud-native data environments (AWS preferred)
  • Ability to work directly with business stakeholders and explain technical concepts clearly
  • Experience collaborating closely with data engineers on execution


Nice to Have

  • Python experience
  • Familiarity with Informatica or reverse-engineering legacy data models
  • Exposure to streaming or near-real-time data pipelines
  • Experience with visualization tools (tool choice is flexible)


Who Will Thrive in This Role

  • A senior individual contributor who enjoys building from scratch
  • Someone who can act as a modeling expert and mentor in an organization formalizing this practice
  • Comfortable working in ambiguity and taking initiative
  • Strong communicator who enjoys partnering with both technical and non-technical teams
  • Equally comfortable discussing business concepts and physical data models


Why This Role Is Unique

  • Greenfield data modeling initiative with real influence
  • Opportunity to define standards that will be used across the organization
  • Work on large-scale, real-world operational and analytical data
  • High visibility within a growing data organization
  • Flexible work setup for individual contributors


If you’re excited about shaping a modern data foundation and want to be the person who defines how data is modeled, understood, and used, this is a rare opportunity to make a lasting impact.

Not Specified
Lead Data Engineer
Salary not disclosed
Atlanta, GA 3 days ago

Job Title – Lead Data Engineer

Please note that this role cannot offer visa transfer or sponsorship, now or in the future.


About the role


As a Lead Data Engineer, you will make an impact by designing, building, and operating scalable, cloud‑native data platforms supporting batch and streaming use cases, with strong focus on governance, performance, and reliability. You will be a valued member of the Data Engineering team and work collaboratively with cross‑functional engineering, cloud, and architecture stakeholders.


In this role, you will:

  • Design, build, and operate scalable cloud‑native data platforms supporting batch and streaming workloads with strong governance, performance, and reliability.
  • Develop and operate data systems on AWS, Azure, and GCP, designing cloud‑native, scalable, and cost‑efficient data solutions.
  • Build modern data architectures including data lakes, data lakehouses, and data hubs, with strong understanding of ingestion patterns, data governance, data modeling, observability, and platform best practices.
  • Develop data ingestion and collection pipelines using Kafka and AWS Glue; work with modern storage formats such as Apache Iceberg and Parquet (a minimal streaming sketch follows this list).
  • Design and develop real‑time streaming pipelines using Kafka, Flink, or similar streaming frameworks, with understanding of event‑driven architectures and low‑latency data processing.
  • Perform data transformation and modeling using SQL‑based frameworks and orchestration tools such as dbt, AWS Glue, and Airflow, including Slowly Changing Dimensions (SCD) and schema evolution.
  • Use Apache Spark extensively for large‑scale data transformations across batch and streaming workloads.
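
As a concrete illustration of the streaming side of this work, here is a minimal Spark Structured Streaming sketch that lands Kafka events as Parquet. The broker, topic, and paths are hypothetical, the Kafka source assumes the spark-sql-kafka package is on the classpath, and an Iceberg sink would be a natural substitution.

```python
# Minimal Kafka-to-Parquet streaming sketch (hypothetical broker/topic/paths).
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical
          .option("subscribe", "orders")
          .load()
          .select(col("key").cast("string"),
                  col("value").cast("string"),
                  col("timestamp")))

# Land micro-batches in an open columnar format with checkpointed state.
query = (events.writeStream
         .format("parquet")
         .option("path", "/data/bronze/orders")
         .option("checkpointLocation", "/chk/orders")
         .trigger(processingTime="30 seconds")
         .start())
query.awaitTermination()
```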


Work model

We believe hybrid work is the way forward as we strive to provide flexibility wherever possible. Based on this role’s business requirements, this is a hybrid position requiring 4 days a week in a client or Cognizant office in Atlanta, GA. Regardless of your working arrangement, we are here to support a healthy work-life balance through our various wellbeing programs.


The working arrangements for this role are accurate as of the date of posting. This may change based on the project you’re engaged in, as well as business and client requirements. Rest assured, we will always be clear about role expectations.


What you need to have to be considered

  • Hands‑on experience developing and operating data systems on AWS, Azure, and GCP.
  • Proven ability to design cloud‑native, scalable, and cost‑efficient data solutions.
  • Experience building data lakes, data lakehouses, and data hubs with strong understanding of ingestion patterns, governance, modeling, observability, and platform best practices.
  • Expertise in data ingestion and collection using Kafka and AWS Glue, with experience in Apache Iceberg and Parquet.
  • Strong experience designing and developing real‑time streaming pipelines using Kafka, Flink, or similar streaming frameworks.
  • Deep expertise in data transformation and modeling using SQL‑based frameworks and orchestration tools including dbt, AWS Glue, and Airflow, with knowledge of SCD and schema evolution.
  • Extensive experience using Apache Spark for large‑scale batch and streaming data transformations.


These will help you stand out

  • Experience with event‑driven architectures and low‑latency data processing.
  • Strong understanding of schema evolution, SCD modeling, and modern data modeling concepts.
  • Experience with Apache Iceberg, Parquet, and modern ingestion/storage patterns.
  • Strong knowledge of observability, governance, and platform best practices.
  • Ability to partner effectively with cloud, architecture, and engineering teams.



Salary and Other Compensation:

Applications will be accepted until March 17, 2025.

The annual salary for this position is between $81,000 and $135,000, depending on experience and other qualifications of the successful candidate.

This position is also eligible for Cognizant’s discretionary annual incentive program, based on performance and subject to the terms of Cognizant’s applicable plans.

Benefits: Cognizant offers the following benefits for this position, subject to applicable eligibility requirements:

  • Medical/Dental/Vision/Life Insurance
  • Paid holidays plus Paid Time Off
  • 401(k) plan and contributions
  • Long‑term/Short‑term Disability
  • Paid Parental Leave
  • Employee Stock Purchase Plan


Disclaimer: The salary, other compensation, and benefits information is accurate as of the date of this posting. Cognizant reserves the right to modify this information at any time, subject to applicable law.

Not Specified
Data Manager
Salary not disclosed
Minneapolis, MN 3 days ago

Company/Role Overview:

CliftonLarsonAllen (CLA) Search has been retained by Midwestern Higher Education Compact to identify a Data Manager to serve their team. The Midwestern Higher Education Compact (MHEC) brings together leaders from 12 Midwestern states to strengthen postsecondary education, advance student success, and promote regional economic vitality.


MHEC programs and initiatives save member states and students millions of dollars annually through time- and cost-savings opportunities. MHEC research supports workforce readiness and improves the quality, accessibility, and affordability of postsecondary education. MHEC convenings bring together leaders and subject experts to share knowledge, generate ideas, and develop collaborative solutions.




What You’ll Do:

  • Administer and maintain Microsoft Fabric, OneLake, and Azure environments.
  • Design and deliver sophisticated data solutions that are innovative and sustainable.
  • Ensure data infrastructure is secure, reliable, and scalable.
  • Manage and improve how data is brought into the organization from multiple sources.
  • Maintain accurate, well-structured, consistent, and complete data that ensures high quality and usability for internal staff (a completeness-check sketch follows this list).
  • Develop and oversee standards on how data is collected, stored, and protected across departments.
  • Manage MHEC’s customer relationship management (CRM) system, ensuring data integrity, integration with other platforms, and alignment with organizational needs.
  • Partner with teams across the organization to monitor processes and make recommendations.
  • Partner with research staff to understand data access patterns and develop storage strategies that accelerate research and analytics
  • Develop and maintain Power BI dashboards and reports to deliver clear insights to senior leaders and decision-makers.
  • Ensure staff have access to timely, clear, and meaningful data visualizations.
  • Train staff to use reports and dashboards effectively.
  • Support departments in using data to guide decision-making.
  • Document data pipelines, integrations, and system processes.
  • Recommend tools and practices that help MHEC grow its data capacity.
  • Monitor developments in Microsoft’s data platforms and assess future needs.
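
For a flavor of the data-quality side of the role, here is a small completeness and consistency check in pandas; the CRM columns and the valid-state domain are purely illustrative.

```python
# Illustrative completeness/consistency profile of a CRM extract.
import pandas as pd

df = pd.DataFrame({  # stand-in for a real CRM export
    "contact_id": [1, 2, 2, 3],
    "email": ["a@example.org", None, None, "c@example.org"],
    "state": ["MN", "WI", "WI", "ZZ"],
})

completeness = df.notna().mean().round(3)       # share of non-null values
dup_ids = df["contact_id"].duplicated().sum()   # duplicate-key count
bad_states = ~df["state"].isin(["MN", "WI", "IA"])  # illustrative domain

print(completeness)
print("duplicate contact_id rows:", dup_ids)
print("rows with unexpected state codes:", int(bad_states.sum()))
```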


What You’ll Need:

  • Bachelor's degree or equivalent experience preferred.
  • 5+ years’ experience, preferably with Microsoft data platforms including Power BI, Azure, and/or Fabric.
  • Experience designing and maintaining data systems and dashboards.
  • Experience in higher education or nonprofit sectors preferred.
  • Strong technical understanding of Microsoft Fabric, OneLake, and Azure.
  • Demonstrated proficiency in Python, R, SAS, SQL, or other statistical/data-management software.
  • Experience with data visualization platforms (Tableau, Power BI, or similar).
  • Experience with Microsoft Dynamics and Power Automate is a plus but not required.
  • Ability to plan, optimize, build, and maintain data pipelines and dashboards.
Not Specified
Senior Manager, Data Science (Marketing)
$137,000
Evanston, Illinois 3 days ago
By clicking the "Apply" button, I understand that my employment application process with Takeda will commence and that the information I provide in my application will be processed in line with Takeda's Privacy Notice and Terms of Use. I further attest that all information I submit in my employment application is true to the best of my knowledge.

Job Description

About BioLife Plasma Services

BioLife Plasma Services, a subsidiary of Takeda Pharmaceutical Company Limited, is an industry leader in the collection of high-quality plasma, which is processed into life-saving plasma-based therapies. Some diseases can only be treated with medicines made with plasma. Since plasma can't be made synthetically, many people rely on plasma donors to live healthier, happier lives. BioLife operates 250+ state-of-the-art plasma donation centers across the United States. Our employees are dedicated to enhancing the quality of life for patients and ensuring that the donation process is safe, straightforward, and rewarding for donors who wish to make a positive impact.

When you work at BioLife, you'll feel good knowing that what we do helps improve the lives of patients with rare diseases. While you focus on our donors, we'll support you. We offer a purpose you can believe in, a team you can count on, opportunities for career growth, and a comprehensive benefits program, all in a fast-paced, friendly environment.

This position is currently classified as "hybrid" in accordance with Takeda's Hybrid and Remote Work policy.

BioLife Plasma Services is a subsidiary of Takeda Pharmaceutical Company Ltd.

OBJECTIVES/PURPOSE
The Sr. Manager of Marketing Science drives and executes strategic initiatives that improve our marketing data and analytics capabilities. This role will leverage advanced analytics techniques and data-driven insights to inform marketing strategies, optimize campaigns, and drive business growth. This role requires a deep understanding of paid, owned, and earned media measurement, strong analytics and insights skills, broad knowledge of marketing technologies, and the ability to communicate complex data insights to senior stakeholders. This role is critically important for the success of the Global Forecasting, Pricing, and Analytics (FPA) team and reports to the Head of Analytics within the team.

ACCOUNTABILITIES

Leadership

* Lead marketing science initiatives in the development and execution of advanced analytics to support marketing strategies and goals.
* Provide thought leadership on marketing measurement techniques, including the trade-offs between controlled experiments, natural experiments, and multivariate statistical models for different situations.

Marketing Science

* Partner with our media agency to ensure we are maximizing the output of our media mix model (MMM) partner (a toy regression sketch follows this list).
* Deep understanding and experience with creating and managing marketing attribution solutions, e.g., multi-touch attribution (MTA). Ability to build/maintain in-house solutions and/or work with outside partners as necessary.
* Identify and maintain marketing analytics key performance indicators (KPIs) to track and measure performance.
* Partner with data scientists, IT, and consultants to develop advanced analytical models and dashboards related to marketing.
* Ability to perform statistical analyses and tests to quantify the business value of an opportunity.
* Familiarity with AI/ML applications in marketing.
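
For readers new to MMM-style measurement, the toy sketch below shows the multivariate regression idea at its core: estimating per-channel contributions from spend data. Everything here is synthetic, and production MMMs add adstock, saturation, and seasonality terms.

```python
# Toy media-mix regression on synthetic weekly spend data (numpy only).
import numpy as np

rng = np.random.default_rng(0)
n = 104  # two years of weekly observations

spend = rng.uniform(0, 100, size=(n, 3))      # columns: search, social, tv
X = np.column_stack([np.ones(n), spend])      # add an intercept column

beta_true = np.array([50.0, 0.8, 0.3, 0.5])   # hidden "true" effects
y = X @ beta_true + rng.normal(0, 5, size=n)  # simulated weekly response

# Ordinary least squares recovers baseline and channel contributions.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, b in zip(["baseline", "search", "social", "tv"], beta_hat):
    print(f"{name}: {b:.2f}")
```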

Reporting and Data Management

* Ensure the accurate and timely delivery of marketing performance reports and insights.
* Able to translate data into contextualized insights that can be shared across the business.
* Know digital media terminology and concepts (e.g., Demand Side Platforms (DSPs), effectiveness vs. efficiency, SEO/SEM, etc.).
* Leverage existing experience with Google Analytics and Google Tag Manager.
* Partner with the Data, Digital, and Technology (DD&T) Team to ensure marketing data accuracy, integration, and integrity, and that good data governance practices are in place.
* Develop solutions (dashboards, data visualizations, reports) for real-time operations performance assessment and agile decision-making.
* Design and automate regular data extracts needed by marketing and other partners.

Collaboration and Adaptability

* Build strong relationships with cross-functional partners for efficient alignment, coordination, and information sharing across teams.

DIMENSIONS AND ASPECTS

Technical/Functional Expertise

* Extensive experience across many areas of marketing science: MMM, MTA, loyalty, website, surveys, and paid/owned/earned media.
* Experience with SQL, Python, and R for data analysis and model development.
* Strong analytical skills with a solid foundation in many of the following statistical and AI/ML methods: regression analysis (continuous, categorical, survival, time-series, and count models, etc.); classification (CART, SVM, Neural Networks, etc.); clustering (k-means/medoid, hierarchical, self-organizing maps, etc.) and other AI/ML techniques; experimental design; and forecasting/sensitivity analysis.
* Comfortable working daily in cloud-based data platforms.
* Expert-level MS Excel skills, including advanced functions (e.g., Solver), data analysis, pivot tables, macros, and VBA (Visual Basic for Applications), and applicability of these features for developing and managing financial models for business case development and forecasting.
* Experience working with Power BI, Tableau, or other data visualization software.
* Strong foundation in statistical techniques for quantifying the impact of marketing activities.

Communication

* Excellent verbal and written communication. Proven data analysis background with the ability to transform analysis into insights, recommendations, and proposals for senior management.
* Ability to communicate complex concepts simply and succinctly.

Decision-making and Autonomy

* High self-reliance, self-efficacy, initiative, and learning agility.
* Strong at both structured and unstructured problem solving.

Interaction

* Manage and/or partner on projects with vendors and consultants.

EDUCATION, BEHAVIOURAL COMPETENCIES AND SKILLS:

Required

* Bachelor's and/or master's degree in any area of social science, business, marketing, advertising, or a closely related field.
* Experience with data analytics from end-to-end, i.e., including ideation, proposal creation, getting stakeholder buy-in, gathering requirements, designing analytics models/solutions, building prototypes, and working with IT/Data Science teams to deploy and scale solutions.
* 7+ years of experience in advanced analytics and statistical modeling in the areas of business performance analysis, forecasting, promotion and media effectiveness and optimization, and consumer behavior.
* Excellent verbal and written communication and presentation skills. Able to communicate effectively to all levels of the organization, including senior leadership.
* Bring a growth mindset, curiosity, positivity, intuitive thinking, and a passion for excellence.

Preferred

* Media agency or retail industry analytics experience a plus.
* Experience with survival analysis (time-to-event, duration, event history analysis, etc.) a plus.
* Knowledge of CRM systems and marketing automation tools a plus.

ADDITIONAL INFORMATION

* Domestic travel required (up to 10%).

BioLife Compensation and Benefits Summary

We understand compensation is an important factor as you consider the next step in your career. We are committed to equitable pay for all employees, and we strive to be more transparent with our pay practices.

For Location: Bannockburn, IL

U.S. Base Salary Range: $137,000.00 - $215,270.00

The estimated salary range reflects an anticipated range for this position. The actual base salary offered may depend on a variety of factors, including the qualifications of the individual applicant for the position, years of relevant experience, specific and unique skills, level of education attained, certifications or other professional licenses held, and the location in which the applicant lives and/or from which they will be performing the job. The actual base salary offered will be in accordance with state or local minimum wage requirements for the job location.

U.S. based employees may be eligible for short-term and/or long-term incentives. U.S. based employees may be eligible to participate in medical, dental, vision insurance, a 401(k) plan and company match, short-term and long-term disability coverage, basic life insurance, a tuition reimbursement program, paid volunteer time off, company holidays, and well-being benefits, among others. U.S. based employees are also eligible to receive, per calendar year, up to 80 hours of sick time, and new hires are eligible to accrue up to 120 hours of paid vacation.

EEO Statement
Takeda is proud of its commitment to creating a diverse workforce and providing equal employment opportunities to all employees and applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, gender expression, parental status, national origin, age, disability, citizenship status, genetic information or characteristics, marital status, status as a Vietnam era veteran, special disabled veteran, or other protected veteran in accordance with applicable federal, state and local laws, and any other characteristic protected by law.

Locations

Bannockburn, IL

Worker Type

Employee

Worker Sub-Type

Regular

Time Type

Full time

Job Exempt

Yes
Not Specified
Databricks Architect/ Senior Data Engineer
✦ New
🏢 OZ
Salary not disclosed
Boca Raton, FL 1 day ago

OZ – Databricks Architect/ Senior Data Engineer


Note: Only applications from U.S. citizens or lawful permanent residents (Green Card Holders) will be considered.


We believe work should be innately rewarding and a team-building venture. Working with our teammates and clients should be an enjoyable journey where we can learn, grow as professionals, and achieve amazing results. Our core values revolve around this philosophy. We are relentlessly committed to helping our clients achieve their business goals, leapfrog the competition, and become leaders in their industry. What drives us forward is the culture of creativity combined with a disciplined approach, passion for learning & innovation, and a ‘can-do’ attitude!


What We're Looking For:

We are seeking a highly experienced Databricks professional with deep expertise in data engineering, distributed computing, and cloud-based data platforms. The ideal candidate is both an architect and a hands-on engineer who can design scalable data solutions while actively contributing to development, optimization, and deployment.


This role requires strong technical leadership, a deep understanding of modern data architectures, and the ability to implement best practices in DataOps, performance optimization, and data governance.


Experience with modern AI/GenAI-enabled data platforms and real-time data processing environments is highly desirable.


Position Overview:

The Databricks Senior Data Engineer will play a critical role in designing, implementing, and optimizing enterprise-scale data platforms using the Databricks Lakehouse architecture. This role combines architecture leadership with hands-on engineering, focusing on building scalable, secure, and high-performance data pipelines and platforms. The ideal candidate will establish coding standards, define data architecture frameworks such as the Medallion Architecture, and guide the end-to-end development lifecycle of modern data solutions.


This individual will collaborate with cross-functional stakeholders, including data engineers, BI developers, analysts, and business leaders, to deliver robust data platforms that enable advanced analytics, reporting, and AI-driven decision-making.


Key Responsibilities:

  • Architecture & Design: Architect and design scalable, reliable data platforms and complex ETL/ELT and streaming workflows for the Databricks Lakehouse Platform (Delta Lake, Spark).
  • Hands-On Development: Write, test, and optimize code in Python, PySpark, and SQL for data ingestion, transformation, and processing.
  • DataOps & Automation: Implement CI/CD, monitoring, and automation (e.g., with Azure DevOps, DBX) for data pipelines.
  • Stakeholder Collaboration: Work with BI developers, analysts, and business users to define requirements and deliver data-driven solutions.
  • Performance Optimization: Tune Delta tables, Spark jobs, and SQL queries for maximum efficiency and scalability (a tuning sketch follows this list).
  • GenAI Application Development: Experience developing GenAI applications is a strong plus.
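
The sketch below illustrates the Delta tuning called out above: writing a partitioned Delta table, then compacting and co-locating its files. Table, column, and path names are hypothetical, and OPTIMIZE/ZORDER as shown assumes a Databricks (or recent Delta Lake) runtime.

```python
# Delta Lake write-and-tune sketch (hypothetical names; Databricks assumed).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-tuning-sketch").getOrCreate()

df = spark.read.parquet("/mnt/landing/orders")  # hypothetical source

# Silver-layer Delta table, partitioned by a low-cardinality column.
(df.write.format("delta")
   .mode("overwrite")
   .partitionBy("order_date")
   .saveAsTable("silver.orders"))

# Compact small files and co-locate rows on a common filter column so
# selective queries skip more data.
spark.sql("OPTIMIZE silver.orders ZORDER BY (customer_id)")
```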


Requirements:

  • 8+ years of experience in data engineering, with strong hands-on expertise in Databricks and Apache Spark.
  • Proven experience designing and implementing scalable ETL/ELT pipelines in cloud environments.
  • Strong programming skills in Python and SQL; experience with PySpark required.
  • Hands-on experience with Databricks Lakehouse, Delta Lake, and distributed data processing.
  • Experience working with cloud platforms such as Microsoft Azure, AWS, or GCP (Azure preferred).
  • Experience with CI/CD pipelines, Git, and DevOps practices for data engineering.
  • Strong understanding of data architecture, data modeling, and performance optimization.
  • Experience working with cross-functional teams to deliver enterprise data solutions.
  • Ability to tackle complex data challenges while ensuring data quality and reliable delivery.


Qualifications:

  • Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related field.
  • Experience designing enterprise-scale data platforms and modern data architectures.
  • Experience with data integration tools such as Azure Data Factory or similar platforms.
  • Familiarity with cloud data warehouses such as Databricks, Snowflake, or Microsoft Fabric.
  • Experience supporting analytics, reporting, or AI/ML workloads is highly desirable.
  • Databricks, Azure, or cloud certifications are preferred.
  • Strong problem-solving, communication, and technical leadership skills.


Technical Proficiency in:

  • Databricks, Apache Spark, PySpark, Delta Lake
  • Python, SQL, Scala (preferred)
  • Cloud platforms: Azure (preferred), AWS, or GCP
  • Azure Data Factory, Kafka, and modern data integration tools
  • Data warehousing: Databricks, Snowflake, or Microsoft Fabric
  • DevOps tools: Git, Azure DevOps, CI/CD pipelines
  • Data architecture, ETL/ELT design, and performance optimization


What You’re Looking For:

Join a fast-growing organization that thrives on innovation and collaboration. You’ll work alongside talented, motivated colleagues in a global environment, helping clients solve their most critical business challenges. At OZ, your contributions matter – you’ll have the chance to be a key player in our growth and success. If you’re driven, bold, and eager to push boundaries, we invite you to join a company where you can truly make a difference.


About Us:

OZ is a 28-year-old global technology consulting, services, and solutions leader specializing in creating business-focused solutions for our clients by leveraging disruptive digital technologies and innovation.


OZ is committed to creating a continuum between work and life by allowing people to work remotely. We offer competitive compensation and a comprehensive benefits package. You’ll enjoy our work style within an incredible culture. We’ll give you the tools you need to succeed so you can grow and develop with us and become part of a team that lives by its core values.

Not Specified
Data Engineer
✦ New
Salary not disclosed
Montvale, NJ 1 day ago

Summary

We are seeking a highly skilled Data Engineer to build and manage our data infrastructure. The ideal candidate will be an expert in writing complex SQL queries, designing efficient database schemas, and developing ETL/ELT pipelines. You will ensure data is accurate, accessible, and optimized for performance to support business intelligence, analytics, and reporting needs.


Key Responsibilities

  • Database Design & Management: Design, develop, and maintain relational databases (e.g., SQL Server, PostgreSQL, Oracle) and cloud-based data warehouses.
  • Strategic SQL and Data Engineering: Develop sophisticated, optimized SQL queries, stored procedures, and functions to process and analyze large, complex datasets for actionable business insights.
  • Data Pipeline Automation & Orchestration: Help build, automate, and orchestrate ETL/ELT workflows utilizing SQL, Python, and cloud-native tools to integrate and transform data from diverse, distributed sources.
  • Performance Optimization: Tune queries and optimize database schema (indexing, partitioning, normalization) to improve data retrieval and processing speeds.
  • Data Integrity & Security: Ensure data quality, consistency, and integrity across systems. Implement data masking, encryption, and role-based access control (RBAC).
  • Documentation: Maintain technical documentation for database schemas, data dictionaries, and ETL workflows.


Required Skills and Qualifications

  • Education: Bachelor’s degree in Computer Science, Information Systems, or a related field.
  • SQL Mastery: 5+ years of experience with advanced SQL (window functions, CTEs, query optimization); a short illustration follows this list.
  • Database Expertise: Deep understanding of relational database management systems (RDBMS) and data modeling techniques.
  • Cloud Platforms: Demonstrated experience with Azure Data Services and other data warehouse technologies.
  • Programming: Proficiency in Python for scripting and data manipulation.
  • ETL Tools: Familiarity with tools like SSIS or Azure Data Factory.
  • Soft Skills: Strong analytical thinking, problem-solving, and communication skills.
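
As a self-contained illustration of the advanced SQL named above (a CTE plus a window function), the following runs against Python's built-in sqlite3 module (SQLite 3.25+, as bundled with modern Python):

```python
# CTE + window function demo using the standard library only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL);
INSERT INTO orders VALUES
  (1,'acme',120.0),(2,'acme',80.0),(3,'globex',200.0),(4,'globex',50.0);
""")

query = """
WITH customer_totals AS (        -- CTE: pre-aggregate per customer
  SELECT customer, SUM(amount) AS total
  FROM orders
  GROUP BY customer
)
SELECT customer, total,
       RANK() OVER (ORDER BY total DESC) AS spend_rank  -- window function
FROM customer_totals;
"""
for row in conn.execute(query):
    print(row)  # ('globex', 250.0, 1), ('acme', 200.0, 2)
```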

Nice to Have

  • Experience with NoSQL databases (Cosmos DB, MongoDB).
  • Experience with big data frameworks (Apache Spark, Kafka).
  • Relevant certifications (e.g., Microsoft Certified: Azure Data Engineer Associate, Google Professional Data Engineer).

Typical Work Environment

  • Tools Used: SQL IDEs (DBeaver, SSMS), Cloud Consoles, Git, Jira, SSIS.
  • Industry: Leasing.


Salary is $130k-$140k.

Not Specified
Data Steward
✦ New
Salary not disclosed
Creve Coeur, MO 1 day ago

Job Summary:

Our client is seeking a Data Steward to join their team! This is a hybrid position located in Creve Coeur, Missouri.

Duties:

  • Understand business capability needs and processes as they relate to IT solutions through partnering with Product Managers and business and functional IT stakeholders
  • Participate in data scraping, data curation, and data compilation efforts
  • Ensure high quality of the data delivered to end users
  • Ensure high quality of the in-house data via data stewardship
  • Implement and utilize data solutions for data analysis and profiling using a variety of tools such as SQL, Postman, R, or Python and following the team’s established processes and methodologies
  • Collaborate with other data stewards and engineers within the team and across teams on aligning delivery dates and integration efforts
  • Define data quality rules and implement automated monitoring, reporting, and remediation solutions (a rule-driven sketch follows this list)
  • Coordinate intake and resolution of data support tickets
  • Support data migration from legacy systems, data inserts and updates not supported by applications
  • Partner with the Data Governance organization to ensure data is secured and access is being managed appropriately
  • Identify gaps within existing processes and create new documentation templates to improve existing processes and procedures
  • Create mapping documents and templates to improve existing manual processes
  • Perform data discoveries to understand data formats, source systems, etc. and engage with business partners in this discovery process
  • Help answer questions from the end-users and coordinate with technical resources as needed
  • Build prototype SQL and continuously engage with end consumers with enhancements
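
To illustrate the rule-driven quality monitoring mentioned above, here is a minimal plain-Python sketch; the record fields and the two rules are hypothetical stand-ins for real stewardship rules.

```python
# Declarative data-quality rules with a simple automated report.
records = [
    {"id": 1, "email": "a@example.com", "qty": 3},
    {"id": 2, "email": None, "qty": -1},
]

rules = [
    ("email_present", lambda r: r["email"] is not None),
    ("qty_non_negative", lambda r: r["qty"] >= 0),
]

failures = {name: [] for name, _ in rules}
for rec in records:
    for name, check in rules:
        if not check(rec):
            failures[name].append(rec["id"])  # remediation queue input

for name, ids in failures.items():
    status = "PASS" if not ids else f"FAIL (ids: {ids})"
    print(f"{name}: {status}")
```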


Desired Skills/Experience:

  • Bachelor's Degree in Computer Science, Engineering, Science, or other related field
  • Applied experience with modern engineering technologies and data principles, such as big data cloud compute, NoSQL, etc.
  • Applied experience querying SQL and/or NoSQL databases
  • Experience in designing data catalogs, including data design, metadata structures, object relations, catalog population, etc.
  • Data Warehousing experience
  • Strong written and verbal communication skills
  • Comfortable balancing demands across multiple projects / initiatives
  • Ability to identify gaps in requirements based on business subject matter domain expertise
  • Ability to deliver detailed technical documentation
  • Expert level experience in relevant business domain
  • Experience managing data within SAP
  • Experience managing data using APIs
  • BigQuery experience

Benefits:

  • Medical, Dental, & Vision Insurance Plans
  • Employee-Owned Profit Sharing (ESOP)
  • 401K offered


The approximate pay range for this position is $104,000 - $115,000+. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.

At KellyMitchell, our culture is world class. We’re movers and shakers! We don’t mind a bit of friendly competition, and we reward hard work with unlimited potential for growth. This is an exciting opportunity to join a company known for innovative solutions and unsurpassed customer service. We're passionate about helping companies solve their biggest IT staffing & project solutions challenges. As an employee-owned, women-led organization serving Fortune 500 companies nationwide, we deliver expert service at a moment's notice.

By applying for this job, you agree to receive calls, AI-generated calls, text messages, or emails from KellyMitchell and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy at

Not Specified
Data Integration & AI Engineer
✦ New
Salary not disclosed
Edison, NJ 1 day ago

About Wakefern

Wakefern Food Corp. is the largest retailer-owned cooperative in the United States and supports its co-operative members' retail operations, trading under the ShopRite®, Price Rite®, The Fresh Grocer®, Dearborn Markets®, and Gourmet Garage® banners.


Employing an innovative approach to wholesale business services, Wakefern focuses on helping the independent retailer compete in a big business world. Providing the tools entrepreneurs need to stay a step ahead of the competition, Wakefern’s co-operative members benefit from the company’s extensive portfolio of services, including innovative technology, private label development, and best-in-class procurement practices.


The ideal candidate will have a strong background in designing, developing, and implementing complex projects, with a focus on automating data processes and driving efficiency within the organization. This role requires close collaboration with application developers, data engineers, data analysts, and data scientists to ensure seamless data integration and automation across various platforms. The Data Integration & AI Engineer is responsible for identifying opportunities to automate repetitive data processes, reduce manual intervention, and improve overall data accessibility.


Essential Functions

  • Participate in the development life cycle (requirements definition, project approval, design, development, and implementation) and maintenance of the systems.
  • Implement and enforce data quality and governance standards to ensure accuracy and consistency.
  • Provide input for project plans and timelines to align with business objectives.
  • Monitor project progress, identify risks, and implement mitigation strategies.
  • Work with cross-functional teams and ensure effective communication and collaboration.
  • Provide regular updates to the management team.
  • Follow the standards and procedures according to Architecture Review Board best practices, revising standards and procedures as requirements change and technological advancements are incorporated into the >tech_ structure.
  • Communicates and promotes the code of ethics and business conduct.
  • Ensures completion of required company compliance training programs.
  • Is trained – either through formal education or through experience – in software / hardware technologies and development methodologies.
  • Stays current through personal development and professional and industry organizations.

Responsibilities

  • Design, build, and maintain automated data pipelines and ETL processes to ensure scalability, efficiency, and reliability across data operations.
  • Develop and implement robust data integration solutions to streamline data flow between diverse systems and databases.
  • Continuously optimize data workflows and automation processes to enhance performance, scalability, and maintainability.
  • Design and develop end-to-end data solutions utilizing modern technologies, including scripting languages, databases, APIs, and cloud platforms.
  • Ensure data solutions and data sources meet quality, security, and compliance standards.
  • Monitor and troubleshoot automation workflows, proactively identifying and resolving issues to minimize downtime.
  • Provide technical training, documentation, and ongoing support to end users of data automation systems.
  • Prepare and maintain comprehensive technical documentation, including solution designs, specifications, and operational procedures.


Qualifications

  • A bachelor's degree or higher in computer science, information systems, or a related field.
  • Hands-on experience with cloud data platforms (e.g., GCP, Azure, etc.)
  • Strong knowledge and skills in data automation technologies, such as Python, SQL, ETL/ELT tools, Kafka, APIs, cloud data pipelines, etc.
  • Experience in GCP BigQuery, Dataflow, Pub/Sub, and Cloud storage.
  • Experience with workflow orchestration tools such as Cloud Composer or Airflow
  • Proficiency in iPaaS (Integration Platform as a Service) platforms, such as Boomi, SAP BTP, etc.
  • Develop and manage data integrations for AI agents, connecting them to internal and external APIs, databases, and knowledge sources to expand their capabilities.
  • Build and maintain scalable Retrieval-Augmented Generation (RAG) pipelines, including the curation and indexing of knowledge bases in vector databases (e.g., Pinecone, Vertex AI Vector Search); a retrieval sketch follows this list.
  • Leverage cloud-based AI/ML platforms (e.g., Vertex AI, Azure ML) to build, train, and deploy machine learning models at scale.
  • Establish and enforce data quality and governance standards for AI/ML datasets, ensuring the accuracy, completeness, and integrity of data used for model training and validation.
  • Collaborate closely with data scientists and machine learning engineers to understand data requirements and deliver optimized data solutions that support the entire machine learning lifecycle.
  • Hands-on experience with IBM DataStage and Alteryx is a plus.
  • Strong understanding of database design principles, including normalization, indexing, partitioning, and query optimization.
  • Ability to design and maintain efficient, scalable, and well-structured database schemas to support both analytical and transactional workloads.
  • Familiarity with BI visualization tools such as MicroStrategy, Power BI, Looker, or similar.
  • Familiarity with data modeling tools.
  • Familiarity with DevOps practices for data (CI/CD pipelines)
  • Proficiency in project management software (e.g., JIRA, Clarizen, etc.)
  • Strong knowledge and skills in data management, data quality, and data governance.
  • Strong communication, collaboration, and problem-solving skills.
  • Ability to work on multiple projects and prioritize tasks effectively.
  • Ability to work independently and in a team environment.
  • Ability to learn new technologies and tools quickly.
  • The ability to handle stressful situations.
  • Highly developed business acuity and acumen.
  • Strong critical thinking and decision-making skills.
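
Since this posting leans on RAG, the sketch below shows the retrieval step in miniature: embed documents, index them, and fetch the best match for a query. The character-frequency "embedding" is a deliberate toy; a real pipeline would use a learned embedding model and a vector database such as Pinecone or Vertex AI Vector Search.

```python
# Toy RAG retrieval: embed, index, retrieve (numpy only; toy embedding).
import numpy as np

def embed(text: str) -> np.ndarray:
    v = np.zeros(128)
    for ch in text.lower():
        v[ord(ch) % 128] += 1.0            # character-frequency "embedding"
    return v / (np.linalg.norm(v) or 1.0)  # unit-normalize

docs = [
    "Store hours are 8am to 9pm on weekdays.",
    "Private label products are developed with co-operative members.",
    "Curbside pickup is available at most locations.",
]
index = np.stack([embed(d) for d in docs])  # stand-in vector index

query = "When is the store open?"
scores = index @ embed(query)               # cosine similarity (unit vectors)
print("retrieved context:", docs[int(np.argmax(scores))])
# The retrieved text would then be prepended to the LLM prompt as grounding.
```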


Working Conditions & Physical Demands

This position requires in-person office presence at least 4x a week.


Compensation and Benefits

The salary range for this position is $75,868 - $150,644. Placement in the range depends on several factors, including experience, skills, education, geography, and budget considerations.

Wakefern is proud to offer a comprehensive benefits package designed to support the health, well-being, and professional development of our Associates. Benefits include medical, dental, and vision coverage, life and disability insurance, a 401(k) retirement plan with company match & annual company contribution, paid time off, holidays, and parental leave.


Associates also enjoy access to wellness and family support programs, fitness reimbursement, educational and training opportunities through our corporate university, and a collaborative, team-oriented work environment. Many of these benefits are fully or partially funded by the company, with some subject to eligibility requirements.

Not Specified
Data Architect - Consumer Platform
✦ New
Salary not disclosed

The pay range for this role is $150,000 - $200,000/yr USD.


WHO WE ARE:


Headquartered in Southern California, Skechers—the Comfort Technology Company®—has spent over 30 years helping men, women, and kids everywhere look and feel good. Comfort innovation is at the core of everything we do, driving the development of stylish, high-quality products at a great value. From our diverse footwear collections to our expanding range of apparel and accessories, Skechers is a complete lifestyle brand.


ABOUT THE ROLE:


Skechers Digital Team is seeking a Digital Data Architect reporting to the Director, Digital Architecture, Consumer Domain. This role is responsible for designing and governing Skechers’ Consumer Data 360 ecosystem, enabling identity resolution, high-quality data foundations, personalization, loyalty intelligence, and machine learning capabilities across digital and retail channels.


The ideal candidate will be a strong technical leader, have hands-on full-stack technical knowledge in enterprise technologies related to Skechers' consumer domain, and have the ability to work in a fast-paced agile environment. You should have knowledge of consumer programs from an architecture/industry perspective, and you should have strong hands-on experience designing solutions on the Salesforce Core Platform (including configuration, integration, and data model best practices).


You will work cross-functionally with Digital Engineering, Data Engineering, Data Science, Loyalty, and Marketing teams to architect scalable, secure, and high-performance data platforms that support advanced personalization and recommender systems.


WHAT YOU’LL DO:


  • Responsible for the full technical life cycle of consumer platform capabilities, which includes:
      • Capability roadmap and technical architecture in alignment with consumer experience
      • Technical planning, design, and execution
      • Operations, analytics/reporting, and adoption
  • Define and evolve Skechers’ Consumer Data 360 architecture, including identity resolution (deterministic and probabilistic matching) and unified customer profiles (a matching sketch follows this list).
  • Architect scalable data models and pipelines across CDP, CRM, e-commerce, marketing automation, data lake, and warehouse platforms.
  • Establish enterprise data quality frameworks including validation, deduplication, anomaly detection, and observability.
  • Optimize SQL workloads and large-scale distributed queries through performance tuning, partitioning, indexing, and workload management strategies.
  • Design and oversee ML pipelines supporting personalization, churn modeling, and recommender systems.
  • Partner with Data Science teams to productionize models using distributed platforms such as Databricks (Spark, Delta Lake, MLflow preferred).
  • Ensure secure data governance, access control (RBAC/ABAC), and compliance with GDPR, CCPA, and related privacy regulations.
  • Provide architectural oversight ensuring performance, scalability, resilience, and maintainability.
  • Collaborate with stakeholders to translate business objectives (LTV growth, personalization lift, engagement) into scalable data solutions.
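
To make the identity-resolution requirement concrete, here is an illustrative deterministic-then-probabilistic matching sketch in plain Python; the profile fields and the 0.85 threshold are hypothetical, and production systems use far more robust blocking and scoring.

```python
# Deterministic + probabilistic identity matching (illustrative only).
from difflib import SequenceMatcher

profiles = [
    {"id": "web-1", "email": "jane@example.com", "name": "Jane Doe", "zip": "90266"},
    {"id": "pos-7", "email": None, "name": "Jayne Doe", "zip": "90266"},
]

def resolve(record, known, threshold=0.85):
    # Pass 1 (deterministic): exact match on a strong identifier.
    for p in known:
        if p["email"] and record["email"] and p["email"].lower() == record["email"].lower():
            return p["id"], 1.0
    # Pass 2 (probabilistic): fuzzy name similarity within the same ZIP block.
    best_id, best_score = None, 0.0
    for p in known:
        if p["zip"] == record["zip"]:
            s = SequenceMatcher(None, p["name"].lower(), record["name"].lower()).ratio()
            if s > best_score:
                best_id, best_score = p["id"], s
    return (best_id, best_score) if best_score >= threshold else (None, best_score)

incoming = {"email": "jane@example.com", "name": "Jane Doe", "zip": "90266"}
print(resolve(incoming, profiles))  # ('web-1', 1.0) via the deterministic rule
```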


REQUIREMENTS:


  • Computer Science, Data Engineering, or related degree or equivalent experience.
  • 12+ years' experience architecting enterprise data platforms in cloud environments.
  • 9+ years' data engineering experience, with a focus on consumer data.
  • 6+ years' experience working with Salesforce platforms, including data models and enterprise integrations.
  • Strong experience with Data 360 and identity resolution architectures.
  • Proven expertise in SQL performance tuning and large-scale data modeling.
  • Hands-on experience implementing ML pipelines and recommender systems in production environments.
  • Experience with cloud technologies (AWS, GCP, or Azure).
  • Experience with integration patterns (API, ETL, event streaming).
  • Experience providing technical leadership and guidance across multiple projects and development teams.
  • Experience translating business requirements into detailed technical specifications and working with development teams through implementation, including issue resolution and stakeholder communication.
  • Strong project management skills including scope assessment, estimation, and clear technical communication with both business users and technical teams.
  • Must hold at least one of the following Salesforce Certifications (Platform App Builder, Platform Developer I, JavaScript Developer I).
  • Experience with Databricks or similar distributed data/ML platforms preferred.
Not Specified
Data QA Engineer
✦ New
Salary not disclosed
Dallas, TX 1 day ago

Title: Data QA Engineer

Location: Minneapolis, Dallas, Atlanta (Onsite)

Job Type: Contract

Experience: 8-15 Years


Key Responsibilities:

  • Design, build, and maintain automated data quality frameworks to validate accuracy, completeness, consistency, and timeliness of data (a reconciliation sketch follows this list).
  • Develop automation scripts using Python/SQL to test data pipelines, ETL/ELT processes, and analytics workflows.
  • Implement data quality checks and monitoring within Azure-based data platforms.
  • Work extensively with Azure services (ADF, ADLS, Synapse) and Databricks for large-scale data processing.
  • Integrate data quality validations into CI/CD pipelines and support proactive issue detection.
  • Perform root cause analysis for data issues and collaborate with data engineering, analytics, and business teams to resolve them.
  • Define and enforce data quality standards, metrics, and SLAs.
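
By way of illustration, a minimal reconciliation check of the kind such a framework automates: comparing row counts and order-insensitive column fingerprints between a pipeline's source and target. The datasets are toy stand-ins.

```python
# Source-vs-target reconciliation via counts and column fingerprints.
import hashlib

def column_fingerprint(rows, col):
    # Order-insensitive digest of one column's values.
    joined = "|".join(sorted(str(r[col]) for r in rows))
    return hashlib.sha256(joined.encode()).hexdigest()

source = [{"id": 1, "amt": 10.0}, {"id": 2, "amt": 5.5}]
target = [{"id": 2, "amt": 5.5}, {"id": 1, "amt": 10.0}]  # reordered copy

checks = {
    "row_count": len(source) == len(target),
    "id_fingerprint": column_fingerprint(source, "id") == column_fingerprint(target, "id"),
    "amt_fingerprint": column_fingerprint(source, "amt") == column_fingerprint(target, "amt"),
}
for name, ok in checks.items():
    print(f"{name}: {'PASS' if ok else 'FAIL'}")  # all PASS here
```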

Required Skills & Qualifications:

  • Strong experience (8–15 years) in data engineering, data quality, or data automation roles.
  • Hands-on expertise with Azure data ecosystem and Databricks.
  • Strong programming skills in Python and SQL.
  • Experience building automated data validation and reconciliation frameworks.
  • Solid understanding of data warehousing, data lakes, and distributed data processing.
  • Familiarity with DevOps/CI-CD practices for data platforms.

Preferred Skills:

  • Experience with data observability or data quality tools.
  • Exposure to cloud-scale analytics and performance optimization.
  • Strong communication and stakeholder management skills.