Smart Data Solutions Jobs in the USA
About Wakefern
Wakefern Food Corp. is the largest retailer-owned cooperative in the United States and supports its co-operative members' retail operations, trading under the ShopRite®, Price Rite®, The Fresh Grocer®, Dearborn Markets®, and Gourmet Garage® banners.
Employing an innovative approach to wholesale business services, Wakefern focuses on helping the independent retailer compete in a big business world. Providing the tools entrepreneurs need to stay a step ahead of the competition, Wakefern’s co-operative members benefit from the company’s extensive portfolio of services, including innovative technology, private label development, and best-in-class procurement practices.
The ideal candidate will have a strong background in designing, developing, and implementing complex projects, with a focus on automating data processes and driving efficiency within the organization. This role requires close collaboration with application developers, data engineers, data analysts, and data scientists to ensure seamless data integration and automation across various platforms. The Data Integration & AI Engineer is responsible for identifying opportunities to automate repetitive data processes, reduce manual intervention, and improve overall data accessibility.
Essential Functions
- Participate in the development life cycle (requirements definition, project approval, design, development, and implementation) and maintenance of the systems.
- Implement and enforce data quality and governance standards to ensure data accuracy and consistency.
- Provide input for project plans and timelines to align with business objectives.
- Monitor project progress, identify risks, and implement mitigation strategies.
- Work with cross-functional teams and ensure effective communication and collaboration.
- Provide regular updates to the management team.
- Follow the standards and procedures according to Architecture Review Board best practices, revising standards and procedures as requirements change and technological advancements are incorporated into the technology infrastructure.
- Communicate and promote the code of ethics and business conduct.
- Ensure completion of required company compliance training programs.
- Be trained, either through formal education or through experience, in software/hardware technologies and development methodologies.
- Stay current through personal development and professional and industry organizations.
Responsibilities
- Design, build, and maintain automated data pipelines and ETL processes to ensure scalability, efficiency, and reliability across data operations.
- Develop and implement robust data integration solutions to streamline data flow between diverse systems and databases.
- Continuously optimize data workflows and automation processes to enhance performance, scalability, and maintainability.
- Design and develop end-to-end data solutions utilizing modern technologies, including scripting languages, databases, APIs, and cloud platforms.
- Ensure data solutions and data sources meet quality, security, and compliance standards.
- Monitor and troubleshoot automation workflows, proactively identifying and resolving issues to minimize downtime.
- Provide technical training, documentation, and ongoing support to end users of data automation systems.
- Prepare and maintain comprehensive technical documentation, including solution designs, specifications, and operational procedures.
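The responsibilities above center on automated ETL pipelines. As a minimal illustration (all names and the sample data are invented, not from the posting), the extract-transform-load pattern might be sketched in Python as:

```python
def extract(rows):
    """Extract: pull raw records from a source (here, an in-memory list)."""
    return list(rows)

def transform(records):
    """Transform: normalize field names and filter out invalid rows."""
    cleaned = []
    for r in records:
        # Keep only rows with a non-empty SKU and a positive quantity.
        if r.get("sku") and int(r.get("qty", 0)) > 0:
            cleaned.append({"sku": r["sku"].strip().upper(), "qty": int(r["qty"])})
    return cleaned

def load(records, target):
    """Load: append validated records to the destination store."""
    target.extend(records)
    return len(records)

# Run the pipeline end to end on sample data.
warehouse = []
raw = [{"sku": " ab1 ", "qty": "3"}, {"sku": "", "qty": 5}, {"sku": "cd2", "qty": 0}]
loaded = load(transform(extract(raw)), warehouse)
print(loaded)  # 1 row survives validation
```

In a production pipeline each stage would be backed by real connectors and an orchestrator, but the separation of stages shown here is the core of the pattern.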
Qualifications
- A bachelor's degree or higher in computer science, information systems, or a related field.
- Hands-on experience with cloud data platforms (e.g., GCP, Azure, etc.)
- Strong knowledge and skills in data automation technologies, such as Python, SQL, ETL/ELT tools, Kafka, APIs, cloud data pipelines, etc.
- Experience with GCP BigQuery, Dataflow, Pub/Sub, and Cloud Storage.
- Experience with workflow orchestration tools such as Cloud Composer or Airflow
- Proficiency in iPaaS (Integration Platform as a Service) solutions, such as Boomi, SAP BTP, etc.
- Experience developing and managing data integrations for AI agents, connecting them to internal and external APIs, databases, and knowledge sources to expand their capabilities.
- Experience building and maintaining scalable Retrieval-Augmented Generation (RAG) pipelines, including the curation and indexing of knowledge bases in vector databases (e.g., Pinecone, Vertex AI Vector Search).
- Experience leveraging cloud-based AI/ML platforms (e.g., Vertex AI, Azure ML) to build, train, and deploy machine learning models at scale.
- Ability to establish and enforce data quality and governance standards for AI/ML datasets, ensuring the accuracy, completeness, and integrity of data used for model training and validation.
- Experience collaborating closely with data scientists and machine learning engineers to understand data requirements and deliver optimized data solutions that support the entire machine learning lifecycle.
- Hands-on experience with IBM DataStage and Alteryx is a plus.
- Strong understanding of database design principles, including normalization, indexing, partitioning, and query optimization.
- Ability to design and maintain efficient, scalable, and well-structured database schemas to support both analytical and transactional workloads.
- Familiarity with BI visualization tools such as MicroStrategy, Power BI, Looker, or similar.
- Familiarity with data modeling tools.
- Familiarity with DevOps practices for data (CI/CD pipelines)
- Proficiency in project management software (e.g., JIRA, Clarizen, etc.)
- Strong knowledge and skills in data management, data quality, and data governance.
- Strong communication, collaboration, and problem-solving skills.
- Ability to work on multiple projects and prioritize tasks effectively.
- Ability to work independently and in a team environment.
- Ability to learn new technologies and tools quickly.
- Ability to handle stressful situations.
- Highly developed business acumen.
- Strong critical thinking and decision-making skills.
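One of the qualifications above mentions Retrieval-Augmented Generation pipelines backed by a vector database. The core retrieval step, ranking indexed documents by similarity to a query embedding, can be sketched in plain Python (the toy embeddings and document texts below are invented for illustration; a real pipeline would use a managed vector store and learned embeddings):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "vector database": document text paired with a precomputed embedding.
index = [
    ("return policy", [0.9, 0.1, 0.0]),
    ("store hours",   [0.1, 0.8, 0.1]),
    ("delivery fees", [0.2, 0.1, 0.9]),
]

def retrieve(query_vec, k=2):
    """Rank indexed documents by similarity to the query embedding."""
    ranked = sorted(index, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# A query embedding close to "delivery fees" should rank it first.
print(retrieve([0.2, 0.0, 1.0]))  # ['delivery fees', 'return policy']
```

The top-k documents retrieved this way are what get stuffed into the language model's prompt in a RAG system; curation and indexing of the knowledge base determine how useful those hits are.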
Working Conditions & Physical Demands
This position requires in-person office presence at least 4x a week.
Compensation and Benefits
The salary range for this position is $75,868 - $150,644. Placement in the range depends on several factors, including experience, skills, education, geography, and budget considerations.
Wakefern is proud to offer a comprehensive benefits package designed to support the health, well-being, and professional development of our Associates. Benefits include medical, dental, and vision coverage, life and disability insurance, a 401(k) retirement plan with company match & annual company contribution, paid time off, holidays, and parental leave.
Associates also enjoy access to wellness and family support programs, fitness reimbursement, educational and training opportunities through our corporate university, and a collaborative, team-oriented work environment. Many of these benefits are fully or partially funded by the company, with some subject to eligibility requirements.
Job Title: Senior Data Engineer / Analytics Engineer
Location: West Los Angeles, CA (Onsite)
Compensation: $180,000 base salary + 10% bonus
Overview
We are looking for a Senior Data Engineer / Analytics Engineer to help architect and build scalable data solutions that power business insights for sales and marketing teams. This role is ideal for someone who enjoys being both strategic and hands-on, designing modern data architectures while actively building pipelines, models, and dashboards.
The ideal candidate has deep experience in modern data stack technologies and has worked closely with high-volume sales and marketing organizations, particularly supporting Salesforce-driven environments.
Key Responsibilities
Data Architecture & Engineering
- Design and build scalable data pipelines and data models that support analytics and reporting across the organization.
- Architect and implement solutions using Snowflake, DBT, Python, and Fivetran within a modern data stack.
- Optimize Snowflake environments for cost and performance, including warehouse configuration, query optimization, and storage strategies.
- Build and maintain robust data transformation pipelines using DBT for modeling, testing, and validation.
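The DBT bullet above pairs modeling with built-in testing; dbt's generic tests (such as `unique` and `not_null`) assert invariants over each model's output. The equivalent checks can be sketched in plain Python (the table and column names are invented for illustration):

```python
def not_null(rows, column):
    """dbt-style not_null test: no row may have a missing value in `column`."""
    return all(r.get(column) is not None for r in rows)

def unique(rows, column):
    """dbt-style unique test: values in `column` must not repeat."""
    values = [r[column] for r in rows]
    return len(values) == len(set(values))

# A small "model output" to validate.
orders = [
    {"order_id": 1, "customer_id": "c1"},
    {"order_id": 2, "customer_id": "c2"},
]

print(not_null(orders, "customer_id"), unique(orders, "order_id"))  # True True
```

In dbt itself these checks are declared in YAML next to the model and run as part of the build, which is what makes transformation pipelines self-validating.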
Analytics & Business Intelligence
- Develop high-impact dashboards and reporting solutions using Power BI to support decision-making across the business.
- Partner with stakeholders to define KPIs, metrics, and data models that support sales and marketing performance tracking.
- Ensure data reliability, consistency, and accessibility across analytics platforms.
CRM Data & Sales Analytics
- Work extensively with Salesforce data, helping clean, structure, and optimize complex CRM datasets.
- Design scalable data models that support reporting on sales performance, marketing attribution, pipeline analytics, and revenue metrics.
- Implement solutions to improve data quality and usability across CRM-driven reporting.
Business Partnership
- Partner closely with Sales and Marketing teams in a high-volume sales environment to understand reporting needs and deliver actionable insights.
- Translate business questions into scalable data solutions and analytics frameworks.
- Communicate technical concepts clearly to non-technical stakeholders and collaborate effectively across teams.
Required Qualifications
- 5+ years of BI Engineering, Data Engineering, or Analytics Engineering experience.
- Proven experience acting as both a data architect and hands-on builder.
- Strong experience with:
- Snowflake (including cost and performance optimization)
- DBT for transformations, modeling, and data validations
- Python
- Power BI - must have
- Experience working with Salesforce or similar CRM tools, including cleaning, structuring, and building scalable reporting solutions for complex CRM datasets.
- Experience supporting Sales and Marketing teams in high-volume sales environments.
- Strong communication skills and ability to work collaboratively with cross-functional stakeholders.
Preferred Qualifications
- Experience with Salesforce data architecture and CRM analytics.
- Background working with large-scale sales operations or marketing analytics teams.
- Experience building modern ELT data pipelines and scalable analytics frameworks.
Work Environment
- Onsite role in West Los Angeles
- Highly collaborative environment working closely with data, sales, marketing, and leadership teams.
*At Securian Financial the internal position title is Data Science Sr Analyst or Data Science Consultant. The title and salary will be determined based on experience and applied skills.*
Summary
As an Operational Support Data Scientist at Securian Financial, you will bridge advanced analytics and day-to-day business operations by designing, deploying, monitoring, and continuously improving AI-driven solutions that support enterprise processes.
This role focuses on supporting reliable, scalable, and explainable AI solutions that enhance operational efficiency, decision support, customer experience, and risk management across Digital, Marketing, Sales, and Servicing functions.
You will operate at the intersection of data science, MLOps, and the business - ensuring models are maintained, enhanced, monitored, and aligned with Securian's Enterprise Data Strategy Vision and Operating Principles.
Responsibilities include but are not limited to:
AI Solution Development & Deployment
Work with business teams to enhance and optimize existing AI/ML solutions.
Deploy and manage solutions using cloud-native tools (e.g., AWS SageMaker).
Operational Model Support & Optimization
Monitor model performance, data drift, and operational KPIs.
Troubleshoot production issues and continuously enhance and optimize models for performance, stability, and cost efficiency.
Establish measurement frameworks to quantify operational impact of deployed solutions.
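Monitoring for data drift, as described above, is commonly done by comparing feature distributions between a training baseline and a production window. One widely used measure is the Population Stability Index; a minimal sketch follows (the bin edges, samples, and the 0.1/0.25 thresholds are illustrative conventions, not a universal standard):

```python
import math

def psi(expected, actual, bins):
    """Population Stability Index between two samples over shared bin edges.

    Rule of thumb (an assumption, not a formal standard): PSI < 0.1 suggests
    little drift, 0.1-0.25 moderate drift, and > 0.25 significant drift.
    """
    def proportions(sample):
        counts = [0] * (len(bins) - 1)
        for x in sample:
            for i in range(len(bins) - 1):
                if bins[i] <= x < bins[i + 1]:
                    counts[i] += 1
                    break
        total = len(sample)
        # Floor at a tiny value so the log term stays defined for empty bins.
        return [max(c / total, 1e-6) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1, 0.2, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7]
live = [0.1, 0.2, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7]
print(psi(baseline, live, bins=[0.0, 0.25, 0.5, 0.75, 1.0]))  # 0.0 for identical samples
```

Tracking a statistic like this per feature, and alerting when it crosses a threshold, is one concrete way to operationalize the drift-monitoring responsibility.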
Data Engineering & Analytical Execution
Transform structured, semi-structured, and unstructured data into actionable features and insights.
Perform exploratory analysis and visualization to identify operational improvement opportunities.
Collaborate with engineering teams to productionize data solutions.
Stakeholder Engagement & Explainability
Partner with cross-functional operational stakeholders to understand business workflows and translate them into AI-enabled solutions.
Communicate complex AI methodologies and results clearly to technical and non-technical audiences.
Ensure model transparency, explainability, fairness, and ethical AI application in alignment with enterprise governance standards.
Required Qualifications
Demonstrated experience developing, deploying, or supporting production AI/ML models in cloud environments.
Strong proficiency in Python and experience with tools such as AWS SageMaker and GitHub.
Experience building operationalized data science solutions (not just prototypes).
Strong understanding of statistical modeling, machine learning algorithms, and model validation techniques.
Ability to clearly explain technical concepts, model outputs, and operational trade-offs to stakeholders.
Strong ethical judgment with a commitment to responsible and unbiased AI development.
Preferred Qualifications
2+ years of hands-on experience in data science, applied AI, or machine learning.
Experience supporting AI solutions in operational or production environments.
Familiarity with MLOps practices, model governance frameworks, and automation tooling.
Experience working in regulated industries (financial services preferred).
**This position will be in a hybrid working arrangement.**
Securian Financial believes in hybrid work as an integral part of our culture. Associates get the benefit of working both virtually and in our offices. If you're in a commutable distance (90 minutes) you'll join us 3 days each week in our offices to collaborate and build relationships. Our policy allows flexibility for the reality of business and personal schedules.
The estimated base pay range for this job is:
$72,000.00 - $134,000.00

Pay may vary depending on job-related factors and individual experience, skills, knowledge, etc. More information on base pay and incentive pay (if applicable) can be discussed with a member of the Securian Financial Talent Acquisition team.
Be you. With us. At Securian Financial, we understand that attracting top talent means offering more than just a job - it means providing a rewarding and fulfilling career. As a valued member of our high-performing team, we want you to connect with your work, your relationships and your community. Enjoy our comprehensive range of benefits designed to enhance your professional growth, well-being and work-life balance, including the advantages listed here:
Paid time off:
We want you to take time off for what matters most to you. Our PTO program provides flexibility for associates to take meaningful time away from work to relax, recharge and spend time doing what's important to them. And Securian Financial rewards associates for their service by providing additional PTO the longer you stay at Securian.
Leave programs: Securian's flexible leave programs allow time off from work for parental leave, caregiver leave for family members, bereavement and military leave.
Holidays: Securian provides nine company paid holidays.
Company-funded pension plan and a 401(k) retirement plan: Share in the success of our company. Securian's 401(k) company contribution is tied to our performance up to 10 percent of eligible earnings, with a target of 5 percent. The amount is based on company results compared to goals related to earnings, sales and service.
Health insurance: From the first day of employment, associates and their eligible family members - including spouses, domestic partners and children - are eligible for medical, dental and vision coverage.
Volunteer time: We know the importance of community. Through company-sponsored events, volunteer paid time off, a dollar-for-dollar matching gift program and more, we encourage you to support organizations important to you.
Associate Resource Groups: Build connections, be yourself and develop meaningful relationships at work through associate-led ARGs. Dedicated groups focus on a variety of interests and affinities, including:
Mental Wellness and Disability
Pride at Securian Financial
Securian Young Professionals Network
Securian Multicultural Network
Securian Women and Allies Network
Servicemember Associate Resource Group
For more information regarding Securian's benefits, please review our Benefits page.
This information is not intended to explain all the provisions of coverage available under these plans. In all cases, the plan document dictates coverage and provisions.
Securian Financial Group, Inc. does not discriminate based on race, color, religion, national origin, sex, gender, gender identity, sexual orientation, age, marital or familial status, pregnancy, disability, genetic information, political affiliation, veteran status, status in regard to public assistance or any other protected status. If you are a job seeker with a disability and require an accommodation to apply for one of our jobs, please contact us by email at , by telephone (voice), or 711 (Relay/TTY).
Remote working/work at home options are available for this role.
Job: Data-MDM Architect (Profisee) with BA/PM experience
Location: Waukesha/Milwaukee, Wisconsin
Mode: Work from office, at least 3 days in a week
Primary Purpose
- Responsible for designing and architecting data/MDM solutions and for analyzing, implementing, and deploying them both on-premises and in the cloud. Collaborates with diverse business teams and applies extensive knowledge of big data tools and products to create scalable, flexible, and comprehensive data solutions that tackle complex business challenges.
Major Responsibilities
- Manage the technical delivery of medium to large, moderately complex projects on-time with targeted zero defects.
- Provide planning, estimation, scheduling, prioritization, and coordination of technical activities related to enterprise-wide data solutions, both cloud and on-premises.
- Ensure solutions alignment to Enterprise Architecture policies and best practices; ensure that process methodologies are followed in development.
- Accountable to business and technology management for end-to-end application scoping, planning, development and delivery that meets and exceeds quality standards.
- Identify and manage dependencies and downstream impacts of the project to minimize adverse effects on other projects and / or programs.
- Assist the project manager with estimating technical timelines and allocating technical resources to specific tasks.
- Communicate expectations, roles, and responsibilities to team members and hold them accountable for meeting them.
- Collaborate with IT partners to devise capacity plan and ensure appropriate infrastructure for the end-to-end system delivery.
- Supervise contingent workers and their daily tasks including onshore and offshore staff.
- Identify valuable data sources and automate collection processes.
- Maintain data accuracy and timeliness, a critical highly visible aspect of the position as it impacts supply chain and sales effectiveness, financial performance of the business, and customer perception through on-time delivery, working capital, financial reporting accuracy and product quality.
- Architect and design master data to drive towards “Single source of the truth”.
- Regularly monitor and measure performance of MDM standards.
- Perform problem and trend analyses to identify and correct problems and increase data quality.
- Review / Approve execution of data changes.
- Track and report through the CAB review board.
- Develop SLAs and ensure they are met.
- Drive data mapping workshops for migrations.
- Coordinate and participate in the ETL (extract, transform, load) process for any migrations.
- Plan and architect M&A initiatives and integrations.
This role is 3 days onsite; no remote work or relocation. Must be a US citizen or green card holder. Please do not apply if you are on an EAD or H-1B visa.
Job Description:
We are looking for a Data Architect to take ownership of designing and evolving a modern enterprise data ecosystem that supports analytics, reporting, and business decision-making. This role will focus on building and maintaining a secure, scalable data warehouse leveraging Microsoft cloud technologies such as Azure, Synapse, and Microsoft Fabric, while ensuring strong data quality, accessibility, and consistency across the organization.
This position will play a key role in establishing data standards, driving best practices in data modeling and governance, and partnering with both technical and business stakeholders. The ideal candidate is comfortable working independently and translating complex data concepts into actionable insights for non-technical audiences.
Key Responsibilities
Data Architecture & Modeling
- Design, implement, and maintain enterprise data warehouse solutions within Azure and Microsoft Fabric
- Develop and manage semantic data models to support reporting through Power BI and Azure Analysis Services
- Establish and document standards for data modeling, naming conventions, and dataset design
Data Governance & Quality
- Define and enforce data governance frameworks, including data definitions, access controls, and data policies
- Implement automated processes to monitor and improve data quality and integrity
- Partner with business users to understand requirements and resolve data inconsistencies
Technical Leadership
- Act as a subject matter expert for data architecture and enterprise data strategy
- Translate business requirements into scalable and efficient data solutions
- Provide guidance to stakeholders on data architecture decisions and trade-offs
Enablement & Collaboration
- Create documentation, data dictionaries, and standards to support self-service analytics
- Work closely with BI developers and business teams to ensure data solutions align with reporting needs
Required Qualifications
- 6+ years of experience in data architecture, data engineering, or BI-related roles
- Strong expertise with Microsoft Azure Data Services, including SQL, Synapse, Data Factory, and Fabric
- Advanced SQL skills with experience in query optimization
- Experience with Python or R for data processing and automation
- Deep understanding of semantic modeling in Power BI or Azure Analysis Services
- Hands-on experience with Power BI (Desktop, Service, DAX, Power Query)
- Experience integrating ERP (Epicor preferred) and CRM data for reporting and analytics
- Strong understanding of end-to-end ERP business processes (Quote-to-Cash, AR, AP, GL)
- Knowledge of enterprise data architecture principles and data lifecycle management
- Proven experience establishing and maintaining data governance and quality standards
- Strong communication skills with the ability to work with non-technical stakeholders
Preferred Qualifications
- Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related field
- Experience with modern data architecture patterns such as lakehouse, star schema, or medallion architecture
- Background in operational or supply chain environments
- Exposure to planning tools such as Anaplan or similar platforms
We are seeking an experienced and forward-thinking Solution Architect - Data Engineering to lead the design and implementation of scalable, secure, and high-performance data solutions. The ideal candidate will have deep expertise with Python and SQL, experience with data warehouses (Snowflake or similar), a strong command of engineering best practices (including linters and code formatters, project organization, and managing environments), and practical experience building CI/CD pipelines to ensure robust, automated delivery of data pipelines and services.
Responsibilities
- Architect Scalable Data Solutions
Design and implement end-to-end data engineering architectures that are scalable, maintainable, and performant across batch and real-time processing systems.
- Engineering Leadership
Lead by example with high-quality Python code, utilizing linters and formatters (e.g., pylint, flake8, black) and enforcing code cleanliness, readability, and best practices across teams.
- CI/CD Pipeline Development
Build, manage, and optimize CI/CD pipelines using tools such as GitHub Actions, GitLab CI, CircleCI, or Jenkins to automate testing, code quality checks, and deployment of data engineering components.
- Data Governance & Quality
Establish data validation, logging, and monitoring strategies to ensure data integrity and reliability at scale.
- Collaborate Cross-Functionally
Work closely with data scientists, software engineers, DevOps, and business stakeholders to translate requirements into technical solutions and ensure alignment with overall enterprise architecture.
- Mentorship & Code Reviews
Provide guidance to junior developers, lead technical reviews, and enforce clean coding standards throughout the data engineering team.
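The engineering-leadership bullets above lean on linters and automated code-quality gates. The flavor of a custom static-analysis check can be sketched with the standard-library `ast` module (the rule here, flagging top-level functions without docstrings, is just an illustrative example of what a lint rule does, not a specific pylint/flake8 check):

```python
import ast

# Sample source to analyze; in a CI gate this would be read from changed files.
SOURCE = '''
def documented():
    """Has a docstring."""
    return 1

def undocumented():
    return 2
'''

def missing_docstrings(source):
    """Return names of top-level functions lacking a docstring."""
    tree = ast.parse(source)
    return [
        node.name
        for node in tree.body
        if isinstance(node, ast.FunctionDef) and ast.get_docstring(node) is None
    ]

print(missing_docstrings(SOURCE))  # ['undocumented']
```

Hooking a check like this into a CI pipeline (failing the build when the returned list is non-empty) is the mechanism behind "enforcing clean coding standards" at scale.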
Required Skills & Experience
- 7+ years of experience in software or data engineering, with 3+ years in an architectural or technical leadership role.
- Expert-level proficiency in Python and SQL, with a deep understanding of best practices, performance tuning, and maintainable code patterns.
- Proven experience with linters, formatters, and other static analysis tools to ensure code quality and compliance.
- Hands-on experience designing and implementing CI/CD pipelines for data pipelines, APIs, and other backend services.
- Solid knowledge of modern data platforms and technologies (e.g., Spark, Airflow, dbt, Kafka, Snowflake, BigQuery, etc.).
- Strong understanding of software engineering practices such as version control, testing, and continuous integration.
Desired Skills & Experience
- Experience working in cloud environments (AWS, GCP, or Azure).
- Familiarity with Infrastructure as Code (IaC) tools like Terraform or CloudFormation.
- Understanding of security, compliance, and governance in data pipelines.
- Excellent communication and documentation skills.
- Strong leadership presence with the ability to mentor and influence teams.
- Problem-solver with a focus on delivering value and simplicity through technology.
Wage and Benefits
We offer a Total Rewards package that includes medical and dental coverage, 401(k) plans, flex spending, life insurance, disability, an employee discount program, an employee stock purchase program, and paid family benefits to support you and your family.

The salary range for this position is posted below. Where an employee or prospective employee is paid within this range will depend on, among other factors, actual ranges for current/former employees in the subject position, market considerations, budgetary considerations, tenure and standing with the Company (applicable to current employees), as well as the employee's/applicant's skill set, level of experience, and qualifications.
Employment Transparency
It is the policy of our company to provide equal employment opportunities to all employees and applicants for employment without regard to race, color, ethnicity, gender, age, religion, creed, national origin, sexual orientation, gender identity, marital status, citizenship, genetic information, veteran status, disability, or any other basis prohibited by applicable federal, state, or local law.
Please note this job description is not designed to cover or contain a comprehensive listing of activities, duties, or responsibilities that are required of the employee for this job. Duties, responsibilities, and activities may change at any time with or without notice.
The employer will make reasonable accommodations in compliance with the American with Disabilities Act of 1990. The job description will be reviewed periodically as duties and responsibilities change with business necessity. Essential and other job functions are subject to modification. Reasonable accommodations may be provided to enable individuals with disabilities to perform the essential functions.
For applicants to jobs in the United States: In compliance with the current Americans with Disabilities Act and state and local laws, if you have a disability and would like to request an accommodation to apply for a position with our company, please email .
Salary Range: $200,000 - $220,000 USD
Job Summary:
Our client is seeking a Senior Data Analytics Engineer (Customer Data) to join their team! This position is located in Irving, Texas.
Duties:
- Support cross-functional teams including Marketing, Data Science, Product, and Digital
- Build datasets that power customer segmentation, personalization workflows, campaign and lifecycle analytics, BI dashboards and KPIs, and real-time and ML-driven customer experiences
- Build, optimize, and maintain customer data pipelines using PySpark/Databricks
- Transform raw customer data into analytics‑ready datasets for reporting, segmentation, personalization, and AI/ML applications
- Develop customer behavior metrics, campaign insights, and lifecycle reporting layers
- Design datasets used by Power BI/Tableau; dashboard creation is a plus, not required
- Optimize Databricks performance by addressing skewed joins, partitioning, sorting, and caching/persist strategy
- Work across AWS/Azure/GCP and integrate pipelines with CDPs
- Participate in ingestion and digestion phases to shape MarTech and BI analytical layers
- Document and uphold data engineering standards, governance, and best practices across teams
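One duty above is taming skewed joins in Databricks. A common remedy is key salting: a hot key is rewritten into several salted variants so its rows spread across partitions instead of piling onto one executor. The bookkeeping can be shown in plain Python (the partition count, salt-bucket count, and hot-key threshold are made-up parameters; in PySpark this would be expressed with DataFrame column operations):

```python
from collections import Counter

def assign_partitions(keys, num_partitions=4, salt_buckets=3, hot_threshold=5):
    """Spread hot keys over multiple partitions by appending a salt suffix.

    Keys seen more than `hot_threshold` times are rewritten round-robin as
    key#0 .. key#(salt_buckets-1), so their rows no longer hash to a single
    partition. Cold keys pass through unchanged.
    """
    freq = Counter(keys)
    salt_cycle = {}
    assignments = []
    for key in keys:
        if freq[key] > hot_threshold:
            salt = salt_cycle.get(key, 0)
            salt_cycle[key] = (salt + 1) % salt_buckets
            salted = f"{key}#{salt}"
        else:
            salted = key
        assignments.append((salted, hash(salted) % num_partitions))
    return assignments

rows = ["hot"] * 9 + ["cold", "warm"]
result = assign_partitions(rows)
print(sorted({k for k, _ in result}))  # ['cold', 'hot#0', 'hot#1', 'hot#2', 'warm']
```

The other side of the join must duplicate each hot key once per salt bucket so matches still line up; Spark's adaptive query execution can also handle skew automatically, but understanding the manual technique is what the duty implies.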
Desired Skills/Experience:
- 6+ years in Data Engineering or Analytics Engineering
- Strong hands-on experience with: Databricks, PySpark, Python and SQL
- Proven experience with customer/marketing data: segmentation, personalization, campaign analytics, retention, behavioral metrics
- Ability to design performance‑optimized pipelines; batch or near real-time
- Experience building datasets consumed by Power BI/Tableau
- Understanding of CDP workflows, customer identity data, traits/feature modeling, and activation
- Strong communication skills, translating marketing needs into technical data solutions
- Power BI expertise, major plus
- Experience with Delta Lake, orchestration, or feature engineering for ML
- Background as an Analytics Engineer, BI/Data Modeling Engineer, or Data Engineer with strong analytics orientation
Benefits:
- Medical, Dental, & Vision Insurance Plans
- Employee-Owned Profit Sharing (ESOP)
- 401K offered
The approximate pay for this position starts at $140,000. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
At KellyMitchell, our culture is world class. We’re movers and shakers! We don’t mind a bit of friendly competition, and we reward hard work with unlimited potential for growth. This is an exciting opportunity to join a company known for innovative solutions and unsurpassed customer service. We're passionate about helping companies solve their biggest IT staffing & project solutions challenges. As an employee-owned, women-led organization serving Fortune 500 companies nationwide, we deliver expert service at a moment's notice.
By applying for this job, you agree to receive calls, AI-generated calls, text messages, or emails from KellyMitchell and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy at
Job Title – Lead Data Engineer
Please note: this role cannot offer visa transfer or sponsorship now or in the future.
About the role
As a Lead Data Engineer, you will make an impact by designing, building, and operating scalable, cloud‑native data platforms supporting batch and streaming use cases, with strong focus on governance, performance, and reliability. You will be a valued member of the Data Engineering team and work collaboratively with cross‑functional engineering, cloud, and architecture stakeholders.
In this role, you will:
- Design, build, and operate scalable cloud‑native data platforms supporting batch and streaming workloads with strong governance, performance, and reliability.
- Develop and operate data systems on AWS, Azure, and GCP, designing cloud‑native, scalable, and cost‑efficient data solutions.
- Build modern data architectures including data lakes, data lakehouses, and data hubs, with strong understanding of ingestion patterns, data governance, data modeling, observability, and platform best practices.
- Develop data ingestion and collection pipelines using Kafka and AWS Glue; work with modern storage formats such as Apache Iceberg and Parquet.
- Design and develop real‑time streaming pipelines using Kafka, Flink, or similar streaming frameworks, with understanding of event‑driven architectures and low‑latency data processing.
- Perform data transformation and modeling using SQL‑based frameworks and orchestration tools such as dbt, AWS Glue, and Airflow, including Slowly Changing Dimensions (SCD) and schema evolution.
- Use Apache Spark extensively for large‑scale data transformations across batch and streaming workloads.
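The Slowly Changing Dimension (SCD) modeling named in the responsibilities above can be sketched as follows. In practice this is a `MERGE` in Spark, Delta Lake, or dbt; the plain-Python version below is a hypothetical illustration of the Type 2 pattern (expire the old row, append a new current one), with all field names assumed:

```python
# Minimal SCD Type 2 sketch in plain Python (hypothetical schema).

def scd2_apply(dimension, updates, load_date):
    """Close out changed rows and append new current versions (SCD Type 2)."""
    current = {r["key"]: r for r in dimension if r["is_current"]}
    for key, new_attrs in updates.items():
        row = current.get(key)
        if row is not None and row["attrs"] == new_attrs:
            continue  # unchanged: keep the existing current row
        if row is not None:
            row["is_current"] = False   # expire the previous version
            row["end_date"] = load_date
        dimension.append({
            "key": key, "attrs": new_attrs,
            "start_date": load_date, "end_date": None, "is_current": True,
        })
    return dimension

dim = [{"key": "cust1", "attrs": {"tier": "silver"},
        "start_date": "2024-01-01", "end_date": None, "is_current": True}]
dim = scd2_apply(dim, {"cust1": {"tier": "gold"}}, "2024-06-01")
# dim now holds two versions of cust1: silver (expired) and gold (current)
```

The same logic maps onto a Delta Lake `MERGE INTO` with a `WHEN MATCHED` update plus insert, or onto dbt snapshots.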
Work model
We believe hybrid work is the way forward as we strive to provide flexibility wherever possible. Based on this role’s business requirements, this is a hybrid position requiring 4 days a week in a client or Cognizant office in Atlanta, GA. Regardless of your working arrangement, we are here to support a healthy work-life balance through our various wellbeing programs.
The working arrangements for this role are accurate as of the date of posting. This may change based on the project you’re engaged in, as well as business and client requirements. Rest assured, we will always be clear about role expectations.
What you need to have to be considered
- Hands‑on experience developing and operating data systems on AWS, Azure, and GCP.
- Proven ability to design cloud‑native, scalable, and cost‑efficient data solutions.
- Experience building data lakes, data lakehouses, and data hubs with strong understanding of ingestion patterns, governance, modeling, observability, and platform best practices.
- Expertise in data ingestion and collection using Kafka and AWS Glue, with experience in Apache Iceberg and Parquet.
- Strong experience designing and developing real‑time streaming pipelines using Kafka, Flink, or similar streaming frameworks.
- Deep expertise in data transformation and modeling using SQL‑based frameworks and orchestration tools including dbt, AWS Glue, and Airflow, with knowledge of SCD and schema evolution.
- Extensive experience using Apache Spark for large‑scale batch and streaming data transformations.
These will help you stand out
- Experience with event‑driven architectures and low‑latency data processing.
- Strong understanding of schema evolution, SCD modeling, and modern data modeling concepts.
- Experience with Apache Iceberg, Parquet, and modern ingestion/storage patterns.
- Strong knowledge of observability, governance, and platform best practices.
- Ability to partner effectively with cloud, architecture, and engineering teams.
Salary and Other Compensation:
Applications will be accepted until March 17, 2025.
The annual salary for this position is between $81,000 - $135,000, depending on experience and other qualifications of the successful candidate.
This position is also eligible for Cognizant’s discretionary annual incentive program, based on performance and subject to the terms of Cognizant’s applicable plans.
Benefits: Cognizant offers the following benefits for this position, subject to applicable eligibility requirements:
- Medical/Dental/Vision/Life Insurance
- Paid holidays plus Paid Time Off
- 401(k) plan and contributions
- Long‑term/Short‑term Disability
- Paid Parental Leave
- Employee Stock Purchase Plan
Disclaimer: The salary, other compensation, and benefits information is accurate as of the date of this posting. Cognizant reserves the right to modify this information at any time, subject to applicable law.
Company/Role Overview:
CliftonLarsonAllen (CLA) Search has been retained by Midwestern Higher Education Compact to identify a Data Manager to serve their team. The Midwestern Higher Education Compact (MHEC) brings together leaders from 12 Midwestern states to strengthen postsecondary education, advance student success, and promote regional economic vitality.
MHEC programs and initiatives save member states and students millions of dollars annually through time- and cost-savings opportunities. MHEC research supports workforce readiness and improves the quality, accessibility, and affordability of postsecondary education. MHEC convenings bring together leaders and subject experts to share knowledge, generate ideas, and develop collaborative solutions.
What You’ll Do:
- Administer and maintain Microsoft Fabric, OneLake, and Azure environments.
- Design and deliver sophisticated data solutions that are innovative and sustainable.
- Ensure data infrastructure is secure, reliable, and scalable.
- Manage and improve how data is brought into the organization from multiple sources.
- Maintain accurate, well-structured, consistent, and complete data that ensures high quality and usability for internal staff.
- Develop and oversee standards on how data is collected, stored, and protected across departments.
- Manage MHEC’s customer relationship management (CRM) system, ensuring data integrity, integration with other platforms, and alignment with organizational needs.
- Partner with teams across the organization to monitor processes and make recommendations.
- Partner with research staff to understand data access patterns and develop storage strategies that accelerate research and analytics.
- Develop and maintain Power BI dashboards and reports to deliver clear insights to senior leaders and decision-makers.
- Ensure staff have access to timely, clear, and meaningful data visualizations.
- Train staff to use reports and dashboards effectively.
- Support departments in using data to guide decision-making.
- Document data pipelines, integrations, and system processes.
- Recommend tools and practices that help MHEC grow its data capacity.
- Monitor developments in Microsoft’s data platforms and assess future needs.
What You’ll Need:
- Bachelor's degree or equivalent experience preferred.
- 5+ years’ experience, preferably with Microsoft data platforms including Power BI, Azure, and/or Fabric.
- Experience designing and maintaining data systems and dashboards.
- Experience in higher education or nonprofit sectors preferred.
- Strong technical understanding of Microsoft Fabric, OneLake, and Azure.
- Demonstrated proficiency in Python, R, SAS, SQL, or other statistical/data-management software.
- Experience with data visualization platforms (Tableau, Power BI, or similar).
- Experience with Microsoft Dynamics and Power Automate is a plus but not required.
- Ability to plan, optimize, build, and maintain data pipelines and dashboards.
The pay range for this role is $150,000 - $200,000/yr USD.
WHO WE ARE:
Headquartered in Southern California, Skechers—the Comfort Technology Company®—has spent over 30 years helping men, women, and kids everywhere look and feel good. Comfort innovation is at the core of everything we do, driving the development of stylish, high-quality products at a great value. From our diverse footwear collections to our expanding range of apparel and accessories, Skechers is a complete lifestyle brand.
ABOUT THE ROLE:
The Skechers Digital Team is seeking a Digital Data Architect reporting to the Director, Digital Architecture, Consumer Domain. This role is responsible for designing and governing Skechers’ Consumer Data 360 ecosystem, enabling identity resolution, high-quality data foundations, personalization, loyalty intelligence, and machine learning capabilities across digital and retail channels.
The ideal candidate will be a strong technical leader with hands-on, full-stack knowledge of the enterprise technologies in Skechers’ consumer domain and the ability to work in a fast-paced agile environment. You should understand consumer programs from an architecture/industry perspective and have strong hands-on experience designing solutions on the Salesforce Core Platform (including configuration, integration, and data model best practices).
You will work cross-functionally with Digital Engineering, Data Engineering, Data Science, Loyalty, and Marketing teams to architect scalable, secure, and high-performance data platforms that support advanced personalization and recommender systems.
WHAT YOU’LL DO:
- Responsible for the full technical life cycle of consumer platform capabilities, which includes:
  - Capability roadmap and technical architecture in alignment with the consumer experience
  - Technical planning, design, and execution
  - Operations, analytics/reporting, and adoption
- Define and evolve Skechers’ Consumer Data 360 architecture, including identity resolution (deterministic and probabilistic matching) and unified customer profiles.
- Architect scalable data models and pipelines across CDP, CRM, e-commerce, marketing automation, data lake, and warehouse platforms.
- Establish enterprise data quality frameworks including validation, deduplication, anomaly detection, and observability.
- Optimize SQL workloads and large-scale distributed queries through performance tuning, partitioning, indexing, and workload management strategies.
- Design and oversee ML pipelines supporting personalization, churn modeling, and recommender systems.
- Partner with Data Science teams to productionize models using distributed platforms such as Databricks (Spark, Delta Lake, MLflow preferred).
- Ensure secure data governance, access control (RBAC/ABAC), and compliance with GDPR, CCPA, and related privacy regulations.
- Provide architectural oversight ensuring performance, scalability, resilience, and maintainability.
- Collaborate with stakeholders to translate business objectives (LTV growth, personalization lift, engagement) into scalable data solutions.
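The identity resolution called out above (deterministic and probabilistic matching into unified customer profiles) can be sketched at its simplest as a deterministic exact-key pass. This is a hypothetical illustration, not Skechers’ actual architecture; all record fields are assumed. Probabilistic matching would layer fuzzy similarity scores on top of a pass like this:

```python
# Illustrative deterministic identity-resolution sketch (hypothetical records).

def normalize_email(email):
    """Lowercase and strip a '+tag' alias so variants key to one identity."""
    local, _, domain = email.strip().lower().partition("@")
    local = local.split("+", 1)[0]
    return f"{local}@{domain}"

def resolve(records):
    """Group source records into unified profiles by normalized email."""
    profiles = {}
    for rec in records:
        key = normalize_email(rec["email"])
        profiles.setdefault(key, {"sources": []})["sources"].append(rec["source"])
    return profiles

records = [
    {"email": "Jane.Doe@example.com", "source": "ecommerce"},
    {"email": "jane.doe+promo@example.com", "source": "loyalty"},
    {"email": "j.smith@example.com", "source": "crm"},
]
profiles = resolve(records)
# Jane's two records collapse into one profile keyed jane.doe@example.com
```

A production CDP would resolve on several deterministic keys (email, phone, loyalty ID) and fall back to probabilistic scoring when no exact key matches.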
REQUIREMENTS:
- Computer Science, Data Engineering, or related degree or equivalent experience.
- 12+ years of experience architecting enterprise data platforms in cloud environments.
- 9+ years of data engineering experience with a focus on consumer data.
- 6+ years of experience working with Salesforce platforms, including data models and enterprise integrations.
- Strong experience with Data 360 and identity resolution architectures.
- Proven expertise in SQL performance tuning and large-scale data modeling.
- Hands-on experience implementing ML pipelines and recommender systems in production environments.
- Experience with cloud technologies (AWS, GCP, or Azure).
- Experience with integration patterns (API, ETL, event streaming).
- Experience providing technical leadership and guidance across multiple projects and development teams.
- Experience translating business requirements into detailed technical specifications and working with development teams through implementation, including issue resolution and stakeholder communication.
- Strong project management skills including scope assessment, estimation, and clear technical communication with both business users and technical teams.
- Must hold at least one of the following Salesforce certifications: Platform App Builder, Platform Developer I, or JavaScript Developer I.
- Experience with Databricks or similar distributed data/ML platforms preferred.