Syniti Data Migration Jobs in USA
About the Role
Our Decision Intelligence (DI) team is seeking a Senior / Lead Data Architect to drive enterprise data strategy and accelerate AI‑enabled transformation across McKesson. DI plays a critical role in enabling data‑driven change and delivering measurable business value through high‑quality data, advanced analytics, and intelligent automation.
This role will define and evolve the enterprise‑wide data and semantic architecture required to support AI‑driven insights, agentic automation, and next‑generation data products. The ideal candidate is a strategic thought partner, a hands‑on architect, and a leader capable of translating business outcomes into scalable technical solutions.
Responsibilities
Data Architecture Leadership
- Architect canonical data domains across customer, product, pricing, supply chain, contracting, and financial performance.
- Design semantic layers, business ontologies, subject‑area models, and metric definition frameworks to power enterprise AI agents and decisioning systems.
- Define architectural principles for data interoperability, lineage, access control, security, and multi‑cloud integration.
- Align data platform and architecture decisions with the USPD AI Roadmap and enterprise AI strategy.
Establish standards and patterns for:
- RAG pipelines (see the brief sketch after this list)
- Vector search
- Metadata-driven orchestration
- Multi-modal ingestion (text, events, real-time signals)
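For illustration only, here is a minimal sketch of the retrieve-then-generate pattern behind a RAG pipeline and vector search, using toy in-memory embeddings. The `embed` function, document corpus, and similarity ranking are hypothetical stand-ins, not the employer's actual stack; a real pipeline would use an embedding model, a governed vector store, and an LLM call.

```python
import numpy as np

# Toy corpus standing in for governed enterprise documents (hypothetical).
DOCUMENTS = [
    "Contract pricing terms for specialty pharmaceuticals.",
    "Supply chain lead times by distribution center.",
    "Customer onboarding and credit policy overview.",
]

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Hash-seeded toy embedding (stable within one process); a real pipeline
    would call an embedding model here."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    vec = rng.normal(size=dim)
    return vec / np.linalg.norm(vec)

# Index step: embed every document once and keep the vectors alongside the text.
INDEX = [(doc, embed(doc)) for doc in DOCUMENTS]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Vector search: rank documents by cosine similarity to the query embedding."""
    q = embed(query)
    scored = sorted(INDEX, key=lambda pair: float(q @ pair[1]), reverse=True)
    return [doc for doc, _ in scored[:k]]

def answer(query: str) -> str:
    """Generation step stub: a real pipeline would pass the retrieved context to an LLM."""
    context = "\n".join(retrieve(query))
    return f"QUESTION: {query}\nCONTEXT USED:\n{context}"

if __name__ == "__main__":
    print(answer("What are the pricing terms for specialty products?"))
```

The enterprise standards named above sit around this skeleton: lineage and access control on the indexed documents, metadata-driven orchestration of the indexing jobs, and multi-modal ingestion feeding the corpus.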
Provide architectural oversight and strategic guidance across enterprise data products including:
- Finance, Pricing, and Supply Chain Data Products
- FIA
- ContractIQ
- Specialty Leakage Agents
- Design a robust, scalable, and interoperable data environment that supports AI-ready, governed, high-quality enterprise data.
- Influence programs and project teams on best practices related to data quality, architecture, modeling, observability, and governance.
- Leverage data architecture frameworks to translate complex relational entities into business cases, use cases, and AI-enablement requirements.
- Partner with product, engineering, and analytics leaders to accelerate data product creation and improve enterprise decision intelligence maturity.
Advanced Data System Design
- Architect complex distributed data systems that ensure scalability, performance, reliability, and real-time integration across business-critical operations.
- Design and govern enterprise-wide data models, data flows, reference architectures, and integration patterns.
Produce high-quality data design deliverables including:
- Data models
- Entity relationship diagrams (ERDs)
- Data flow diagrams
- System interface schemas
- Comprehensive data dictionaries and metadata documentation
- Ensure optimal functioning of AI/ML pipelines, including data quality controls, observability patterns, and architecture for low-latency analytics.
- Guide engineering teams on reusable patterns for ingestion, transformation, curation, semantic enrichment, and operationalization.
Minimum Qualifications
- 7+ years of experience in data engineering, data architecture, or enterprise data platform development.
- Bachelor’s or Master’s degree in Computer Science, Engineering, Information Systems, or a related field.
Required Skills
- 7+ years designing enterprise data architecture across large, complex organizations.
- Demonstrated experience with enterprise data modeling, semantic layers, and canonical domains.
- Large-scale integration across heterogeneous systems.
- Databricks, Snowflake, MDM platforms, SAP, and Salesforce/Conga.
- Designing intuitive architectural patterns to simplify complex data landscapes.
- Strong understanding of data quality frameworks, governance, lineage, metadata, and regulatory compliance.
Leadership Skills
- Ownership-driven leader with a track record of guiding engineering teams through delivery.
- Acts as a change champion, elevating architecture maturity and influencing cross-functional adoption of best practices.
Strategic Thinking
- Strong analytical capability and the ability to develop long-term data strategies aligned to enterprise objectives and future-state AI readiness.
Problem Solving
- Creative, innovative problem solver capable of architecting solutions for highly complex data and AI challenges.
You’ve racked up certificates, aced LeetCode challenges, and you know your way around system design like the back of your hand.
On paper, you’re everything a tech company wants.
However, tech stacks and requirements change every day, and tech clients want in-depth tech stack knowledge; school projects alone don’t make the cut.
Since 2010, we’ve helped thousands of candidates land full-time jobs at tech leaders like Google, Apple, PayPal, Visa, Western Union, Wells Fargo, Intel, JPMC, Wayfair, BOA, CITI, and hundreds more, with job offers of $95k to $154k.
SynergisticIT’s JOPP focuses on closing the gap between your tech skills and what employers want now.
Open Roles We’re Hiring For Our Clients:
- Entry-Level Software Programmers (Java/Python)
- Java Full Stack Developers
- Data Analysts & BI Engineers
- Data Scientists & ML Engineers
All visa types and U.S. citizens are encouraged to apply.
Please check the links below:
- Why do Tech Companies not Hire recent Computer Science Graduates | SynergisticIT
- Technical Skills or Experience? Which one is important to get a Job? | SynergisticIT
- Backend vs. Full Stack Development: Job Prospects | SynergisticIT
- What Recruiters Look for in Junior Developers | SynergisticIT
- Software engineering or Data Science as a career?
- How OPT Students Can Land Tech Jobs – SynergisticIT
We focus on Java/Full Stack/DevOps and Data Science/Data Engineer/Data Analyst/BI Analyst/Machine Learning/AI candidates.
Ideal Candidates:
- Recent grads in CS, Engineering, Math, or Statistics with limited or no job experience
- Jobseekers laid off due to downsizing who want to move into an in-demand tech stack
- Professionals seeking a career switch to tech
- Candidates with career gaps or lacking real-world experience
- Individuals looking to boost their skill portfolio for better job prospects
- Students who recently finished their Bachelor’s or Master’s programs
- Those struggling to land interviews despite having experience
- Candidates on F1/OPT needing a job for STEM extension or H-1B filing
Currently, we are looking for entry-level software programmers, Java Full Stack developers, Python/Java developers, Data Analysts/Data Engineers/Data Scientists, and Machine Learning engineers for full-time positions with clients.
Top tech companies are flooded with smart grads.
What gets you in the door now is real-world application, confidence in delivery, and the soft skills to own a room—or a Zoom.
That’s what we upskill on. Please check the links below:
- Job Placement Program (JOPP): Java Job Placement Program
- Data Science / Data Jobs Program
- Event videos (OCW, JavaOne, Gartner)
- USA Today feature
Contact: The Market’s Changed—Have You?
Typical client responsibilities include developing and deploying statistical, predictive, and machine learning models and algorithms; data mining and advanced predictive analytics (including big data and healthcare innovation use cases); optimizing classifiers; building machine data analytics systems; and deploying models on large static and streaming data sets.
This role requires a strong background in data engineering, hands-on experience building cloud data solutions, and a talent for communicating complex designs through clear diagrams and documentation.
Core Responsibilities
Cloud Data Architecture Design & Strategy: Design and implement secure, scalable cloud-based data pipelines, data warehouses, and data lakes.
Drive the selection and integration of cloud data services (e.g., storage, databases, analytics tools).
Develop comprehensive cloud data strategies in alignment with business goals.
Diagramming & Documentation: Produce clear and informative visual diagrams (e.g., data flow diagrams, entity-relationship diagrams, system architecture diagrams) to guide implementation and knowledge sharing.
Maintain detailed documentation of data architecture, design decisions, and processes.
Hands-on Implementation & Optimization: Actively contribute to the hands-on implementation of cloud data solutions.
Proactively identify and implement performance optimization strategies for cloud data systems.
Troubleshoot and resolve issues related to data pipelines, data quality, and data accessibility.
Must Have: Bachelor of Engineering in Computer Science (engineering degrees in other branches such as Electrical, Civil, Mechanical, or IT will not be considered). Minimum of 5 years of hands-on data engineering experience using distributed computing approaches (Spark, MapReduce, Databricks). Proven track record of successfully designing and implementing cloud-based data solutions in Azure. Deep understanding of data modeling concepts and techniques.
Strong proficiency with database systems (relational and non-relational).
Exceptional diagramming skills with tools like Visio, Lucidchart, or other data visualization software.
Preferred Qualifications
Advanced knowledge of cloud-specific data services (e.g., Databricks, Azure Data Lake).
Expertise in big data technologies (e.g., Hadoop, Spark).
Strong understanding of data security and governance principles.
Experience in scripting languages (Python, SQL).
Additional Skills
Communication: Exemplary written and verbal communication skills to collaborate effectively with all teams and stakeholders.
Problem-solving: Outstanding analytical and problem-solving skills for complex data challenges.
Teamwork & Leadership: Ability to work effectively in cross-functional teams and demonstrate potential for technical leadership.
8116 - Midtown Office - 2220 W. Broad Street, Richmond, Virginia, 23220
Job Description
What you will do – Essential Responsibilities
- Given long-term strategic goals, can lay out a delivery path across many versions.
- Participates in and supports initiatives outside of main area of responsibility.
- Exercises a high degree of influence over data product direction and has ownership of large components.
- Thinks both strategically and tactically, keeping in mind both technical goals and company goals.
- Provides technical leadership for projects including 3–4 senior level individuals.
- The data engineer will be considered a blend of data and analytics “guru.” This role will promote the available data and analytics capabilities and expertise to business unit leaders and educate them on leveraging these capabilities to achieve their business goals.
- Work with data governance team members and information stewards and participate in vetting and promoting content created in the business and by data scientists to the curated data catalog for governed reuse.
- May be required to present at conferences to demonstrate company’s technical prowess.
Purpose of the role
Senior Principal Engineers partner with Engineers and Solution Architects to develop solutions and implement standards that ensure an unrivaled data experience. You are an expert in your craft and seen as a platform and implementation owner. You are an active contributor in the industry and have a passion for continuous learning.
Senior Principal Engineers practice hands-on development, have oversight of the technical tasks of others, and are the owners of the standards and best practices. Our Senior Principal Engineers act as technical mentors to others and are experts in supporting multiple areas of the business.
Qualifications and Requirements
Basic Qualifications
- Bachelor’s Degree in Computer Science, Decision Science, Engineering, Statistics, or a related field, or equivalent alternative education, skills, and/or practical experience, plus 8+ years of work experience in data management disciplines such as data integration, modeling, optimization, and data quality, and/or other areas directly relevant to data engineering responsibilities and tasks; multiple certifications preferred; or
- Master’s Degree in Computer Science, Decision Science, Engineering, Statistics, or a related field, or equivalent alternative education, skills, and/or practical experience, plus 6+ years of work experience in data management disciplines such as data integration, modeling, optimization, and data quality, and/or other areas directly relevant to data engineering responsibilities and tasks; multiple certifications preferred.
Preferred Qualifications
- Expert experience working with large, heterogeneous datasets in building and optimizing data pipelines, pipeline architectures, and integrated datasets using traditional data integration technologies, including ETL/ELT, data replication/CDC, and message-oriented data movement.
- Strong/expert experience with multiple advanced analytics tools and languages such as R, Python, Java, C++, Scala, and others.
- Strong/expert experience with popular database programming languages, including SQL, PL/SQL, and others, on both relational and non-relational databases.
- Strong experience with cloud data platforms such as Databricks, Snowflake
- Expert experience with data discovery, analytics, and data quality controls
- Expert experience in data modeling and ontologies
- Strong experience with microservices for serving data
- Strong experience in cloud platforms such as Azure, AWS, GCP
Work Location and Arrangement: This role will be based out of the CarMax Midtown office, Richmond VA or CarMax Technology Hub, Plano TX and have a Hybrid work arrangement.
- Associates based in Richmond work onsite 5 days per week.
- Associates based in Plano work onsite 2 days per week.
Work Authorization: Applicants must be currently authorized to work in the United States on a full-time basis. Sponsorship will not be considered for this specific role.
About CarMax
CarMax disrupted the auto industry by delivering the honest, transparent and high-integrity experience customers want and deserve. This innovative thinking around the way cars are bought and sold has helped us become the nation’s largest retailer of used cars, with over 250 locations nationwide.
Our amazing team of more than 25,000 associates work together to deliver iconic customer experiences. Along the way, we help every associate grow their career and achieve their best, at work and in their community. We are recognized for our commitment to training and diversity and are one of the FORTUNE 100 Best Companies to Work For®.
Our Commitment to Diversity and Inclusion:
CarMax is committed to bringing together people from different backgrounds and perspectives, providing employees with a safe, welcoming, and inclusive work environment.
CarMax is an equal opportunity employer, and all qualified candidates will receive consideration for employment without regard to age, race, color, religion, sex, sexual orientation, gender identity, genetic information, national origin, protected veteran status, disability status, or any other characteristic protected by law.
Job Title: Senior Manager, Data Architecture (Ref: 195759)
Location: Charlotte, North Carolina – In-Office (5 Days Per Week)
Salary: Up to $175,000 + Bonus
Contact:
We’re looking for an experienced and forward-thinking Senior Manager, Data Architecture to define and lead the enterprise data architecture strategy within a large-scale, data-driven organization. This is a high-impact leadership role where you’ll shape the long-term data roadmap, modernize architecture standards, and guide the evolution of a cloud-based data platform.
In this role, you’ll lead a team of data architects and modelers while partnering closely with Data Engineering, Analytics, BI, Platform, and business stakeholders. You’ll ensure scalable, secure, and high-performing data solutions that enable advanced analytics, operational reporting, and strategic decision-making across the enterprise.
What You’ll Do
- Define and maintain the enterprise data architecture vision aligned to business and technology strategy
- Lead, mentor, and grow a team of data architects and modelers, establishing best practices and standards
- Design and govern scalable data platforms leveraging Azure, Snowflake, and Databricks
- Establish enterprise standards for data modeling (Dimensional, 3NF, Data Vault), integration, and storage (see the sketch after this list)
- Define architecture patterns for ingestion, transformation, and cross-domain data integration
- Drive architectural consistency across analytics, BI, and operational data products
- Partner with Data Governance teams to enforce data quality, lineage, metadata, and compliance standards
- Ensure solutions meet security, privacy, and regulatory requirements
- Collaborate with Engineering and Platform teams on cloud architecture and long-term technical roadmap
- Communicate complex architectural designs clearly to both technical and executive stakeholders
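As a purely illustrative sketch of one of the modeling styles named above (a dimensional star schema), the snippet below creates a toy fact table with two conformed dimensions in an in-memory SQLite database. The table and column names are hypothetical, not this employer's actual model.

```python
import sqlite3

# Minimal star schema: one fact table with foreign keys to two dimensions.
DDL = """
CREATE TABLE dim_date (
    date_key      INTEGER PRIMARY KEY,   -- surrogate key, e.g. 20240131
    calendar_date TEXT NOT NULL,
    fiscal_month  TEXT NOT NULL
);
CREATE TABLE dim_customer (
    customer_key  INTEGER PRIMARY KEY,   -- surrogate key
    customer_id   TEXT NOT NULL,         -- natural/business key
    segment       TEXT
);
CREATE TABLE fact_sales (
    date_key      INTEGER NOT NULL REFERENCES dim_date(date_key),
    customer_key  INTEGER NOT NULL REFERENCES dim_customer(customer_key),
    sales_amount  REAL NOT NULL,
    units         INTEGER NOT NULL
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(DDL)
print("star schema tables:",
      [r[0] for r in conn.execute("SELECT name FROM sqlite_master WHERE type='table'")])
```

A 3NF or Data Vault standard would organize the same business keys differently (normalized entities, or hubs, links, and satellites), and codifying when to use each is exactly the kind of trade-off the architecture standards above would capture.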
What You’ll Bring
- 7+ years of experience in data architecture or advanced data engineering roles
- 5+ years in a dedicated Data Architect or equivalent leadership capacity
- Deep experience designing enterprise-scale data platforms in cloud environments
- Strong expertise in Microsoft Azure data services
- Expert-level knowledge of Snowflake and Databricks
- Extensive experience with enterprise data modeling methodologies (Dimensional, 3NF, Data Vault)
- Experience with data modeling tools such as Erwin (preferred)
- Proven experience leading or mentoring architects or senior technical professionals
- Strong understanding of governance, security, and regulatory considerations in enterprise data environments
- Exceptional communication skills with the ability to influence senior stakeholders
Qualifications
- Bachelor’s degree in Computer Science, Engineering, Information Systems, or related field (or equivalent experience)
- 10+ years of progressive experience in data architecture, engineering, or enterprise data platform design
Be part of an amazing story
Macy’s is more than just a store. We’re a story. One that’s captured the hearts and minds of America for more than 160 years. A story about innovations and traditions…about inspiring stores and irresistible products…about the excitement of the Macy’s 4th of July Fireworks, and the wonder of the Thanksgiving Day Parade. We’ve been part of memorable moments and milestones for countless customers and colleagues. Those stories are part of what makes this such a special place to work.
Job Overview
The Data Steward plays a direct role in making Macy’s data usable, trusted, and decision-ready. This role owns the quality and clarity of data definitions in the enterprise catalog, ensuring teams can quickly find, understand, and confidently use data to drive business outcomes. Partnering closely with analytics, product, and engineering teams, the Data Steward helps turn complex data into a reliable asset that powers everyday decisions across the company.
What You Will Do
- Maintain and enhance the Enterprise Data Catalog, including domains, assets, attributes, KPIs, definitions, and relationships.
- Collaborate with business and technical stakeholders to define and enforce metadata standards, naming conventions, and certification workflows.
- Validate technical metadata and lineage ingested from multiple sources.
- Monitor catalog usage and provide training and support to end users.
- Partner with data owners and stewards to ensure proper data ownership and stewardship assignments.
- Develop and maintain SOPs, training materials, and documentation.
- Perform data profiling and quality checks to ensure metadata accuracy and completeness.
- Define data quality checks with business stakeholders and validate implementation results.
- Support the Data Governance Architect and other governance team members as needed.
- Serve as subject matter expert for assigned data domains.
Skills You Will Need
- Data Stewardship: Applies governance principles to maintain accurate and complete metadata across enterprise systems.
- Collibra Expertise: Utilizes Collibra tools to manage data catalog assets and workflows effectively.
- Metadata Management: Ensures consistency and compliance with established standards for metadata and lineage.
- Data Quality Analysis: Conducts profiling and validation to maintain trusted data assets.
- Communication: Builds strong relationships with stakeholders and conveys technical concepts clearly.
- Regulatory Knowledge: Understands data privacy regulations such as GDPR and CCPA.
Who You Are
- 2 to 3 years of experience in data stewardship, governance, and metadata management.
- Skilled in Collibra with certification preferred.
- Possesses a curious mindset to build a foundational understanding of the retail business sector.
- Possesses high levels of ownership, innovation, and simplification with a strong bias for action.
- Knowledgeable about data privacy regulations and data classification practices.
- Regularly required to sit, talk, hear; use hands/fingers to touch, handle, and feel. Occasionally required to move about the workplace and reach with hands and arms. Requires close vision.
- Able to work a flexible schedule based on department and company needs.
What We Can Offer You
Join a team where work is as rewarding as it is fun! We offer a dynamic, inclusive environment with competitive pay and benefits. Enjoy comprehensive health and wellness coverage and a 401(k) match to invest in your future. Prioritize your well-being with paid time off and eight paid holidays. Grow your career with continuous learning and leadership development. Plus, build community by joining one of our Colleague Resource Groups and make a difference through our volunteer opportunities.
Some additional benefits we offer include:
- Merchandise discounts
- Performance-based incentives
- Annual merit review
- Employee Assistance Program with mental health counseling and legal/financial advice
- Tuition reimbursement
Access the full menu of benefits offerings here.
Wakefern Food Corp. is the largest retailer-owned cooperative in the United States and supports its co-operative members' retail operations, trading under the ShopRite®, Price Rite®, The Fresh Grocer®, Dearborn Markets®, and Gourmet Garage® banners.
Employing an innovative approach to wholesale business services, Wakefern focuses on helping the independent retailer compete in a big business world. Providing the tools entrepreneurs need to stay a step ahead of the competition, Wakefern’s co-operative members benefit from the company’s extensive portfolio of services, including innovative technology, private label development, and best-in-class procurement practices.
Summary
The ideal candidate will have a strong background in designing, developing, and implementing complex projects, with a focus on cloud-based data warehousing and reporting solutions and on driving efficiency within the organization. The position plays a pivotal part in defining the data cloud architecture, which requires close collaboration with application developers, data engineers, data analysts, data scientists, and BI developers to ensure seamless data integration and automation across various platforms. The Cloud Data Warehouse Architect is responsible for evaluating and selecting the most effective cloud technologies, for data governance and compliance, and for aligning data warehouse processes with security best practices and industry regulations. The role demands a passion for cutting-edge cloud solutions, performance optimization, and a proactive approach to troubleshooting complex data challenges in a fast-paced, highly collaborative environment. This role will enable the organization to build scalable, cost-efficient systems that support advanced analytics, business intelligence, and machine learning use cases.
Essential Functions
- Participate in the development life cycle (requirements definition, project approval, design, development, and implementation) and maintenance of the systems.
- Define architecture standards and best practices for data warehousing and cloud infrastructure.
- Develop and manage backup strategies, disaster recovery plans, and failover mechanisms to ensure business continuity.
- Provide input for project plans and timelines to align with business objectives.
- Monitor project progress, identify risks, and implement mitigation strategies.
- Work with cross-functional teams and ensure effective communication and collaboration.
- Provide regular updates to the management team.
- Follow the standards and procedures according to Architecture Review Board best practices, revising standards and procedures as requirements change and technological advancements are incorporated into the technology structure.
- Communicates and promotes the code of ethics and business conduct.
- Ensures completion of required company compliance training programs.
- Is trained – either through formal education or through experience – in software / hardware technologies and development methodologies.
- Stays current through personal development and professional and industry organizations.
Additional Functions
- Design scalable, secure, and efficient data warehouse solutions on cloud platforms such as Azure, Google Cloud, AWS.
- Implement robust security measures to ensure data privacy and comply with regulatory standards.
- Leverage cloud-native automation tools to streamline data management and reduce manual processes.
- Design, build, and maintain automated data pipelines and ETL/ELT processes, ensuring scalability and reliability in data operations.
- Design and implement data integration solutions to automate data flow between systems and databases.
- Designs and develops cloud automation solutions using various technologies, such as scripting languages, databases, APIs, and cloud services.
- Monitors and troubleshoots the cloud data warehouse solutions, resolving any issues or errors.
- Provides training and support to the end users of the cloud solutions.
- Maintain detailed architecture documentation and best practices for the organization’s data cloud infrastructure.
- Stay up-to-date with cloud technologies and data architecture trends to recommend and implement new tools and solutions.
- Understands cloud FinOps including chargeback and alert monitoring
Qualifications
- 5+ years of experience in cloud data warehouse design, cloud computing, and data architecture.
- A bachelor's degree or higher in computer science, information systems, or a related field.
- Deep understanding of cloud-based data warehousing solutions (e.g., Azure Fabric, Google BigQuery, AWS etc.)
- Knowledge of data security, encryption, and compliance in cloud environments.
- Understanding of DevOps practices and cloud infrastructure automation (CI/CD, Terraform)
- Strong knowledge and skills in data automation technologies, such as Python, SQL, ETL/ELT tools, Kafka, APIs, cloud data pipelines, etc.
- Experience with data modeling tools.
- Familiarity with BI visualization tools such as Looker, Tableau, MicroStrategy, Power BI, or similar.
- Strong knowledge and skills in data management, data quality, and data governance.
- Strong communication, collaboration, and problem-solving skills.
- Ability to work on multiple projects and prioritize tasks effectively.
- Ability to work independently and in a team environment.
- Ability to learn new technologies and tools quickly.
- The ability to handle stressful situations.
- Highly developed business acuity and acumen.
- Strong critical thinking and decision-making skills.
Working Conditions & Physical Demands
- This position requires in-person office presence at least 4x a week.
"Candidates must be authorized to work in the United States without the need for current or future visa sponsorship."
Summary of Position (Job Purpose)
This is an exciting opportunity to join the Data & Analytics Delivery Organization at Family Dollar. As the company undergoes major transformation, insights derived from data will play a critical role.
We are seeking an experienced Data Engineering Manager with deep technical expertise and the ability to be hands‑on. This leader will help transform how Family Dollar leverages internal & external data and ensure structured & unstructured data can be securely and efficiently utilized across the organization. You will collaborate closely with business users, data product owners, platform teams, enterprise architecture, data management, and business intelligence team to build high‑performing, scalable, and maintainable data capabilities using current and emerging cloud technologies.
This role is responsible for the design and execution of data engineering initiatives in partnership with the Director of Data Analytics & Reporting. The candidate must be highly collaborative, organized, and effective at communication and problem‑solving. Leadership experience managing individual contributors, contractors, and vendors is essential
Principal Duties and Responsibilities
- Build, lead, and manage the Data Engineering team, driving rapid delivery of data solutions that enhance analytics and insights capabilities.
- Manage 3–5 individual contributors as well as contract and vendor resources.
- Collaborate closely with stakeholders across product, architecture, development, business intelligence, and executive teams.
- Make recommendations for enterprise‑wide data, data onboarding, and self‑service analytics roadmap and architecture.
- Design data pipelines and data models optimized for BigQuery performance, cost control, and reliability (see the sketch after this list).
- Provide technical direction and solution guidance to ensure projects enable effective data availability and meet business requirements.
- Serve as an escalation point for issues or roadblocks impacting delivery timelines.
- Stay current on emerging trends and technologies in Data Warehousing and Analytics—particularly within retail.
- Deliver solutions in a fast‑paced, dynamic, and agile environment.
- Initiate proof‑of‑concepts (POCs) and prototypes to validate recommendations and test new approaches.
- Act as the subject‑matter expert on data acquisition, ingestion, and information delivery.
- Lead the creation of standards for data quality, lineage, governance, observability, and CI/CD processes across the data engineering organization
- Collaborate with data product owners to define, prioritize, and execute the data engineering roadmap aligned with business objectives
- Coach and mentor engineers, fostering a culture of technical excellence and continuous improvement
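To make the BigQuery performance-and-cost bullet above concrete, here is a minimal, hypothetical sketch of one common lever: creating a table partitioned by date and clustered by store, via the google-cloud-bigquery client. The dataset, table, and column names are invented, and the script assumes the client library is installed and a default GCP project and credentials are configured.

```python
from google.cloud import bigquery  # pip install google-cloud-bigquery

# Hypothetical dataset/table names; partitioning plus clustering keeps scans
# (and therefore cost) bounded for the most common date- and store-filtered queries.
DDL = """
CREATE TABLE IF NOT EXISTS `analytics.sales_daily`
(
  sale_date DATE    NOT NULL,
  store_id  STRING  NOT NULL,
  sku       STRING  NOT NULL,
  net_sales NUMERIC NOT NULL
)
PARTITION BY sale_date
CLUSTER BY store_id, sku
OPTIONS (description = 'Illustrative partitioned and clustered sales table');
"""

client = bigquery.Client()   # uses the default project and credentials
client.query(DDL).result()   # run the DDL and wait for completion
print("table created (or already present)")
```

Downstream pipelines and dashboards would then filter on sale_date so that partition pruning actually applies.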
Minimum Requirements/Qualifications
- Bachelor’s degree or higher.
- 5+ years of experience working with large‑scale, enterprise data sets.
- 2+ years managing full‑time employees, contract partners, or vendor resources.
- Strong curiosity and a passion for identifying new ways to leverage data to create business value.
- Proven experience delivering end‑to‑end data solutions, with emphasis on enabling self‑service analytics across diverse user groups.
- Experience working with end users to gather requirements and translate them into technical solutions from concept through implementation.
- Self‑starter capable of independently delivering outcomes with minimal oversight.
- Hands‑on experience working with structured and unstructured data and modern data technologies—including GCP, BigQuery, Dataflow, Python, etc.
- Experience delivering data as a product is a plus.
- Retail, supply chain, or e‑commerce experience is a plus but not required.
Family Dollar is an equal opportunity employer committed to recruiting, hiring, training, and promoting qualified people of all backgrounds, and makes all employment decisions without regard to any protected status. We are committed to complying with the Americans with Disabilities Act (ADA) and providing reasonable accommodations to qualified individuals with disabilities.
The pay range for this role is $150,000 - $200,000/yr USD.
WHO WE ARE:
Headquartered in Southern California, Skechers—the Comfort Technology Company®—has spent over 30 years helping men, women, and kids everywhere look and feel good. Comfort innovation is at the core of everything we do, driving the development of stylish, high-quality products at a great value. From our diverse footwear collections to our expanding range of apparel and accessories, Skechers is a complete lifestyle brand.
ABOUT THE ROLE:
Skechers Digital Team is seeking a Digital Data Architect reporting to the Director, Digital Architecture, Consumer Domain. This role is responsible for designing and governing Skechers’ Consumer Data 360 ecosystem, enabling identity resolution, high-quality data foundations, personalization, loyalty intelligence, and machine learning capabilities across digital and retail channels.
The ideal candidate will be a strong technical leader, have hands-on full-stack technical knowledge of enterprise technologies related to Skechers’ consumer domain, and have the ability to work in a fast-paced agile environment. You should have knowledge of consumer programs from an architecture/industry perspective, and you should have strong hands-on experience designing solutions on the Salesforce Core Platform (including configuration, integration, and data model best practices).
You will work cross-functionally with Digital Engineering, Data Engineering, Data Science, Loyalty, and Marketing teams to architect scalable, secure, and high-performance data platforms that support advanced personalization and recommender systems.
WHAT YOU’LL DO:
- Responsible for the full technical life cycle of consumer platform capabilities which includes:
- Capability roadmap and technical architecture in alignment to consumer experience
- Technical planning, design, and execution
- Operations, analytics/reporting, and adoption
- Define and evolve Skechers’ Consumer Data 360 architecture, including identity resolution (deterministic and probabilistic matching) and unified customer profiles.
- Architect scalable data models and pipelines across CDP, CRM, e-commerce, marketing automation, data lake, and warehouse platforms.
- Establish enterprise data quality frameworks including validation, deduplication, anomaly detection, and observability (see the sketch after this list).
- Optimize SQL workloads and large-scale distributed queries through performance tuning, partitioning, indexing, and workload management strategies.
- Design and oversee ML pipelines supporting personalization, churn modeling, and recommender systems.
- Partner with Data Science teams to productionize models using distributed platforms such as Databricks (Spark, Delta Lake, MLflow preferred).
- Ensure secure data governance, access control (RBAC/ABAC), and compliance with GDPR, CCPA, and related privacy regulations.
- Provide architectural oversight ensuring performance, scalability, resilience, and maintainability.
- Collaborate with stakeholders to translate business objectives (LTV growth, personalization lift, engagement) into scalable data solutions.
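The following is a minimal sketch of the validation, deduplication, and anomaly checks referenced above, using pandas on a toy consumer-profile frame; the column names, records, and thresholds are hypothetical and a real framework would run such checks continuously inside the pipeline.

```python
import pandas as pd

# Toy consumer profile records (hypothetical columns).
df = pd.DataFrame({
    "email":      ["a@x.com", "a@x.com", None, "b@y.com", "c@z.com", "d@w.com"],
    "loyalty_id": ["L1", "L1", "L2", "L3", "L4", "L5"],
    "ltv":        [120.0, 130.0, 95.0, 110.0, 80.0, 9000.0],
})

report = {
    # Validation: completeness of a required attribute.
    "email_null_rate": float(df["email"].isna().mean()),
    # Deduplication: share of rows that repeat an identity key pair.
    "duplicate_rate": float(df.duplicated(subset=["email", "loyalty_id"]).mean()),
    # Anomaly detection: crude two-sigma flag on lifetime value (illustrative only).
    "ltv_outliers": int(((df["ltv"] - df["ltv"].mean()).abs()
                         > 2 * df["ltv"].std()).sum()),
}
print(report)
```

In production these checks would feed observability dashboards and alerting, with thresholds agreed with data owners and governance partners.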
REQUIREMENTS:
- Computer Science, Data Engineering, or related degree or equivalent experience.
- 12+ years of experience architecting enterprise data platforms in cloud environments.
- 9+ years of experience in data engineering with a focus on consumer data.
- 6+ years of experience working with Salesforce platforms, including data models and enterprise integrations.
- Strong experience with Data 360 and identity resolution architectures.
- Proven expertise in SQL performance tuning and large-scale data modeling.
- Hands-on experience implementing ML pipelines and recommender systems in production environments.
- Experience with cloud technologies (AWS, GCP, or Azure).
- Experience with integration patterns (API, ETL, event streaming).
- Experience providing technical leadership and guidance across multiple projects and development teams.
- Experience translating business requirements into detailed technical specifications and working with development teams through implementation, including issue resolution and stakeholder communication.
- Strong project management skills including scope assessment, estimation, and clear technical communication with both business users and technical teams.
- Must hold at least one of the following Salesforce Certifications (Platform App Builder, Platform Developer 1, JavaScript Developer 1).
- Experience with Databricks or similar distributed data/ML platforms preferred.
Senior Data Modeler
Hybrid 3-4 days onsite
Location: Phoenix, Arizona
Salary: $130,000 - $150,000 base
A large, operationally complex organization is undergoing a major modernization of its data platform and is building a new, cloud-native analytics foundation from the ground up. This is a greenfield opportunity for a senior-level data modeler to establish best practices, influence architecture, and help shape how data is organized and used across the business.
This role sits at the center of a multi-year transformation focused on modern analytics, scalable data products, and strong collaboration between data and business teams.
What You’ll Be Working On
- Designing and implementing enterprise data models across conceptual, logical, and physical layers
- Establishing Medallion architecture patterns and reusable modeling assets (see the sketch below)
- Building dimensional and semantic models that support analytics and reporting
- Partnering closely with domain experts and functional leaders to translate business needs into data structures
- Collaborating with data engineers to align models with ELT pipelines and analytics frameworks
- Helping define modeling standards and upskilling senior engineers in modern data modeling practices
- Contributing hands-on to data engineering work where needed (SQL, transformations, optimization)
- Proactively identifying analytics opportunities and recommending data structures to support them
This role is roughly 40% data modeling, 30% hands-on engineering, and 30% cross-functional collaboration.
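For orientation only, here is a tiny pandas sketch of the Medallion layering idea (bronze raw, silver cleaned and conformed, gold analytics-ready). The records, column names, and rules are hypothetical, and the real platform would implement the same layering in a cloud data warehouse with ELT/dbt rather than in-memory frames.

```python
import pandas as pd

# Bronze: raw landings kept as-is (duplicates, nulls, mixed casing and all).
bronze = pd.DataFrame({
    "order_id": ["A1", "A1", "A2", "A3"],
    "region":   ["west", "West", None, "east"],
    "amount":   [100.0, 100.0, 250.0, 75.0],
})

# Silver: de-duplicated, standardized, minimally validated.
silver = (
    bronze.drop_duplicates(subset=["order_id"])
          .assign(region=lambda d: d["region"].str.lower().fillna("unknown"))
)

# Gold: business-facing aggregate ready for reporting or a semantic layer.
gold = (
    silver.groupby("region", as_index=False)["amount"]
          .sum()
          .rename(columns={"amount": "total_sales"})
)

print(gold)
```

The dimensional and semantic models described above would typically live at the gold layer, with the modeling standards expressed as reusable transformation patterns.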
Must-Have Experience
- Strong, hands-on experience with data modeling (dimensional, canonical, semantic)
- Deep understanding of Medallion architecture
- Advanced SQL and experience working with a modern cloud data warehouse
- Experience with dbt for transformations and modeling
- Hands-on experience in cloud-native data environments (AWS preferred)
- Ability to work directly with business stakeholders and explain technical concepts clearly
- Experience collaborating closely with data engineers on execution
Nice to Have
- Python experience
- Familiarity with Informatica or reverse-engineering legacy data models
- Exposure to streaming or near-real-time data pipelines
- Experience with visualization tools (tool choice is flexible)
Who Will Thrive in This Role
- A senior individual contributor who enjoys building from scratch
- Someone who can act as a modeling expert and mentor in an organization formalizing this practice
- Comfortable working in ambiguity and taking initiative
- Strong communicator who enjoys partnering with both technical and non-technical teams
- Equally comfortable discussing business concepts and physical data models
Why This Role Is Unique
- Greenfield data modeling initiative with real influence
- Opportunity to define standards that will be used across the organization
- Work on large-scale, real-world operational and analytical data
- High visibility within a growing data organization
- Flexible work setup for individual contributors
If you’re excited about shaping a modern data foundation and want to be the person who defines how data is modeled, understood, and used, this is a rare opportunity to make a lasting impact.
Job Title – Lead Data Engineer
Please note this role is not able to offer visa transfer or sponsorship now or in the future
About the role
As a Lead Data Engineer, you will make an impact by designing, building, and operating scalable, cloud‑native data platforms supporting batch and streaming use cases, with strong focus on governance, performance, and reliability. You will be a valued member of the Data Engineering team and work collaboratively with cross‑functional engineering, cloud, and architecture stakeholders.
In this role, you will:
- Design, build, and operate scalable cloud‑native data platforms supporting batch and streaming workloads with strong governance, performance, and reliability.
- Develop and operate data systems on AWS, Azure, and GCP, designing cloud‑native, scalable, and cost‑efficient data solutions.
- Build modern data architectures including data lakes, data lakehouses, and data hubs, with strong understanding of ingestion patterns, data governance, data modeling, observability, and platform best practices.
- Develop data ingestion and collection pipelines using Kafka and AWS Glue; work with modern storage formats such as Apache Iceberg and Parquet (see the streaming sketch after this list).
- Design and develop real‑time streaming pipelines using Kafka, Flink, or similar streaming frameworks, with understanding of event‑driven architectures and low‑latency data processing.
- Perform data transformation and modeling using SQL‑based frameworks and orchestration tools such as dbt, AWS Glue, and Airflow, including Slowly Changing Dimensions (SCD) and schema evolution.
- Use Apache Spark extensively for large‑scale data transformations across batch and streaming workloads.
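As a hedged illustration of the ingestion bullets above (Kafka into lake-format storage), the sketch below uses Spark Structured Streaming to read a topic and append Parquet files. The broker address, topic, schema, and paths are placeholders; a production job would add governance, Iceberg table management, and schema-evolution handling.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("kafka-ingest-sketch").getOrCreate()

# Placeholder event schema; real pipelines would manage this via a schema registry.
event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_ts", TimestampType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "events")                      # placeholder topic
    .load()
)

parsed = (
    raw.select(from_json(col("value").cast("string"), event_schema).alias("e"))
       .select("e.*")
)

query = (
    parsed.writeStream.format("parquet")
    .option("path", "/lake/bronze/events")              # placeholder path
    .option("checkpointLocation", "/lake/_chk/events")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```

Writing to Apache Iceberg instead is largely a sink and catalog configuration change, which is also where schema evolution and SCD handling would typically be layered in.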
Work model
We believe hybrid work is the way forward as we strive to provide flexibility wherever possible. Based on this role’s business requirements, this is a hybrid position requiring 4 days a week in a client or Cognizant office in Atlanta, GA. Regardless of your working arrangement, we are here to support a healthy work-life balance through our various wellbeing programs.
The working arrangements for this role are accurate as of the date of posting. This may change based on the project you’re engaged in, as well as business and client requirements. Rest assured, we will always be clear about role expectations.
What you need to have to be considered
- Hands‑on experience developing and operating data systems on AWS, Azure, and GCP.
- Proven ability to design cloud‑native, scalable, and cost‑efficient data solutions.
- Experience building data lakes, data lakehouses, and data hubs with strong understanding of ingestion patterns, governance, modeling, observability, and platform best practices.
- Expertise in data ingestion and collection using Kafka and AWS Glue, with experience in Apache Iceberg and Parquet.
- Strong experience designing and developing real‑time streaming pipelines using Kafka, Flink, or similar streaming frameworks.
- Deep expertise in data transformation and modeling using SQL‑based frameworks and orchestration tools including dbt, AWS Glue, and Airflow, with knowledge of SCD and schema evolution.
- Extensive experience using Apache Spark for large‑scale batch and streaming data transformations.
These will help you stand out
- Experience with event‑driven architectures and low‑latency data processing.
- Strong understanding of schema evolution, SCD modeling, and modern data modeling concepts.
- Experience with Apache Iceberg, Parquet, and modern ingestion/storage patterns.
- Strong knowledge of observability, governance, and platform best practices.
- Ability to partner effectively with cloud, architecture, and engineering teams.
Salary and Other Compensation:
Applications will be accepted until March 17, 2025.
The annual salary for this position is between $81,000 - $135,000, depending on experience and other qualifications of the successful candidate.
This position is also eligible for Cognizant’s discretionary annual incentive program, based on performance and subject to the terms of Cognizant’s applicable plans.
Benefits: Cognizant offers the following benefits for this position, subject to applicable eligibility requirements:
- Medical/Dental/Vision/Life Insurance
- Paid holidays plus Paid Time Off
- 401(k) plan and contributions
- Long‑term/Short‑term Disability
- Paid Parental Leave
- Employee Stock Purchase Plan
Disclaimer: The salary, other compensation, and benefits information is accurate as of the date of this posting. Cognizant reserves the right to modify this information at any time, subject to applicable law.
Company/Role Overview:
CliftonLarsonAllen (CLA) Search has been retained by Midwestern Higher Education Compact to identify a Data Manager to serve their team. The Midwestern Higher Education Compact (MHEC) brings together leaders from 12 Midwestern states to strengthen postsecondary education, advance student success, and promote regional economic vitality.
MHEC programs and initiatives save member states and students millions of dollars annually through time- and cost-savings opportunities. MHEC research supports workforce readiness and improves the quality, accessibility, and affordability of postsecondary education. MHEC convenings bring together leaders and subject experts to share knowledge, generate ideas, and develop collaborative solutions.
What You’ll Do:
- Administer and maintain Microsoft Fabric, OneLake, and Azure environments.
- Design and deliver sophisticated data solutions that are innovative and sustainable.
- Ensure data infrastructure is secure, reliable, and scalable.
- Manage and improve how data is brought into the organization from multiple sources.
- Maintain accurate, well-structured, consistent, and complete data that ensures high quality and usability for internal staff.
- Develop and oversee standards on how data is collected, stored, and protected across departments.
- Manage MHEC’s customer relationship management (CRM) system, ensuring data integrity, integration with other platforms, and alignment with organizational needs.
- Partner with teams across the organization to monitor processes and make recommendations.
- Partner with research staff to understand data access patterns and develop storage strategies that accelerate research and analytics
- Develop and maintain Power BI dashboards and reports to deliver clear insights to senior leaders and decision-makers.
- Ensure staff have access to timely, clear, and meaningful data visualizations.
- Train staff to use reports and dashboards effectively.
- Support departments in using data to guide decision-making.
- Document data pipelines, integrations, and system processes.
- Recommend tools and practices that help MHEC grow its data capacity.
- Monitor developments in Microsoft’s data platforms and assess future needs.
What You’ll Need:
- Bachelor's degree or equivalent experience preferred.
- 5+ years’ experience, preferably with Microsoft data platforms including Power BI, Azure, and/or Fabric.
- Experience designing and maintaining data systems and dashboards.
- Experience in higher education or nonprofit sectors preferred.
- Strong technical understanding of Microsoft Fabric, OneLake, and Azure.
- Demonstrated proficiency in Python, R, SAS, SQL, or other statistical/data management software
- Experience with data visualization platforms (Tableau, Power BI, or similar)
- Experience with Microsoft Dynamics and Power Automate is a plus but not required.
- Ability to plan, optimize, build, and maintain data pipelines and dashboards.
Job Description
About BioLife Plasma Services
BioLife Plasma Services, a subsidiary of Takeda Pharmaceutical Company Limited, is an industry leader in the collection of high-quality plasma, which is processed into life-saving plasma-based therapies. Some diseases can only be treated with medicines made with plasma. Since plasma can't be made synthetically, many people rely on plasma donors to live healthier, happier lives. BioLife operates 250+ state-of-the-art plasma donation centers across the United States. Our employees are dedicated to enhancing the quality of life for patients and ensuring that the donation process is safe, straightforward, and rewarding for donors who wish to make a positive impact.
When you work at BioLife, you'll feel good knowing that what we do helps improve the lives of patients with rare diseases. While you focus on our donors, we'll support you. We offer a purpose you can believe in, a team you can count on, opportunities for career growth, and a comprehensive benefits program, all in a fast-paced, friendly environment.
This position is currently classified as "hybrid" in accordance with Takeda's Hybrid and Remote Work policy.
BioLife Plasma Services is a subsidiary of Takeda Pharmaceutical Company Ltd.
OBJECTIVES/PURPOSE
The Sr. Manager of Marketing Science drives and executes strategic initiatives that improve our marketing data and analytics capabilities. This role will leverage advanced analytics techniques and data-driven insights to inform marketing strategies, optimize campaigns, and drive business growth. This role requires a deep understanding of paid, owned, and earned media measurement, strong analytics and insights skills, broad knowledge of marketing technologies, and the ability to communicate complex data insights to senior stakeholders. This role is critically important for the success of the Global Forecasting, Pricing, and Analytics (FPA) team and reports to the Head of Analytics within the team.
ACCOUNTABILITIES
Leadership
* Lead marketing science initiatives in the development and execution of advanced analytics to support marketing strategies and goals.
* Provide thought leadership on marketing measurement techniques, including the trade-offs between controlled experiments, natural experiments, and multivariate statistical models for different situations.
Marketing Science
* Partner with our media agency to ensure we are maximizing the output of our media mix model (MMM) partnership (see the illustrative sketch after this list).
* Deep understanding and experience with creating and managing marketing attribution solutions, i.e., multi-touch attribution (MTA). Ability to build/maintain in-house solutions and/or work with outside partners as necessary.
* Identify and maintain marketing analytics key performance indicators (KPIs) to track and measure performance.
* Partner with data scientists, IT, and consultants to develop advanced analytical models and dashboards related to marketing.
* Ability to perform statistical analyses and tests to quantify the business value of an opportunity.
* Familiarity with AI/ML applications in marketing.
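As a toy, hypothetical illustration of the measurement techniques mentioned in this list, the snippet below fits a simple adstocked linear response model to synthetic weekly spend and sales data with NumPy. Real media mix models add saturation curves, seasonality, control variables, and often Bayesian priors; the channel names, decay rates, and data here are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
weeks = 104

# Synthetic weekly media spend for two channels (hypothetical data).
tv_spend = rng.uniform(0, 100, weeks)
search_spend = rng.uniform(0, 50, weeks)

def adstock(spend: np.ndarray, decay: float) -> np.ndarray:
    """Geometric adstock: carry over a fraction of last week's effect."""
    out = np.zeros_like(spend)
    for t, s in enumerate(spend):
        out[t] = s + (decay * out[t - 1] if t > 0 else 0.0)
    return out

# Generate sales with known "true" effects, then try to recover them.
true_sales = 200 + 1.5 * adstock(tv_spend, 0.5) + 3.0 * adstock(search_spend, 0.2)
sales = true_sales + rng.normal(0, 10, weeks)

# Design matrix: intercept plus adstocked channels (decay rates assumed known
# here for simplicity); ordinary least squares fit.
X = np.column_stack([np.ones(weeks),
                     adstock(tv_spend, 0.5),
                     adstock(search_spend, 0.2)])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
base, tv_effect, search_effect = coef
print(f"baseline={base:.1f}  tv={tv_effect:.2f}/$  search={search_effect:.2f}/$")
```

Controlled experiments such as geo holdouts or incrementality tests would then be used to validate or calibrate coefficients like these, which is the trade-off the leadership bullets above refer to.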
Reporting and Data Management
* Ensure the accurate and timely delivery of marketing performance reports and insights.
* Able to translate data into contextualized insights that can be shared across the business
* Know digital media terminology and concepts (e.g., Demand Side Platforms (DSPs), effectiveness vs. efficiency, SEO/SEM, etc.)
* Leverage existing experience with Google Analytics and Google Tag Manager
* Partner with the Data, Digital, and Technology (DD&T) Team to ensure marketing data accuracy, integration, and integrity, and that good data governance practices are in place.
* Develop solutions (dashboards, data visualizations, reports) for real-time operations performance assessment and agile decision-making.
* Design and automate regular data extracts needed by marketing and other partners.
Collaboration and Adaptability
* Build strong relationships with cross-functional partners for efficient alignment, coordination, and information sharing across teams.
DIMENSIONS AND ASPECTS
Technical/Functional Expertise
* Extensive experience across many areas of marketing science: MMM, MTA, loyalty, website, surveys, and paid/owned/earned media.
* Experience with SQL, Python, and R for data analysis and model development.
* Strong analytical skills with a solid foundation in many of the following statistical and AI/ML methods: regression analysis (continuous, categorical, survival, time-series, and count models, etc.); classification (CART, SVM, Neural Networks, etc.), clustering (k-means/medoid, hierarchical, self-organizing maps, etc.), and other AI/ML techniques; experimental design; and forecasting/sensitivity analysis.
* Comfortable working daily in cloud-based data platforms.
* Expert level MS Excel skills, including advanced functions (e.g., Solver), data analysis, pivot tables, macros, and VBA (Visual Basic for Applications), and applicability of these features for developing and managing financial models for business case development and forecasting.
* Experience working with Power BI, Tableau, or other data visualization software.
* Strong foundation in statistical techniques for quantifying the impact of marketing activities.
Communication
* Excellent verbal and written communication. Proven data analysis background with the ability to transform analysis into insights, recommendations, and proposals for senior management.
* Ability to communicate complex concepts simply and succinctly.
Decision-making and Autonomy
* High self-reliance, self-efficacy, initiative, and learning agility.
* Strong at both structured and unstructured problem solving.
Interaction
* Manage and/or partner on projects with vendors and consultants.
EDUCATION, BEHAVIOURAL COMPETENCIES AND SKILLS:
Required
* Bachelor's and/or master's degree in any area of social science, business, marketing, advertising, or a closely related field.
* Experience with data analytics from end-to-end, i.e., including ideation, proposal creation, getting stakeholder buy-in, gathering requirements, designing analytics models/solutions, building prototypes, and working with IT/Data Science teams to deploy and scale solutions.
* 7+ years of experience in advanced analytics and statistical modeling in the areas of business performance analysis, forecasting, promotion and media effectiveness and optimization, and consumer behavior
* Excellent verbal and written communication and presentation skills. Able to communicate effectively to all levels of the organization, including senior leadership.
* Bring a growth mindset, curiosity, positivity, intuitive thinking, and a passion for excellence.
Preferred
* Media agency or retail industry analytics experience a plus.
* Experience with survival analysis (time-to-event, duration, event history analysis, etc.) a plus.
* Knowledge of CRM systems and marketing automation tools a plus.
ADDITIONAL INFORMATION (Add any information legally required for your country here)
* Domestic travel required (up to 10%).
BioLife Compensation and Benefits Summary
We understand compensation is an important factor as you consider the next step in your career. We are committed to equitable pay for all employees, and we strive to be more transparent with our pay practices.
For Location: Bannockburn, IL
U.S. Base Salary Range: $137,000.00 - $215,270.00
The estimated salary range reflects an anticipated range for this position. The actual base salary offered may depend on a variety of factors, including the qualifications of the individual applicant for the position, years of relevant experience, specific and unique skills, level of education attained, certifications or other professional licenses held, and the location in which the applicant lives and/or from which they will be performing the job. The actual base salary offered will be in accordance with state or local minimum wage requirements for the job location.
U.S. based employees may be eligible for short-term and/or long-term incentives. U.S. based employees may be eligible to participate in medical, dental, vision insurance, a 401(k) plan and company match, short-term and long-term disability coverage, basic life insurance, a tuition reimbursement program, paid volunteer time off, company holidays, and well-being benefits, among others. U.S. based employees are also eligible to receive, per calendar year, up to 80 hours of sick time, and new hires are eligible to accrue up to 120 hours of paid vacation.
EEO Statement
Takeda is proud of its commitment to creating a diverse workforce and providing equal employment opportunities to all employees and applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, gender expression, parental status, national origin, age, disability, citizenship status, genetic information or characteristics, marital status, status as a Vietnam era veteran, special disabled veteran, or other protected veteran in accordance with applicable federal, state and local laws, and any other characteristic protected by law.
Locations
Bannockburn, IL
Worker Type
Employee
Worker Sub-Type
Regular
Time Type
Full time
Job Exempt
Yes
About Pinterest:
Millions of people around the world come to our platform to find creative ideas, dream about new possibilities and plan for memories that will last a lifetime. At Pinterest, we're on a mission to bring everyone the inspiration to create a life they love, and that starts with the people behind the product.
Discover a career where you ignite innovation for millions, transform passion into growth opportunities, celebrate each other's unique experiences and embrace the flexibility to do your best work. Creating a career you love? It's Possible.
At Pinterest, AI isn't just a feature, it's a powerful partner that augments our creativity and amplifies our impact, and we're looking for candidates who are excited to be a part of that. To get a complete picture of your experience and abilities, we'll explore your foundational skills and how you collaborate with AI.
Through our interview process, what matters most is that you can always explain your approach, showing us not just what you know, but how you think. You can read more about our AI interview philosophy and how we use AI in our recruiting process here.
About tvScientific
tvScientific is the first and only CTV advertising platform purpose-built for performance marketers. We leverage massive data and cutting-edge science to automate and optimize TV advertising to drive business outcomes. Our solution combines media buying, optimization, measurement, and attribution in one, efficient platform. Our platform is built by industry leaders with a long history in programmatic advertising, digital media, and ad verification who have now purpose-built a CTV performance platform advertisers can trust to grow their business.
We are seeking a Staff Data Engineer to lead the design, implementation, and evolution of our identity services and data governance platform. This role is critical to ensuring trusted, privacy-safe, and well-governed data across the organization. You will work at the intersection of data engineering, identity resolution, privacy, and platform reliability. This is an individual contributor role, where you will work to define and implement a strategic vision for data engineering within the organization.
What you'll do:
- Identity Services:
- Design and maintain a scalable identity resolution platform
- Build pipelines and services to ingest, normalize, link, and version identity data across multiple sources
- Ensure deterministic and probabilistic matching logic that is transparent, auditable, and measurable (see the sketch after this list)
- Partner with product and analytics teams to expose identity data through reliable, well-documented APIs and datasets
- Build and operate batch and streaming pipelines using modern data stack tools
- Create clear documentation, standards, and runbooks for identity and governance systems
- Data Governance & Trust
- Own data governance foundations including data lineage, quality checks, schema enforcement, and access controls
- Implement privacy-by-design principles (PII handling, consent enforcement, retention policies)
- Collaborate with legal, privacy, and security teams to operationalize regulatory requirements (e.g., GDPR, CCPA)
- Establish monitoring and alerting for data quality, freshness, and integrity
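As a rough illustration of the deterministic-plus-probabilistic matching described above, here is a minimal Python sketch; the record fields, threshold, and helper names are assumptions for illustration, not tvScientific's actual matching logic.

```python
from dataclasses import dataclass
from difflib import SequenceMatcher
from typing import Optional

@dataclass
class IdentityRecord:
    source: str                       # originating system, e.g. "crm" or "web_events"
    email_hash: Optional[str] = None  # hashed PII, never a raw email
    device_id: Optional[str] = None
    name: Optional[str] = None

def deterministic_match(a: IdentityRecord, b: IdentityRecord) -> bool:
    """Exact join keys: a shared hashed email or device id links two records."""
    return bool((a.email_hash and a.email_hash == b.email_hash)
                or (a.device_id and a.device_id == b.device_id))

def probabilistic_score(a: IdentityRecord, b: IdentityRecord) -> float:
    """Fuzzy similarity on weaker signals; only name similarity here, for brevity."""
    if not (a.name and b.name):
        return 0.0
    return SequenceMatcher(None, a.name.lower(), b.name.lower()).ratio()

def link(a: IdentityRecord, b: IdentityRecord, threshold: float = 0.9) -> dict:
    """Return an auditable decision: which rule fired and with what score."""
    if deterministic_match(a, b):
        return {"matched": True, "rule": "deterministic", "score": 1.0}
    score = probabilistic_score(a, b)
    return {"matched": score >= threshold, "rule": "probabilistic", "score": round(score, 3)}
```

Returning the rule and score alongside the decision is what makes each link transparent, auditable, and measurable after the fact.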
What we're looking for:
- 5+ years of data engineering experience, with a proven track record of building data infrastructure using Spark with Scala
- Experience in delivering significant technical initiatives and building reliable, large scale services
- Experience in delivering APIs backed by relationship-heavy datasets
- Experience implementing data governance practices, including data quality, metadata management, and access controls
- Strong understanding of privacy-by-design principles and handling of sensitive or regulated data
- Familiarity with data lakes, cloud warehouses, and storage formats
- Strong proficiency in AWS services
- Successful design and implementation of scalable and efficient data infrastructure
- High attention to detail in implementation of automated data quality checks
- Effective collaboration with cross-functional teams
- Excellent written and verbal communication skills
- Bachelor's degree in Computer Science or a related field
In-Office Requirement Statement:
- We recognize that the ideal environment for work is situational and may differ across departments. What this looks like day-to-day can vary based on the needs of each organization or role.
Relocation Statement:
- This position is not eligible for relocation assistance. Visit our PinFlex page to learn more about our working model.
#LI-SM4
#LI-REMOTE
At Pinterest we believe the workplace should be equitable, inclusive, and inspiring for every employee. In an effort to provide greater transparency, we are sharing the base salary range for this position. The position is also eligible for equity. Final salary is based on a number of factors including location, travel, relevant prior experience, or particular skills and expertise.
Information regarding the culture at Pinterest and benefits available for this position can be found here.
US based applicants only: $155,584—$320,320 USD
Our Commitment to Inclusion:
Pinterest is an equal opportunity employer and makes employment decisions on the basis of merit. We want to have the best qualified people in every job. All qualified applicants will receive consideration for employment without regard to race, color, ancestry, national origin, religion or religious creed, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, age, marital status, status as a protected veteran, physical or mental disability, medical condition, genetic information or characteristics (or those of a family member) or any other consideration made unlawful by applicable federal, state or local laws. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. If you require a medical or religious accommodation during the job application process, please complete this form for support.
Location: 100% Remote
Duration: 12+ Months
Overview:
We are seeking an experienced Administrator to operate and support the enterprise implementation of Microsoft Purview Data Catalog across a complex, multi-platform data environment. The administrator will be responsible for the day-to-day configuration, monitoring, and maintenance of Purview capabilities, ensuring reliable metadata ingestion, catalog quality, lineage visibility, and compliance alignment across governed data domains.
This role focuses on platform operations and governance execution, working within established architecture and enterprise governance standards.
Key Responsibilities
Platform Administration & Operations:
- Administer and operate Microsoft Purview Data Map and Data Catalog environments.
- Monitor platform health, scan execution, metadata ingestion, and lineage availability.
- Troubleshoot and resolve catalog, scan, and connectivity issues.
- Perform routine maintenance, configuration updates, and service optimizations.
- Coordinate incident resolution with internal engineering teams and Microsoft support as required.
Data Source Management & Scanning:
- Register, configure, and maintain data sources across Azure, M365, on-prem, and approved third-party platforms.
- Configure and schedule metadata scans for supported sources.
- Manage authentication for scans using managed identities, service principals, and Key Vault secrets (see the sketch after this list).
- Monitor scan performance, failures, and coverage; take corrective action as needed.
- Optimize scan frequency and scope to balance cost, performance, and governance coverage.
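A minimal sketch of the scan-credential pattern referenced above, using Azure's Python SDK: DefaultAzureCredential picks up the host's managed identity (or a service principal from environment variables), and the scan secret is read from Key Vault rather than stored in configuration. The vault URL and secret name are placeholders, not real resources.

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Resolves a managed identity, service principal env vars, or a developer login.
credential = DefaultAzureCredential()

client = SecretClient(
    vault_url="https://example-governance-kv.vault.azure.net",  # placeholder vault
    credential=credential,
)

# Retrieve the credential a registered data source's scan is configured to use.
scan_secret = client.get_secret("sql-scan-service-principal-secret")  # placeholder name
print(f"Retrieved '{scan_secret.name}' (version {scan_secret.properties.version})")
```

Keeping the secret in Key Vault and resolving identity at runtime is what lets scan credentials be rotated without touching the scan configuration itself.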
Catalog Configuration & Metadata Management:
- Maintain and enforce enterprise metadata standards within the Purview Catalog.
- Manage business metadata, classifications, glossary terms, and custom attributes.
- Ensure metadata accuracy, completeness, and consistency across data assets.
- Support curation activities including asset certification and publishing.
- Resolve duplicate, incomplete, or stale catalog entries.
Lineage & Discovery Enablement:
- Enable and validate data lineage ingestion from supported data platforms.
- Monitor lineage completeness and visibility for critical data assets.
- Assist data consumers and stewards with lineage-based impact analysis.
- Escalate lineage gaps or tool limitations requiring architectural or engineering remediation.
Security, Access & Governance Controls:
- Configure and manage Purview role-based access control (RBAC) within collections.
- Provision and maintain access for administrators, data curators, and data stewards.
- Enforce domain-based access controls and separation of duties.
- Integrate Purview access with Microsoft Entra ID.
- Support sensitivity labels and classification alignment with Microsoft Information Protection.
Compliance & Risk Support:
- Support automated discovery of sensitive data (PII, PCI, PHI).
- Assist risk, audit, and compliance teams with catalog evidence and reporting.
- Validate scan coverage for regulated data domains.
- Support regulatory and audit initiatives (SOX, GLBA, NYDFS, GDPR, etc.).
User Support & Enablement:
- Provide operational support to data producers, consumers, and data stewards.
- Respond to access requests, catalog issues, and usage questions.
- Maintain operational documentation, runbooks, and standard operating procedures.
- Support onboarding of new data domains following established governance patterns.
- Assist with training and adoption initiatives led by governance or architecture teams.
Required Qualifications:
- 5+ years of experience supporting enterprise data platforms or governance tools, and 4+ years of hands-on MS Purview experience at enterprise scale.
- Hands-on experience administering Microsoft Purview Data Catalog.
- Strong understanding of metadata management, data classification, and lineage concepts.
- Working knowledge of Azure data services and enterprise data ecosystems.
- Experience managing access controls and identities using Microsoft Entra ID.
- Familiarity with regulated data environments and compliance requirements.
- Strong troubleshooting, operational support, and documentation skills.
Preferred Qualifications:
- Experience supporting Purview integrations with Synapse, Fabric, Databricks, Snowflake, or SQL Server.
- Exposure to financial services or other regulated industries.
- Experience with PowerShell, REST APIs, or basic automation for operational tasks.
- Prior experience supporting enterprise data governance or stewardship programs.
Sr. Data Engineer (Hybrid)
Chicago, IL
The American Medical Association (AMA) is the nation's largest professional Association of physicians and a non-profit organization. We are a unifying voice and powerful ally for America's physicians, the patients they care for, and the promise of a healthier nation. To be part of the AMA is to be part of our Mission to promote the art and science of medicine and the betterment of public health.
At AMA, our mission to improve the health of the nation starts with our people. We foster an inclusive, people-first culture where every employee is empowered to perform at their best. Together, we advance meaningful change in health care and the communities we serve.
We encourage and support professional development for our employees, and we are dedicated to social responsibility. We invite you to learn more about us and we look forward to getting to know you.
We have an opportunity at our corporate offices in Chicago for a Sr. Data Engineer (Hybrid) on our Information Technology team. This is a hybrid position reporting into our Chicago, IL office, requiring 3 days a week in the office.
As a Sr. Data Engineer, you will play a key role in implementing and maintaining AMA's enterprise data platform to support analytics, interoperability, and responsible AI adoption. This role partners closely with platform engineering, data governance, data science, IT security, and business stakeholders to deliver high-quality, reliable, and secure data products. It contributes to AMA's modern lakehouse architecture, optimizes data operations, and embeds governance and quality standards into engineering workflows. This role serves as a senior technical contributor within the team, providing mentorship to junior engineers and implementing engineering best practices within the data platform function, in alignment with architectural direction set by leadership.
RESPONSIBILITIES:
Data Engineering & AI Enablement
- Build and maintain scalable data pipelines and ETL/ELT workflows supporting analytics, operational reporting, and AI/ML use cases.
- Implement best-practice patterns for ingestion, transformation, modeling, and orchestration within a modern lakehouse environment (e.g., Databricks, Delta Lake, Azure Data Lake); see the sketch after this list.
- Develop high-performance data models and curated datasets with strong attention to quality, usability, and interoperability; create reusable engineering components and automation.
- Collaborate with the Architecture Team, the Data Platform Lead, and federated IT teams to optimize storage, compute, and architectural patterns for performance and cost efficiency.
- Build model-ready datasets and feature pipelines to support AI/ML use cases; serve as a technical coordination point supporting business units' AI-related infrastructure needs.
- Collaborate with data scientists and the AI Working Group to operationalize models responsibly and maintain ongoing monitoring signals.
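As a rough sketch of the ingestion-to-curation pattern above, the following PySpark snippet lands raw data into a bronze Delta table and publishes a cleaned silver table. Paths, columns, and table names are placeholders, not AMA datasets, and it assumes a Databricks or Delta-enabled Spark environment.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("curated-dataset-example").getOrCreate()

# Bronze: land source data as-is, stamped with load metadata.
raw = (spark.read.format("json").load("/lake/raw/membership/")          # placeholder path
       .withColumn("_ingested_at", F.current_timestamp()))
raw.write.format("delta").mode("append").save("/lake/bronze/membership")

# Silver: enforce types, drop duplicates, and keep only valid rows.
curated = (spark.read.format("delta").load("/lake/bronze/membership")
           .withColumn("member_id", F.col("member_id").cast("string"))  # placeholder column
           .dropDuplicates(["member_id"])
           .filter(F.col("member_id").isNotNull()))
curated.write.format("delta").mode("overwrite").save("/lake/silver/membership")
```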
Governance, Quality & Compliance
- Embed data governance, metadata standards, lineage tracking, and quality controls directly into engineering workflows, ensuring they are technically implemented and aligned.
- Work with the Data Governance Lead and business stakeholders to operationalize stewardship, classification, validation, retention, and access standards.
- Implement privacy-by-design and security-by-design principles, ensuring compliance with internal policies and regulatory obligations.
- Maintain documentation for pipelines, datasets, and transformations to support transparency and audit requirements.
Platform Reliability, Observability & Optimization
- Monitor and troubleshoot pipeline failures, performance bottlenecks, data anomalies, and platform-level issues.
- Implement observability tooling, alerts, logging, and dashboards to ensure end-to-end reliability (a freshness-check sketch follows this list).
- Support cost governance by optimizing compute resources, refining job schedules, and advising on efficient architecture.
- Collaborate with the Data Platform Lead on scaling, configuration management, CI/CD pipelines, and environment management.
- Collaborate with business units to understand data needs, translate them into engineering requirements, and deliver fit-for-purpose data solutions; share and apply best practices and emerging technologies within assigned initiatives.
- Work with IT Security and Legal/Compliance to ensure platform and datasets meet risk and regulatory standards.
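A minimal sketch of the kind of freshness signal mentioned above: compute the age of the newest record in a curated table and fail loudly when it exceeds an assumed SLA, so the orchestrator's existing alerting surfaces it. The path, column, and six-hour SLA are illustrative assumptions.

```python
from pyspark.sql import SparkSession, functions as F

SLA_HOURS = 6  # assumed freshness SLA for this dataset

spark = SparkSession.builder.appName("freshness-check").getOrCreate()

row = (spark.read.format("delta").load("/lake/silver/membership")     # placeholder path
       .agg(F.max("_ingested_at").alias("latest"))
       .withColumn("age_hours",
                   (F.unix_timestamp(F.current_timestamp())
                    - F.unix_timestamp(F.col("latest"))) / 3600)
       .collect()[0])

if row["latest"] is None or row["age_hours"] > SLA_HOURS:
    # Raising keeps the scheduled task red so monitoring and paging pick it up.
    raise RuntimeError(f"Dataset is stale: last ingest at {row['latest']}")
```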
Staff Management
- Lead, mentor, and provide management oversight for staff.
- Responsible for setting objectives, evaluating employee performance, and fostering a collaborative team environment.
- Responsible for developing staff knowledge and skills to support career development.
May include other responsibilities as assigned
REQUIREMENTS:
- Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field preferred; equivalent work experience with a high school diploma or equivalent education required.
- 5+ years of experience in data engineering within cloud environments
- Experience in people management preferred.
- Demonstrated hands-on experience with modern data platforms (Databricks preferred).
- Proficiency in Python, SQL, and data transformation frameworks.
- Experience designing and operationalizing ETL/ELT pipelines, orchestration workflows (Airflow, Databricks Workflows), and CI/CD processes; see the orchestration sketch after this list.
- Solid understanding of data modeling, structured/unstructured data patterns, and schema design.
- Experience implementing governance and quality controls: metadata, lineage, validation, stewardship workflows.
- Working knowledge of cloud architecture, IAM, networking, and security best practices.
- Demonstrated ability to collaborate across technical and business teams.
- Exposure to AI/ML engineering concepts, feature stores, model monitoring, or MLOps patterns.
- Experience with infrastructure-as-code (Terraform, CloudFormation) or DevOps tooling.
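For the orchestration item above, here is a minimal Airflow sketch of an ETL/ELT workflow; the DAG id, schedule, and task bodies are placeholders assumed for illustration (Airflow 2.x syntax).

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull source data")        # placeholder for a real extract step

def transform():
    print("clean and model data")    # placeholder for a real transform step

def load():
    print("publish curated tables")  # placeholder for a real load step

with DAG(
    dag_id="example_elt_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```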
The American Medical Association is located at 330 N. Wabash Avenue, Chicago, IL 60611 and is convenient to all public transportation in Chicago.
This role is an exempt position, and the salary range for this position is $115,523.42-$150,972.44. This is the lowest to highest salary we believe we would pay for this role at the time of this posting. An employee's pay within the salary range will be determined by a variety of factors including but not limited to business consideration and geographical location, as well as candidate qualifications, such as skills, education, and experience. Employees are also eligible to participate in an incentive plan. To learn more about the American Medical Association's benefits offerings, please click here.
We are an equal opportunity employer, committed to diversity in our workforce. All qualified applicants will receive consideration for employment. As an EOE/AA employer, the American Medical Association will not discriminate in its employment practices due to an applicant's race, color, religion, sex, age, national origin, sexual orientation, gender identity and veteran or disability status.
THE AMA IS COMMITTED TO IMPROVING THE HEALTH OF THE NATION
Remote working/work at home options are available for this role.
Duration: 6+ months
Location: 100% Remote
Job Overview
The Marketplace Data Product Engineer serves as the primary technical facilitator and adoption champion for the Marketplace platform. This role bridges engineering, product, and business domains - leading workshops, demos, onboarding sessions, and cross-domain engagements to accelerate Marketplace adoption. You will configure demo environments, support development, translate complex technical concepts for business audiences, gather product feedback, and partner closely with product and engineering teams to shape the Marketplace roadmap. This role will guide domains through the process of understanding, showcasing, and maturing their data products within the ecosystem.
Key Responsibilities
- Facilitate workshops, demos, onboarding sessions, and cross-domain engagements to drive Marketplace adoption.
- Serve as the primary technical presenter of the Marketplace for domain teams and stakeholders.
- Engage with domain owners to understand their data products, help refine their articulation, and showcase how they integrate into the Marketplace ecosystem.
- Configure and maintain demo environments for Marketplace capabilities, data products, and new features.
- Support light development, proof-of-concept configurations, and sample integrations to demonstrate platform capabilities.
- Translate technical Marketplace concepts into clear, business-friendly language for non-technical audiences.
- Collect structured feedback from domain teams, synthesize insights, and partner with product and engineering to influence the roadmap.
- Develop and refine training materials, demos, playbooks, and onboarding assets to support continuous adoption.
- Act as an advocate for domains, ensuring their data product needs and challenges are well represented in Marketplace planning.
- Support ongoing adoption initiatives, including community sessions, office hours, and cross-domain knowledge sharing.
Required Skills & Qualifications
- 4-7+ years of experience in data engineering, platform engineering, solution engineering, technical consulting, or similar roles.
- Strong understanding of data products, data modeling concepts, data APIs, enterprise integrations, and metadata-driven architectures.
- Ability to configure and demonstrate platform features, build light proofs-of-concept, and support technical onboarding.
- Excellent communication and presentation skills, with experience translating technical concepts for business partners.
- Experience facilitating workshops, leading demos, or driving customer/product adoption initiatives.
- Ability to engage domain teams, understand their data product needs, and help articulate value within a larger ecosystem.
- Strong collaboration and stakeholder management skills across engineering, product, and business teams.
- Comfortable working in fast-moving environments and driving clarity through ambiguity.
Preferred Qualifications
- Experience with data product and governance frameworks, data marketplaces, data mesh concepts, or platform adoption roles.
- Hands-on experience with cloud data platforms (Azure, AWS, or GCP), data pipelines, or integration tooling.
- Familiarity with REST/GraphQL APIs, event-driven patterns, and data ingestion workflows.
- Background in solution architecture, customer engineering, or sales engineering.
- Experience developing demo environments, sample apps, or repeatable platform enablement assets.
- Strong storytelling ability when explaining data product value, domain capabilities, and Marketplace patterns.
Description
The Data Engineer is responsible for designing, building, and maintaining scalable data pipelines to support the bank's analytics, reporting, and decision-making processes. The role works closely with analysts, reporting and integration teams, and business stakeholders to ensure high-quality, secure, and efficient data solutions that comply with financial regulations and industry standards.
Below is a list of essential functions of this position. Additional responsibilities may be assigned in the position.
KEY RESPONSIBILITIES
- Build and maintain data models, schemas, and databases (e.g., data warehouses, data lakes) to support business intelligence, machine learning, and reporting needs.
- Ensure data is optimized for performance, reliability, and scalability, minimizing latency and maximizing throughput.
- Build required infrastructure for optimal extraction, transformation and loading of data from various data sources using cloud and SQL technologies
- Implement data quality checks, monitoring, and validation processes to ensure accuracy, consistency, and compliance with regulatory requirements (a sketch follows this list).
- Partner with business analysts and the data integration, automation, and IT teams to understand data requirements and deliver solutions that align with business goals.
- Ensure data adherence to strict security protocols and regulatory standards including encryption, access controls, and audit trails.
- Champion data governance, quality standards, and performance optimization.
- Create and maintain comprehensive documentation for data schemas, processes and systems to ensure transparency and reproducibility.
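A minimal sketch of the row-level validation pattern referenced in the list above; the field names and rules are assumptions for illustration, not the bank's actual controls.

```python
from datetime import date

REQUIRED_FIELDS = ("account_id", "posted_date", "amount")  # assumed fields

def validate_row(row: dict) -> list:
    """Return a list of readable issues; an empty list means the row passes."""
    issues = []
    for field in REQUIRED_FIELDS:
        if row.get(field) in (None, ""):
            issues.append(f"missing {field}")
    if isinstance(row.get("posted_date"), date) and row["posted_date"] > date.today():
        issues.append("posted_date is in the future")
    if isinstance(row.get("amount"), (int, float)) and abs(row["amount"]) > 1_000_000:
        issues.append("amount outside expected range")
    return issues

def validate_batch(rows: list) -> dict:
    """Summarize failures so monitoring can alert on the overall failure rate."""
    failures = {i: validate_row(r) for i, r in enumerate(rows)}
    failures = {i: v for i, v in failures.items() if v}
    return {"total": len(rows), "failed": len(failures), "details": failures}
```

A scheduler can run validate_batch on each load and raise an alert when the failed count crosses an agreed threshold, which is the monitoring-and-validation loop described above.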
ATTITUDES
Builds positive relationships with internal and external clients by valuing others' feelings and rights in both words and actions, and embracing others' unique beliefs, backgrounds, and perspectives by demonstrating:
- Respect - treat every client and colleague with dignity and respect.
- Client Focus - Design scalable and reliable data pipelines that directly support the client's business goals and decision-making needs. Actively engage with stakeholders to understand evolving requirements and deliver solutions that provide timely, actionable insights
- Inclusion - Support a diverse work environment by building data systems that are accessible, equitable, and considerate of user needs, while actively seeking input from voices across all backgrounds and roles.
BEHAVIORS
Demonstrates strong business ethics and honest behaviors and the ability to positively influence and work with others to achieve excellent results by demonstrating:
- Leadership - Proactively drives data strategy, mentors peers, and sets high standards for quality, innovation, and collaboration across teams.
- Integrity - Establish and enforce program governance frameworks, including change control and release management.
- Collaboration - Works with stakeholders across all departments to drive data efforts. Serves as a key contributor between business stakeholders and technical teams.
- Volunteerism - Use your skill beyond the role by mentoring others, helping teammates, and supporting meaningful causes.
COMPETENCIES
Reflects skill, good judgement, positive conduct, and personal responsibility for assigned areas. Seeks to implement and leverage services and technologies that create efficiencies by demonstrating:
- Accountability - Takes ownership of work, ensuring data systems are reliable and accurate. Promptly addresses issues or errors with transparency and responsibility.
- Innovation - Embrace new ideas, new tools, and bold thinking; challenge the status quo.
- Professionalism - consistently demonstrates courteous behavior, integrity, and strong work ethic while representing the bank with a polished appearance and clear communication.
POSITION LEVEL(S) EXPECTATIONS
- Strong understanding of Data Models, databases, schemas, and security methodologies.
- Excellent leadership, strategic thinking, and stakeholder management skills.
SEEKS PROFESSIONAL DEVELOPMENT OPPORTUNITIES
Actively participate in expanding skill sets and career paths by attending training programs, workshops, certifications, and educational resources relevant to the role. Set stretch assignments and cross functional opportunities that foster growth and learning.
Requirements
QUALIFICATIONS, EDUCATION, & EXPERIENCE
To perform this position successfully, an individual must be able to perform each essential position requirement satisfactorily, and a skills inventory is listed below.
- Bachelor's degree in a technology-related program, or 3-5 years' experience in a data-related field.
- Strong understanding of data architecture and database design principles.
- Strong leadership and communication skills across technical and non-technical audiences.
- 3-5 years of experience in data roles.
- Proficiency in languages such as Python, Java, Scala, or SQL.
- Experience in financial services (banking, insurance, wealth management).
- Excellent problem-solving and communication skills, with a collaborative mindset.
- Demonstrated leadership and self-direction.
- A background screening will be conducted.
LANGUAGE SKILLS: Ability to read, comprehend, and interpret documents. Possesses professional communication and interpersonal skills to write and speak effectively both one-on-one and before groups of clients or employees of the organization. Ability to communicate to clients directly and effectively.
TECHNOLOGY SKILLS: Ability to utilize telephone systems and possess good digital literacy including email, internet and intranet use. Strong understanding of Salesforce platform capabilities and implementation methodologies.
MATHEMATICAL SKILLS: Ability to add, subtract, multiply, and divide in all units of measure.
REASONING ABILITY: Ability to apply common sense understanding to carry out instructions furnished in written, oral, or diagram form. Ability to solve challenging problems involving several variables in a standardized situation.
PHYSICAL DEMANDS AND WORK ENVIRONMENT: The physical demands and work environment described here are representative of those that must be met by an employee to successfully perform the essential functions of this position.
This position operates in a professional office environment with considerable time spent at a desk using office equipment such as computers, phones, and printers. Ability to travel on occasion to all market areas and attend seminars or training sessions offsite and employee meetings off-site.
Reasonable accommodation may be made to enable individuals with disabilities to perform the essential functions.
DISCLAIMER: This job description is not an exclusive list of responsibilities and duties. They may change at any time without notice.
BENEFITS
- Medical, Dental, Vision & Life Insurance
- 401K with company match
- Paid Time Off & Recognized Holidays
- Leave policies
- Voluntary Benefit Options (Life, Accident, Critical Illness, Hospital Indemnity & Pet)
- Employee Assistance Program
- Employee Health & Wellness Program
- Special Loan and Deposit Rates
- Gradifi Student Loan Paydown Plan
- Rewards & Recognition Programs and much more!
Eligibility requirements apply.
CNB Bank is an equal opportunity employer and all applicants are considered based on qualifications without regard to sex, race, color, ancestry, religious creed, national origin, sexual orientation, gender identity, physical disability, mental disability, age, marital status, disabled veteran or Vietnam era veteran status. CNB Financial Corporation is an Affirmative Action Employer and is committed to fostering, cultivating and preserving a culture of diversity and inclusion.
About Pinterest:
Millions of people around the world come to our platform to find creative ideas, dream about new possibilities and plan for memories that will last a lifetime. At Pinterest, we're on a mission to bring everyone the inspiration to create a life they love, and that starts with the people behind the product.
Discover a career where you ignite innovation for millions, transform passion into growth opportunities, celebrate each other's unique experiences and embrace the flexibility to do your best work. Creating a career you love? It's Possible.
At Pinterest, AI isn't just a feature, it's a powerful partner that augments our creativity and amplifies our impact, and we're looking for candidates who are excited to be a part of that. To get a complete picture of your experience and abilities, we'll explore your foundational skills and how you collaborate with AI.
Through our interview process, what matters most is that you can always explain your approach, showing us not just what you know, but how you think. You can read more about our AI interview philosophy and how we use AI in our recruiting process here.
About tvScientific
tvScientific is the first and only CTV advertising platform purpose-built for performance marketers. We leverage massive data and cutting-edge science to automate and optimize TV advertising to drive business outcomes. Our solution combines media buying, optimization, measurement, and attribution in one, efficient platform. Our platform is built by industry leaders with a long history in programmatic advertising, digital media, and ad verification who have now purpose-built a CTV performance platform advertisers can trust to grow their business.
As a Senior Data Engineer at tvScientific, you will be a key player in implementing the robust data infrastructure to power our data-heavy company. You will collaborate with our cross-functional teams to evolve our core data pipelines, design for efficiency as we scale, and store data in optimal engines and formats. This is an individual contributor role, where you will work to define and implement a strategic vision for data engineering within the organization.
What you'll do:
- Implement robust data infrastructure in AWS, using Spark with Scala
- Evolve our core data pipelines to efficiently scale for our massive growth
- Store data in optimal engines and formats
- Collaborate with our cross-functional teams to design data solutions that meet business needs
- Build fault-tolerant batch and streaming pipelines (see the sketch after this list)
- Leverage and optimize AWS resources while designing for scale
- Collaborate closely with our Data Science and Product teams
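As a rough illustration of the fault-tolerant streaming work described above, the following Structured Streaming sketch reads from Kafka and writes to a Delta table with a checkpoint location, which is what lets the job restart where it left off. It is written in PySpark for brevity even though the role itself calls for Spark with Scala, and the broker, topic, and paths are placeholders.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-stream-example").getOrCreate()

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
          .option("subscribe", "ad_events")                   # placeholder topic
          .load()
          .selectExpr("CAST(value AS STRING) AS raw", "timestamp"))

query = (events
         .withColumn("ingest_date", F.to_date("timestamp"))
         .writeStream
         .format("delta")
         .option("checkpointLocation", "/lake/_checkpoints/ad_events")  # restart/recovery state
         .partitionBy("ingest_date")
         .outputMode("append")
         .start("/lake/bronze/ad_events"))

query.awaitTermination()
```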
How we'll define success:
- Successful implementation of scalable and efficient data infrastructure
- Timely delivery and optimization of data assets and APIs
- High attention to detail in implementation of automated data quality checks
- Effective collaboration with cross-functional teams
What we're looking for:
- Production data engineering experience
- Proficiency in Spark and Scala, with proven experience building data infrastructure in Spark using Scala
- Familiarity with data lakes, cloud warehouses, and storage formats
- Strong proficiency in AWS services
- Expertise in SQL for data manipulation and extraction
- Excellent written and verbal communication skills
- Bachelor's degree in Computer Science or a related field
Nice-to-haves:
- Experience in adtech
- Experience implementing data governance practices, including data quality, metadata management, and access controls
- Strong understanding of privacy-by-design principles and handling of sensitive or regulated data
- Familiarity with data table formats like Apache Iceberg, Delta
In-Office Requirement Statement:
- We recognize that the ideal environment for work is situational and may differ across departments. What this looks like day-to-day can vary based on the needs of each organization or role.
Relocation Statement:
- This position is not eligible for relocation assistance. Visit our PinFlex page to learn more about our working model.
#LI-SM4
#LI-REMOTE
At Pinterest we believe the workplace should be equitable, inclusive, and inspiring for every employee. In an effort to provide greater transparency, we are sharing the base salary range for this position. The position is also eligible for equity. Final salary is based on a number of factors including location, travel, relevant prior experience, or particular skills and expertise.
Information regarding the culture at Pinterest and benefits available for this position can be found here.
US based applicants only: $123,696—$254,667 USD
Our Commitment to Inclusion:
Pinterest is an equal opportunity employer and makes employment decisions on the basis of merit. We want to have the best qualified people in every job. All qualified applicants will receive consideration for employment without regard to race, color, ancestry, national origin, religion or religious creed, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, age, marital status, status as a protected veteran, physical or mental disability, medical condition, genetic information or characteristics (or those of a family member) or any other consideration made unlawful by applicable federal, state or local laws. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. If you require a medical or religious accommodation during the job application process, please complete this form for support.
Salary range:
The UC academic salary scales set the minimum pay at appointment; see the salary scale table for this position. The current full-time salary range for this position is $70,977-$199,722. Placement on the scale is commensurate with college teaching experience.
Percent time:
15% to 100%
Anticipated start:
Positions usually start in July or August for Fall, January for Spring and June for Summer.
Review timeline:
Applications will be accepted and reviewed for unit needs through January 2027. Applications are typically considered in April and May for fall course needs, in September and October for spring course needs, and February and March for summer course needs. The pool will close January 2027; applicants wishing to remain in the pool after that time will need to submit a new application.
Application Window
Open date: June 9, 2025
Most recent review date: Tuesday, Jun 24, 2025 at 11:59pm (Pacific Time)
Applications received after this date will be reviewed by the search committee if the position has not yet been filled.
Final date: Tuesday, Jan 12, 2027 at 11:59pm (Pacific Time)
Applications will continue to be accepted until this date, but those received after the review date will only be considered if the position has not yet been filled.
Position description
Data Science Undergraduate Studies (DSUS) at the University of California, Berkeley invites applications for a pool of qualified temporary lecturers to teach DSUS courses should an opening arise. Screening of applicants is ongoing and will continue as needed. The number of positions varies from semester to semester (fall, spring and summer sessions), depending on the needs of the unit.
About DSUS
Data Science Undergraduate Studies (DSUS) offers a range of academic, co-curricular, and enrichment programs-including the Data Science major and minor-with a wide-reaching impact both across UC Berkeley and beyond.
Designed in collaboration with faculty from across Berkeley, Data Science invests students with deep technical knowledge, expertise in how to apply that knowledge in a field of their choosing, and an understanding of the social and human contexts and ethical implications of how data are collected, analyzed, and used. This combination positions graduates to help inform and develop solutions to a range of pressing challenges, from adapting industry to a new world of data to amplifying learning in education to helping communities recover from disaster.
DSUS is part of the College of Computing, Data Science, and Society (CDSS), which strives to develop, implement, and share high-quality, ethics-oriented, and accessible curricula, educating a diverse student body in data science, computing, and statistics. Core to the college is an understanding of how computing and data science affect equality, equity, and opportunity-and the capacity to respond to social challenges.
DSUS is committed to hiring and developing staff who want to work in a high performing culture that reflects the outstanding work of our faculty and students. DSUS seeks candidates who can support the success of all students through inclusive curriculum, classroom environment, and pedagogy.
Responsibilities
DSUS is seeking outstanding instructors to be appointed in the non-Senate Lecturer title series who can teach small and large courses in several areas. We are particularly interested in instructors who can combine computational and inferential thinking in a way that reflects the new field of Data Science Education.
Core courses include:
Fundamentals of Data Science
Principles and Techniques of Data Science
Human Contexts and Ethics of Data
Data and Justice
Data, Inference, and Decisions
Honors Thesis Seminar
Connector Courses: Instructors may be hired to teach Connector Courses that connect Foundations of Data Science with other disciplines, such as neuroscience, legal studies, public health, demography, English or others. Connector courses allow students to apply theoretical concepts from data science to a particular area of interest. Course design and syllabus will leverage the sequence of computational and statistical techniques that students learn in the Foundations course.
Teaching a Data Science course may include holding office hours, assigning grades, advising students, preparing course materials (e.g., syllabus), providing clear and prompt feedback on student work, and maintaining the course website.
Please note: The use of a lecturer pool does not guarantee that an open position exists. See the review date specified in AP Recruit to learn whether the unit is currently reviewing applications for a specific position. If there is no future review date specified, your application may not be considered at this time.
Department: DSUS
Division:
Qualifications
Basic qualifications (required at time of application)
Must have an advanced degree or be enrolled in an advanced degree program at the time of application.
Additional qualifications (required at time of start)
Advanced degree. Candidates must already be authorized to work in the United States.
Preferred qualifications
A Ph.D. or equivalent international degree in computer science, statistics, information, applied mathematics, engineering, or the social sciences is preferred.
Ability to support the success of all students through inclusive curriculum, classroom environment, and pedagogy.
Application Requirements
Document requirements
Curriculum Vitae - Your most recently updated C.V.
Cover Letter
Statement of Teaching - Please discuss prior teaching experience, teaching approach, and future teaching interests. This can include, for example, specific efforts, accomplishments, and future plans to support the success of all students through inclusive curriculum, classroom environment, and pedagogy.
Reference requirements
- 3-4 required (contact information only)
Apply link:
JPF04958
Help contact:
About UC Berkeley
UC Berkeley is committed to diversity, equity, inclusion, and belonging in our public mission of research, teaching, and service, consistent with UC Regents Policy 4400 and University of California Academic Personnel policy (APM 210 1-d). These values are embedded in our Principles of Community, which reflect our passion for critical inquiry, debate, discovery and innovation, and our deep commitment to contributing to a better world. Every member of the UC Berkeley community has a role in sustaining a safe, caring and humane environment in which these values can thrive.
The University of California, Berkeley is an Equal Opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, age, or protected veteran status.
For more information, please refer to the University of California's Affirmative Action and Nondiscrimination in Employment Policy and the University of California's Anti-Discrimination Policy.
In searches when letters of reference are required all letters will be treated as confidential per University of California policy and California state law. Please refer potential referees, including when letters are provided via a third party (i.e., dossier service or career center), to the UC Berkeley statement of confidentiality prior to submitting their letter.
As a University employee, you will be required to comply with all applicable University policies and/or collective bargaining agreements, as may be amended from time to time. Federal, state, or local government directives may impose additional requirements.
Unless stated otherwise, unambiguously, in the position description, this position does not include sponsorship of a new consular H-1B visa petition that would require payment of the $100,000 supplemental fee.
As a condition of employment, the finalist will be required to disclose if they are subject to any final administrative or judicial decisions within the last seven years determining that they committed any misconduct.
- "Misconduct" means any violation of the policies or laws governing conduct at the applicant's previous place of employment, including, but not limited to, violations of policies or laws prohibiting sexual harassment, sexual assault, or other forms of harassment or discrimination, as defined by the employer.
- UC Sexual Violence and Sexual Harassment Policy
- UC Anti-Discrimination Policy
- APM - 035: Affirmative Action and Nondiscrimination in Employment
Job location
Berkeley, CA