Sebpo Analyst Data Solution Jobs in Usa

12,274 positions found — Page 2

Junior Java Spring Boot Developer/Data Analyst
✦ New
Salary not disclosed
Lexington 16 hours ago
"Failing Tech Interviews? Or Not Getting Any Interviews? Turn Frustration Into an Offer." Getting interviews but not converting them into offers is one of the most frustrating stages of a tech job search.

It's also one of the most fixable—because interview performance is rarely about intelligence.

It's usually about preparation structure, repetition, communication clarity, and knowing what interviewers actually test.

Many candidates learn coding, but they don't learn how to present their skills under pressure.

SynergisticIT is designed for candidates who want to stop guessing and start improving with a clear framework.

Since 2010, SynergisticIT has helped thousands of candidates land full-time jobs at tech leaders and enterprise employers—companies such as Google, Apple, PayPal, Visa, Western Union, Wells Fargo, Walmart Labs, and other enterprise clients—with offers often ranging from $95,000 to $154,000 depending on role and skill depth.

The focus is: build job-ready ability + interview confidence + hiring alignment so you can close the deal when opportunities appear.

Why do people fail interviews after doing CS or online courses? Typically it's one (or several) of these gaps:

  • Weak fundamentals (you know syntax, but not the "why")
  • Poor project explanation (you built something, but can't defend design decisions)
  • Shallow system understanding (APIs, DB design, CI/CD, cloud basics are fuzzy)
  • No repetition under pressure (whiteboard/online assessments feel unfamiliar)
  • Lack of structured mock interview practice

SynergisticIT addresses these gaps by treating interviews as a skill you work on—like a sport.

You don't just watch videos; you practice real drills.

We emphasize real interview patterns: coding questions, debugging, project walkthroughs, behavioral responses, and the ability to speak clearly about your work.

What kinds of roles are being targeted? Instead of chasing every shiny trend, JOPP focuses on roles employers repeatedly hire for: Java full stack, software programming, Python/Java development, DevOps, data analyst, data engineer, data scientist, and ML/AI engineer.

In other words, the program builds candidates across Java / Full Stack / DevOps and Data Analytics / Data Engineering / Data Science / Machine Learning / AI—because companies hire teams, not single-skill candidates.

Ideal candidates for interview-focused help: recent grads with limited experience, laid-off professionals re-entering the market, career changers, candidates with gaps, experienced applicants who can't convert interviews, and F1/OPT candidates needing a stable path.

SynergisticIT also supports candidates with guidance around STEM extension and offers process support relating to H-1B/Green Card filing once employed (as applicable through employers and standard immigration processes).

If you want to explore, here are the key links:

  • Event videos (OCW, JavaOne, Gartner):
  • USA Today feature:
  • Contact Form (Get Started):

If you're already getting interviews, you're closer than you think.

Now it's time to train like you mean it—and turn interviews into offers.

Please read our blogs:

  • Why do Tech Companies not Hire recent Computer Science Graduates | SynergisticIT
  • What Recruiters Look for in Junior Developers | SynergisticIT
  • Software engineering or Data Science as a career?
  • How OPT Students Can Land Tech Jobs – SynergisticIT

Please note: Resume databases are shared with clients, and interested clients will reach out directly if they find a qualified candidate for their requisition.

Resume submissions may also be added to our JOPP team database.

If you are contacted and no longer wish to be, please unsubscribe; if you do not want to be contacted at all, please do not submit your resume.
Not Specified
Junior data scientist/Java Developer
🏢 SynergisticIT
Salary not disclosed
Madison 2 days ago
"Get Responses to Your Applications? Make Recruiters Notice You." If your applications disappear into a black hole, you're experiencing the modern hiring funnel.

Most resumes never reach a hiring manager.

They're filtered by ATS systems, keyword screening, and recruiters looking for job-ready signals—specific stacks, strong project depth, relevant certifications, and clear experience narratives.

That's why "I applied a lot" often leads to silence.

The fix is not more applications.

The fix is improving what your application communicates in the first 10 seconds.

Since 2010, SynergisticIT has helped candidates land full-time roles at organizations such as Google, Apple, PayPal, Visa, Western Union, Wells Fargo, Walmart Labs, and hundreds of other enterprise clients.

Many JOPP graduates achieve offers in the $90,000 to $154,000 range depending on their role focus and skill coverage.

Our purpose is to align your skills and profile with what employers are hiring for right now—so you get responses, interviews, and offers.

Why you may not be getting replies:

  • Your resume lacks stack clarity (recruiters can't quickly see your fit)
  • Projects look like tutorials (no depth, no real-world features, no measurable outcomes)
  • Skills are scattered (no coherent narrative: "What role are you targeting?")
  • You're missing job-market staples (Git, CI/CD basics, APIs, cloud exposure, SQL)
  • You're not speaking the language of the job description

SynergisticIT approaches this from both angles: build real skills and build a market-ready profile.

We prepare you for screening, interview calls, technical rounds, and offer negotiation readiness.

Target roles and stacks: current demand often includes entry-level software programmers, Java full stack developers, Python/Java developers, DevOps engineers, data analysts, data engineers, data scientists, and ML/AI engineers.

The focus remains consistent: Java / Full Stack / DevOps plus Data Analytics / Data Engineering / Data Science / Machine Learning / AI.

This breadth matters because today's employers value candidates who can handle more than one layer of the system.

Ideal candidates for response-building support: recent grads, laid-off professionals, career switchers, candidates with gaps, experienced applicants not hearing back, and F1/OPT jobseekers needing a stable tech role.

SynergisticIT also provides support and guidance around STEM extension, and process support related to H-1B and Green Card filing once employed (as applicable through employers).

If you want to explore, here are the key links:

  • Event videos (OCW, JavaOne, Gartner):
  • USA Today feature:
  • Contact Us (Fill the Form):

If recruiters aren't responding, it's not the end—it's feedback.

And you can fix it with the right plan.

Please read our blogs:

  • Why do Tech Companies not Hire recent Computer Science Graduates | SynergisticIT
  • What Recruiters Look for in Junior Developers | SynergisticIT
  • Software engineering or Data Science as a career?
  • How OPT Students Can Land Tech Jobs – SynergisticIT

Please note: Resume databases are shared with clients, and interested clients will reach out directly if they find a qualified candidate for their requisition.

Resume submissions may also be added to our JOPP team database.

If you are contacted and no longer wish to be, please unsubscribe; if you do not want to be contacted at all, please do not submit your resume.
Not Specified
Junior Data Scientist/Backend Java Developer/AI Engineer
✦ New
🏢 SynergisticIT
Salary not disclosed
Baton Rouge 16 hours ago
"Let's Get Responses to Your Applications? Make Hiring Managers Notice You." If your applications disappear into a black hole, you're experiencing the modern hiring funnel.

Not Specified
Data Engineering Manager
✦ New
Salary not disclosed
Green Bay, WI 1 day ago
At Nicolet National Bank, our culture is based on the principles of community banking, putting the needs of our customers at the forefront of our decision-making. Our Core Values drive everything we do, and we are committed to serving our customers with excellence. We believe that every job in our organization is critical to our success, and we are dedicated to creating a work environment where our employees feel valued, respected, and supported. With locations in Wisconsin, Michigan, Minnesota, Iowa, Colorado, and Florida, we are proud to serve our local communities and make a positive impact on the lives of our customers. At Nicolet National Bank, we believe that our people are our most valuable asset, and we are committed to investing in their growth and development.

The Data Engineering Manager is responsible for leading and developing a team of Data Architects and Data Solutions Engineers while actively contributing to hands-on technical projects. This role will manage the data warehouse in Snowflake, engineering automations in Alteryx and/or other solutions, while ensuring efficient project intake and prioritization. The ideal candidate combines strong technical expertise with proven technical leadership skills to drive innovation and operational excellence across the data engineering function.

As a Data Engineering Manager, you will:


  • Set the technical strategy for data engineering solutions and data architecture, including end-to-end data pipeline strategy, consumption management, project scoping, and data automation.
  • Design, develop, and optimize data engineering solutions using Snowflake, DBT, Azure Data Factory, and Alteryx.
  • Continuously assess and optimize the data engineering technology stack to ensure scalability, performance, and alignment with industry best practices.
  • Implement best practices for data modeling, ETL/ELT processes, and automation.
  • Own and maintain the Snowflake data warehouse roadmap and engineering standards.
  • Lead data project scoping, prioritization, and resource allocation to ensure timely delivery of data engineering solutions.
  • Ensure data integrity, security, and compliance across all engineering solutions.
  • Collaborate with IT and the rest of the data teams to align solutions with enterprise architecture.
  • Establish documentation and governance standards for data engineering workflows ensuring completeness, audit readiness, and traceability in alignment with enterprise architecture.
  • Directly supervise the Data Architecture & Data Engineering team in accordance with Nicolet's policies and applicable laws. Responsibilities include interviewing, hiring, and training employees; planning, assigning, and directing work; appraising performance; coaching, mentoring and development planning; rewarding and disciplining employees; addressing complaints and resolving problems.
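One ETL/ELT best practice named in the responsibilities above is making warehouse loads idempotent, so a rerun never creates duplicates. As a hedged sketch (in plain Python rather than Snowflake SQL, with hypothetical table and key names), the MERGE/upsert semantics look like this:

```python
# Minimal sketch of idempotent upsert (MERGE) semantics: existing keys are
# updated, new keys inserted, and re-running the same batch changes nothing.
# Table and field names are illustrative, not from the posting.

def upsert(target: dict, rows: list, key: str = "id") -> dict:
    for row in rows:
        target[row[key]] = row  # last write wins, like MERGE ... WHEN MATCHED
    return target

warehouse = {1: {"id": 1, "balance": 100}}
batch = [{"id": 1, "balance": 150}, {"id": 2, "balance": 75}]
upsert(warehouse, batch)
upsert(warehouse, batch)  # idempotent: the second run is a no-op
```

In a real Snowflake/DBT pipeline this would be a `MERGE` statement or an incremental model; the dictionary merely illustrates the key-based update-or-insert rule.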


Qualifications:


  • Bachelor's degree in Computer Science, Data Engineering, Data Analytics or related field.
  • 7+ years in data engineering or related data roles required.
  • 3+ years in leadership or management positions required.
  • Strong technical expertise in Snowflake, DBT, Azure Data Factory and SQL or like systems.
  • Familiarity with Alteryx, UiPath, Tableau, Power BI and Salesforce is preferred.
  • Ability to design and implement scalable data solutions.
  • Excellent leadership, communication, and organizational skills
  • Ability to balance hands-on development with team development.
  • Must be able to work fully in-office. This position does not allow for remote work.


Benefits:


  • Medical, Dental, Vision, & Life Insurance
  • 401(k) with a company match
  • PTO & 11 1/2 Paid Holidays


The above statements are intended to describe the general nature and level of work being performed. They are not intended to be construed as an exhaustive list of all responsibilities and skills required for the position.

Equal Opportunity Employer/Veterans/Disabled
Not Specified
Data Architect I
Salary not disclosed
San Jose 6 days ago
Title: Data and Cloud Solutions Architect

Location: San Jose, CA (100% on-site)

Pay: $65/hr

Job Description: The Data and Cloud Solutions Architect will be a key contributor to designing, evolving, and optimizing our company's cloud-based data architecture.

This role requires a strong background in data engineering, hands-on experience building cloud data solutions, and a talent for communicating complex designs through clear diagrams and documentation.

Core Responsibilities

Cloud Data Architecture Design & Strategy: Design and implement secure, scalable cloud-based data pipelines, data warehouses, and data lakes.

Drive the selection and integration of cloud data services (e.g., storage, databases, analytics tools).

Develop comprehensive cloud data strategies in alignment with business goals.

Diagramming & Documentation: Produce clear and informative visual diagrams (e.g., data flow diagrams, entity-relationship diagrams, system architecture diagrams) to guide implementation and knowledge sharing.

Maintain detailed documentation of data architecture, design decisions, and processes.

Hands-on Implementation & Optimization: Actively contribute to the hands-on implementation of cloud data solutions.

Proactively identify and implement performance optimization strategies for cloud data systems.

Troubleshoot and resolve issues related to data pipelines, data quality, and data accessibility.

Must Have:

Bachelor of Engineering in Computer Science (an engineering degree in another branch, such as Electrical, Civil, Mechanical, or IT, will not be considered).

Minimum of 5 years of hands-on data engineering experience using distributed computing approaches (Spark, MapReduce, Databricks).

Proven track record of successfully designing and implementing cloud-based data solutions in Azure.

Deep understanding of data modeling concepts and techniques.

Strong proficiency with database systems (relational and non-relational).

Exceptional diagramming skills with tools like Visio, Lucidchart, or other data visualization software.

Preferred Qualifications

Advanced knowledge of cloud-specific data services (e.g., Databricks, Azure Data Lake).

Expertise in big data technologies (e.g., Hadoop, Spark).

Strong understanding of data security and governance principles.

Experience in scripting languages (Python, SQL).

Additional Skills

Communication: Exemplary written and verbal communication skills to collaborate effectively with all teams and stakeholders.

Problem-solving: Outstanding analytical and problem-solving skills for complex data challenges.

Teamwork & Leadership: Ability to work effectively in cross-functional teams and demonstrate potential for technical leadership.
Not Specified
Senior Data Engineer
Salary not disclosed

Our Ideal Candidate

We are looking for a Senior Data Engineer who is a self-starter and detail-oriented with a strong blend of technical expertise and business acumen. The ideal candidate has a strong foundation in data engineering, experience working with healthcare data, and the ability to build scalable data-driven solutions. You are a proactive problem-solver who takes ownership of your work, continuously seeks to improve data quality and accessibility, and is committed to delivering high-quality data solutions.


Responsibilities

  • Lead data modeling efforts to create optimized data structures for reporting and analytical purposes.
  • Design, develop, and maintain end-to-end data pipelines that transform raw source data into high-quality, actionable datasets.
  • Build the company's data infrastructure and data catalog, from data ingestion through the semantic layer, ensuring a robust, scalable architecture on AWS.
  • Collaborate with cross-functional teams (product, technology, operations, etc.) to understand data needs, align them with business goals, and translate them into technical solutions.


Qualifications

  • Bachelor's or Master's (preferred) degree in Computer Science, Engineering, or a related quantitative field (Data Science).
  • 5+ years of experience as a Data Engineer, Analytics Engineer, or similar role, with a strong focus on the development of end-to-end data solutions and products.
  • 5+ years of hands-on experience with AWS cloud technologies is required, including designing, building, and maintaining cloud-based data infrastructure and Infrastructure as Code (IaC) tooling such as CDK or Terraform.
  • Proficiency in building and managing data infrastructure and ETL pipelines within AWS, leveraging services like AWS Glue, Athena, Redshift, Aurora, RDS, DynamoDB, EMR, Lambda, IAM, S3, EC2, CLI.
  • Demonstrated experience in designing and implementing robust data models for analytical purposes.
  • Strong proficiency in SQL and experience with various database systems (e.g., MySQL, NoSQL, Snowflake, Vector Databases).
  • Strong proficiency in Python for data engineering and analytics, and extensive experience with data pipeline development and orchestration tools (e.g., Airflow, dbt).
  • Experience with Power BI or Tableau for data reporting and dashboard development.
  • Experience shipping data products to production and understanding software development lifecycle best practices.
  • Strong problem-solving skills, the ability to work independently, and good communication and collaboration skills.
  • Ability to learn new technologies and adapt to a fast-paced environment.
  • Awareness of HIPAA, PHI, and other healthcare-specific regulations related to data and AI.
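The pipeline responsibility described above (transforming raw source data into high-quality, actionable datasets) usually boils down to a repeatable transform step: quality-filter, deduplicate, and type-cast. A minimal plain-Python sketch follows; the field names are hypothetical, chosen only to echo the posting's healthcare-data context:

```python
from datetime import date

# Sketch of one pipeline transform: raw visit records in, an analytics-ready,
# deduplicated dataset out. Field names are illustrative, not from the posting.

def transform(raw_rows):
    seen = set()
    clean = []
    for row in raw_rows:
        if not row.get("patient_id"):          # drop rows failing a basic quality check
            continue
        key = (row["patient_id"], row["visit_date"])
        if key in seen:                        # deduplicate on the natural key
            continue
        seen.add(key)
        clean.append({
            "patient_id": row["patient_id"],
            "visit_date": date.fromisoformat(row["visit_date"]),  # cast text to a date
        })
    return clean

raw = [
    {"patient_id": "p1", "visit_date": "2024-01-05"},
    {"patient_id": "p1", "visit_date": "2024-01-05"},  # duplicate
    {"patient_id": None, "visit_date": "2024-01-06"},  # fails quality check
]
dataset = transform(raw)
```

In production the same logic would typically live in a dbt model or an AWS Glue/Airflow task rather than a standalone function.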
Not Specified
Data-MDM Architect (Profisee) with BA/PM
Salary not disclosed
Milwaukee, WI 2 days ago

Job: Data-MDM Architect (Profisee) with BA/PM experience

Location: Waukesha/Milwaukee, Wisconsin

Mode: Work from office, at least 3 days in a week



Primary Purpose

  • Responsible for designing and architecting data/MDM solutions, and for analyzing, implementing, and deploying these solutions both on-premises and in the cloud. By collaborating with diverse business teams and applying extensive knowledge of big data tools and products, this role creates scalable, flexible, and comprehensive data solutions that tackle complex business challenges.


Major Responsibilities

  • Manage the technical delivery of medium to large, moderately complex projects on-time with targeted zero defects.
  • Provide planning, estimation, scheduling, prioritization and coordination of technical activities related to Enterprise-wide data solutions on both cloud and on premises.
  • Ensure solutions alignment to Enterprise Architecture policies and best practices; ensure that process methodologies are followed in development.
  • Accountable to business and technology management for end-to-end application scoping, planning, development and delivery that meets and exceeds quality standards.
  • Identify and manage dependencies and downstream impacts of the project to minimize adverse effects on other projects and / or programs.
  • Assist the Project Manager with the estimation of technical timelines and allocation of technical resources to specific tasks.
  • Communicate Expectations, Roles and Responsibilities to team members and hold them accountable to meet the expectations.
  • Collaborate with IT partners to devise capacity plan and ensure appropriate infrastructure for the end-to-end system delivery.
  • Supervise contingent workers and their daily tasks including onshore and offshore staff.
  • Identify valuable data sources and automate collection processes.
  • Maintain data accuracy and timeliness, a critical highly visible aspect of the position as it impacts supply chain and sales effectiveness, financial performance of the business, and customer perception through on-time delivery, working capital, financial reporting accuracy and product quality.
  • Architect and design master data to drive towards “Single source of the truth”.
  • Regularly monitor and measure performance of MDM standards.
  • Perform problem and trend analyses to identify and correct problems and increase data quality.
  • Review / Approve execution of data changes.
  • Track and report through the CAB review board.
  • Develop SLAs and ensure they are met.
  • Drive data mapping workshops for migrations.
  • Coordinate and participate in the ETL (extract, transform, load) process for any migrations.
  • Plan and architect M&A initiatives and integrations.
Not Specified
Data Architect
✦ New
Salary not disclosed
New York, NY 16 hours ago
Position Summary

We are seeking an experienced and forward-thinking Solution Architect - Data Engineering to lead the design and implementation of scalable, secure, and high-performance data solutions. The ideal candidate will have deep expertise with Python and SQL, experience with data warehouses (Snowflake or something similar), a strong command of engineering best practices (including linters and code formatters, project organization, and managing environments), and practical experience building CI/CD pipelines to ensure robust, automated delivery of data pipelines and services.


Responsibilities

  • Architect Scalable Data Solutions
    Design and implement end-to-end data engineering architectures that are scalable, maintainable, and performant across batch and real-time processing systems.


  • Engineering Leadership
    Lead by example with high-quality Python code, utilizing linters and formatters (e.g., pylint, flake8, black) and enforcing code cleanliness, readability, and best practices across teams.


  • CI/CD Pipeline Development
    Build, manage, and optimize CI/CD pipelines using tools such as GitHub Actions, GitLab CI, CircleCI, or Jenkins to automate testing, code quality checks, and deployment of data engineering components.


  • Data Governance & Quality
    Establish data validation, logging, and monitoring strategies to ensure data integrity and reliability at scale.


  • Collaborate Cross-Functionally
    Work closely with data scientists, software engineers, DevOps, and business stakeholders to translate requirements into technical solutions and ensure alignment with overall enterprise architecture.


  • Mentorship & Code Reviews
    Provide guidance to junior developers, lead technical reviews, and enforce clean coding standards throughout the data engineering team.
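The Data Governance & Quality responsibility above pairs validation with logging so that bad records are both excluded and observable. As a rough sketch under assumed rules (the field names and checks below are hypothetical, not from the posting):

```python
import logging

# Hedged sketch of data validation plus logging: validate each record against
# simple rules, log failures, and return only the rows that pass.
# Rule and field names are illustrative.

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("pipeline.validation")

RULES = {
    "user_id": lambda v: isinstance(v, str) and v != "",
    "amount":  lambda v: isinstance(v, (int, float)) and v >= 0,
}

def validate(rows):
    good, bad = [], 0
    for i, row in enumerate(rows):
        failures = [f for f, ok in RULES.items() if not ok(row.get(f))]
        if failures:
            bad += 1
            log.warning("row %d failed checks: %s", i, failures)
        else:
            good.append(row)
    return good, bad

good, bad = validate([
    {"user_id": "u1", "amount": 9.5},
    {"user_id": "", "amount": -1},
])
```

At scale the same idea is usually expressed with a dedicated framework (e.g., expectations-style checks) and metrics emitted to a monitoring system rather than log lines alone.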


Required Skills & Experience

  • 7+ years of experience in software or data engineering, with 3+ years in an architectural or technical leadership role.


  • Expert-level proficiency in Python and SQL, with a deep understanding of best practices, performance tuning, and maintainable code patterns.


  • Proven experience with linters, formatters, and other static analysis tools to ensure code quality and compliance.


  • Hands-on experience designing and implementing CI/CD pipelines for data pipelines, APIs, and other backend services.


  • Solid knowledge of modern data platforms and technologies (e.g., Spark, Airflow, dbt, Kafka, Snowflake, BigQuery, etc.).


  • Strong understanding of software engineering practices such as version control, testing, and continuous integration.


Desired Skills & Experience

  • Experience working in cloud environments (AWS, GCP, or Azure).
  • Familiarity with Infrastructure as Code (IaC) tools like Terraform or CloudFormation.
  • Understanding of security, compliance, and governance in data pipelines.
  • Excellent communication and documentation skills.
  • Strong leadership presence with the ability to mentor and influence teams.
  • Problem-solver with a focus on delivering value and simplicity through technology.


Wage and Benefits

We offer a Total Rewards package that includes medical and dental coverage, 401(k) plans, flex spending, life insurance, disability, employee discount program, employee stock purchase program and paid family benefits to support you and your family. The salary range for this position is posted below. Where an employee or prospective employee is paid within this range will depend on, among other factors, actual ranges for current/former employees in the subject position, market considerations, budgetary considerations, tenure and standing with the Company (applicable to current employees), as well as the employee's/applicant's skill set, level of experience, and qualifications.


Employment Transparency

It is the policy of our company to provide equal employment opportunities to all employees and applicants for employment without regard to race, color, ethnicity, gender, age, religion, creed, national origin, sexual orientation, gender identity, marital status, citizenship, genetic information, veteran status, disability, or any other basis prohibited by applicable federal, state, or local law.


Please note this job description is not designed to cover or contain a comprehensive listing of activities, duties, or responsibilities that are required of the employee for this job. Duties, responsibilities, and activities may change at any time with or without notice.


The employer will make reasonable accommodations in compliance with the American with Disabilities Act of 1990. The job description will be reviewed periodically as duties and responsibilities change with business necessity. Essential and other job functions are subject to modification. Reasonable accommodations may be provided to enable individuals with disabilities to perform the essential functions.


For applicants to jobs in the United States: In compliance with the current Americans with Disabilities Act and state and local laws, if you have a disability and would like to request an accommodation to apply for a position with our company, please email .

Salary Range: $200,000–$220,000 USD
Not Specified
Senior Data Analytics Engineer (Customer Data)
✦ New
Salary not disclosed
Irving, TX 16 hours ago

Job Summary:

Our client is seeking a Senior Data Analytics Engineer (Customer Data) to join their team! This position is located in Irving, Texas.

Duties:

  • Support cross-functional teams including Marketing, Data Science, Product, and Digital
  • Build datasets that power: customer segmentation, personalization workflows, campaign and lifecycle analytics, BI dashboards and KPIs and real-time and ML-driven customer experiences
  • Build, optimize, and maintain customer data pipelines using PySpark/Databricks
  • Transform raw customer data into analytics‑ready datasets for reporting, segmentation, personalization, and AI/ML applications
  • Develop customer behavior metrics, campaign insights, and lifecycle reporting layers
  • Design datasets used by Power BI/Tableau; dashboard creation is a plus, not required
  • Optimize Databricks performance, addressing skewed joins, partitioning, sorting, and caching/persist strategy
  • Work across AWS/Azure/GCP and integrate pipelines with CDPs
  • Participate in ingestion and digestion phases to shape MarTech and BI analytical layers
  • Document and uphold data engineering standards, governance, and best practices across teams
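The "skewed joins" item in the duties above refers to spreading a hot join key across partitions, most commonly via key salting. In Databricks this is done with PySpark column expressions (or handled by adaptive query execution); the plain-Python sketch below only illustrates the key trick, and all names in it are hypothetical:

```python
import random

# Plain-Python illustration of key salting, a common fix for skewed joins:
# rows for one hot key ("customer_42") would land on a single partition, so
# the large side appends a random salt to the key and the small side is
# replicated across every salt value. Names are illustrative.

SALTS = 4

def salted_key(key: str) -> str:
    return f"{key}#{random.randrange(SALTS)}"       # big side: pick one salt

def replicate(key: str) -> list:
    return [f"{key}#{s}" for s in range(SALTS)]     # small side: all salts

big_side = [salted_key("customer_42") for _ in range(1000)]
small_side = set(replicate("customer_42"))

# Every salted row still finds its match, but the 1000 rows now spread
# over up to 4 partitions instead of 1.
assert all(k in small_side for k in big_side)
```

The trade-off is that the small side grows by the salt factor, so the salt count is tuned to the observed skew rather than set arbitrarily high.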


Desired Skills/Experience:

  • 6+ years in Data Engineering or Analytics Engineering
  • Strong hands-on experience with: Databricks, PySpark, Python and SQL
  • Proven experience with customer/marketing data: segmentation, personalization, campaign analytics, retention, behavioral metrics
  • Ability to design performance‑optimized pipelines; batch or near real-time
  • Experience building datasets consumed by Power BI/Tableau
  • Understanding of CDP workflows, customer identity data, traits/feature modeling, and activation
  • Strong communication skills, translating marketing needs into technical data solutions
  • Power BI expertise, major plus
  • Experience with Delta Lake, orchestration, or feature engineering for ML
  • Background as an Analytics Engineer, BI/Data Modeling Engineer, or Data Engineer with strong analytics orientation


Benefits:

  • Medical, Dental, & Vision Insurance Plans
  • Employee-Owned Profit Sharing (ESOP)
  • 401K offered


The approximate pay range for this position starts at $140,000. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.


At KellyMitchell, our culture is world class. We’re movers and shakers! We don’t mind a bit of friendly competition, and we reward hard work with unlimited potential for growth. This is an exciting opportunity to join a company known for innovative solutions and unsurpassed customer service. We're passionate about helping companies solve their biggest IT staffing & project solutions challenges. As an employee-owned, women-led organization serving Fortune 500 companies nationwide, we deliver expert service at a moment's notice.


By applying for this job, you agree to receive calls, AI-generated calls, text messages, or emails from KellyMitchell and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy at

Not Specified
Solution Architect - CDP / Personalization
Salary not disclosed
Tempe, AZ 6 days ago

Overview:

The Solution Architect will be focused on customer data, personalization, and enterprise digital experience platforms. This person shapes the tech vision, translates business needs into technical blueprints, and guides delivery teams across marketing tech and core enterprise systems.


Must Haves:

  • 5+ years of experience as a Solution Architect
  • Extensive experience implementing a CDP or integrating with other MarTech
  • Experience developing architecture blueprints, strategies, and roadmaps
  • Experience delivering presentations to senior-level executives and technical audiences
  • Ability to work with developers in an outsourced environment
  • Good understanding of product management, agile principles, and development methodologies, with the ability to support agile teams by advising on opportunities, impact, and risks, taking account of technical and architectural debt


Plusses:

  • Adobe Experience Platform
  • Adobe Journey Optimizer
  • Adobe Real-Time CDP
  • Bachelor's degree in computer science, information technology, engineering, system analysis, or a related field


Job Description:

The Solution Architect, Personalization leads and supports architecture activities for a portfolio of enterprise-level solutions. This includes systems such as customer data platforms, personalization engines, recommendation engines, loyalty and discount engines, promotional tools, communication platforms, CMS, DAM, mobile apps, master data solutions, in-store digital screens, ERP, HRMS, and POS systems.


You will provide architectural leadership, design oversight, and technology guidance to ensure solutions meet business requirements and comply with enterprise architecture governance. Responsibilities span five dimensions:


Responsibilities:

1. Interpret Business Needs

  • Translate customer journeys and business requirements into capability maps, value streams, technical requirements, and architectural blueprints
  • Collaborate with business owners, CX technology, product owners, and product managers
  • Determine enterprise solution designs that support future business capabilities


2. Technical Leadership

  • Guide development & engineering teams with technical expertise and architectural vision


3. Assess Technology

  • Analyze current-state solutions for aging tech, misalignment, or deficiencies
  • Support product lifecycle decisions (maintain/refresh/retire)
  • Evaluate emerging technologies and market trends
  • Identify and recommend solutions for legacy systems and technical debt
  • Support product and project teams in selecting and configuring software


4. Apply Technology

  • Lead evaluation, design, and evolution of solution architecture across applications
  • Drive broader-scope architecture efforts across multiple projects/products
  • Develop strategic roadmaps for transitioning from current to future-state architecture
  • Act as a consultant across technologies, platforms, and vendor solutions
  • Guide execution of architectural plans throughout the product lifecycle
  • Ensure alignment with enterprise architecture across agile teams


5. Provide Enterprise Guidance

  • Deliver reference models, standards, and architectural documentation
  • Support governance, compliance, and assurance processes
  • Help guide a community of practice (CoP) across technical teams
  • Define principles, guidelines, standards, and patterns for enterprise‑wide architecture



Compensation:

Up to $150k annual salary + 5% annual bonus

Exact compensation may vary based on several factors, including skills, experience, and education.


Benefits:

  • Competitive salary plus annual bonus
  • Competitive benefits packages (medical, dental, 401k, employee stock plan, etc.)
  • People Perks which allow for great discounts on food and fuel
  • Work for a leading, innovative, and growing company in convenience store operations
  • Fortune 500 company and a two-time Gallup Exceptional Workplace Award winner
  • Tuition reimbursement of $5,000 per year
  • Learning opportunities to develop new skills and to evolve professionally in a fast-growing company
Not Specified
Senior Manager, Data Architecture (Ref: 195759)
Salary not disclosed
Charlotte, NC 6 days ago

Job Title: Senior Manager, Data Architecture (Ref: 195759)

Location: Charlotte, North Carolina – In-Office (5 Days Per Week)

Salary: Up to $175,000 + Bonus

Contact:


We’re looking for an experienced and forward-thinking Senior Manager, Data Architecture to define and lead the enterprise data architecture strategy within a large-scale, data-driven organization. This is a high-impact leadership role where you’ll shape the long-term data roadmap, modernize architecture standards, and guide the evolution of a cloud-based data platform.

In this role, you’ll lead a team of data architects and modelers while partnering closely with Data Engineering, Analytics, BI, Platform, and business stakeholders. You’ll ensure scalable, secure, and high-performing data solutions that enable advanced analytics, operational reporting, and strategic decision-making across the enterprise.


What You’ll Do

  • Define and maintain the enterprise data architecture vision aligned to business and technology strategy
  • Lead, mentor, and grow a team of data architects and modelers, establishing best practices and standards
  • Design and govern scalable data platforms leveraging Azure, Snowflake, and Databricks
  • Establish enterprise standards for data modeling (Dimensional, 3NF, Data Vault), integration, and storage
  • Define architecture patterns for ingestion, transformation, and cross-domain data integration
  • Drive architectural consistency across analytics, BI, and operational data products
  • Partner with Data Governance teams to enforce data quality, lineage, metadata, and compliance standards
  • Ensure solutions meet security, privacy, and regulatory requirements
  • Collaborate with Engineering and Platform teams on cloud architecture and long-term technical roadmap
  • Communicate complex architectural designs clearly to both technical and executive stakeholders


What You’ll Bring

  • 7+ years of experience in data architecture or advanced data engineering roles
  • 5+ years in a dedicated Data Architect or equivalent leadership capacity
  • Deep experience designing enterprise-scale data platforms in cloud environments
  • Strong expertise in Microsoft Azure data services
  • Expert-level knowledge of Snowflake and Databricks
  • Extensive experience with enterprise data modeling methodologies (Dimensional, 3NF, Data Vault)
  • Experience with data modeling tools such as Erwin (preferred)
  • Proven experience leading or mentoring architects or senior technical professionals
  • Strong understanding of governance, security, and regulatory considerations in enterprise data environments
  • Exceptional communication skills with the ability to influence senior stakeholders


Qualifications

  • Bachelor’s degree in Computer Science, Engineering, Information Systems, or related field (or equivalent experience)
  • 10+ years of progressive experience in data architecture, engineering, or enterprise data platform design
Not Specified
Digital Data Architect
Salary not disclosed
Manhattan Beach, CA 6 days ago

The pay range for this role is $150,000 - $200,000/yr USD.



WHO WE ARE:


Headquartered in Southern California, Skechers—the Comfort Technology Company®—has spent over 30 years helping men, women, and kids everywhere look and feel good. Comfort innovation is at the core of everything we do, driving the development of stylish, high-quality products at a great value. From our diverse footwear collections to our expanding range of apparel and accessories, Skechers is a complete lifestyle brand.


ABOUT THE ROLE:


Skechers Digital Team is seeking a Digital Data Architect reporting to the Director, Digital Architecture, Consumer Domain. This role is responsible for designing and governing Skechers’ Consumer Data 360 ecosystem, enabling identity resolution, high-quality data foundations, personalization, loyalty intelligence, and machine learning capabilities across digital and retail channels.


The ideal candidate will be a strong technical leader with hands-on, full-stack technical knowledge of the enterprise technologies in Skechers' consumer domain, and the ability to work in a fast-paced agile environment. You should understand consumer programs from an architecture/industry perspective and have strong hands-on experience designing solutions on the Salesforce Core Platform (including configuration, integration, and data model best practices).


You will work cross-functionally with Digital Engineering, Data Engineering, Data Science, Loyalty, and Marketing teams to architect scalable, secure, and high-performance data platforms that support advanced personalization and recommender systems.


WHAT YOU’LL DO:


  • Responsible for the full technical life cycle of consumer platform capabilities, which includes:

    • Capability roadmap and technical architecture in alignment with the consumer experience
    • Technical planning, design, and execution
    • Operations, analytics/reporting, and adoption
  • Define and evolve Skechers’ Consumer Data 360 architecture, including identity resolution (deterministic and probabilistic matching) and unified customer profiles.
  • Architect scalable data models and pipelines across CDP, CRM, e-commerce, marketing automation, data lake, and warehouse platforms.
  • Establish enterprise data quality frameworks including validation, deduplication, anomaly detection, and observability.
  • Optimize SQL workloads and large-scale distributed queries through performance tuning, partitioning, indexing, and workload management strategies.
  • Design and oversee ML pipelines supporting personalization, churn modeling, and recommender systems.
  • Partner with Data Science teams to productionize models using distributed platforms such as Databricks (Spark, Delta Lake, MLflow preferred).
  • Ensure secure data governance, access control (RBAC/ABAC), and compliance with GDPR, CCPA, and related privacy regulations.
  • Provide architectural oversight ensuring performance, scalability, resilience, and maintainability.
  • Collaborate with stakeholders to translate business objectives (LTV growth, personalization lift, engagement) into scalable data solutions.
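The deterministic side of identity resolution mentioned above can be sketched as a union-find over exact-match keys. This is a minimal illustrative sketch in plain Python, not Skechers' actual implementation; the field names (`email`, `loyalty_id`) are assumptions, and probabilistic (fuzzy) matching is out of scope:

```python
from collections import defaultdict

def resolve_identities(records):
    """Group raw customer records into unified profiles by exact key matches.

    Each record is a dict that may carry 'email' and/or 'loyalty_id'.
    Records sharing any key value are merged into one profile (union-find).
    """
    parent = list(range(len(records)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    # Index records by each identifying key, then union records sharing a value.
    by_key = defaultdict(list)
    for idx, rec in enumerate(records):
        for key in ("email", "loyalty_id"):
            if rec.get(key):
                by_key[(key, rec[key])].append(idx)
    for idxs in by_key.values():
        for other in idxs[1:]:
            union(idxs[0], other)

    profiles = defaultdict(list)
    for idx in range(len(records)):
        profiles[find(idx)].append(records[idx])
    return list(profiles.values())

records = [
    {"email": "a@x.com", "loyalty_id": None},
    {"email": "a@x.com", "loyalty_id": "L1"},  # shares email with 0, loyalty with 2
    {"email": "b@x.com", "loyalty_id": "L1"},
    {"email": "c@x.com", "loyalty_id": None},  # unmatched singleton
]
profiles = resolve_identities(records)
print(len(profiles))  # 2 unified profiles
```

In a production Consumer Data 360 ecosystem this logic would run at scale inside the CDP or a distributed engine, with probabilistic scoring layered on top of the deterministic pass.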


REQUIREMENTS:


  • Computer Science, Data Engineering, or related degree, or equivalent experience.
  • 12+ years of experience architecting enterprise data platforms in cloud environments.
  • 9+ years of data engineering experience with a focus on consumer data.
  • 6+ years of experience working with Salesforce platforms, including data models and enterprise integrations.
  • Strong experience with Data 360 and identity resolution architectures.
  • Proven expertise in SQL performance tuning and large-scale data modeling.
  • Hands-on experience implementing ML pipelines and recommender systems in production environments.
  • Experience with cloud technologies (AWS, GCP, or Azure).
  • Experience with integration patterns (API, ETL, event streaming).
  • Experience providing technical leadership and guidance across multiple projects and development teams.
  • Experience translating business requirements into detailed technical specifications and working with development teams through implementation, including issue resolution and stakeholder communication.
  • Strong project management skills including scope assessment, estimation, and clear technical communication with both business users and technical teams.
  • Must hold at least one of the following Salesforce Certifications (Platform App Builder, Platform Developer 1, JavaScript Developer 1).
  • Experience with Databricks or similar distributed data/ML platforms preferred.
Not Specified
Lead Data Engineer
Salary not disclosed
Atlanta, GA 2 days ago

Job Title – Lead Data Engineer

Please note this role is not able to offer visa transfer or sponsorship now or in the future.


About the role


As a Lead Data Engineer, you will make an impact by designing, building, and operating scalable, cloud‑native data platforms supporting batch and streaming use cases, with strong focus on governance, performance, and reliability. You will be a valued member of the Data Engineering team and work collaboratively with cross‑functional engineering, cloud, and architecture stakeholders.


In this role, you will:

  • Design, build, and operate scalable cloud‑native data platforms supporting batch and streaming workloads with strong governance, performance, and reliability.
  • Develop and operate data systems on AWS, Azure, and GCP, designing cloud‑native, scalable, and cost‑efficient data solutions.
  • Build modern data architectures including data lakes, data lakehouses, and data hubs, with strong understanding of ingestion patterns, data governance, data modeling, observability, and platform best practices.
  • Develop data ingestion and collection pipelines using Kafka and AWS Glue; work with modern storage formats such as Apache Iceberg and Parquet.
  • Design and develop real‑time streaming pipelines using Kafka, Flink, or similar streaming frameworks, with understanding of event‑driven architectures and low‑latency data processing.
  • Perform data transformation and modeling using SQL‑based frameworks and orchestration tools such as dbt, AWS Glue, and Airflow, including Slowly Changing Dimensions (SCD) and schema evolution.
  • Use Apache Spark extensively for large‑scale data transformations across batch and streaming workloads.
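The Slowly Changing Dimension (SCD) Type 2 pattern named above can be sketched in plain Python for illustration; the real pipelines would use dbt, Spark, or Glue, and the field names here are assumptions:

```python
from datetime import date

def apply_scd2(dimension, updates, today):
    """Apply SCD Type 2: close out changed rows and append new versions.

    `dimension` rows: {"key", "attrs", "valid_from", "valid_to", "is_current"}
    `updates` rows:   {"key", "attrs"} from the latest source extract.
    History is never overwritten; each change adds a new versioned row.
    """
    result = list(dimension)
    current = {row["key"]: row for row in result if row["is_current"]}

    for upd in updates:
        row = current.get(upd["key"])
        if row is not None and row["attrs"] == upd["attrs"]:
            continue  # no change: keep the current version as-is
        if row is not None:
            row["is_current"] = False  # expire the old version
            row["valid_to"] = today
        result.append({
            "key": upd["key"], "attrs": upd["attrs"],
            "valid_from": today, "valid_to": None, "is_current": True,
        })
    return result

dim = [{"key": 1, "attrs": {"city": "Atlanta"},
        "valid_from": date(2024, 1, 1), "valid_to": None, "is_current": True}]
new = apply_scd2(dim, [{"key": 1, "attrs": {"city": "Austin"}}], date(2025, 1, 1))
print([r["attrs"]["city"] for r in new if r["is_current"]])  # ['Austin']
```

In dbt the same behavior is typically expressed declaratively via snapshots rather than hand-rolled merge logic.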


Work model

We believe hybrid work is the way forward as we strive to provide flexibility wherever possible. Based on this role’s business requirements, this is a hybrid position requiring 4 days a week in a client or Cognizant office in Atlanta, GA. Regardless of your working arrangement, we are here to support a healthy work-life balance through our various wellbeing programs.


The working arrangements for this role are accurate as of the date of posting. This may change based on the project you’re engaged in, as well as business and client requirements. Rest assured, we will always be clear about role expectations.


What you need to have to be considered

  • Hands‑on experience developing and operating data systems on AWS, Azure, and GCP.
  • Proven ability to design cloud‑native, scalable, and cost‑efficient data solutions.
  • Experience building data lakes, data lakehouses, and data hubs with strong understanding of ingestion patterns, governance, modeling, observability, and platform best practices.
  • Expertise in data ingestion and collection using Kafka and AWS Glue, with experience in Apache Iceberg and Parquet.
  • Strong experience designing and developing real‑time streaming pipelines using Kafka, Flink, or similar streaming frameworks.
  • Deep expertise in data transformation and modeling using SQL‑based frameworks and orchestration tools including dbt, AWS Glue, and Airflow, with knowledge of SCD and schema evolution.
  • Extensive experience using Apache Spark for large‑scale batch and streaming data transformations.


These will help you stand out

  • Experience with event‑driven architectures and low‑latency data processing.
  • Strong understanding of schema evolution, SCD modeling, and modern data modeling concepts.
  • Experience with Apache Iceberg, Parquet, and modern ingestion/storage patterns.
  • Strong knowledge of observability, governance, and platform best practices.
  • Ability to partner effectively with cloud, architecture, and engineering teams.



Salary and Other Compensation:

Applications will be accepted until March 17, 2025.

The annual salary for this position is between $81,000 and $135,000, depending on experience and other qualifications of the successful candidate.

This position is also eligible for Cognizant’s discretionary annual incentive program, based on performance and subject to the terms of Cognizant’s applicable plans.

Benefits: Cognizant offers the following benefits for this position, subject to applicable eligibility requirements:

  • Medical/Dental/Vision/Life Insurance
  • Paid holidays plus Paid Time Off
  • 401(k) plan and contributions
  • Long‑term/Short‑term Disability
  • Paid Parental Leave
  • Employee Stock Purchase Plan


Disclaimer: The salary, other compensation, and benefits information is accurate as of the date of this posting. Cognizant reserves the right to modify this information at any time, subject to applicable law.

Not Specified
Data Manager
Salary not disclosed
Minneapolis, MN 2 days ago

Company/Role Overview:

CliftonLarsonAllen (CLA) Search has been retained by Midwestern Higher Education Compact to identify a Data Manager to serve their team. The Midwestern Higher Education Compact (MHEC) brings together leaders from 12 Midwestern states to strengthen postsecondary education, advance student success, and promote regional economic vitality.


MHEC programs and initiatives save member states and students millions of dollars annually through time- and cost-savings opportunities. MHEC research supports workforce readiness and improves the quality, accessibility, and affordability of postsecondary education. MHEC convenings bring together leaders and subject experts to share knowledge, generate ideas, and develop collaborative solutions.


To learn more, click here:


What You’ll Do:

  • Administer and maintain Microsoft Fabric, OneLake, and Azure environments.
  • Design and deliver sophisticated data solutions that are innovative and sustainable.
  • Ensure data infrastructure is secure, reliable, and scalable.
  • Manage and improve how data is brought into the organization from multiple sources.
  • Maintain accurate, well-structured, consistent, and complete data to ensure high quality and usability for internal staff.
  • Develop and oversee standards on how data is collected, stored, and protected across departments.
  • Manage MHEC’s customer relationship management (CRM) system, ensuring data integrity, integration with other platforms, and alignment with organizational needs.
  • Partner with teams across the organization to monitor processes and make recommendations.
  • Partner with research staff to understand data access patterns and develop storage strategies that accelerate research and analytics
  • Develop and maintain Power BI dashboards and reports to deliver clear insights to senior leaders and decision-makers.
  • Ensure staff have access to timely, clear, and meaningful data visualizations.
  • Train staff to use reports and dashboards effectively.
  • Support departments in using data to guide decision-making.
  • Document data pipelines, integrations, and system processes.
  • Recommend tools and practices that help MHEC grow its data capacity.
  • Monitor developments in Microsoft’s data platforms and assess future needs.


What You’ll Need:

  • Bachelor's degree or equivalent experience preferred.
  • 5+ years’ experience, preferably with Microsoft data platforms including Power BI, Azure, and/or Fabric.
  • Experience designing and maintaining data systems and dashboards.
  • Experience in higher education or nonprofit sectors preferred.
  • Strong technical understanding of Microsoft Fabric, OneLake, and Azure.
  • Demonstrated proficiency in Python, R, SAS, SQL, or other statistical/data-management software
  • Experience with data visualization platforms (Tableau, Power BI, or similar)
  • Experience with Microsoft Dynamics and Power Automate is a plus but not required.
  • Ability to plan, optimize, build, and maintain data pipelines and dashboards.
Not Specified
Sr. Data Engineer, tvScientific
✦ New
Salary not disclosed
San Francisco, CA 1 day ago

About Pinterest:


Millions of people around the world come to our platform to find creative ideas, dream about new possibilities and plan for memories that will last a lifetime. At Pinterest, we're on a mission to bring everyone the inspiration to create a life they love, and that starts with the people behind the product.


Discover a career where you ignite innovation for millions, transform passion into growth opportunities, celebrate each other's unique experiences and embrace the flexibility to do your best work. Creating a career you love? It's Possible.


At Pinterest, AI isn't just a feature, it's a powerful partner that augments our creativity and amplifies our impact, and we're looking for candidates who are excited to be a part of that. To get a complete picture of your experience and abilities, we'll explore your foundational skills and how you collaborate with AI.


Through our interview process, what matters most is that you can always explain your approach, showing us not just what you know, but how you think. You can read more about our AI interview philosophy and how we use AI in our recruiting process here.

About tvScientific


tvScientific is the first and only CTV advertising platform purpose-built for performance marketers. We leverage massive data and cutting-edge science to automate and optimize TV advertising to drive business outcomes. Our solution combines media buying, optimization, measurement, and attribution in one, efficient platform. Our platform is built by industry leaders with a long history in programmatic advertising, digital media, and ad verification who have now purpose-built a CTV performance platform advertisers can trust to grow their business.



As a Senior Data Engineer at tvScientific, you will be a key player in implementing the robust data infrastructure to power our data-heavy company. You will collaborate with our cross-functional teams to evolve our core data pipelines, design for efficiency as we scale, and store data in optimal engines and formats. This is an individual contributor role, where you will work to define and implement a strategic vision for data engineering within the organization.



What you'll do:



  • Implement robust data infrastructure in AWS, using Spark with Scala
  • Evolve our core data pipelines to efficiently scale for our massive growth
  • Store data in optimal engines and formats
  • Collaborate with our cross-functional teams to design data solutions that meet business needs
  • Build out fault-tolerant batch and streaming pipelines
  • Leverage and optimize AWS resources while designing for scale
  • Collaborate closely with our Data Science and Product teams
  • How we'll define success:

    • Successful implementation of scalable and efficient data infrastructure
    • Timely delivery and optimization of data assets and APIs
    • High attention to detail in implementation of automated data quality checks
    • Effective collaboration with cross-functional teams




What we're looking for:



  • Production data engineering experience
  • Proficiency in Spark and Scala, with proven experience building data infrastructure in Spark using Scala
  • Familiarity with data lakes, cloud warehouses, and storage formats
  • Strong proficiency in AWS services
  • Expertise in SQL for data manipulation and extraction
  • Excellent written and verbal communication skills
  • Bachelor's degree in Computer Science or a related field
  • Nice-to-haves:

    • Experience in adtech
    • Experience implementing data governance practices, including data quality, metadata management, and access controls
    • Strong understanding of privacy-by-design principles and handling of sensitive or regulated data
    • Familiarity with data table formats like Apache Iceberg, Delta




In-Office Requirement Statement:



  • We recognize that the ideal environment for work is situational and may differ across departments. What this looks like day-to-day can vary based on the needs of each organization or role.


Relocation Statement:



  • This position is not eligible for relocation assistance. Visit our PinFlex page to learn more about our working model.


#LI-SM4


#LI-REMOTE

At Pinterest we believe the workplace should be equitable, inclusive, and inspiring for every employee. In an effort to provide greater transparency, we are sharing the base salary range for this position. The position is also eligible for equity. Final salary is based on a number of factors including location, travel, relevant prior experience, or particular skills and expertise.


Information regarding the culture at Pinterest and benefits available for this position can be found here.

US based applicants only: $123,696 – $254,667 USD

Our Commitment to Inclusion:


Pinterest is an equal opportunity employer and makes employment decisions on the basis of merit. We want to have the best qualified people in every job. All qualified applicants will receive consideration for employment without regard to race, color, ancestry, national origin, religion or religious creed, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, age, marital status, status as a protected veteran, physical or mental disability, medical condition, genetic information or characteristics (or those of a family member) or any other consideration made unlawful by applicable federal, state or local laws. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. If you require a medical or religious accommodation during the job application process, please complete this form for support.

Not Specified
Staff Data Engineer, tvScientific
✦ New
🏢 Pinterest
Salary not disclosed
San Francisco, CA 16 hours ago

About Pinterest:


Millions of people around the world come to our platform to find creative ideas, dream about new possibilities and plan for memories that will last a lifetime. At Pinterest, we're on a mission to bring everyone the inspiration to create a life they love, and that starts with the people behind the product.


Discover a career where you ignite innovation for millions, transform passion into growth opportunities, celebrate each other's unique experiences and embrace the flexibility to do your best work. Creating a career you love? It's Possible.


At Pinterest, AI isn't just a feature, it's a powerful partner that augments our creativity and amplifies our impact, and we're looking for candidates who are excited to be a part of that. To get a complete picture of your experience and abilities, we'll explore your foundational skills and how you collaborate with AI.


Through our interview process, what matters most is that you can always explain your approach, showing us not just what you know, but how you think. You can read more about our AI interview philosophy and how we use AI in our recruiting process here.

About tvScientific


tvScientific is the first and only CTV advertising platform purpose-built for performance marketers. We leverage massive data and cutting-edge science to automate and optimize TV advertising to drive business outcomes. Our solution combines media buying, optimization, measurement, and attribution in one, efficient platform. Our platform is built by industry leaders with a long history in programmatic advertising, digital media, and ad verification who have now purpose-built a CTV performance platform advertisers can trust to grow their business.



As a Staff Data Engineer at tvScientific, you will be a key player in implementing the robust data infrastructure to power our data-heavy company. You will collaborate with our cross-functional teams to evolve our core data pipelines, design for efficiency as we scale, and store data in optimal engines and formats. This is an individual contributor role, where you will work to define and implement a strategic vision for data engineering within the organization.



What you'll do:



  • Design and implement robust data infrastructure in AWS, using Spark with Scala
  • Evolve our core data pipelines to efficiently scale for our massive growth
  • Store data in optimal engines and formats, matching your designs to our performance needs and cost factors
  • Collaborate with our cross-functional teams to design data solutions that meet business needs
  • Design and implement knowledge graphs, exposing their functionality both via Batch Processing and APIs
  • Leverage and optimize AWS resources while designing for scale
  • Collaborate closely with our Data Science and Product teams
  • How we'll define success:

    • Successful design and implementation of scalable and efficient data infrastructure
    • Timely delivery and optimization of data assets and APIs
    • High attention to detail in implementation of automated data quality checks
    • Effective collaboration with cross-functional teams
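The knowledge-graph responsibility described above (graphs built in batch and exposed through APIs) can be sketched minimally in plain Python. The production system would use Spark with Scala as the posting states; the entity and relation names here are hypothetical:

```python
from collections import defaultdict

class KnowledgeGraph:
    """Minimal in-memory knowledge graph: directed, labeled edges.

    A batch job would populate this from joined datasets; an API layer
    would wrap `neighbors` behind an endpoint for relationship queries.
    """
    def __init__(self):
        self.edges = defaultdict(list)  # subject -> [(relation, object)]

    def add(self, subject, relation, obj):
        self.edges[subject].append((relation, obj))

    def neighbors(self, subject, relation=None):
        """Return objects linked from `subject`, optionally filtered by relation."""
        return [o for r, o in self.edges[subject]
                if relation is None or r == relation]

kg = KnowledgeGraph()
kg.add("campaign:42", "targets", "audience:sports_fans")
kg.add("campaign:42", "runs_on", "network:ctv_1")
kg.add("audience:sports_fans", "overlaps", "audience:outdoors")

print(kg.neighbors("campaign:42", "targets"))  # ['audience:sports_fans']
```

At scale, the adjacency data would live in a columnar table format and be served through a relationship-aware API rather than held in memory.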




What we're looking for:



  • Production data engineering experience
  • Proficiency in Spark and Scala, with proven experience building data infrastructure in Spark using Scala
  • Experience in delivering significant technical initiatives and building reliable, large scale services
  • Experience in delivering APIs backed by relationship-heavy datasets
  • Familiarity with data lakes, cloud warehouses, and storage formats
  • Strong proficiency in AWS services
  • Expertise in SQL for data manipulation and extraction
  • Excellent written and verbal communication skills
  • Bachelor's degree in Computer Science or a related field
  • Nice-to-haves:

    • Experience in adtech
    • Experience implementing data governance practices, including data quality, metadata management, and access controls
    • Strong understanding of privacy-by-design principles and handling of sensitive or regulated data
    • Familiarity with data table formats like Apache Iceberg, Delta
    • Previous experience building out a Data Engineering function
    • Proven experience working closely with Data Science teams on machine learning pipelines




In-Office Requirement Statement:



  • We recognize that the ideal environment for work is situational and may differ across departments. What this looks like day-to-day can vary based on the needs of each organization or role.


Relocation Statement:



  • This position is not eligible for relocation assistance. Visit our PinFlex page to learn more about our working model.


#LI-SM4


#LI-REMOTE

At Pinterest we believe the workplace should be equitable, inclusive, and inspiring for every employee. In an effort to provide greater transparency, we are sharing the base salary range for this position. The position is also eligible for equity. Final salary is based on a number of factors including location, travel, relevant prior experience, or particular skills and expertise.


Information regarding the culture at Pinterest and benefits available for this position can be found here.

US based applicants only: $155,584 – $320,320 USD

Our Commitment to Inclusion:


Pinterest is an equal opportunity employer and makes employment decisions on the basis of merit. We want to have the best qualified people in every job. All qualified applicants will receive consideration for employment without regard to race, color, ancestry, national origin, religion or religious creed, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, age, marital status, status as a protected veteran, physical or mental disability, medical condition, genetic information or characteristics (or those of a family member) or any other consideration made unlawful by applicable federal, state or local laws. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. If you require a medical or religious accommodation during the job application process, please complete this form for support.

Not Specified
Sr. Data Engineer (Hybrid)
✦ New
Salary not disclosed

Sr. Data Engineer (Hybrid)

Chicago, IL

The American Medical Association (AMA) is the nation's largest professional association of physicians and a non-profit organization. We are a unifying voice and powerful ally for America's physicians, the patients they care for, and the promise of a healthier nation. To be part of the AMA is to be part of our mission to promote the art and science of medicine and the betterment of public health.

At AMA, our mission to improve the health of the nation starts with our people. We foster an inclusive, people-first culture where every employee is empowered to perform at their best. Together, we advance meaningful change in health care and the communities we serve.

We encourage and support professional development for our employees, and we are dedicated to social responsibility. We invite you to learn more about us and we look forward to getting to know you.

We have an opportunity at our corporate offices in Chicago for a Sr. Data Engineer (Hybrid) on our Information Technology team. This is a hybrid position reporting into our Chicago, IL office, requiring 3 days a week in the office.

As a Sr. Data Engineer, you will play a key role in implementing and maintaining AMA's enterprise data platform to support analytics, interoperability, and responsible AI adoption. This role partners closely with platform engineering, data governance, data science, IT security, and business stakeholders to deliver high-quality, reliable, and secure data products. This role contributes to AMA's modern lakehouse architecture, optimizing data operations and embedding governance and quality standards into engineering workflows. This role serves as a senior technical contributor within the team, providing mentorship to junior engineers and implementing engineering best practices within the data platform function, in alignment with architectural direction set by leadership.

RESPONSIBILITIES:

Data Engineering & AI Enablement

  • Build and maintain scalable data pipelines and
    ETL/ELT workflows supporting analytics, operational reporting, and AI/ML use
    cases.
  • Implement best practice patterns for ingestion,
    transformation, modeling, and orchestration within a modern lakehouse
    environment (e.g., Databricks, Delta Lake, Azure Data Lake).
  • Develop high-performance data models and curated datasets with strong attention to quality, usability, and interoperability; create reusable engineering components and automation.
  • Collaborate with the Architecture Team, the Data Platform Lead, and federated IT teams to optimize storage, compute, and architectural patterns for performance and cost-efficiency.
  • Build model-ready data sets and feature pipelines to support AI/ML use cases; serve as a technical coordination point supporting business units' AI-related infrastructure needs.
  • Collaborate with data scientists and AI Working
    Group to operationalize models responsibly and maintain ongoing monitoring
    signals.

Governance, Quality & Compliance

  • Embed data governance, metadata standards, lineage tracking, and quality controls directly into engineering workflows, ensuring consistent technical implementation and alignment.
  • Work with the Data Governance Lead and business
    stakeholders to operationalize stewardship, classification, validation,
    retention, and access standards.
  • Implement privacy-by-design and security-by-design principles, ensuring compliance with internal policies and regulatory obligations.
  • Maintain documentation for pipelines, datasets,
    and transformations to support transparency and audit requirements.

Platform Reliability, Observability & Optimization

  • Monitor and troubleshoot pipeline failures, performance bottlenecks, data anomalies, and platform-level issues.
  • Implement observability tooling, alerts, logging, and dashboards to ensure end-to-end reliability.
  • Support cost governance by optimizing compute
    resources, refining job schedules, and advising on efficient architecture.
  • Collaborate with the Data Platform Lead on
    scaling, configuration management, CI/CD pipelines, and environment management.
  • Collaborate with business units to understand
    data needs, translate them into engineering requirements, and deliver
    fit-for-purpose data solutions; share and apply best practices and emerging
    technologies within assigned initiatives.
  • Work with IT Security and Legal/Compliance to ensure platform and datasets meet risk and regulatory standards.

Staff Management

  • Lead, mentor, and provide management oversight
    for staff.
  • Responsible for setting objectives, evaluating
    employee performance, and fostering a collaborative team environment.
  • Responsible for developing staff knowledge and
    skills to support career development.

May include other responsibilities as assigned

REQUIREMENTS:

  1. Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field preferred; at minimum, equivalent work experience and a HS diploma/equivalent education required.
  2. 5+ years of experience in data engineering within cloud environments
  3. Experience in people management preferred.
  4. Demonstrated hands-on experience with modern data platforms (Databricks preferred).
  5. Proficiency in Python, SQL, and data
    transformation frameworks.
  6. Experience designing and operationalizing
    ETL/ELT pipelines, orchestration workflows (Airflow, Databricks Workflows), and
    CI/CD processes.
  7. Solid understanding of data modeling,
    structured/unstructured data patterns, and schema design.
  8. Experience implementing governance and quality
    controls: metadata, lineage, validation, stewardship workflows.
  9. Working knowledge of cloud architecture, IAM,
    networking, and security best practices.
  10. Demonstrated ability to collaborate across
    technical and business teams.
  11. Exposure to AI/ML engineering concepts, feature
    stores, model monitoring, or MLOps patterns.
  12. Experience with infrastructure-as-code (Terraform, CloudFormation) or DevOps tooling.

The American Medical Association is located at 330 N. Wabash Avenue, Chicago, IL 60611 and is convenient to all public transportation in Chicago.

This role is an exempt position, and the salary range for this position is $115,523.42-$150,972.44. This is the lowest to highest salary we believe we would pay for this role at the time of this posting. An employee's pay within the salary range will be determined by a variety of factors including but not limited to business consideration and geographical location, as well as candidate qualifications, such as skills, education, and experience. Employees are also eligible to participate in an incentive plan. To learn more about the American Medical Association's benefits offerings, please click here.

We are an equal opportunity employer, committed to diversity in our workforce. All qualified applicants will receive consideration for employment. As an EOE/AA employer, the American Medical Association will not discriminate in its employment practices due to an applicant's race, color, religion, sex, age, national origin, sexual orientation, gender identity and veteran or disability status.

THE AMA IS COMMITTED TO IMPROVING THE HEALTH OF THE NATION

Remote working/work at home options are available for this role.
Not Specified
Senior Data Architect – Power & Utilities AI Platforms
✦ New
$250 +
San Francisco, CA 1 day ago
A leading global consulting firm is seeking a Senior Manager specializing in Data Architecture within the utilities sector.

This role involves leading complex technology projects, impacting business outcomes through innovative data solutions.

Candidates should have a strong background in data architecture, cloud technologies, and experience mentoring teams.

The successful applicant will engage with clients, ensuring effective delivery and quality management within a dynamic consulting environment.
#J-18808-Ljbffr
Not Specified
Cost Analyst
Salary not disclosed
Charleston, SC 2 days ago

Robert Bosch is hiring a Cost Analyst in Charleston, SC. As a Cost Analyst, you will support financial planning and manufacturing operations by analyzing cost data, preparing reports, identifying cost-saving opportunities, and partnering with cross-functional teams to improve profitability and operational efficiency. This is a direct-hire opportunity.


Benefits of the Cost Analyst:

  • 401k
  • 401k Matching
  • Health insurance
  • Dental insurance
  • Vision insurance
  • Paid vacation


Shift Information:

  • Monday – Friday | 9:00 AM – 5:00 PM


Required Qualifications:

  • Bachelor’s degree in Accounting, Finance, Business, or a related field
  • Strong analytical and problem-solving skills
  • Proficiency in Microsoft Excel and financial reporting tools
  • Ability to interpret financial data and provide actionable insights
  • Strong verbal and written communication skills
  • Ability to work effectively in a fast-paced manufacturing environment


Preferred Qualifications:

  • Experience in manufacturing cost accounting or financial analysis
  • Experience with ERP systems
  • Knowledge of standard costing and variance analysis
  • Advanced Excel skills (pivot tables, VLOOKUP/XLOOKUP)


Principal Responsibilities of the Cost Analyst:

  • Analyze manufacturing costs, including labor, materials, and overhead
  • Prepare cost reports and variance analyses to support leadership decision-making
  • Monitor standard costs and recommend adjustments as needed
  • Partner with operations and engineering teams to identify cost-reduction opportunities
  • Support budgeting and forecasting activities
  • Ensure accuracy of financial data and compliance with internal controls
  • Assist with month-end closing processes related to cost accounting
  • Provide financial insights to improve operational performance


Contact & Additional Information:

All your information will be kept confidential according to EEO guidelines.


By choice, we are committed to a diverse workforce - EOE/Protected Veteran/Disabled.

Candidates must hold indefinite U.S. work authorization. Future sponsorship for work authorization is unavailable.

MAU Workforce Solutions is an innovative global company with extensive experience providing solutions for success in staffing, recruiting, technology and outsourcing to our clients, employees, and applicants. Headquartered in Augusta, GA since 1973, MAU is a family and minority-owned company offering better processes and better people to create efficiencies and greater profits for our clients. Our relationships with world-class companies, our training programs and our culture of family allow MAU to offer better results, better jobs, and better lives to those who work with us.


All Applicants must submit to background check and drug screening

Disclaimer: This job description is not designed to be a complete list of all duties and responsibilities required of the position

EOE

Not Specified
Business Objects Analyst (Hybrid)
✦ New
Salary not disclosed
Lansing, Hybrid 16 hours ago
Title: Business Objects Security Programmer Analyst

Location: Lansing, MI (2 days onsite, 3 days remote hybrid schedule)

Note: This is a W2 contract role – this role is NOT open to C2C, 1099, or 3rd-party candidates.

The Business Objects Security Programmer Analyst is responsible for administering user security, maintaining Business Objects environments, supporting reporting operations, and providing technical automation and data processing support.

The role combines security administration, BO universe maintenance, SQL/batch scripting, DevOps support, HR load validation, and PowerPlatform solution maintenance.

Secondary duties include providing backup support for .NET development and PowerPlatform applications.

Position Duties:

  • Process security requests including new access, changes, and deletions
  • Monitor and manage security-related mailboxes
  • Process, track, archive, and audit all security forms
  • Maintain and enhance security form automation for users, supervisors, and ASAs
  • Provide primary customer support for Business Objects report issues and general user assistance
  • Maintain and update IDT universes, including structure changes, troubleshooting, and optimization
  • Perform BO health checks and produce BO Health Reports
  • Conduct report inventory cleanup, including HR reporting cleanup and all-folder cleanup activities
  • Validate, confirm, and balance HR data loads and associated reporting
  • Support DevOps activities related to deployment, version control, configuration, and process automation
  • Develop and maintain SQL and batch scripts used for data movement, auditing, and operational tasks
  • Document system procedures, processes, and policies
  • Maintain and track tasks on the Master Calendar (annual, quarterly, and monthly activities)
  • Maintain and enhance PowerPlatform solutions, including Power BI dashboards, Forms, and Power Automate workflows
  • Support automation efforts that increase efficiency, routing, and data integration
  • Provide .NET development backup support for miscellaneous projects
  • Provide backup support for PowerPlatform applications and workflows, as needed

Position Qualifications:

  • Working knowledge of Business Objects security, universe design, and report deployment
  • Strong SQL and batch scripting skills
  • Ability to perform access management, security audits, and form processing
  • Experience with DevOps principles and deployment workflows
  • Experience maintaining Microsoft PowerPlatform solutions (Power BI, PowerApps, Power Automate)
  • Ability to document processes clearly and accurately
  • Strong analytical, troubleshooting, and customer support skills
  • Experience with MIDB (Oracle), CMOD, and HR data environments preferred
  • Experience supporting government or regulatory environments preferred
  • Familiarity with .NET development and basic code maintenance preferred
  • A minimum of a Bachelor's Degree in Computer Science, Information Systems, or other relevant field required

Note: This is a W2 contract role – this role is NOT open to C2C, 1099, or 3rd-party candidates.
Remote working/work at home options are available for this role.
Not Specified
jobs by JobLookup