Automatic Data Processing Jobs in the USA

14,430 positions found

Databricks Data Engineer
✦ New
Salary not disclosed

**Must be able to be onsite in Farmington, CT 2 days a week for collaboration**

The Opportunity: We are seeking a software engineer/developer or ETL/data integration/big data developer with experience in projects emphasizing data processing and storage. This person will be responsible for supporting data ingestion, transformation, and distribution to end consumers. The candidate will perform requirements analysis, design and develop process flows, write unit and integration tests, and create/update process documentation.

  • Work with the Business Intelligence team and operational stakeholders to design and implement both the data presentation layer available to the user community and the underlying technical architecture of the data warehousing environment.
  • Develop scalable and reliable data solutions to move data across systems from multiple sources in both real-time and batch modes.
  • Design and develop database objects: tables, stored procedures, views, etc.
  • Independently analyze, solve, and correct issues in real time, providing end-to-end problem resolution.
  • Design and develop ETL processes that transform a variety of raw data, flat files, and Excel spreadsheets into SQL databases.
  • Understand the concepts of data marts and data lakes, with experience migrating legacy systems to data marts/lakes.
  • Use additional cloud technologies (e.g., understand cloud services such as Azure SQL Server).
  • Maintain comprehensive project documentation.
  • Demonstrate aptitude for learning new technologies and the ability to perform continuous research, analysis, and process improvement.
  • Apply strong interpersonal and communication skills to work in a team environment that includes customer and contractor technical staff, end users, and management team members.
  • Manage multiple projects, responsibilities, and competing priorities.

Requirements / Experience Needed:

  • Programming languages, frameworks, and file formats such as Python, SQL, PL/SQL, and VB
  • Database platforms such as Oracle, SQL Server, and MySQL
  • Big data concepts and technologies such as Synapse and Databricks
  • AWS and Azure cloud computing
  • HVR data replication
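
The "flat files and spreadsheets into SQL databases" requirement above amounts to a small extract-transform-load loop. A minimal sketch in Python, using only the standard library; the file layout, table name, and cleansing rules are invented for illustration, not taken from the posting:

```python
import csv
import io
import sqlite3

# Hypothetical raw flat-file feed, with a blank line and inconsistent casing,
# standing in for the "variety of raw data, flat files, spreadsheets" above.
raw = io.StringIO(
    "id,name,amount\n"
    "1,Alice,100.50\n"
    "\n"
    "2,bob,200.00\n"
)

conn = sqlite3.connect(":memory:")  # stand-in for the target SQL database
conn.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, name TEXT, amount REAL)")

rows = []
for rec in csv.DictReader(raw):
    if not rec.get("id"):          # defensively skip blank/partial records
        continue
    # transform: cast types, normalize name casing
    rows.append((int(rec["id"]), rec["name"].title(), float(rec["amount"])))

conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", rows)
total = conn.execute("SELECT COUNT(*), SUM(amount) FROM sales").fetchone()
print(total)  # (2, 300.5)
```

A production version would swap sqlite3 for the Oracle/SQL Server drivers named in the requirements and add logging and error handling around each stage.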

Not Specified
Data Manager
Salary not disclosed
Atlanta, GA 2 days ago
Job ID: 295066

Location: Atlanta, Georgia

Full/Part Time: Full-Time

Regular/Temporary: Regular


About Us

Georgia Tech is a top-ranked public research university situated in the heart of Atlanta, a diverse and vibrant city with numerous economic and cultural strengths. The Institute serves more than 45,000 students through top-ranked undergraduate, graduate, and executive programs in engineering, computing, science, business, design, and liberal arts. Georgia Tech's faculty attracted more than $1.4 billion in research awards this past year in fields ranging from biomedical technology to artificial intelligence, energy, sustainability, semiconductors, neuroscience, and national security. Georgia Tech ranks among the nation's top 20 universities for research and development spending and No. 1 among institutions without a medical school.

Georgia Tech's Mission and Values
Georgia Tech's mission is to develop leaders who advance technology and improve the human condition. The Institute has nine key values that are foundational to everything we do:
1. Students are our top priority.
2. We strive for excellence.
3. We thrive on diversity.
4. We celebrate collaboration.
5. We champion innovation.
6. We safeguard freedom of inquiry and expression.
7. We nurture the wellbeing of our community.
8. We act ethically.
9. We are responsible stewards.
Over the next decade, Georgia Tech will become an example of inclusive innovation, a leading technological research university of unmatched scale, relentlessly committed to serving the public good; breaking new ground in addressing the biggest local, national, and global challenges and opportunities of our time; making technology broadly accessible; and developing exceptional, principled leaders from all backgrounds ready to produce novel ideas and create solutions with real human impact.

Job Summary

The Manager of Data is responsible for overseeing the collection, management, and analysis of institutional data to support decision-making and strategic planning. This role involves leading a team of data analysts and ensuring data integrity, security, and compliance with relevant regulations. Additionally, the manager collaborates with various departments to develop data governance policies and implement effective data management practices that enhance the institution's ability to leverage data for improved outcomes.



Responsibilities

Job Duty 1 -
Oversee the development and implementation of data management strategies to ensure the accurate collection, storage, and retrieval of institutional data.

Job Duty 2 -
Lead a team of data analysts in conducting data analysis and reporting to support institutional decision-making and strategic initiatives.

Job Duty 3 -
Establish and enforce data governance policies to ensure data quality, integrity, and compliance with relevant regulations and standards.

Job Duty 4 -
Monitor data management systems and tools, ensuring they are maintained, updated, and aligned with best practices in data security and privacy.

Job Duty 5 -
Provide training and support to staff on data management practices, tools, and analytical techniques to foster a data-driven culture within the institution.

Job Duty 6 -
Conduct regular audits of data processes and systems to identify areas for improvement and implement corrective actions as needed.

Job Duty 7 -
Prepare and present comprehensive reports on data trends, analysis findings, and management initiatives to senior leadership and relevant stakeholders.

Job Duty 8 -
Stay informed about emerging data management technologies and methodologies to continually enhance the institution's data management capabilities.

Job Duty 9 -
Collaborate with academic and administrative departments to identify data needs and develop solutions that enhance data accessibility and usability.

Job Duty 10 -
Perform other duties as assigned.



Required Qualifications

Educational Requirements
Bachelor's degree in a related discipline, or equivalent related experience.

Required Experience
5+ years of relevant experience; 3+ years of supervisory experience.



Preferred Qualifications

Preferred Educational Qualifications

  • Master's degree in Computer Science, Information Technology, Information Systems, Data Science, Business Administration, or a related discipline, or equivalent related experience.
  • Certified Data Management Professional certification.
  • Experience designing, implementing and operating Security Information Management solutions such as SIMS or ThreatSwitch.
  • Advanced knowledge of SQL, database design and data modeling expertise.
  • Experience in managing and securing enterprise security database systems containing sensitive and regulated data.
  • Experience in cross-departmental collaboration during security investigations, assessments, and compliance reviews.


USG Core Values

The University System of Georgia comprises our 25 institutions of higher education and learning as well as the System Office. Our USG Statement of Core Values names Integrity, Excellence, Accountability, and Respect. These values serve as the foundation for all that we do as an organization, and each USG community member is responsible for demonstrating and upholding these standards. More details on the USG Statement of Core Values and Code of Conduct are available in USG Board Policy 8.2.18.1.2 and can be found online at policymanual/section8/C224/#p8.2.18_personnel_conduct.

Additionally, USG supports Freedom of Expression as stated in Board Policy 6.5, Freedom of Expression and Academic Freedom, found online at policymanual/section6/C2653.



Equal Employment Opportunity

The Georgia Institute of Technology (Georgia Tech) is an Equal Employment Opportunity Employer. The Institute is committed to maintaining a fair and respectful environment for all. To that end, and in accordance with federal and state law, Board of Regents policy, and Institute policy, Georgia Tech provides equal opportunity to all faculty, staff, students, and all other members of the Georgia Tech community, including applicants for admission and/or employment, contractors, volunteers, and participants in institutional programs, activities, or services. Georgia Tech complies with all applicable laws and regulations governing equal opportunity in the workplace and in educational activities.

Equal opportunity and decisions based on merit are fundamental values of the University System of Georgia ("USG") and Georgia Tech. Georgia Tech prohibits discrimination, including discriminatory harassment, on the basis of an individual's race, ethnicity, ancestry, color, religion, sex (including pregnancy), national origin, age, disability, genetics, or veteran status in its programs, activities, employment, and admissions. Further, Georgia Tech prohibits citizenship status, immigration status, and national origin discrimination in hiring, firing, and recruitment, except where such restrictions are required in order to comply with law, regulation, executive order, or Attorney General directive, or where they are required by Federal, State, or local government contract.



Other Information

This is a supervisory position. This position does not have any financial responsibilities. This position will involve some driving. This role is considered a position of trust. This position does not require a purchasing card (P-Card). This position will involve some travel. This position does not require a security clearance or the ability to obtain one. This position is located in Atlanta, GA. The salary is dependent on the candidate's experience and skills, ranging from $109,136 to $159,284. You must be a US citizen to be considered for this role.

Other Information

The Georgia Tech Research Institute (GTRI) is the nonprofit, applied research division of the Georgia Institute of Technology (Georgia Tech). This position is in the Research Security Department (RS) of GTRI.



Background Check

Successful candidate must be able to pass a background check. Please visit employment/pre-employment-screening

Not Specified
Data Privacy & Cybersecurity Associate Attorney (3–5 Years Experience) – New York, NY – 410348
✦ New
Salary not disclosed
New York, NY 1 day ago

Job ID: 410348


Practice area: Data Privacy – Litigation, Data Privacy – Transactional


Data Privacy & Cybersecurity Associate Attorney (3–5 Years Experience) – Privacy Compliance & Cyber Incident Response | New York, NY


Keywords: Data Privacy Associate Attorney, Cybersecurity Attorney, Privacy Compliance Attorney, Data Protection Litigation Attorney, Litigation Attorney New York, New York legal jobs, Attorney jobs NYC, NY Bar required, Law firm litigation associate, Partner-track position, lawyer, data processing agreement, DPA, standard contractual clauses, privacy policy, cross-border transfer, SaaS privacy, controller-processor, data map, vendor contract, information security addendum, data transfer impact assessment


A top-tier law firm is seeking a Data Privacy Associate Attorney (3–5 years experience) to join its Cybersecurity, Privacy, and Data Protection practice in New York, NY. Work on cutting-edge privacy compliance matters, regulatory investigations, and cyber incident response within a highly respected national practice.


Associates say that this Vault 20 law firm offers consistently substantive work across all levels of seniority and a collegial atmosphere. The prestigious firm, which has offices located across the country, provides valuable training and abundant mentorship opportunities to its associates, as well as a partner-track atmosphere that strives for high associate retention. Associates also praise the firm for its generous pro-bono and face-time policies.

________________________________________


A nationally recognized top-tier law firm is seeking a Data Privacy Associate Attorney to join its expanding Cybersecurity, Privacy, and Data Protection practice in New York, NY. This opportunity is ideal for attorneys with strong experience advising clients on cybersecurity compliance, data protection regulations, and incident response strategies.


Attorneys pursuing New York legal jobs in privacy and cybersecurity law will gain hands-on experience advising companies across industries on regulatory compliance, privacy frameworks, and litigation risks associated with data protection. The Data Privacy Attorney will collaborate with multidisciplinary teams while working directly with sophisticated clients navigating evolving global privacy regulations.


This partner-track position offers significant professional growth for attorneys seeking New York legal jobs in one of the fastest-growing legal practice areas. The firm is actively interviewing candidates with strong law firm experience who want exposure to complex cybersecurity matters and regulatory investigations.

________________________________________


Key Responsibilities


• Advise clients on cybersecurity compliance and data privacy regulations.

• Develop and implement corporate privacy policies and governance programs.

• Provide legal guidance during cyber incidents and data breach responses.

• Assist clients with data breach notification requirements and regulatory compliance.

• Conduct regulatory investigations and litigation related to privacy violations.

• Collaborate with internal legal teams and cybersecurity professionals on compliance strategies.

• Draft legal memoranda, compliance documentation, and regulatory responses.

________________________________________


Qualifications


• 3–5 years of experience practicing as a Data Privacy Associate Attorney or similar role.

• Prior large law firm experience in privacy, cybersecurity, or regulatory compliance.

• Strong understanding of privacy laws, data protection frameworks, and cybersecurity regulations.

• Excellent legal research, writing, and analytical abilities.

• Ability to work effectively in a fast-paced legal environment.

• NY Bar required and active license to practice law in New York.

• Superior academic credentials.

________________________________________


Education


• Juris Doctor (JD) degree from an accredited law school.

________________________________________


Certifications


• Active bar admission in the relevant jurisdiction.

________________________________________


Skills


• Strong analytical and problem-solving capabilities.

• Excellent legal writing and research skills.

• Effective communication and presentation abilities.

• Ability to collaborate within cross-disciplinary legal teams.

• Strong client advisory and risk management skills.

________________________________________


Culture & Firm Appeal


This opportunity is with a nationally respected top-tier law firm known for providing sophisticated legal services across a wide range of industries. The firm maintains a strong reputation for handling complex regulatory, cybersecurity, and litigation matters for major clients.

Attorneys benefit from a collegial and collaborative culture where mentorship and professional development are prioritized. Associates consistently receive meaningful responsibility early in their careers and gain exposure to complex legal matters across the cybersecurity and privacy landscape.

Professionals exploring New York legal jobs will value the firm’s commitment to mentorship, high-quality legal work, and a strong partner-track environment designed to support long-term career development.

________________________________________


Why This Role Is Unique


• Work at the forefront of cybersecurity and data privacy law, one of the fastest-growing legal practice areas.

• Advise clients on complex data breach incidents and privacy compliance strategies.

• Exposure to high-profile regulatory investigations and privacy litigation matters.

• Collaborative team environment with strong mentorship and training.

• Clear partner-track position within a respected national legal practice.

• Ideal opportunity for attorneys seeking advanced New York legal jobs in cybersecurity and privacy law.

This position rarely opens at this level and provides attorneys the opportunity to build expertise in a rapidly evolving legal practice area while working with sophisticated clients.

________________________________________


Benefits


• Comprehensive health insurance.

• Retirement savings plan.

• Professional development opportunities.

________________________________________


Call to Action


Apply now for a confidential discussion with a BCG Attorney Search recruiter.

Explore this elite-level opportunity today.

Submit your resume to learn more about this prestigious role.

________________________________________


BCG Attorney Search is the industry leader for placing candidates in permanent positions in law firms. Since 2000, our recruiters have placed several thousand attorneys and enjoyed extraordinarily high success rates with our candidates. As a BCG Attorney Search candidate, you have access to more opportunities than at any other legal placement firm in the United States. We are able to offer the most in-depth insight into the legal recruiting market thanks to our dedicated team of over 150 employees who mercilessly research, study, and analyze the legal market. The depth and breadth of our research empowers us to place attorneys at rates that are unparalleled at any placement firm in the United States. Many of our recruiters make 30 to 40 placements per year, while recruiters at competitor firms are likely to make four or five. Unlike other placement firms that can only tell you about openings at major AmLaw firms, we place candidates of all backgrounds in firms of all sizes. While other legal recruiters represent only a narrow band of candidates from top law firms and top law schools, our research, firm contacts, and market insight allow us to place hundreds of candidates each year who do not fit this mold. It is rare that we do not secure interviews and offers for the candidates we represent. No one in the world is better at legal recruiting and placement than BCG Attorney Search.


BCG Attorney Search will confidentially review your application and will not forward your materials to the firm without first discussing the opportunity with you.

Not Specified
Senior Data Architect
✦ New
Salary not disclosed
Princeton, NJ 9 hours ago

About Cygnus Professionals, Inc.

Cygnus is a Princeton, NJ-headquartered global business IT consulting and software services firm with offices in the USA and Asia. Cygnus enables innovation and helps our clients accelerate time to market and grow their business. For over 15 years, we have taken great pride in continuing our deep relationships with our clients.


For further information about CYGNUS, please visit our website.

Title: Data Architect

Location: Princeton, New Jersey – Onsite

W2 Contract


Job Summary

We are seeking an experienced Data Architect to design, build, and maintain scalable data architecture solutions supporting enterprise analytics, data integration, and digital transformation initiatives. The ideal candidate will work closely with business stakeholders, data engineers, and application teams to design robust data models, data pipelines, and enterprise data platforms that support advanced analytics and reporting.

Key Responsibilities

  • Design and implement enterprise data architecture frameworks and best practices.
  • Develop logical and physical data models for enterprise data platforms.
  • Architect data lakes, data warehouses, and data integration solutions across cloud and on-prem environments.
  • Collaborate with data engineers and application teams to build scalable data pipelines and ETL/ELT processes.
  • Ensure data governance, data quality, security, and compliance standards are implemented across the data ecosystem.
  • Evaluate and recommend data technologies, tools, and frameworks aligned with enterprise strategy.
  • Provide architectural guidance for cloud-based data platforms (AWS/Azure/GCP).
  • Optimize performance for large-scale data processing and analytics workloads.
  • Support business intelligence, reporting, and advanced analytics initiatives.

Required Qualifications

  • 10+ years of experience in data architecture, data engineering, or enterprise data management.
  • Strong experience with data modeling (conceptual, logical, physical).
  • Expertise with data warehouse and data lake architectures.
  • Hands-on experience with ETL/ELT tools and data integration platforms.
  • Experience with SQL and large-scale data platforms (Snowflake, Redshift, BigQuery, etc.).
  • Experience working with cloud data platforms (AWS, Azure, or GCP).
  • Strong understanding of data governance, data quality, and metadata management.
  • Experience with big data technologies (Spark, Hadoop, Kafka) is a plus.

Preferred Skills

  • Experience in Healthcare, Pharmaceutical, or Life Sciences domain.
  • Knowledge of Master Data Management (MDM) and data catalog tools.
  • Familiarity with BI tools such as Tableau, Power BI, or Looker.
  • Strong communication skills to interact with business and technical teams.

Education

  • Bachelor’s or Master’s degree in Computer Science, Information Systems, Data Science, or related field.


Cygnus Belief

We are committed to diversity & inclusion.


Equal Employment Opportunity Statement

Cygnus is an Equal Opportunity Employer. We ensure that no one is discriminated against because of their differences, such as age, disability, ethnicity, gender, gender identity and expression, religion, or sexual orientation.


All our employment decisions are made without regard to age, race, creed, color, religion, sex, nationality, disability status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status, or any other aspect of employment protected by federal, state, or local law. Applicants for employment in the US must have work authorization.

Not Specified
Data Quality Analyst / Data Steward
✦ New
Salary not disclosed
Montgomery 1 day ago
Job Requisition: Data Quality Analyst / Data Steward
Contract Length: Long term, with potential renewal each fiscal year
Work Location: 100% onsite – Montgomery, AL

Candidate Profile

Experienced data professional capable of building, advancing, and scaling data quality and governance foundations from scratch.

Able to operate independently in low structure environments, collaborate across business and IT, and deliver high quality, AI ready data ecosystems.

Role Purpose

Establish, advance, and mature data quality and governance capabilities in a greenfield, low-maturity data environment.

Support enterprise analytics, BI, and AI/ML readiness through SQL/ETL engineering, data profiling, validation, stewardship, metadata management, and early-stage data architecture.

Drive long term improvement of data standards, definitions, lineage, and quality processes.

Key Responsibilities

Data Quality & Engineering

Perform data audits, profiling, validation, anomaly detection, and quality gap identification.

Develop automated data quality rules and validation logic using T-SQL, SQL Server, stored procedures, and indexing strategies.

Build and maintain SSIS packages for validation, cleansing, transformation, and error detection workflows.

Troubleshoot ETL/ELT pipelines, data migrations, integration failures, and data load issues.

Conduct root cause analysis and implement preventive and long term remediation solutions.

Optimize SQL queries, tune stored procedures, and improve data processing performance.

Document audit findings, validation processes, data flows, standards, and quality reports.

Build dashboards and reports for data quality KPIs using Power BI/Tableau.
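
The automated validation rules described above target T-SQL and SSIS; a language-neutral sketch of the same idea in Python (the rule names, record fields, and scoring formula are illustrative assumptions, not from the posting):

```python
# Each rule flags failing record ids; a simple quality score is the share
# of records that pass every rule. In the posting's stack this logic would
# live in T-SQL validation procedures and SSIS packages instead.
records = [
    {"id": 1, "email": "a@example.com", "age": 34},
    {"id": 2, "email": "",              "age": 29},
    {"id": 3, "email": "c@example.com", "age": -5},
]

rules = {
    "email_present": lambda r: bool(r["email"]),
    "age_in_range":  lambda r: 0 <= r["age"] <= 120,
}

failures = {name: [r["id"] for r in records if not rule(r)]
            for name, rule in rules.items()}
passing = sum(all(rule(r) for rule in rules.values()) for r in records)
score = passing / len(records)

print(failures)  # {'email_present': [2], 'age_in_range': [3]}
print(round(score, 2))  # 0.33
```

The per-rule failure lists feed audit documentation, and the score is the kind of data quality KPI the Power BI/Tableau dashboards mentioned above would track over time.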

Data Stewardship & Governance

Define, maintain, and enforce data quality standards, business rules, data definitions, and governance policies.

Monitor datasets for completeness, accuracy, timeliness, consistency, and compliance.

Ensure proper and consistent data usage across departments and systems.

Maintain business glossaries, data dictionaries, metadata repositories, and lineage documentation.

Partner with IT, data engineering, and business teams to support governance initiatives and compliance requirements.

Provide training on data entry, data handling, stewardship practices, and data literacy.

Collaborate with cross functional teams to identify recurring data issues and recommend preventive solutions.

Greenfield / Low-Maturity Environment

Architect initial data quality frameworks, validation layers, governance artifacts, and ingestion patterns.

Establish scalable data preparation workflows supporting analytics, BI, and AI/ML readiness.

Mature data quality and governance processes from ad hoc to standardized, automated, and measurable.

Drive adoption of data quality and governance practices across business and technical teams.

Support long term evolution of enterprise data strategy and governance maturity.

Required Technical Skills

Advanced T-SQL, SQL Server development, debugging, and performance tuning.

SSIS development, deployment, and troubleshooting.

Data profiling, validation rule design, quality scoring, and measurement techniques.

ETL/ELT pipeline design, debugging, and optimization.

Data modeling (conceptual, logical, physical).

Metadata management and lineage documentation.

Reporting and dashboarding with Power BI, Tableau, or similar tools.

Strong documentation and communication skills.

Preferred Skills

Knowledge of DAMA DMBoK, DCAM, MDM concepts, and governance frameworks.

Experience in low-maturity/greenfield data environments.

Familiarity with AI/ML data readiness and feature store aligned data structuring.

Cloud data engineering exposure (Azure, Databricks, GCP).

Education

Bachelor’s degree in Information Systems, Computer Science, Data Science, Statistics, Business Analytics, or a related field.

Master’s degree preferred.

Certifications (Preferred)

  • DAMA CDMP (Associate/Practitioner)
  • EDM Council DCAM
  • ASQ Data Quality Credential
  • Collibra Data Steward Certification
  • Certified Data Steward (eLearningCurve)
  • Cloud/AI certifications (Azure, Databricks, Google)
Not Specified
Azure Data Engineer
✦ New
Salary not disclosed
Queens 1 day ago
Job Description: We are seeking a hands-on consultant with strong Azure ETL experience and advanced Power BI development skills.

Candidates are required to have experience modernizing legacy Microsoft BI environments (including SSIS).

This is not an SSIS-only role.

The consultant will design, modernize, and enhance enterprise data and analytics solutions supporting Cyber Security, Physical Security, Electronic Security and Police operations.

This role includes evolving legacy SQL Server/SSIS-based processes into modern Azure data architectures while designing scalable new ETL/ELT pipelines and delivering executive-level analytics solutions.

The consultant will work directly with stakeholders to deliver production-grade reporting and analytics capabilities across multiple enterprise systems.

This requires architectural thinking and hands-on technical execution.

Core Responsibilities: Candidates must have direct experience building enterprise-grade ETL pipelines and executive Power BI dashboards.

  • Design and implement modern ETL/ELT pipelines in Azure
  • Assess and refactor existing SSIS packages as part of broader modernization efforts
  • Architect Lakehouse / Medallion data models
  • Develop optimized dimensional data models (star schema)
  • Integrate data from SQL Server, Oracle, APIs, and security platforms
  • Design and deploy enterprise Power BI dashboards
  • Build paginated reports using Power BI Report Builder
  • Optimize DAX and dataset performance
  • Implement Row-Level Security (RLS)
  • Support CI/CD and DevOps deployment processes
  • Produce technical documentation and data lineage artifacts
  • Engage directly with executive stakeholders

Required Technical Skills (Must-Have)

Data Engineering & Architecture:
  • Strong ETL/ELT design and optimization experience
  • Advanced SQL (expert-level required)
  • Python / PySpark
  • Dimensional data modeling (star schema required)
  • REST API integrations

Azure Data Stack:
  • Azure Data Factory
  • Azure Databricks
  • Azure Synapse Analytics
  • Azure Data Lake Storage

Microsoft Data Platform:
  • Experience with SQL Server data warehouse environments
  • Working knowledge of SSIS and experience modernizing or migrating SSIS workflows to Azure-based solutions

Power BI:
  • Power BI Desktop (expert-level)
  • Advanced DAX
  • Executive dashboard development
  • Paginated reports (Power BI Report Builder)
  • Data Gateway configuration
  • Incremental refresh
  • Row-Level Security (RLS)

Nice to Have:
  • Microsoft Purview
  • Terraform (Infrastructure-as-Code)
  • Orchestration tools (Airflow or equivalent)
  • Security systems data integration experience
  • Experience with C# / .NET web application development (for integration with internal systems or APIs)

Experience Requirements:
  • 7+ years of hands-on data engineering / analytics delivery
  • Demonstrated experience building production data pipelines in Azure
  • Proven experience delivering executive-facing Power BI solutions
  • Experience working in complex enterprise environments

Software Skills: 4–6 years of experience in Azure for building, deploying, and managing cloud-based data and application services.

Technical Skills: 2–4 years of experience in .NET code development for developing and maintaining enterprise applications and data processing components.

6+ years of experience in Data Modeling including designing logical and physical data models for enterprise data warehouses and analytics systems.

6+ years of experience in Python scripting for data processing, automation, ETL development, and data transformation tasks.

6+ years of experience in Structured Query Language (SQL) for writing complex queries, stored procedures, performance tuning, and data manipulation.
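
The dimensional-modeling and SQL skills listed above center on star schemas: a fact table joined to conforming dimensions. A minimal, self-contained sketch (table and column names are invented for illustration; sqlite3 stands in for the SQL Server/Azure platforms named in the posting):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, year INTEGER, month INTEGER);
CREATE TABLE fact_incident (
    incident_id INTEGER PRIMARY KEY,
    date_key INTEGER REFERENCES dim_date(date_key),  -- foreign key into the dimension
    severity TEXT,
    incident_count INTEGER
);
INSERT INTO dim_date VALUES (20240101, 2024, 1), (20240201, 2024, 2);
INSERT INTO fact_incident VALUES
    (1, 20240101, 'high', 3),
    (2, 20240101, 'low',  5),
    (3, 20240201, 'high', 2);
""")

# The shape of query a reporting layer issues against a star schema:
# group by dimension attributes, aggregate fact measures.
rows = conn.execute("""
    SELECT d.month, SUM(f.incident_count)
    FROM fact_incident f JOIN dim_date d USING (date_key)
    GROUP BY d.month ORDER BY d.month
""").fetchall()
print(rows)  # [(1, 8), (2, 2)]
```

A Power BI semantic model over such a schema would express the same aggregate as a DAX measure, with the star layout keeping filters on dimensions cheap.
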
Not Specified
Databricks Architect/ Senior Data Engineer
✦ New
🏢 OZ
Salary not disclosed
Boca Raton, FL 1 day ago

OZ – Databricks Architect/ Senior Data Engineer


Note: Only applications from U.S. citizens or lawful permanent residents (Green Card Holders) will be considered.


We believe work should be innately rewarding and a team-building venture. Working with our teammates and clients should be an enjoyable journey where we can learn, grow as professionals, and achieve amazing results. Our core values revolve around this philosophy. We are relentlessly committed to helping our clients achieve their business goals, leapfrog the competition, and become leaders in their industry. What drives us forward is the culture of creativity combined with a disciplined approach, passion for learning & innovation, and a ‘can-do’ attitude!


What We're Looking For:

We are seeking a highly experienced Databricks professional with deep expertise in data engineering, distributed computing, and cloud-based data platforms. The ideal candidate is both an architect and a hands-on engineer who can design scalable data solutions while actively contributing to development, optimization, and deployment.


This role requires strong technical leadership, a deep understanding of modern data architectures, and the ability to implement best practices in DataOps, performance optimization, and data governance.


Experience with modern AI/GenAI-enabled data platforms and real-time data processing environments is highly desirable.


Position Overview:

The Databricks Senior Data Engineer will play a critical role in designing, implementing, and optimizing enterprise-scale data platforms using the Databricks Lakehouse architecture. This role combines architecture leadership with hands-on engineering, focusing on building scalable, secure, and high-performance data pipelines and platforms. The ideal candidate will establish coding standards, define data architecture frameworks such as the Medallion Architecture, and guide the end-to-end development lifecycle of modern data solutions.


This individual will collaborate with cross-functional stakeholders, including data engineers, BI developers, analysts, and business leaders, to deliver robust data platforms that enable advanced analytics, reporting, and AI-driven decision-making.


Key Responsibilities:

  • Architecture & Design: Architect and design scalable, reliable data platforms and complex ETL/ELT and streaming workflows for the Databricks Lakehouse Platform (Delta Lake, Spark).
  • Hands-On Development: Write, test, and optimize code in Python, PySpark, and SQL for data ingestion, transformation, and processing.
  • DataOps & Automation: Implement CI/CD, monitoring, and automation (e.g., with Azure DevOps, DBX) for data pipelines.
  • Stakeholder Collaboration: Work with BI developers, analysts, and business users to define requirements and deliver data-driven solutions.
  • Performance Optimization: Tune Delta tables, Spark jobs, and SQL queries for maximum efficiency and scalability.
  • GenAI Applications Development: Experience building GenAI applications is a strong plus.
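As a rough illustration of the bronze → silver → gold layering behind the Medallion Architecture named in this posting, the sketch below uses plain Python records. A real Databricks implementation would use PySpark DataFrames and Delta tables; every table name, field, and rule here is hypothetical.

```python
# Illustrative sketch of Medallion layering (bronze -> silver -> gold) on
# plain Python dicts. In Databricks this would be PySpark + Delta tables.

def to_silver(bronze_rows):
    """Clean and conform raw (bronze) records: drop malformed rows,
    normalize types, and deduplicate on the business key."""
    seen, silver = set(), []
    for row in bronze_rows:
        if row.get("order_id") is None or row.get("amount") is None:
            continue  # a real pipeline would quarantine malformed records
        key = row["order_id"]
        if key in seen:
            continue  # deduplicate on order_id
        seen.add(key)
        silver.append({"order_id": key,
                       "region": str(row.get("region", "UNKNOWN")).upper(),
                       "amount": float(row["amount"])})
    return silver

def to_gold(silver_rows):
    """Aggregate conformed (silver) records into a business-level (gold) view."""
    totals = {}
    for row in silver_rows:
        totals[row["region"]] = totals.get(row["region"], 0.0) + row["amount"]
    return totals

bronze = [
    {"order_id": 1, "region": "east", "amount": "10.5"},
    {"order_id": 1, "region": "east", "amount": "10.5"},   # duplicate
    {"order_id": 2, "region": "west", "amount": "4.0"},
    {"order_id": None, "region": "east", "amount": "99"},  # malformed
]
print(to_gold(to_silver(bronze)))  # {'EAST': 10.5, 'WEST': 4.0}
```

The same shape carries over to PySpark: bronze is raw ingestion, silver applies cleansing and deduplication, and gold serves aggregated, consumption-ready tables.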


Requirements:

  • 8+ years of experience in data engineering, with strong hands-on expertise in Databricks and Apache Spark.
  • Proven experience designing and implementing scalable ETL/ELT pipelines in cloud environments.
  • Strong programming skills in Python and SQL; experience with PySpark required.
  • Hands-on experience with Databricks Lakehouse, Delta Lake, and distributed data processing.
  • Experience working with cloud platforms such as Microsoft Azure, AWS, or GCP (Azure preferred).
  • Experience with CI/CD pipelines, Git, and DevOps practices for data engineering.
  • Strong understanding of data architecture, data modeling, and performance optimization.
  • Experience working with cross-functional teams to deliver enterprise data solutions.
  • Ability to tackle complex data challenges while ensuring data quality and reliable delivery.


Qualifications:

  • Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related field.
  • Experience designing enterprise-scale data platforms and modern data architectures.
  • Experience with data integration tools such as Azure Data Factory or similar platforms.
  • Familiarity with cloud data warehouses such as Databricks, Snowflake, or Microsoft Fabric.
  • Experience supporting analytics, reporting, or AI/ML workloads is highly desirable.
  • Databricks, Azure, or cloud certifications are preferred.
  • Strong problem-solving, communication, and technical leadership skills.


Technical Proficiency in:

  • Databricks, Apache Spark, PySpark, Delta Lake
  • Python, SQL, Scala (preferred)
  • Cloud platforms: Azure (preferred), AWS, or GCP
  • Azure Data Factory, Kafka, and modern data integration tools
  • Data warehousing: Databricks, Snowflake, or Microsoft Fabric
  • DevOps tools: Git, Azure DevOps, CI/CD pipelines
  • Data architecture, ETL/ELT design, and performance optimization


What You’re Looking For:

Join a fast-growing organization that thrives on innovation and collaboration. You’ll work alongside talented, motivated colleagues in a global environment, helping clients solve their most critical business challenges. At OZ, your contributions matter – you’ll have the chance to be a key player in our growth and success. If you’re driven, bold, and eager to push boundaries, we invite you to join a company where you can truly make a difference.


About Us:

OZ is a 28-year-old global technology consulting, services, and solutions leader specializing in creating business-focused solutions for our clients by leveraging disruptive digital technologies and innovation.


OZ is committed to creating a continuum between work and life by allowing people to work remotely. We offer competitive compensation and a comprehensive benefits package. You’ll enjoy our work style within an incredible culture. We’ll give you the tools you need to succeed so you can grow and develop with us and become part of a team that lives by its core values.

Senior Cloud Data Engineer
✦ New
🏢 Cyient
Salary not disclosed
East Hartford, CT 1 day ago

Job Description Summary

We are seeking a highly skilled and experienced Senior Data Engineer to join our dynamic team. In this role, you will be instrumental in designing, building, and maintaining robust and scalable data pipelines and solutions within the Microsoft Azure ecosystem. You will be responsible for developing and optimizing ETL/ELT processes, ensuring data quality, and enabling efficient data access for analytics and business intelligence. We are looking for a hands-on engineer who thrives in a fast-paced environment and is passionate about leveraging cutting-edge technologies.



Key Responsibilities:

Design, develop, and maintain cloud-based data pipelines and ETL/ELT workflows.

Build and optimize data architectures to support structured and unstructured data processing.

Collaborate with data analysts, data scientists, and business stakeholders to understand data needs.

Implement data quality, security, and governance best practices.

Monitor and troubleshoot data workflows to ensure high availability and performance.

Optimize database and data storage solutions for performance and cost efficiency.

Contribute to cloud adoption, migration, and modernization initiatives.


Mandatory Skills:

Strong expertise with Azure cloud platform.

Strong experience in Databricks.

Azure Data Factory proficiency required: building datasets, data flows, and pipelines in ADF (not merely maintaining existing ones).

Hands-on experience with ETL/ELT tools and frameworks.

Proficiency in SQL, Python, and data modeling.

Knowledge of CI/CD pipelines and infrastructure-as-code tools.

Understanding of data governance, security, and compliance.


Preferred Skills:

Exposure to API integration and microservices architecture.

Strong analytical and problem-solving skills.

Azure cloud certifications and/or prior Azure experience.

AKS (Azure Kubernetes Service) experience, including ETL for applications containerized and deployed on AKS (or EKS).
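The mandatory skills above pair pipeline building with data-quality practices. As a minimal, hedged sketch of what a quality gate inside an ETL step can look like, the plain-Python example below splits rows into good and quarantined sets and fails the run if too many are bad. In ADF or Databricks this logic would live in an activity or notebook; all names and thresholds are illustrative.

```python
# Hedged sketch: an ETL step with a simple data-quality gate.
# Function names, fields, and the 50% threshold are all hypothetical.

def extract():
    # Stand-in for reading from a source dataset (e.g., an ADF source).
    return [{"id": 1, "qty": 3}, {"id": 2, "qty": -1}, {"id": 3, "qty": 7}]

def quality_gate(rows, max_bad_ratio=0.5):
    """Split rows into good/quarantined and fail the run if the bad
    fraction exceeds max_bad_ratio."""
    good = [r for r in rows if r["qty"] >= 0]
    bad = [r for r in rows if r["qty"] < 0]
    if rows and len(bad) / len(rows) > max_bad_ratio:
        raise ValueError(f"quality gate failed: {len(bad)}/{len(rows)} bad rows")
    return good, bad

def transform(rows):
    return [{"id": r["id"], "qty_doubled": r["qty"] * 2} for r in rows]

good, quarantined = quality_gate(extract())
print(transform(good))   # [{'id': 1, 'qty_doubled': 6}, {'id': 3, 'qty_doubled': 14}]
print(quarantined)       # [{'id': 2, 'qty': -1}]
```

Quarantining bad rows instead of silently dropping them keeps the pipeline auditable, which is the point of the governance bullets above.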

Data Integration & AI Engineer
✦ New
Salary not disclosed
Edison, NJ 1 day ago

About Wakefern

Wakefern Food Corp. is the largest retailer-owned cooperative in the United States and supports its co-operative members' retail operations, trading under the ShopRite®, Price Rite®, The Fresh Grocer®, Dearborn Markets®, and Gourmet Garage® banners.


Employing an innovative approach to wholesale business services, Wakefern focuses on helping the independent retailer compete in a big business world. Providing the tools entrepreneurs need to stay a step ahead of the competition, Wakefern’s co-operative members benefit from the company’s extensive portfolio of services, including innovative technology, private label development, and best-in-class procurement practices.


The ideal candidate will have a strong background in designing, developing, and implementing complex projects, with a focus on automating data processes and driving efficiency within the organization. This role requires close collaboration with application developers, data engineers, data analysts, and data scientists to ensure seamless data integration and automation across various platforms. The Data Integration & AI Engineer is responsible for identifying opportunities to automate repetitive data processes, reduce manual intervention, and improve overall data accessibility.


Essential Functions

  • Participate in the development life cycle (requirements definition, project approval, design, development, and implementation) and maintenance of the systems.
  • Implement and enforce data quality and governance standards to ensure data accuracy and consistency.
  • Provide input for project plans and timelines to align with business objectives.
  • Monitor project progress, identify risks, and implement mitigation strategies.
  • Work with cross-functional teams and ensure effective communication and collaboration.
  • Provide regular updates to the management team.
  • Follow the standards and procedures according to Architecture Review Board best practices, revising standards and procedures as requirements change and technological advancements are incorporated into the technology structure.
  • Communicate and promote the code of ethics and business conduct.
  • Ensure completion of required company compliance training programs.
  • Maintain training (through formal education or experience) in software/hardware technologies and development methodologies.
  • Stay current through personal development and professional and industry organizations.

Responsibilities

  • Design, build, and maintain automated data pipelines and ETL processes to ensure scalability, efficiency, and reliability across data operations.
  • Develop and implement robust data integration solutions to streamline data flow between diverse systems and databases.
  • Continuously optimize data workflows and automation processes to enhance performance, scalability, and maintainability.
  • Design and develop end-to-end data solutions utilizing modern technologies, including scripting languages, databases, APIs, and cloud platforms.
  • Ensure data solutions and data sources meet quality, security, and compliance standards.
  • Monitor and troubleshoot automation workflows, proactively identifying and resolving issues to minimize downtime.
  • Provide technical training, documentation, and ongoing support to end users of data automation systems.
  • Prepare and maintain comprehensive technical documentation, including solution designs, specifications, and operational procedures.


Qualifications

  • A bachelor's degree or higher in computer science, information systems, or a related field.
  • Hands-on experience with cloud data platforms (e.g., GCP, Azure).
  • Strong knowledge and skills in data automation technologies, such as Python, SQL, ETL/ELT tools, Kafka, APIs, cloud data pipelines, etc.
  • Experience with GCP BigQuery, Dataflow, Pub/Sub, and Cloud Storage.
  • Experience with workflow orchestration tools such as Cloud Composer or Airflow.
  • Proficiency in iPaaS (Integration Platform as a Service) platforms, such as Boomi, SAP BTP, etc.
  • Develop and manage data integrations for AI agents, connecting them to internal and external APIs, databases, and knowledge sources to expand their capabilities.
  • Build and maintain scalable Retrieval-Augmented Generation (RAG) pipelines, including the curation and indexing of knowledge bases in vector databases (e.g., Pinecone, Vertex AI Vector Search).
  • Leverage cloud-based AI/ML platforms (e.g., Vertex AI, Azure ML) to build, train, and deploy machine learning models at scale.
  • Establish and enforce data quality and governance standards for AI/ML datasets, ensuring the accuracy, completeness, and integrity of data used for model training and validation.
  • Collaborate closely with data scientists and machine learning engineers to understand data requirements and deliver optimized data solutions that support the entire machine learning lifecycle.
  • Hands-on experience with IBM DataStage and Alteryx is a plus.
  • Strong understanding of database design principles, including normalization, indexing, partitioning, and query optimization.
  • Ability to design and maintain efficient, scalable, and well-structured database schemas to support both analytical and transactional workloads.
  • Familiarity with BI visualization tools such as MicroStrategy, Power BI, Looker, or similar.
  • Familiarity with data modeling tools.
  • Familiarity with DevOps practices for data (CI/CD pipelines)
  • Proficiency in project management software (e.g., JIRA, Clarizen, etc.)
  • Strong knowledge and skills in data management, data quality, and data governance.
  • Strong communication, collaboration, and problem-solving skills.
  • Ability to work on multiple projects and prioritize tasks effectively.
  • Ability to work independently and in a team environment.
  • Ability to learn new technologies and tools quickly.
  • Ability to handle stressful situations.
  • Highly developed business acumen.
  • Strong critical thinking and decision-making skills.
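The RAG responsibilities above center on retrieving indexed knowledge by embedding similarity. The sketch below shows only the retrieval step, using cosine similarity over a toy in-memory index. A production pipeline would use a vector database (e.g., Pinecone or Vertex AI Vector Search) and embeddings from a real model; every document and vector here is made up.

```python
# Minimal sketch of the retrieval step in a RAG pipeline: rank an
# in-memory "knowledge base" by cosine similarity to a query embedding.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# (document, embedding) pairs; real embeddings come from an embedding model.
index = [
    ("store hours policy", [0.9, 0.1, 0.0]),
    ("produce supplier list", [0.1, 0.8, 0.2]),
    ("loyalty program terms", [0.0, 0.2, 0.9]),
]

def retrieve(query_vec, k=2):
    """Return the top-k documents most similar to the query embedding."""
    ranked = sorted(index, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [doc for doc, _ in ranked[:k]]

print(retrieve([1.0, 0.0, 0.1], k=1))  # ['store hours policy']
```

The retrieved documents would then be injected into the model prompt; curation and indexing of the knowledge base (the other half of the bullet above) happens offline.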


Working Conditions & Physical Demands

This position requires in-person office presence at least 4x a week.


Compensation and Benefits

The salary range for this position is $75,868 - $150,644. Placement in the range depends on several factors, including experience, skills, education, geography, and budget considerations.

Wakefern is proud to offer a comprehensive benefits package designed to support the health, well-being, and professional development of our Associates. Benefits include medical, dental, and vision coverage, life and disability insurance, a 401(k) retirement plan with company match & annual company contribution, paid time off, holidays, and parental leave.


Associates also enjoy access to wellness and family support programs, fitness reimbursement, educational and training opportunities through our corporate university, and a collaborative, team-oriented work environment. Many of these benefits are fully or partially funded by the company, with some subject to eligibility requirements.

SENIOR AWS DATA ENGINEER
✦ New
Salary not disclosed
Irving, Texas 9 hours ago

Visa Status: US Citizen or Green Card Only

Location: Irving, TX (Local Candidates Only)

Employment Type: Full-time / Direct Hire

Work Environment: Hybrid (Monday through Thursday in office / Friday at home)

***MUST HAVE 10+ YEARS EXPERIENCE AS A DATA ENGINEER***

***US Citizen or Green Card Only***

The AWS Senior Data Engineer will own the planning, design, and implementation of data structures for this leading Hospitality Corporation in their AWS environment. This role will be responsible for incorporating all internal and external data sources into a robust, scalable, and comprehensive data model within AWS to support business intelligence and analytics needs throughout the company.

Responsibilities:

  • Collaborate with cross-functional teams to understand and define business intelligence needs and translate them into data modeling solutions
  • Develop, build, and maintain scalable data pipelines, data schema designs, and dimensional data models in Databricks and AWS for all system data sources, API integrations, and bespoke data ingestion files from external sources, including both batch and real-time pipelines
  • Perform data cleansing, standardization, and quality control
  • Create data models that will support comprehensive data insights, business intelligence tools, and other data science initiatives
  • Create data models and ETL procedures with traceability, data lineage and source control
  • Design and implement data integration and data quality framework
  • Implement data monitoring best practices with trigger-based alerts for data processing KPIs and anomalies
  • Investigate and remediate data problems, performing and documenting thorough and complete root cause analyses. Make recommendations for mitigation and prevention of future issues.
  • Work with Business and IT to assess efficacy of all legacy data sources, making recommendations for migration, anonymization, archival and/or destruction.
  • Continually seek to optimize performance through database indexing, query optimization, stored procedures, etc.
  • Ensure compliance with data governance and data security requirements, including data life cycle management, purge and traceability.
  • Create and manage documentation and change control mechanisms for all technical design, implementations and systems maintenance.
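The monitoring bullet above calls for trigger-based alerts on data-processing KPIs. As a hedged sketch of that idea, the check below compares a pipeline KPI (daily row count) against its trailing average and fires an alert when it deviates beyond a threshold. The metric, threshold, and alert text are illustrative; a real AWS setup would emit these to CloudWatch or an alerting service rather than return strings.

```python
# Hedged sketch of trigger-based data monitoring: alert when today's
# row count deviates from the trailing average by more than `tolerance`.

def check_row_count(history, today, tolerance=0.5):
    """Return an alert string if today's count deviates from the
    trailing average by more than `tolerance` (a fraction), else None."""
    if not history:
        return None  # no baseline yet; nothing to compare against
    avg = sum(history) / len(history)
    deviation = abs(today - avg) / avg
    if deviation > tolerance:
        return (f"ALERT: row count {today} deviates "
                f"{deviation:.0%} from trailing average {avg:.0f}")
    return None

recent_loads = [10_000, 10_400, 9_800, 10_100]
print(check_row_count(recent_loads, 10_050))  # None (within tolerance)
print(check_row_count(recent_loads, 2_000))   # ALERT: row count 2000 ...
```

The same pattern generalizes to other KPIs in the bullet (freshness, null rates, processing latency) by swapping the metric and baseline.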

Target Skills and Experience

  • Bachelor's or graduate degree in computer science, information systems or related field preferred, or similar combination of education and experience
  • At least 10 years' experience designing and managing data pipelines, schema modeling, and data processing systems.
  • Experience with Databricks a plus (or similar tools like Microsoft Fabric, Snowflake, etc.) to drive scalable data solutions.
  • Experience with SAP a plus
  • Proficient in Python, with a track record of solving real-world data challenges.
  • Advanced SQL skills, including experience with database design, query optimization, and stored procedures.
  • Experience with Terraform or other infrastructure-as-code tools is a plus.