Automatic Data Processing Salary Jobs in the USA
16,971 positions found — Page 2
AI Data & Python Tools Engineer
We're seeking an AI Data and Python Tools Engineer to develop and deploy intelligent tools that leverage big data infrastructure and modern AI architecture. This role combines strong software engineering fundamentals with the ability to build production-ready AI applications at speed, including integration with Model Context Protocol (MCP) systems.
Responsibilities:
- Develop and deploy AI-powered full-stack applications using Python, React, and modern machine learning frameworks
- Design and streamline data pipelines, train and validate ML models, and implement robust evaluation methods
- Collaborate with cross-functional teams to solve complex problems and integrate scalable, cloud-based AI solutions
- Rapidly prototype, test, and iterate on AI tools with a strong focus on performance, flexibility, and scalability
- Maintain clear technical documentation, perform code reviews, and support the full software development lifecycle
Software Engineering & AI/ML Data Tools Development
- 3+ years of Python development with a background in back-end services and data processing
- Exposure to AI/ML algorithms
- Familiarity with ML frameworks (TensorFlow, PyTorch, scikit-learn)
- Understanding of LLMs, vector databases, and retrieval systems
- Experience with Model Context Protocol (MCP) integration and server development
Big Data & Cloud Infrastructure
- Knowledge of building and deploying cloud-based applications
- Hands-on experience with cloud data platforms (AWS/GCP/Azure)
- Proficiency with big data technologies (Spark, Kafka, or similar streaming platforms)
- Experience with data warehouses (Snowflake, BigQuery, Redshift) and data lakes
- Knowledge of containerization (Docker/Kubernetes) and infrastructure as code
Preferred Experience
- Experience building web applications with modern frameworks (React, Vue, or Angular)
- API development and integration experience
- Basic UX/UI design sensibilities for internal tooling
- Experience with real-time data processing and analytics
- Background in building developer tools or internal platforms
- Familiarity with AI/ML operations (MLOps) practices; experience using Airflow
- Experience building MCP servers and integrating with AI assistants
- Knowledge of structured data exchange protocols and API design for AI systems
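The "LLMs, vector databases, and retrieval systems" requirement above can be illustrated with a minimal sketch. In practice a vector database handles storage and indexing at scale, but the core ranking step is a nearest-neighbor search over embeddings. The document IDs and embedding values below are made-up toy data, not from any real system.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query_vec, index, top_k=2):
    """Rank stored (doc_id, vector) pairs by similarity to the query."""
    scored = [(doc_id, cosine_similarity(query_vec, vec)) for doc_id, vec in index]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_k]

# Toy index: document ids mapped to hypothetical embedding vectors.
index = [
    ("pricing_faq", [0.9, 0.1, 0.0]),
    ("api_reference", [0.1, 0.9, 0.2]),
    ("onboarding_guide", [0.2, 0.2, 0.9]),
]

# A query embedding close to "pricing_faq" should rank it first.
results = retrieve([0.85, 0.15, 0.05], index)
```

A production retrieval system replaces the linear scan with an approximate nearest-neighbor index, but the ranking contract stays the same.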
Type: Full Time
Location: Austin, TX or Cupertino, CA (Monday–Friday onsite)
*Relocation assistance can be offered based on individual needs and circumstances*
Job Description Summary
We are seeking a highly skilled and experienced Senior Data Engineer to join our dynamic team. In this role, you will be instrumental in designing, building, and maintaining robust and scalable data pipelines and solutions within the Microsoft Azure ecosystem. You will be responsible for developing and optimizing ETL/ELT processes, ensuring data quality, and enabling efficient data access for analytics and business intelligence. We are looking for a hands-on engineer who thrives in a fast-paced environment and is passionate about leveraging cutting-edge technologies.
Key Responsibilities:
Design, develop, and maintain cloud-based data pipelines and ETL/ELT workflows.
Build and optimize data architectures to support structured and unstructured data processing.
Collaborate with data analysts, data scientists, and business stakeholders to understand data needs.
Implement data quality, security, and governance best practices.
Monitor and troubleshoot data workflows to ensure high availability and performance.
Optimize database and data storage solutions for performance and cost efficiency.
Contribute to cloud adoption, migration, and modernization initiatives.
Mandatory Skills:
Strong expertise with Azure cloud platform.
Strong experience in Databricks.
Azure Data Factory proficiency required; building datasets, data flows, and pipelines in ADF (not just maintaining something already built)
Hands-on experience with ETL/ELT tools and frameworks.
Proficiency in SQL, Python, and data modeling.
Knowledge of CI/CD pipelines and infrastructure-as-code tools.
Understanding of data governance, security, and compliance.
Preferred Skills:
Exposure to API integration and microservices architecture.
Strong analytical and problem-solving skills.
Azure cloud certifications and/or past experience
AKS (Azure Kubernetes Service) experience, and ETL related to applications containerized & deployed on AKS (or EKS)
Title: Data QA Engineer
Location: Minneapolis, Dallas, or Atlanta (Onsite)
Job Type: Contract
Experience: 8–15 years
Key Responsibilities:
- Design, build, and maintain automated data quality frameworks to validate accuracy, completeness, consistency, and timeliness of data.
- Develop automation scripts using Python/SQL to test data pipelines, ETL/ELT processes, and analytics workflows.
- Implement data quality checks and monitoring within Azure-based data platforms.
- Work extensively with Azure services (ADF, ADLS, Synapse) and Databricks for large-scale data processing.
- Integrate data quality validations into CI/CD pipelines and support proactive issue detection.
- Perform root cause analysis for data issues and collaborate with data engineering, analytics, and business teams to resolve them.
- Define and enforce data quality standards, metrics, and SLAs.
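The framework responsibilities above boil down to programmatic checks run against each batch of records. Here is a minimal, hypothetical sketch in plain Python; a real implementation would run inside the pipeline (e.g. against Databricks tables or ADF outputs), and the field names below are illustrative only.

```python
def check_completeness(records, required_fields):
    """Flag records missing any required field (absent or null)."""
    failures = []
    for i, rec in enumerate(records):
        missing = [f for f in required_fields if rec.get(f) is None]
        if missing:
            failures.append((i, missing))
    return failures

def check_uniqueness(records, key_field):
    """Flag duplicate key values — a common reconciliation check."""
    seen, dupes = set(), []
    for rec in records:
        key = rec.get(key_field)
        if key in seen:
            dupes.append(key)
        seen.add(key)
    return dupes

# Toy batch with one incomplete record and one duplicate key.
batch = [
    {"id": 1, "amount": 10.0},
    {"id": 2, "amount": None},   # incomplete: null amount
    {"id": 2, "amount": 5.0},    # duplicate id
]

completeness = check_completeness(batch, ["id", "amount"])
duplicates = check_uniqueness(batch, "id")
```

Wiring checks like these into CI/CD means a failing batch blocks promotion instead of silently landing in the warehouse.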
Required Skills & Qualifications:
- Strong experience (8–15 years) in data engineering, data quality, or data automation roles.
- Hands-on expertise with Azure data ecosystem and Databricks.
- Strong programming skills in Python and SQL.
- Experience building automated data validation and reconciliation frameworks.
- Solid understanding of data warehousing, data lakes, and distributed data processing.
- Familiarity with DevOps/CI-CD practices for data platforms.
Preferred Skills:
- Experience with data observability or data quality tools.
- Exposure to cloud-scale analytics and performance optimization.
- Strong communication and stakeholder management skills.
We are seeking a skilled and motivated Data Privacy & Cybersecurity Attorney to join an industry-leading team advising private and public companies of all sizes across a variety of industries, including artificial intelligence and machine learning, cloud computing, software, and fintech. This high-impact role offers the opportunity to engage directly with clients, lead significant matters, and work across a broad range of privacy, data, and cybersecurity issues in both transactional and advisory contexts. The salary range for this job posting is $250,000 to $435,000.
Responsibilities:
- Advise clients on cyber, data, and privacy compliance across a full range of matters, including CCPA/CPRA, GDPR, HIPAA, CIPA, and other applicable state and federal privacy laws
- Counsel on privacy, data processing, and cybersecurity matters in connection with corporate and technology transactions, including mergers and acquisitions, public offerings, and other commercial and strategic transactions
- Draft and negotiate data processing agreements, privacy schedules, and related commercial agreements
- Conduct information security and privacy due diligence for corporate transactions
- Advise on data breach response, incident response planning, and related regulatory obligations
- Monitor and advise on emerging AI laws, regulatory standards, and enforcement trends
- Support or lead matters involving litigation and/or regulatory enforcement relating to data privacy and cybersecurity
Qualifications Required:
- J.D. from an ABA-accredited, nationally recognized law school with excellent academic credentials
- 4+ years of experience advising clients on data privacy, cybersecurity, and related compliance matters, in a private law firm or in-house setting
- Active member in good standing with the California State Bar
- Expertise in CCPA/CPRA, GDPR, HIPAA, CIPA, and other state and federal privacy frameworks
- Experience with transactional work, including mergers and acquisitions, capital markets, or technology transactions
- Excellent legal writing, drafting, negotiation, and analytical skills
- Strong verbal, written, and interpersonal communication skills with both legal and technical stakeholders
Preferred:
- CIPP certification
- Experience with regulatory policy, enforcement matters, and/or data privacy litigation
- Familiarity with emerging AI regulatory frameworks
- Experience in technology industries
Interested candidates should apply with their resumes. If you are a potential fit, we will schedule a confidential conversation.
Overall Responsibility:
This role supports the design, development, and optimization of Arora’s enterprise data and ERP systems. It reports directly to the Data Analytics Manager and works to improve financial reporting, support platform integrations, and build scalable data architecture that enables informed decision-making across the organization.
The position combines technical execution (SQL, automation, system configuration) with financial reporting support and cross-platform integration work to ensure accuracy, efficiency, and long-term system sustainability.
Essential Functions:
- Execute reporting and system requests in alignment with established data governance standards and reporting frameworks under the direction of the Data Analytics Manager.
- Contribute to the design of data models and system workflows that reduce manual processes and improve cross-functional data visibility.
- Support internal dashboards by creating backend data solutions and integrating with Vision.
- Provide system-level troubleshooting and ensure data consistency and reliability across platforms.
- Collaborate with teams to streamline processes through automation and data tools.
- Maintain documentation of data procedures, workflows, and system modifications.
- Support financial reporting and analysis by developing standardized, scalable reporting solutions aligned with company-wide data architecture.
- Assist in translating financial and operational requirements into structured reporting outputs and automation workflows.
- Assist in platform integrations (ERP, CRM, BI tools, and other enterprise systems) to support long-term architectural alignment and scalability.
Needed Skills:
- Ability to program in SQL at an expert level to support data processes. Potential need for knowledge of other programming languages (Java, Python, etc.).
- Ability to create and maintain productive relationships with employees, clients, and vendors.
Education/Experience Minimum:
- 3-5 years of experience
- Strong programming skills, with the ability to write complex queries.
- Preferred familiarity with all Microsoft platforms, including but not limited to Excel, Power BI, SharePoint, and SQL Server.
- Preferred experience with Deltek Vision v7.6 and VantagePoint
- Experience in building automated processes and data workflows.
- Strong problem-solving and attention to detail.
Senior Data Analyst, Web Analytics - Dania Beach, FL
Exciting Opportunity for a Senior Data Analyst, Web Analytics!
Are you passionate about data analytics, SQL, and Google Analytics? Do you want to be part of a fast-growing team in the travel industry, working on a platform that millions of travelers use daily? If so, we have a great opportunity for you!
Why Join Us?
Work on a leading e-commerce travel platform, similar to Expedia and Travelocity.
Hybrid role in Dania Beach, FL (3 days onsite, flexible scheduling).
Full-Time
Exciting projects – building a new analytics framework from scratch, integrating UX/UI, and improving travel product data insights.
Competitive benefits & travel perks – free flights, discounted vacations, and more!
What You’ll Do
- Analyze Web & E-commerce Data – Extract insights from Google Analytics & BigQuery to understand customer behavior.
- Data Processing & SQL Queries – Work with large datasets in BigQuery, Redshift, or Snowflake.
- Collaborate with Data Engineering – Ensure proper tracking, tagging, and data collection using Google Tag Manager.
- Report Findings to Leadership – Build dashboards in Looker Studio to drive business decisions.
What We’re Looking For
2+ years of SQL experience (BigQuery, Redshift, Snowflake, or equivalent).
1+ years working with Google Analytics and web analytics tools.
Strong understanding of e-commerce and customer behavior tracking.
Experience with Google Tag Manager (or similar) is a plus.
Knowledge of Looker Studio, Tableau, or Power BI is a plus.
Job Responsibilities: Coordinate and perform the Master Data function with the following accountabilities:
Meet all baseline and project goals for accuracy and timeliness.
Meet the service level agreement for new account set ups and master data change requests.
Monitor customer master data to ensure compliance to data entry standards.
Manage workflow; navigate shifting priorities and staffing issues to minimize risk.
Create and utilize reports for period reporting, KPI reporting, and to analyze pertinent account information.
Provide knowledge and guidance to individuals (internal and external) on all aspects of master data maintenance and store creation.
Collaborate and negotiate with customers, sales field, and finance functions to resolve issues.
Identify and implement action plans and process improvements with little guidance.
Perform root cause analysis on out-of-sync issues between master files, C2C, and SAP, and collaborate on solutions.
Ability to manage multiple priorities to achieve individual and departmental metrics.
Support customer and/or division initiatives.
Participate and collaborate in meetings to gather/share information.
Conduct department overviews to associates and high-level management as needed.
Act as a liaison between functions.
Effectively communicate issues and procedural changes.
Job Requirements: High school diploma or equivalent required.
Some college preferred.
Highly skilled in master data processes and systems.
Highly skilled in the use of Microsoft Excel and proficient use of other Microsoft Office applications.
Skilled at writing DB2 queries, obtaining XPTR reports, and analyzing data.
Must have an understanding of A/R workflow and systems.
Knowledge of SAP preferred.
Ability to manage multiple tasks and adapt to changing priorities.
Concise and persuasive communication skills.
Highly skilled in applying critical thinking to problem solving and analysis.
Leadership skills to achieve department objectives through motivation.
Adept at collaboration, negotiation and promoting team work.
Professional and mature with a high degree of confidence interacting with all levels of personnel.
Proven history of being a self-starter.
Must be organized and detail oriented.
Sr Analyst, Analytics, Data Engineering
This role assists the agency in the development and management of data-processing initiatives supporting a variety of clients.
The successful candidate will be able to design workflows that encapsulate various acquisition, hygiene, and transformative requirements while supporting automation and scalability.
Developing a high degree of familiarity and expertise with our client’s data is a marker of success.
The successful candidate will be able to effectively communicate with business stakeholders who will present business requirements: the technical design and implementation of solutions that resolve these business-focused requirements is a key measure of the candidate's value.
Locations: Sussex, WI
or Chicago, IL – 4 days in office
Key Responsibilities:
• Build data processing workflows to support analytic initiatives across MSSS and Snowflake platforms utilizing T-SQL, Python, PySpark, and Alteryx; understand the merits of the various toolsets for any given solution.
• Take business-focused direction from key stakeholders and translate, design, and implement those requirements on the appropriate technology platforms available to the team.
• Perform ETL/hygiene on disparate data from a variety of sources and integrate that data into a variety of platforms.
• Build complex SQL queries to resolve analytics-focused requests.
• Understand and implement performance tuning of processes.
• Create complex data models supporting specific analytic consumption in SQL Server and/or Snowflake.
• Use a combination of various tools to develop integrated systems in support of recurring analytic processes.
• Regularly identify opportunities for internal process enhancement.
• Extract insights from data and provide recommendations to improve our clients’ marketing efforts.
• Evaluate various programmatic tools/approaches to determine the best platform.
• Interface with clients to understand their needs and design, manage, and build accurate solutions.
• Implement vigilant process/results auditing to ensure QC.
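As an illustration of the ETL/hygiene and SQL-query responsibilities above, here is a hypothetical, minimal pipeline using SQLite as a stand-in for SQL Server or Snowflake: raw records are cleaned in Python, loaded into a table, and queried for an analytic rollup. The table and field names are invented for the example.

```python
import sqlite3

def clean(record):
    """Basic hygiene: trim whitespace and normalize casing on a raw row."""
    name, region, spend = record
    return (name.strip().title(), region.strip().upper(), float(spend))

# Raw source rows with inconsistent casing, whitespace, and string-typed numbers.
raw = [("  alice ", "west ", "120.5"), ("BOB", " east", "80"), ("carol", "west", "45.25")]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE campaigns (name TEXT, region TEXT, spend REAL)")
conn.executemany("INSERT INTO campaigns VALUES (?, ?, ?)", [clean(r) for r in raw])

# Analytic-style rollup: total spend by region.
rows = conn.execute(
    "SELECT region, ROUND(SUM(spend), 2) FROM campaigns GROUP BY region ORDER BY region"
).fetchall()
```

The same shape — hygiene step, load, aggregate query — carries over to T-SQL or Snowflake; only the connection and dialect change.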
Job Requirements:
Education:
BA/BS in Programming, Engineering, or other quantitative fields.
Experience:
2–4 years of programming experience using MS SQL.
Additional skills in Alteryx, Python, and PySpark/Snowpark are highly valued.
Experience with media or marketing campaigns, from either client-side or agency-side work, is a plus.
Knowledge, Skills & Abilities:
• Familiarity interfacing with cloud-based data sources such as Snowflake, S3, and Redshift.
• Collaborative problem-solving mindset.
• Intellectually curious and resourceful, with business savvy.
• Clear and concise communication skills.
Employees can expect an annualized salary in the range of $65,000–$75,000, based on variations in knowledge, skills, experience, and market conditions.
Position title:
Project Scientist
Salary range:
The UC academic salary scales set the minimum pay, determined by rank and step at appointment. A reasonable estimate for this position is $181,700 - $229,700.
Percent time:
100%
Anticipated start:
Winter/Spring 2026
Position duration:
Initial appointment is for one year with the possibility of renewal based on performance and funding availability.
Application Window
Open date: February 25, 2026
Next review date: Wednesday, Mar 11, 2026 at 11:59pm (Pacific Time)
Apply by this date to ensure full consideration by the committee.
Final date: Friday, Mar 27, 2026 at 11:59pm (Pacific Time)
Applications will continue to be accepted until this date, but those received after the review date will only be considered if the position has not yet been filled.
Position description
The Advanced BioImaging Center (ABC) in the Department of Molecular and Cell Biology at the University of California, Berkeley seeks applications for two Project Scientists at the Assistant, Associate, or full rank. Selected candidates will be appointed at the rank commensurate with prior experience. The positions will report to Professor Gokul Upadhyayula, with Professor Eric Betzig serving as an additional academic mentor. The Project Scientists will make significant and creative contributions in the area of machine learning and data analytics.
The Advanced BioImaging Center (ABC) at UC Berkeley aspires to be a world-leading multidisciplinary imaging center that drives important biological discoveries through critical new advances in all aspects of imaging technology and that drives the dissemination of that technology through a multi-pronged education strategy to scientists around the world. ABC was intentionally designed to maximize scientific productivity and impact by adopting groundbreaking imaging technologies such as the next-generation adaptive optical multifunctional microscope, incorporating the high-level technical expertise of instrumentation scientists, applied mathematicians, and computational scientists, and building worldwide collaborations aimed at tackling the challenges posed by terabyte- and petabyte-scale imaging data processing, visualization, and dissemination. Members of the ABC have access to leading-edge imaging and computing hardware, as well as exposure to collaborators from a range of diverse disciplines, including the fields of Artificial Intelligence, Data Science, Mathematics, and more.
The Assistant/Associate/Full Project Scientists will be an integral part of a visionary scientific team driving cutting-edge biological discoveries through immediate applications of critical advances in imaging technologies. These positions will work with a dedicated team to develop data analytics software for terabyte- to petabyte-scale imaging projects. The incumbents will develop and refine machine learning applications, manage projects, and provide regular progress reports to PIs and collaborators.
Successful candidates will be an integral part of the expert team working together with computational scientists and biologists in experimental design to tackle complex biological questions in a quantitative manner. The work will primarily be conducted at the facility in Barker Hall. Occasional travel may be required.
Key Responsibilities
*Make significant and creative contributions to development of new imaging and data processing tools for datasets generated on multicellular tissues, organoids, and transparent embryos.
*Design, build, and maintain new software packages for efficient data processing.
*Advise on applications of these tools for biological imaging; collaborate with Postdocs and graduate students on specific projects to test, learn and implement for general and specific use cases.
*General organization and management of software documentation.
*Bring cross-disciplinary expertise to solve problems at the intersection between life science, computer vision, and state-of-the-art AI methods.
*Work with petabyte-scale light sheet datasets that are typically 4D or 5D (x,y,z,t,chemistry). Identify and implement scalable solutions to scientific questions on large-scale data sets, especially using performant algorithms.
*Develop machine learning approaches and computer vision tools to help pre-process datasets and annotations to generate ground-truth benchmarks.
*Contribute to dissemination via open-source code repositories, demonstrations, publications, and presentations.
These positions will be eligible for full benefits.
Qualifications
Basic qualifications (required at time of application)
*PhD (or equivalent international degree)
Additional qualifications (required at time of start)
*Minimum of four years of postdoctoral research experience
*For consideration for the Associate Project Scientist rank: a minimum of 8 years of post-PhD research experience
*For consideration for the full Project Scientist rank: a minimum of 14 years of post-PhD research experience
Preferred qualifications
*PhD or equivalent international degree in Computational Data Science, Computer Science, Bioinformatics, or a related field
*Demonstrated record of productivity and publications and/or scholarly contributions
*Strong biological background and understanding of molecular biology
*Demonstrated understanding of optical microscopy, including light sheet microscopy, adaptive optics, and modern scientific cameras
*Demonstrated ability to work in a research team, manage active collaborations with other academic groups
*Demonstrated experience handling and processing large scale imaging datasets (>100TB to petabyte scale and beyond)
*Expertise in programming in C++, LabVIEW, MATLAB, and Python
*Expertise in databases, data infrastructure, data governance
*Expertise in high performance computing using SLURM or LSF
*Experience with PyTorch, JAX, or TensorFlow
*Experience with NVIDIA CUDA and related OpenMP programming
*Experience with cloud services (AWS, GCP, Azure, etc)
*Experience with state-of-the-art AI/ML architectures (vision transformers, diffusion models, etc.)
*Experience mentoring undergraduate/graduate students, and/or technicians.
*Experience with professional speaking engagements
*Ability to effectively communicate, participate in efficient and open collaboration, and engage with a diverse group of researchers
*The ideal candidate will be innovative and able to synergize various ideas and approaches, while exercising sound judgment to evaluate and take acceptable risks
Application Requirements
Document requirements
Curriculum Vitae - Your most recently updated C.V.
Cover Letter
Statement of Research - Provide a summary of your major research accomplishments in approximately 250 words. Additionally, please include a brief statement highlighting your experience that is directly relevant to the key responsibilities of this position
Project Portfolio - Summary portfolio of data and/or AI projects executed, as demonstrated by publications or GitHub contributions
Reference requirements
- 3 required (contact information only)
Apply link: JPF05256
About UC Berkeley
UC Berkeley is committed to diversity, equity, inclusion, and belonging in our public mission of research, teaching, and service, consistent with UC Regents Policy 4400 and University of California Academic Personnel policy (APM 210 1-d). These values are embedded in our Principles of Community, which reflect our passion for critical inquiry, debate, discovery and innovation, and our deep commitment to contributing to a better world. Every member of the UC Berkeley community has a role in sustaining a safe, caring and humane environment in which these values can thrive.
The University of California, Berkeley is an Equal Opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, age, or protected veteran status.
For more information, please refer to the University of California's Affirmative Action and Nondiscrimination in Employment Policy and the University of California's Anti-Discrimination Policy.
In searches when letters of reference are required all letters will be treated as confidential per University of California policy and California state law. Please refer potential referees, including when letters are provided via a third party (i.e., dossier service or career center), to the UC Berkeley statement of confidentiality prior to submitting their letter.
As a University employee, you will be required to comply with all applicable University policies and/or collective bargaining agreements, as may be amended from time to time. Federal, state, or local government directives may impose additional requirements.
Unless stated otherwise, unambiguously, in the position description, this position does not include sponsorship of a new consular H-1B visa petition that would require payment of the $100,000 supplemental fee.
As a condition of employment, the finalist will be required to disclose if they are subject to any final administrative or judicial decisions within the last seven years determining that they committed any misconduct.
- "Misconduct" means any violation of the policies or laws governing conduct at the applicant's previous place of employment, including, but not limited to, violations of policies or laws prohibiting sexual harassment, sexual assault, or other forms of harassment or discrimination, as defined by the employer.
- UC Sexual Violence and Sexual Harassment Policy
- UC Anti-Discrimination Policy
- APM - 035: Affirmative Action and Nondiscrimination in Employment
Job location
Berkeley, CA
Data Entry Coordinator - 7-month contract - fully onsite in Fontainebleau, Florida.
Our client is looking for a Data Entry Coordinator to join their team this week, working fully onsite in Fontainebleau, Florida, for an initial 7-month contract.
- Perform clinical data processing tasks under the direction of the Site Director or another designated manager.
- Complete all sponsor-required training to obtain the necessary access and approval for data entry.
- Accurately enter protocol-specific data into paper or electronic case report forms.
- Monitor and track data entry to ensure completeness for all study subjects, including completed visits and associated forms.