Job title: PhD Quantitative Researcher (Cross-Asset)
Firm: Elite Quantitative Buy-Side Fund – Multidisciplinary team of academic researchers, finance industry experts and STEM subject matter experts.
Salary: Up to $200,000 starting base + exceptional bonus package.
Location: Chicago (Onsite)
This firm is a scientific, data-driven systematic fund at the forefront of systematic trading.
As a result of their stellar and continued success in the industry, they’re currently aggressively scaling their quantitative strategies business in Chicago. This is an invaluable opportunity to develop and implement systematic strategies alongside genuine experts in the field of quantitative trading.
Additional Information:
- Market leader within computational finance and systematic trading. Arguably one of the best industry performers of the last 3 decades.
- Renowned for developing quantitative strategies in systematic trading across an array of investment strategies and products (Equities, Futures, FI, Macro, Vol).
- Multidisciplinary team of exceptional subject matter STEM experts from finance, academia, and technology.
- Highly collaborative trading environment with data and execution managed centrally.
- Furthermore, they're exploring and integrating fundamental/discretionary trading with data-driven quantitative trading.
Role:
- Explore and leverage an array of complex and noisy data (market, tick, options, alt) to identify statistical patterns and unique market opportunities.
- Contribute towards existing and novel strategies by refining methodologies and exchanging research ideas.
- Leverage sophisticated statistical methods to understand and manage risk, profitability and transaction costs in conceptualizing new trading ideas.
- Back-test and implement productionized trading models in a live trading environment (a toy back-test sketch follows this list).
- Contribute across the full research lifecycle, from data ingestion to alpha generation.
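For flavor only, a toy back-test sketch in Python: the synthetic prices, moving-average signal, and flat cost model are hypothetical illustrations, not the firm's methodology.

```python
import numpy as np

# Synthetic price path (hypothetical stand-in for real market data).
rng = np.random.default_rng(0)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 1000)))

# Toy signal: long when the 20-step mean exceeds the 100-step mean.
fast = np.convolve(prices, np.ones(20) / 20, mode="valid")
slow = np.convolve(prices, np.ones(100) / 100, mode="valid")
position = (fast[-len(slow):] > slow).astype(float)

# Per-step returns; position[t] earns the return from t to t+1,
# so the signal never sees the price move it is paid on (no look-ahead).
rets = np.diff(prices[-len(slow):]) / prices[-len(slow):-1]
pnl = position[:-1] * rets

# Subtract a flat transaction cost on every position change.
costs = np.abs(np.diff(position, prepend=0.0))[:-1] * 0.0005
net = pnl - costs
print(f"Gross Sharpe: {pnl.mean() / pnl.std() * np.sqrt(252):.2f}")
print(f"Net Sharpe:   {net.mean() / net.std() * np.sqrt(252):.2f}")
```

The cost term matters: a signal that looks profitable gross can flip negative net, which is why the role pairs alpha research with transaction-cost analysis.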
Required skills:
- Academic degree in mathematics, statistics, physics, computer science, or another highly quantitative discipline.
- Industry-related internship either within computational finance or technology. (Will consider candidates with internships in other related data-driven fields.)
- Knowledge of algorithms, data structures, probability and statistics.
- Experience dealing with a multitude of noisy-data challenges in a data-driven environment.
- Proficient in either C++ or Python.
Desirable skills:
- Experience with translating mathematical models and algorithms into code.
- Proficient in exploring and attaining value from noisy and complex data sets (alt, market, options, tick).
If this opportunity is of interest, please apply, or email me directly at .
OZ – Databricks Architect / Senior Data Engineer
Note: Only applications from U.S. citizens or lawful permanent residents (Green Card Holders) will be considered.
We believe work should be innately rewarding and a team-building venture. Working with our teammates and clients should be an enjoyable journey where we can learn, grow as professionals, and achieve amazing results. Our core values revolve around this philosophy. We are relentlessly committed to helping our clients achieve their business goals, leapfrog the competition, and become leaders in their industry. What drives us forward is the culture of creativity combined with a disciplined approach, passion for learning & innovation, and a ‘can-do’ attitude!
What We're Looking For:
We are seeking a highly experienced Databricks professional with deep expertise in data engineering, distributed computing, and cloud-based data platforms. The ideal candidate is both an architect and a hands-on engineer who can design scalable data solutions while actively contributing to development, optimization, and deployment.
This role requires strong technical leadership, a deep understanding of modern data architectures, and the ability to implement best practices in DataOps, performance optimization, and data governance.
Experience with modern AI/GenAI-enabled data platforms and real-time data processing environments is highly desirable.
Position Overview:
The Databricks Senior Data Engineer will play a critical role in designing, implementing, and optimizing enterprise-scale data platforms using the Databricks Lakehouse architecture. This role combines architecture leadership with hands-on engineering, focusing on building scalable, secure, and high-performance data pipelines and platforms. The ideal candidate will establish coding standards, define data architecture frameworks such as the Medallion Architecture, and guide the end-to-end development lifecycle of modern data solutions.
This individual will collaborate with cross-functional stakeholders, including data engineers, BI developers, analysts, and business leaders, to deliver robust data platforms that enable advanced analytics, reporting, and AI-driven decision-making.
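The Medallion Architecture referenced above stages data as bronze (raw), silver (cleansed), and gold (business-ready) layers. Below is a minimal PySpark sketch of that pattern; the paths, table names, and columns are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: land raw source data as-is (hypothetical input path).
bronze = spark.read.json("/landing/orders/")
bronze.write.format("delta").mode("append").save("/lakehouse/bronze/orders")

# Silver: cleanse and conform - dedupe, cast types, drop bad records.
silver = (
    spark.read.format("delta").load("/lakehouse/bronze/orders")
    .dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .filter(F.col("amount") > 0)
)
silver.write.format("delta").mode("overwrite").save("/lakehouse/silver/orders")

# Gold: business-level aggregates ready for BI and reporting.
gold = silver.groupBy("customer_id").agg(F.sum("amount").alias("lifetime_value"))
gold.write.format("delta").mode("overwrite").save("/lakehouse/gold/customer_value")
```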
Key Responsibilities:
- Architecture & Design: Architect and design scalable, reliable data platforms and complex ETL/ELT and streaming workflows for the Databricks Lakehouse Platform (Delta Lake, Spark).
- Hands-On Development: Write, test, and optimize code in Python, PySpark, and SQL for data ingestion, transformation, and processing.
- DataOps & Automation: Implement CI/CD, monitoring, and automation (e.g., with Azure DevOps, DBX) for data pipelines.
- Stakeholder Collaboration: Work with BI developers, analysts, and business users to define requirements and deliver data-driven solutions.
- Performance Optimization: Tune Delta tables, Spark jobs, and SQL queries for maximum efficiency and scalability (an illustrative tuning sketch follows this list).
- GenAI Application Development: Experience building GenAI applications is a strong plus.
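As one concrete illustration of the performance-optimization work above, a short sketch that compacts a Delta table and co-locates rows by a common filter column on Databricks; the table and column names are hypothetical.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Compact small files and co-locate rows by a frequently filtered column
# so Delta can prune files at query time (Databricks SQL via PySpark).
spark.sql("OPTIMIZE sales.orders ZORDER BY (order_date)")

# Refresh table statistics so the optimizer chooses efficient plans.
spark.sql("ANALYZE TABLE sales.orders COMPUTE STATISTICS")

# Queries filtering on the Z-ordered column should now scan far fewer files.
spark.sql("SELECT COUNT(*) FROM sales.orders WHERE order_date = '2024-01-01'").show()
```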
Requirements:
- 8+ years of experience in data engineering, with strong hands-on expertise in Databricks and Apache Spark.
- Proven experience designing and implementing scalable ETL/ELT pipelines in cloud environments.
- Strong programming skills in Python and SQL; experience with PySpark required.
- Hands-on experience with Databricks Lakehouse, Delta Lake, and distributed data processing.
- Experience working with cloud platforms such as Microsoft Azure, AWS, or GCP (Azure preferred).
- Experience with CI/CD pipelines, Git, and DevOps practices for data engineering.
- Strong understanding of data architecture, data modeling, and performance optimization.
- Experience working with cross-functional teams to deliver enterprise data solutions.
- Ability to tackle complex data challenges while ensuring data quality and reliable delivery.
Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related field.
- Experience designing enterprise-scale data platforms and modern data architectures.
- Experience with data integration tools such as Azure Data Factory or similar platforms.
- Familiarity with cloud data warehouses such as Databricks, Snowflake, or Microsoft Fabric.
- Experience supporting analytics, reporting, or AI/ML workloads is highly desirable.
- Databricks, Azure, or cloud certifications are preferred.
- Strong problem-solving, communication, and technical leadership skills.
Technical Proficiency in:
- Databricks, Apache Spark, PySpark, Delta Lake
- Python, SQL, Scala (preferred)
- Cloud platforms: Azure (preferred), AWS, or GCP
- Azure Data Factory, Kafka, and modern data integration tools
- Data warehousing: Databricks, Snowflake, or Microsoft Fabric
- DevOps tools: Git, Azure DevOps, CI/CD pipelines
- Data architecture, ETL/ELT design, and performance optimization
What You’re Looking For:
Join a fast-growing organization that thrives on innovation and collaboration. You’ll work alongside talented, motivated colleagues in a global environment, helping clients solve their most critical business challenges. At OZ, your contributions matter – you’ll have the chance to be a key player in our growth and success. If you’re driven, bold, and eager to push boundaries, we invite you to join a company where you can truly make a difference.
About Us:
OZ is a 28-year-old global technology consulting, services, and solutions leader specializing in creating business-focused solutions for our clients by leveraging disruptive digital technologies and innovation.
OZ is committed to creating a continuum between work and life by allowing people to work remotely. We offer competitive compensation and a comprehensive benefits package. You’ll enjoy our work style within an incredible culture. We’ll give you the tools you need to succeed so you can grow and develop with us and become part of a team that lives by its core values.
Field Technician – Entry-Level (Construction / Data Centers)
This is a great opportunity for anyone with construction, fabrication, or trade experience (or just a strong work ethic and willingness to learn) to launch a stable career with growth potential.
What You’ll Do as a Field Technician – Entry-Level (Construction / Data Centers)
As a Field Technician, you’ll:
- Install, assemble, and modify containment systems that improve cooling efficiency in data centers
- Perform specialized cleaning and decontamination of equipment and areas to keep facilities running at peak performance
- Assist with deliveries, organize materials, and maintain tools and equipment
- Follow direction from supervisors to complete tasks safely, accurately, and on time
- Identify and report potential risks, always prioritizing safety
- Represent the company professionally with clients and team members
What We’re Looking For in a Field Technician – Entry-Level (Construction / Data Centers)
- 0–2 years of construction, technician, or trade experience (data center experience is a plus)
- U.S. citizen (by birth or naturalization), 18+ years old
- Reliable transportation to job sites
- Able to pass a background check and drug screen
- Comfortable working at heights, around noise, and in temperatures from 0°F to over 100°F
- Physically able to lift 50 lbs and stay on your feet most of the day
- Positive attitude, strong work ethic, and good communication skills
Schedule & Pay for Field Technician – Entry-Level (Construction / Data Centers)
- Monday–Friday, 6:00 AM to 3:00 PM (overtime available)
- Full-time, on-site role
- Competitive hourly pay with overtime opportunities
- Full training, safety gear (PPE), and on-the-job mentorship provided
Why Join Us?
- Be part of the growing data center industry
- Gain hands-on technical skills with full training
- Work with a supportive team in a professional environment
- Build a career with opportunities for advancement
Apply today and start your career in data center construction with a growing technology company!
Job title: Data Integration & AI Engineer
About Wakefern
Wakefern Food Corp. is the largest retailer-owned cooperative in the United States and supports its co-operative members' retail operations, trading under the ShopRite®, Price Rite®, The Fresh Grocer®, Dearborn Markets®, and Gourmet Garage® banners.
Employing an innovative approach to wholesale business services, Wakefern focuses on helping the independent retailer compete in a big business world. Providing the tools entrepreneurs need to stay a step ahead of the competition, Wakefern’s co-operative members benefit from the company’s extensive portfolio of services, including innovative technology, private label development, and best-in-class procurement practices.
The ideal candidate will have a strong background in designing, developing, and implementing complex projects, with a focus on automating data processes and driving efficiency within the organization. This role requires close collaboration with application developers, data engineers, data analysts, and data scientists to ensure seamless data integration and automation across various platforms. The Data Integration & AI Engineer is responsible for identifying opportunities to automate repetitive data processes, reduce manual intervention, and improve overall data accessibility.
Essential Functions
- Participate in the development life cycle (requirements definition, project approval, design, development, and implementation) and maintenance of the systems.
- Implement and enforce data quality and governance standards to ensure accuracy and consistency of data.
- Provide input for project plans and timelines to align with business objectives.
- Monitor project progress, identify risks, and implement mitigation strategies.
- Work with cross-functional teams and ensure effective communication and collaboration.
- Provide regular updates to the management team.
- Follow the standards and procedures according to Architecture Review Board best practices, revising them as requirements change and technological advancements are incorporated into the technology structure.
- Communicate and promote the code of ethics and business conduct.
- Ensure completion of required company compliance training programs.
- Maintain training – through formal education or experience – in software/hardware technologies and development methodologies.
- Stay current through personal development and professional and industry organizations.
Responsibilities
- Design, build, and maintain automated data pipelines and ETL processes to ensure scalability, efficiency, and reliability across data operations (a minimal orchestration sketch follows this list).
- Develop and implement robust data integration solutions to streamline data flow between diverse systems and databases.
- Continuously optimize data workflows and automation processes to enhance performance, scalability, and maintainability.
- Design and develop end-to-end data solutions utilizing modern technologies, including scripting languages, databases, APIs, and cloud platforms.
- Ensure data solutions and data sources meet quality, security, and compliance standards.
- Monitor and troubleshoot automation workflows, proactively identifying and resolving issues to minimize downtime.
- Provide technical training, documentation, and ongoing support to end users of data automation systems.
- Prepare and maintain comprehensive technical documentation, including solution designs, specifications, and operational procedures.
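A minimal sketch of such an automated pipeline, using Apache Airflow (one common orchestrator; the qualifications below mention Cloud Composer/Airflow); the DAG name, schedule, and task logic are hypothetical.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Hypothetical: pull rows from a source API or database.
    print("extracting source data")

def validate():
    # Hypothetical quality gate: raising fails the run and halts downstream tasks.
    row_count = 42  # stand-in for a real count
    if row_count == 0:
        raise ValueError("no rows extracted - aborting load")

def load():
    # Hypothetical: write conformed data to the warehouse.
    print("loading to warehouse")

with DAG(
    dag_id="daily_orders_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="validate", python_callable=validate)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3
```

Putting the validation task between extract and load is what turns monitoring into the "minimize downtime" behavior described above: bad loads are stopped before they propagate.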
Qualifications
- A bachelor's degree or higher in computer science, information systems, or a related field.
- Hands-on experience with cloud data platforms (e.g., GCP, Azure, etc.)
- Strong knowledge and skills in data automation technologies, such as Python, SQL, ETL/ELT tools, Kafka, APIs, cloud data pipelines, etc.
- Experience in GCP BigQuery, Dataflow, Pub/Sub, and Cloud storage.
- Experience with workflow orchestration tools such as Cloud Composer or Airflow
- Proficiency in iPaaS (Integration Platform as a Service) platforms, such as Boomi, SAP BTP, etc.
- Experience developing and managing data integrations for AI agents, connecting them to internal and external APIs, databases, and knowledge sources to expand their capabilities.
- Experience building and maintaining scalable Retrieval-Augmented Generation (RAG) pipelines, including the curation and indexing of knowledge bases in vector databases (e.g., Pinecone, Vertex AI Vector Search); a toy indexing sketch follows this list.
- Experience leveraging cloud-based AI/ML platforms (e.g., Vertex AI, Azure ML) to build, train, and deploy machine learning models at scale.
- Experience establishing and enforcing data quality and governance standards for AI/ML datasets, ensuring the accuracy, completeness, and integrity of data used for model training and validation.
- Ability to collaborate closely with data scientists and machine learning engineers to understand data requirements and deliver optimized data solutions that support the entire machine learning lifecycle.
- Hands-on experience with IBM DataStage and Alteryx is a plus.
- Strong understanding of database design principles, including normalization, indexing, partitioning, and query optimization.
- Ability to design and maintain efficient, scalable, and well-structured database schemas to support both analytical and transactional workloads.
- Familiarity with BI visualization tools such as MicroStrategy, Power BI, Looker, or similar.
- Familiarity with data modeling tools.
- Familiarity with DevOps practices for data (CI/CD pipelines)
- Proficiency in project management software (e.g., JIRA, Clarizen, etc.)
- Strong knowledge and skills in data management, data quality, and data governance.
- Strong communication, collaboration, and problem-solving skills.
- Ability to work on multiple projects and prioritize tasks effectively.
- Ability to work independently and in a team environment.
- Ability to learn new technologies and tools quickly.
- Ability to handle stressful situations.
- Highly developed business acumen.
- Strong critical thinking and decision-making skills.
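To make the RAG qualification concrete, a toy indexing-and-retrieval sketch: NumPy and a pseudo-random "embedding" stand in for a real embedding model and a managed vector database such as Pinecone or Vertex AI Vector Search, and the documents are invented.

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    # Toy embedding seeded from the text; a real pipeline calls an embedding model.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

# Curate and index a tiny knowledge base (stand-in for a vector database).
docs = [
    "Private label items are sourced through corporate procurement.",
    "Store-level sales feed the warehouse nightly.",
    "Item catalogs are refreshed weekly from the ERP.",
]
index = np.stack([embed(d) for d in docs])

# Retrieve: embed the query, rank documents by cosine similarity.
query = embed("How often are item catalogs updated?")
scores = index @ query
best = docs[int(np.argmax(scores))]
print(best)  # top chunk passed to the LLM as grounding context
```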
Working Conditions & Physical Demands
This position requires in-person office presence at least 4x a week.
Compensation and Benefits
The salary range for this position is $75,868 - $150,644. Placement in the range depends on several factors, including experience, skills, education, geography, and budget considerations.
Wakefern is proud to offer a comprehensive benefits package designed to support the health, well-being, and professional development of our Associates. Benefits include medical, dental, and vision coverage, life and disability insurance, a 401(k) retirement plan with company match & annual company contribution, paid time off, holidays, and parental leave.
Associates also enjoy access to wellness and family support programs, fitness reimbursement, educational and training opportunities through our corporate university, and a collaborative, team-oriented work environment. Many of these benefits are fully or partially funded by the company, with some subject to eligibility requirements.
Must be local to TX.
Skills:
- Delivery management: own and deliver the 2026 roadmap; interact with the business, explain the value proposition, and understand their rules and standards.
- Manage timelines.
- Partner with segments.
- Measure Data Quality scores before and after delivery.
Technical:
- Articulate technical designs and solutions.
- Understand the capabilities of Collibra and Soda, and how to use those tools.
- Proactive communication skills.
- 12+ years in a Technical Project Manager-type role, with solutioning and problem-solving skills.
Role Summary
The Data Governance Lead will design, build, and scale an enterprise data governance program from the ground up, using Collibra as the core platform for a large real estate enterprise. This senior role combines strategic leadership, hands‑on Collibra configuration, stakeholder management, and deep domain knowledge of real estate data. The incumbent will own the governance vision, operating model, and tooling, and will partner with business, IT, data engineering, analytics, legal, and compliance teams.
Key Responsibilities
1. Data Governance Strategy and Operating Model
- Define and implement the enterprise data governance strategy, roadmap, and operating model aligned to business objectives.
- Define governance KPIs, maturity metrics, and success measures.
- Drive adoption through change management, communications, and training.
2. Collibra Implementation from Scratch
- Lead end‑to‑end Collibra implementation: platform setup, environment planning (Dev/Test/Prod), domain modeling, and taxonomy design.
- Customize asset models for real estate use cases.
- Configure and manage Business Glossary, Data Dictionary, Data Catalog, and Reference Data & Code Sets.
- Design and implement Collibra workflows for glossary lifecycle, owner/steward assignment, issue management, and escalation.
- Implement Collibra operating model with defined roles (Data Owner, Data Steward, Custodian, Consumer) and RACI mappings.
- Integrate Collibra with data warehouses/lakes (Snowflake, BigQuery, Azure), BI tools (Power BI, Tableau), and ETL/ELT tools (Informatica, dbt, ADF).
- Lead metadata ingestion across technical, operational, and business metadata (an illustrative API sketch follows this list).
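For a flavor of the hands-on platform work, a sketch that registers a business term through Collibra's REST API; treat the endpoint shape, payload fields, and authentication as assumptions to confirm against your Collibra version's API documentation.

```python
import requests

BASE = "https://yourinstance.collibra.com/rest/2.0"  # hypothetical instance
session = requests.Session()
session.auth = ("svc_governance", "********")  # real setups would use OAuth/SSO

# Create a business term in a target glossary domain (IDs are placeholders).
payload = {
    "name": "Net Leasable Area",
    "domainId": "00000000-0000-0000-0000-000000000000",
    "typeId": "00000000-0000-0000-0000-000000000001",  # e.g., Business Term type
}
resp = session.post(f"{BASE}/assets", json=payload, timeout=30)
resp.raise_for_status()
print(resp.json()["id"])
```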
3. Data Ownership, Stewardship, and Accountability
- Define and institutionalize data ownership and stewardship across business units.
- Partner with business leaders to assign Data Owners and Stewards.
- Drive accountability for data definitions, data quality, and metadata completeness.
- Establish Data Governance Councils and working groups.
4. Data Quality and Issue Management
- Collaborate with data quality teams to define Critical Data Elements (CDEs) and align rules and thresholds.
- Configure Collibra issue management workflows and ensure traceability from issues to root causes and remediation actions.
- Provide governance oversight for remediation and continuous improvement.
5. Compliance, Risk, and Security Governance
- Define governance controls for regulatory compliance, contractual data, and financial reporting.
- Partner with Legal, Risk, and Security to classify sensitive data and apply access and usage policies.
- Implement data classification and privacy metadata within Collibra.
6. Stakeholder and Program Leadership
- Serve as the single point of accountability for the data governance program.
- Present progress, metrics, and risks to senior leadership.
- Mentor governance analysts, stewards, and platform administrators.
- Coordinate with system integrators and vendors as required.
Required Skills and Qualifications
Mandatory
- 12–18+ years in data management, data governance, or analytics leadership.
- Deep hands‑on experience implementing Collibra from scratch at enterprise scale.
- Strong expertise in business glossary and metadata management, stewardship models, and workflow automation in Collibra.
- Proven track record driving enterprise adoption of governance platforms.
- Excellent stakeholder management and communication skills.
Preferred
- Experience in real estate, property management, construction, facilities, or capital projects.
- Familiarity with DAMA‑DMBOK, DCAM, or similar governance frameworks.
- Exposure to data quality tools such as Soda, Great Expectations, or Informatica DQ.
- Experience integrating Collibra with cloud data platforms.
- Prior experience leading governance programs in large, federated organizations.
- Collibra certification is a plus.
Behavioral and Leadership Attributes
- Strategic thinker with strong execution capability.
- Balances business pragmatism with governance rigor.
- Influences without formal authority and drives change.
- Excellent storytelling and change management skills.
- Hands‑on leader who can configure Collibra and mentor teams.
Success Measures First 12 Months
- Collibra platform live with core real estate domains onboarded.
- Business glossary adopted across key business units.
- Formal data ownership established for critical datasets.
- Measurable improvement in metadata completeness and data quality visibility.
- Governance operating model embedded into daily business processes.
Visa Status: US Citizen or Green Card Only
Location: Irving, TX (Local Candidates Only)
Employment Type: Full-time / Direct Hire
Work Environment: Hybrid (Monday through Thursday in office / Friday at home)
***MUST HAVE 10+ YEARS EXPERIENCE AS A DATA ENGINEER***
***US Citizen or Green Card Only***
The AWS Senior Data Engineer will own the planning, design, and implementation of data structures for this leading Hospitality Corporation in their AWS environment. This role will be responsible for incorporating all internal and external data sources into a robust, scalable, and comprehensive data model within AWS to support business intelligence and analytics needs throughout the company.
Responsibilities:
- Collaborate with cross-functional teams to understand and define business intelligence needs and translate them into data modeling solutions
- Develop, build, and maintain scalable data pipelines, data schema designs, and dimensional data models in Databricks and AWS for all system data sources, API integrations, and bespoke data ingestion files from external sources, including batch and real-time pipelines.
- Responsible for data cleansing, standardization, and quality control
- Create data models that will support comprehensive data insights, business intelligence tools, and other data science initiatives
- Create data models and ETL procedures with traceability, data lineage and source control
- Design and implement data integration and data quality framework
- Implement data monitoring best practices with trigger-based alerts for data processing KPIs and anomalies (an illustrative alerting sketch follows this list).
- Investigate and remediate data problems, performing and documenting thorough and complete root cause analyses. Make recommendations for mitigation and prevention of future issues.
- Work with Business and IT to assess efficacy of all legacy data sources, making recommendations for migration, anonymization, archival and/or destruction.
- Continually seek to optimize performance through database indexing, query optimization, stored procedures, etc.
- Ensure compliance with data governance and data security requirements, including data life cycle management, purge and traceability.
- Create and manage documentation and change control mechanisms for all technical design, implementations and systems maintenance.
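A minimal sketch of the trigger-based alerting idea above: compare today's load volume to a trailing baseline and publish to an SNS topic on deviation. The topic ARN, threshold, and counts are hypothetical.

```python
import boto3

def check_row_count(today: int, history: list[int], tolerance: float = 0.5) -> None:
    """Alert if today's row count deviates from the trailing mean by > tolerance."""
    baseline = sum(history) / len(history)
    deviation = abs(today - baseline) / baseline
    if deviation > tolerance:
        sns = boto3.client("sns")
        sns.publish(
            TopicArn="arn:aws:sns:us-east-1:123456789012:data-alerts",  # placeholder
            Subject="Data pipeline KPI anomaly",
            Message=f"Row count {today} deviates {deviation:.0%} from baseline {baseline:.0f}",
        )

# Example: the last 7 loads averaged ~1M rows; today only 300k arrived.
check_row_count(today=300_000, history=[1_000_000] * 7)
```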
Target Skills and Experience
- Bachelor's or graduate degree in computer science, information systems or related field preferred, or similar combination of education and experience
- At least 10 years’ experience designing and managing data pipelines, schema modeling, and data processing systems.
- Experience with Databricks a plus (or similar tools like Microsoft Fabric, Snowflake, etc.) to drive scalable data solutions.
- Experience with SAP a plus
- Proficient in Python, with a track record of solving real-world data challenges.
- Advanced SQL skills, including experience with database design, query optimization, and stored procedures.
- Experience with Terraform or other infrastructure-as-code tools is a plus.
Who We Are
At Feetures, movement is our business. And we believe that a meaningful business begins with authentic values—and our values were forged by the bonds of family.
What started as a bold idea around a kitchen table has grown into a fast-moving, purpose-driven brand redefining performance. As a family-owned company in North Carolina, we’re fueled by the belief that better is always possible—and that energy drives both our products and our culture.
Movement is at the heart of everything we do. From our socks to our team and to our communities, we are always pushing forward. If you are ready to grow, challenge the status quo, and help shape the next chapter of a brand that is always in stride, come move with us. Feetures is Meant to Move. Are you?
Role Summary:
The Data Analytics Manager is responsible for owning and optimizing the organization’s end-to-end data ecosystem, ensuring that data infrastructure, governance, and analytics processes effectively support business operations. This role leads the design and management of the data stack—from source system integrations and NetSuite Analytics Warehouse to reporting and business intelligence tools—while establishing strong data governance standards, quality monitoring, and documentation practices. The manager also oversees and mentors analytics team members, prioritizes analytics requests, and coordinates cross-functional data workflows. Acting as the central authority for data reliability and insights, the role ensures consistent metric definitions, scalable data models, and accurate reporting while translating complex data into clear, actionable insights for business stakeholders.
Responsibilities:
Data Architecture & Tooling
- Own the end-to-end data stack — from source system integrations and the NetSuite Analytics Warehouse to downstream reporting layers
- Evaluate, select, and implement tools that improve data accessibility, reliability, and performance
- Ensure alignment between data infrastructure and evolving business needs across distribution operations
- Design and maintain scalable data models, SuiteQL queries, and saved searches within NetSuite (a SuiteQL sketch follows this list)
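A small sketch of the SuiteQL work mentioned above, posting a query to NetSuite's SuiteQL REST endpoint; the account hostname is a placeholder, field names should be checked against your NetSuite records catalog, and OAuth token-based authentication is omitted for brevity.

```python
import requests

# Placeholder account; real calls require OAuth 1.0 token-based authentication.
URL = "https://1234567.suitetalk.api.netsuite.com/services/rest/query/v1/suiteql"

query = """
SELECT tranid, trandate, entity, foreigntotal
FROM transaction
WHERE type = 'SalesOrd' AND trandate >= TO_DATE('2024-01-01', 'YYYY-MM-DD')
ORDER BY trandate DESC
"""

resp = requests.post(
    URL,
    json={"q": query},
    headers={"Prefer": "transient"},  # required by the SuiteQL endpoint
    timeout=30,
)
resp.raise_for_status()
for row in resp.json().get("items", []):
    print(row["tranid"], row["trandate"])
```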
Data Governance & Quality
- Define and enforce data standards, metric definitions, and naming conventions across all business domains
- Establish data ownership, lineage documentation, and access governance policies
- Implement monitoring and alerting for data quality issues across source systems and the warehouse
- Build and maintain a data dictionary that serves as the single source of truth for the organization
Orchestration of Analysts & Systems
- Manage and mentor the Data Analyst and Business Analyst — prioritizing requests, unblocking work, and validating outputs
- Triage and prioritize the analytics request queue in alignment with business stakeholders and IT leadership
- Coordinate cross-functional data workflows and ensure handoffs between systems and analysts are clean and documented
- Serve as the escalation point for data discrepancies, report failures, and analytical questions from the business
Qualifications:
Required
- 3-5 years of experience in data analytics, business intelligence, or data engineering
- 2+ years in a lead or management role overseeing analysts or data team members
- Strong proficiency in SQL; experience with SuiteQL or similar ERP query languages
- Hands-on experience with NetSuite, including Analytics Warehouse, saved searches, and reporting
- Proven track record establishing data governance standards and documentation practices
- Experience integrating and managing multiple data sources across SaaS and ERP platforms
- Demonstrated ability to translate complex data into clear, actionable insights for non-technical stakeholders
Preferred
- Experience in distribution, wholesale, or supply chain environments
- Familiarity with SaaS BI platforms (e.g., Tableau, Power BI, Looker, or embedded analytics)
- Exposure to scripting or automation (JavaScript, Python, or similar) for data workflows
- Background working within IT-led or hybrid IT/Analytics teams
Benefits:
- Health insurance
- Dental insurance
- Vision insurance
- Life & Disability insurance
- 401(K) with company match
Company Paid holidays and PTO:
- Feetures offers 20 PTO days, available on day one of employment to all employees regardless of role. After five years at Feetures, your PTO increases to 25 days. Days can be used for vacations, appointments, and sick days.
- We offer 10 company paid holidays and 1 floating holiday per year.
Perks:
- Parking provided (Charlotte office and onsite at Hickory office)
- Employee Engagement team
- Monthly stipend to pursue an active lifestyle
Feetures is an Equal Opportunity Employer that welcomes and encourages all applicants to apply regardless of age, race, sex, religion, color, national origin, disability, veteran status, sexual orientation, gender identity and/or expression, marital or parental status, ancestry, citizenship status, pregnancy or other reasons protected by law.
Job Title: Senior Data Engineer
Location: Chicago, IL (Hybrid)
Department: Data & Analytics
Reports To: Head of Data Engineering / Data Platform Lead
Role Overview
We are seeking a highly skilled Senior Data Engineer with strong Python development expertise and deep experience in Snowflake to design, build, and optimize scalable enterprise data solutions. This role is based in Chicago, IL and will support regulatory and risk data initiatives in a highly governed environment.
The ideal candidate has hands-on experience building modern cloud data platforms and is familiar with risk management frameworks, BCBS 239 principles, and Governance, Risk & Compliance (GRC) requirements within financial services.
Key Responsibilities
Data Engineering & Architecture
Design, develop, and maintain scalable data pipelines using Python.
Build and optimize data models, transformations, and data marts within Snowflake.
Develop robust ELT/ETL frameworks for structured and semi-structured data.
Optimize Snowflake performance, cost efficiency, clustering, and workload management.
Implement automation, monitoring, and CI/CD for data pipelines.
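A minimal sketch of one pattern named in the qualifications, incremental loading with Snowflake Streams & Tasks, issued through the Python connector; the account, credentials, and object names are hypothetical.

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="myorg-myaccount",  # placeholders; supply your own credentials
    user="SVC_ETL",
    password="********",
    warehouse="ETL_WH",
    database="RISK",
    schema="STAGING",
)
cur = conn.cursor()

# Stream captures row-level changes on the landing table.
cur.execute("CREATE OR REPLACE STREAM trades_stream ON TABLE raw_trades")

# Task wakes on a schedule and loads only the captured changes downstream.
cur.execute("""
CREATE OR REPLACE TASK load_trades
  WAREHOUSE = ETL_WH
  SCHEDULE = '5 MINUTE'
WHEN SYSTEM$STREAM_HAS_DATA('TRADES_STREAM')
AS
  INSERT INTO curated_trades
  SELECT trade_id, book, notional, trade_ts
  FROM trades_stream
  WHERE METADATA$ACTION = 'INSERT'
""")
cur.execute("ALTER TASK load_trades RESUME")  # tasks are created suspended
conn.close()
```

Because the stream only advances when its contents are consumed, every loaded change is accounted for exactly once, which supports the traceability and reconciliation requirements below.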
Risk & Regulatory Data Management
Support regulatory reporting aligned with BCBS 239 (risk data aggregation and reporting).
Ensure data traceability, lineage, reconciliation, and auditability.
Implement controls aligned with Governance, Risk & Compliance (GRC) frameworks.
Partner with Risk, Finance, Compliance, and Audit teams to deliver accurate and governed data assets.
Data Governance & Quality
Develop and enforce data quality validation frameworks.
Maintain metadata, lineage documentation, and data catalog integration.
Implement data access controls and security best practices.
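A sketch of one lightweight way to express the data quality validation described above: declarative rules evaluated over a pandas frame, producing pass/fail counts as audit evidence. The rules and columns are illustrative.

```python
import pandas as pd

# Declarative rules: column -> vectorized predicate every value must satisfy.
RULES = {
    "trade_id": lambda s: s.notna() & ~s.duplicated(),
    "notional": lambda s: s > 0,
    "currency": lambda s: s.isin(["USD", "EUR", "GBP"]),
}

def validate(df: pd.DataFrame) -> pd.DataFrame:
    """Return one row per rule with pass/fail counts for audit evidence."""
    rows = []
    for col, rule in RULES.items():
        ok = rule(df[col])
        rows.append({"column": col, "passed": int(ok.sum()), "failed": int((~ok).sum())})
    return pd.DataFrame(rows)

df = pd.DataFrame({
    "trade_id": [1, 2, 2, 4],           # duplicate id should fail
    "notional": [5e6, -1.0, 2e6, 3e6],  # negative notional should fail
    "currency": ["USD", "EUR", "JPY", "GBP"],
})
report = validate(df)
print(report)
if report["failed"].sum():
    print("quality gate FAILED - block promotion and open an issue")
```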
Technical Leadership
Provide mentorship and code reviews for data engineering team members.
Promote engineering best practices and documentation standards.
Collaborate cross-functionally with architects, analysts, and business stakeholders.
Required Qualifications
7+ years of experience in Data Engineering or Data Platform development.
Strong Python programming expertise (Pandas, PySpark, Airflow, etc.).
Hands-on experience with Snowflake (data modeling, Snowpipe, Streams & Tasks, performance tuning).
Advanced SQL skills and deep understanding of data warehousing concepts.
Experience supporting BCBS 239 compliance or similar regulatory reporting frameworks.
Experience working within Governance, Risk & Compliance (GRC) structures.
Experience in cloud environments (AWS, Azure, or GCP).
Strong understanding of data lineage, controls, reconciliation, and audit requirements.
Preferred Qualifications
Experience in banking, capital markets, or financial services.
Knowledge of credit risk, market risk, liquidity risk, or regulatory reporting domains.
Experience with data governance tools (Collibra, Alation, etc.).
Familiarity with DevOps practices, Docker, Kubernetes.
Experience building enterprise data platforms in highly regulated environments.
Key Competencies
Strong problem-solving and analytical thinking.
Ability to operate in a regulated, audit-driven environment.
Excellent communication and stakeholder management skills.
Detail-oriented with a focus on data accuracy and integrity.
Leadership mindset with hands-on technical capability.
Overview
We are seeking a seasoned Analytics leader to build and lead our enterprise Analytics and Data Governance function in a modern group purchasing / procurement environment. This leader will turn our rich ecosystem of member, supplier, contract, and transaction data into a strategic asset that drives savings, compliance, growth, and differentiated insight for our members and suppliers.
This leader will also own the data governance operating model, enterprise metrics, and analytics roadmap that power member-facing insights, internal performance management, and AI use cases across the technology platform (Website, B2B eCommerce, supplier portal, sourcing tools, and partner integrations).
Key responsibilities
Data governance and policy
- Define and run the enterprise data governance framework covering member, supplier, contract, item, and transaction data domains.
- Establish data ownership and stewardship across functions (Category Management, Supplier Management, Finance, Sales, Marketing, Digital) driving clear accountabilities for data quality and definitions.
- Implement policies for responsible use of data in supplier programs, member reporting, and AI/ML models, ensuring compliance with contractual, regulatory, and privacy requirements.
- Drive data quality management (profiling, remediation, SLAs) for critical assets such as contract price files, item catalogs, rebate/accrual data, and member hierarchies.
- Oversee metadata, business glossary, and data lineage so teams can confidently understand “one source of truth” for core GPO metrics (e.g., committed vs. actual spend, penetration, compliance, savings delivered).
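To make "one source of truth" concrete, a worked example of one core GPO metric, contract penetration (on-contract spend divided by total spend), computed per member with invented numbers:

```python
import pandas as pd

# Illustrative member purchase lines; on_contract flags spend through a GPO contract.
spend = pd.DataFrame({
    "member": ["A", "A", "B", "B", "B"],
    "amount": [120_000, 30_000, 80_000, 20_000, 100_000],
    "on_contract": [True, False, True, True, False],
})

# Penetration = on-contract (committed) spend divided by total spend, per member.
penetration = (
    spend.assign(contract_spend=spend["amount"].where(spend["on_contract"], 0))
    .groupby("member")
    .agg(total=("amount", "sum"), contract=("contract_spend", "sum"))
    .assign(penetration=lambda t: t["contract"] / t["total"])
)
print(penetration)
# member A: 120k / 150k = 80%; member B: 100k / 200k = 50%
```

Governance is what keeps a definition like this stable: without an agreed formula and owner, two teams will report two different penetration numbers from the same data.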
Analytics strategy and delivery
- Define the enterprise analytics vision and roadmap aligned to procurement value levers: spend visibility, category performance, contract compliance, leakage detection, rebate optimization, and supplier performance.
- Lead the design and delivery of standardized KPI suites and dashboards for executives, category teams, supplier partners, and member account teams (e.g., savings scorecards, compliance heatmaps, portfolio optimization).
- Partner with Product and Engineering to ensure the data platform (warehouse, semantic layer, BI tools) can support self-service analytics, embedded insights in member/supplier portals, and AI-driven use cases.
- Champion enterprise metrics and advanced analytics capabilities such as forecasting, benchmarking, opportunity sizing, and integrity analytics, ensuring models are traceable, governed, and auditable.
- Translate business needs into clear data products (curated data sets, subject-area marts, APIs) that serve both internal teams and external-facing solutions.
Stakeholder leadership and collaboration
- Serve as the enterprise “single point of accountability” for data and analytics, aligning priorities across Technology, Category Management, Supplier Relations, Sales, Finance, and Operations.
- Partner with Supplier and Member-facing teams to co-create analytics offerings that differentiate the GPO (e.g., supplier growth playbooks, member CFO dashboards, public-sector transparency packs).
- Educate executives and business leaders on data literacy, standard metrics, and how to use insights in planning, negotiations, and supplier programs.
- Collaborate closely with Security, Legal, and Compliance to ensure that member and supplier data is used ethically and in line with contracts and regulations.
Team building and operations
- Build and lead a high-performing team of data analysts, analytics engineers, data governance managers, and data stewards.
- Define operating rhythms (data council, data domain forums, metric review cadences) that keep governance and analytics tightly connected to business outcomes.
- Establish and track KPIs for the data function itself (data quality scores, adoption of governed datasets, BI usage, time-to-insight).
- Select and manage key tools and vendors in the analytics and governance ecosystem (warehouse, BI, catalog/governance, quality monitoring).
Qualifications
- Bachelor’s or Master’s degree in Data/Computer Science, Information Systems, Analytics, Statistics, Business, or related field.
- 10+ years of experience in analytics, data governance, or enterprise data management, including 3–5+ years leading teams.
- Proven experience in a procurement, supply chain, GPO, distribution, or B2B marketplace environment strongly preferred.
- Demonstrated success implementing data governance frameworks and delivering analytics that directly influenced commercial or procurement outcomes (e.g., savings, compliance, supplier growth).
- Hands-on familiarity with modern data platforms (e.g., Snowflake/BigQuery/Redshift, dbt, Power BI/Tableau/Looker, and one or more data catalog/governance tools).
- Strong grasp of regulatory / contractual considerations relevant to member and supplier data (data sharing agreements, use of benchmarking, privacy/security standards).
- Excellent leadership, storytelling, and stakeholder management skills; able to influence at C-suite and board levels.
Attributes for success
- Business-first mindset: instinctively ties data work to member value, supplier value, and financial impact.
- Pragmatic operator: balances governance rigor with speed, enabling innovation rather than blocking it.
- Skilled translator: can convert complex data and AI topics into clear narratives for executives, sales, and category leaders.
- Culture builder: passionate about creating a data-driven culture that values standard definitions, trusted data, and measurable outcomes.
Compensation:
$150,000 to $200,000 annual salary.
Exact compensation may vary based on several factors, including skills, experience, and education.
Benefit packages for this role may include healthcare insurance offerings and paid leave as provided by applicable law.
This role is 3 days onsite. NO REMOTE or relocation. Must be a US Citizen or Green Card Holder. Please do not apply if you are an EAD or H-1B visa holder.
Job Description:
We are looking for a Data Architect to take ownership of designing and evolving a modern enterprise data ecosystem that supports analytics, reporting, and business decision-making. This role will focus on building and maintaining a secure, scalable data warehouse leveraging Microsoft cloud technologies such as Azure, Synapse, and Microsoft Fabric, while ensuring strong data quality, accessibility, and consistency across the organization.
This position will play a key role in establishing data standards, driving best practices in data modeling and governance, and partnering with both technical and business stakeholders. The ideal candidate is comfortable working independently and translating complex data concepts into actionable insights for non-technical audiences.
Key Responsibilities
Data Architecture & Modeling
- Design, implement, and maintain enterprise data warehouse solutions within Azure and Microsoft Fabric
- Develop and manage semantic data models to support reporting through Power BI and Azure Analysis Services
- Establish and document standards for data modeling, naming conventions, and dataset design
Data Governance & Quality
- Define and enforce data governance frameworks, including data definitions, access controls, and data policies
- Implement automated processes to monitor and improve data quality and integrity
- Partner with business users to understand requirements and resolve data inconsistencies
Technical Leadership
- Act as a subject matter expert for data architecture and enterprise data strategy
- Translate business requirements into scalable and efficient data solutions
- Provide guidance to stakeholders on data architecture decisions and trade-offs
Enablement & Collaboration
- Create documentation, data dictionaries, and standards to support self-service analytics
- Work closely with BI developers and business teams to ensure data solutions align with reporting needs
Required Qualifications
- 6+ years of experience in data architecture, data engineering, or BI-related roles
- Strong expertise with Microsoft Azure Data Services, including SQL, Synapse, Data Factory, and Fabric
- Advanced SQL skills with experience in query optimization
- Experience with Python or R for data processing and automation
- Deep understanding of semantic modeling in Power BI or Azure Analysis Services
- Hands-on experience with Power BI (Desktop, Service, DAX, Power Query)
- Experience integrating ERP (Epicor preferred) and CRM data for reporting and analytics
- Strong understanding of end-to-end ERP business processes (Quote-to-Cash, AR, AP, GL)
- Knowledge of enterprise data architecture principles and data lifecycle management
- Proven experience establishing and maintaining data governance and quality standards
- Strong communication skills with the ability to work with non-technical stakeholders
Preferred Qualifications
- Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related field
- Experience with modern data architecture patterns such as lakehouse, star schema, or medallion architecture (a miniature star-schema sketch follows this list)
- Background in operational or supply chain environments
- Exposure to planning tools such as Anaplan or similar platforms
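As a miniature illustration of the star-schema pattern named in the preferred qualifications: one fact table joined to conformed dimensions, built here in SQLite purely for demonstration; all table and column names are hypothetical.

```python
import sqlite3

# Star schema in miniature: one fact table keyed to conformed dimensions.
ddl = """
CREATE TABLE dim_customer (customer_key INTEGER PRIMARY KEY, customer_name TEXT);
CREATE TABLE dim_date     (date_key INTEGER PRIMARY KEY, calendar_date TEXT);
CREATE TABLE fact_sales (
    sales_key    INTEGER PRIMARY KEY,
    customer_key INTEGER REFERENCES dim_customer(customer_key),
    date_key     INTEGER REFERENCES dim_date(date_key),
    quantity     INTEGER,
    net_amount   REAL
);
"""
conn = sqlite3.connect(":memory:")
conn.executescript(ddl)

# BI tools then slice the single fact table by any dimension attribute.
conn.executescript("""
INSERT INTO dim_customer VALUES (1, 'Acme');
INSERT INTO dim_date VALUES (20240101, '2024-01-01');
INSERT INTO fact_sales VALUES (1, 1, 20240101, 10, 2500.0);
""")
for row in conn.execute("""
    SELECT c.customer_name, d.calendar_date, SUM(f.net_amount)
    FROM fact_sales f
    JOIN dim_customer c USING (customer_key)
    JOIN dim_date d USING (date_key)
    GROUP BY 1, 2
"""):
    print(row)
```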