Query Optimization With Example Jobs in USA
Key Responsibilities
• Design and implement RESTful APIs and microservices with Go and Gin.
• Write clean, well-tested, maintainable backend code and optimize for performance, scalability and reliability.
• Implement authentication, authorization and other security best practices.
• Integrate third-party APIs and services as needed.
• Build responsive, high-performance user interfaces in React and TypeScript.
• Develop reusable components, maintain component libraries and manage state with Redux, Zustand or React Query.
• Ensure cross-browser compatibility and responsive design in collaboration with UX/UI designers.
• Design and maintain PostgreSQL schemas, write efficient SQL queries, create indexes and migrations, and optimize database performance.
• Design scalable system architectures, participate in code reviews and technical discussions, and implement CI/CD pipelines and automated testing.
• Monitor and debug production issues, mentor junior developers, and contribute to engineering best practices.
Essential Skills
• Go (Golang) with the Gin framework
• React and TypeScript
• PostgreSQL 17 and SQL query optimization (see the sketch after this list)
• RESTful API and microservices architecture
• State management tools such as Redux, Zustand or React Query
• CI/CD pipeline creation and automated testing
• Performance tuning and troubleshooting across the stack
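To make the query-optimization item flagged above concrete, here is a minimal sketch of the index-then-EXPLAIN tuning loop; it assumes a local PostgreSQL instance and the psycopg2 package, and the orders table and DSN are hypothetical rather than anything from the posting.

```python
import psycopg2  # assumes a local PostgreSQL server and psycopg2-binary

conn = psycopg2.connect("dbname=demo user=postgres")  # hypothetical DSN
conn.autocommit = True
cur = conn.cursor()

cur.execute("DROP TABLE IF EXISTS orders")
cur.execute("CREATE TABLE orders (id serial PRIMARY KEY, customer_id int, total numeric)")
cur.execute("INSERT INTO orders (customer_id, total) SELECT i % 1000, i FROM generate_series(1, 100000) AS i")

def show_plan(sql: str) -> None:
    """Print the plan PostgreSQL actually used for `sql`."""
    cur.execute("EXPLAIN ANALYZE " + sql)
    for (line,) in cur.fetchall():
        print(line)

query = "SELECT sum(total) FROM orders WHERE customer_id = 42"
show_plan(query)                                   # expect: Seq Scan on orders
cur.execute("CREATE INDEX orders_customer_idx ON orders (customer_id)")
cur.execute("ANALYZE orders")                      # refresh planner statistics
show_plan(query)                                   # expect: Bitmap/Index Scan
```

The same loop applies at production scale: read the plan, add or adjust an index, refresh statistics, and re-check.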
Company Description
At Titl, we simplify the real estate process by eliminating paperwork, legal obstacles, and delays associated with buying, owning, or selling a home. Our advanced technology ensures transparency and peace of mind throughout every transaction. We provide a modern and user-friendly way to handle property—designed for today and prepared for future needs.
Role Description
We're seeking an experienced Full-Stack Engineer to join our team working on a sophisticated property data research and report generation platform. This role involves building and maintaining enterprise-grade systems that automate property data extraction from government sources, generate comprehensive property reports, and manage complex business workflows including payments, authentication, and blockchain integration.
What You'll Work On
- Backend Services: Develop and maintain NestJS microservices handling property data scraping, PDF generation, report aggregation, and enterprise account management
- Frontend Applications: Build responsive Next.js applications with complex state management and real-time updates
- Data Pipeline: Work with automated scraping systems using Puppeteer and AI-powered document processing (Google Document AI, OpenAI)
- Integration Development: Implement OAuth flows, Stripe payment processing, webhook handling, and third-party API integrations
- Queue Management: Design and maintain Bull queue systems for background job processing and async workflows
- Blockchain Integration: Work with Polymesh blockchain for property ownership verification and asset tokenization
- Database Design: Create efficient Prisma schemas and optimize PostgreSQL queries for complex property data relationships
Required Technical Skills
Core Stack (Must Have)
- Backend: Advanced proficiency in NestJS with deep understanding of dependency injection, decorators, guards, and service patterns
- Frontend: Expert-level Next.js 14 (App Router) and React with TypeScript
- Database: Strong Prisma ORM experience and PostgreSQL optimization skills
- TypeScript: Production-level TypeScript across full stack
- API Design: RESTful API design, DTOs, validation, and Swagger documentation
Infrastructure & DevOps
- Docker: Container orchestration and development environments
- Cloud Platforms: Google Cloud Platform (Cloud Storage, Cloud Run)
- Queue Systems: Bull or similar job queue systems (Redis-backed)
- Monorepo: Experience with pnpm workspaces or similar monorepo tooling
Authentication & Payments
- OAuth 2.0: Multi-provider authentication (Google, Facebook, LinkedIn)
- JWT: Token-based authentication and authorization patterns
- Stripe: Payment processing, webhooks, subscription management, and usage-based billing
Specialized Skills
- Web Scraping: Puppeteer or similar browser automation tools
- PDF Processing: PDF generation, manipulation, and data extraction
- AI/ML Integration: Experience with AI APIs (OpenAI, Google AI, etc.)
- Background Jobs: Async processing, retry logic, and error handling (see the sketch after this list)
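As a rough, language-agnostic illustration of the retry-and-error-handling item above (sketched in Python for consistency with the other examples on this page; the posting's actual queue stack is Bull on NestJS), the pattern looks like this:

```python
import random
import time

def run_with_retries(job, max_attempts: int = 5, base_delay: float = 0.5):
    """Run `job`, retrying transient failures with jittered exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return job()
        except Exception as exc:  # production code would catch only transient errors
            if attempt == max_attempts:
                raise  # exhausted; surface to dead-letter handling / alerting
            delay = base_delay * (2 ** (attempt - 1)) * random.uniform(0.5, 1.5)
            print(f"attempt {attempt} failed ({exc!r}); retrying in {delay:.2f}s")
            time.sleep(delay)

# Hypothetical usage: run_with_retries(lambda: fetch_property_record(parcel_id))
```

Bull itself exposes per-job attempts and backoff options, so in the real stack this logic largely becomes configuration.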
Highly Desired Skills
- Blockchain: Polymesh or Ethereum blockchain integration experience
- Document Processing: OCR, document AI, or legal document processing
- Property/Real Estate Domain: Understanding of property records, deeds, liens, title commitments
- Legal Tech: Experience with legal document workflows or compliance systems
- Testing: Jest, testing-library, E2E testing frameworks
- Performance Optimization: Query optimization, caching strategies, lazy loading
- Security: OWASP best practices, rate limiting, encryption
Architecture & Design Requirements
You should be comfortable with:
- Design Patterns: Service-oriented architecture, repository pattern, factory pattern
- Dependency Injection: Understanding NestJS DI container and module system
- Database Relations: Complex multi-tenant data models with proper isolation
- State Management: React Context, server/client component patterns
- Error Handling: Comprehensive error handling, retry logic, fallback mechanisms
- API Security: Rate limiting, API key management, webhook signature verification (see the sketch below)
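A minimal, stdlib-only sketch of the signature-verification item above: recompute an HMAC over the raw request body and compare in constant time. The header format and secret are assumptions (Stripe's production scheme, for instance, also signs a timestamp to prevent replay).

```python
import hashlib
import hmac

SECRET = b"whsec_demo_secret"  # hypothetical shared secret from the provider

def verify_webhook(raw_body: bytes, signature_header: str) -> bool:
    """Return True iff the hex signature matches our HMAC-SHA256 of the body."""
    expected = hmac.new(SECRET, raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)

body = b'{"event":"payment.succeeded"}'
sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()  # sender's header value
print(verify_webhook(body, sig))         # True
print(verify_webhook(body, "deadbeef"))  # False
```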
Experience Requirements
- 5+ years of full-stack development experience
- 3+ years with TypeScript in production environments
- 2+ years with NestJS or similar enterprise Node.js frameworks
- 2+ years with modern React and Next.js
- Experience building production SaaS applications with multi-tenant architecture
- Track record of shipping complex features end-to-end
- Experience with third-party integrations and webhook systems
Domain Knowledge (Preferred)
- Understanding of property data and real estate records
- Familiarity with government data systems and public records
- Knowledge of legal document structures (deeds, liens, mortgages, title commitments)
- Experience with regulated industries and compliance requirements
- Understanding of Miami-Dade County or similar municipal systems (bonus)
Development Practices
You should have experience with:
- Git workflows: Feature branches, pull requests, code review
- Documentation: Writing clear technical documentation and API specs
- Testing: Unit tests, integration tests, E2E tests
- CI/CD: Automated testing and deployment pipelines
- Agile: Working in iterative development cycles
- Code Quality: ESLint, Prettier, TypeScript strict mode
Problem-Solving Skills
We're looking for someone who can:
- Debug complex distributed systems across multiple services
- Optimize database queries and reduce API response times
- Design scalable architectures for high-volume data processing
- Handle edge cases in automated scraping and data extraction
- Troubleshoot integration issues with third-party services
- Implement robust error handling and monitoring
Communication & Collaboration
- Clear written communication for documentation and code reviews
- Ability to explain technical concepts to non-technical stakeholders
- Collaborative approach to problem-solving
- Proactive in identifying and addressing technical debt
- Experience mentoring junior developers (preferred)
Package Manager Note
This project uses pnpm exclusively for monorepo management. Experience with pnpm workspaces is preferred, but npm/yarn monorepo experience transfers well.
What Makes You Stand Out
- Contributions to open-source projects
- Experience with LangChain or LangGraph for AI orchestration
- FastAPI or Python experience (for AI service integration)
- Understanding of title insurance or property ownership verification
- Experience with Puppeteer clusters and browser farm optimization
- Background in fintech or regulated industries
- Experience with multi-environment deployments (local, staging, production)
Working Style
This role requires:
- Attention to detail when working with legal and financial data
- Systematic approach to debugging complex systems
- Ability to work independently on ambiguous problems
- Comfort with reading and understanding existing codebases
- Pragmatic decision-making balancing speed and quality
Tech Stack Summary: NestJS • Next.js • TypeScript • Prisma • PostgreSQL • Puppeteer • Bull • OAuth • Stripe • Google Document AI • OpenAI • Docker • GCP • Polymesh • pnpm
This role offers the opportunity to work on challenging technical problems at the intersection of PropTech, LegalTech, and AI, building systems that handle real-world property data at scale.
Inserso is seeking a candidate to join our team to provide mission-critical Data and Software Development Security and Operations (DevSecOps) support to our federal customers. The Senior SQL Server 2022 Developer will design, develop, and maintain complex database solutions supporting our critical business applications on both NIPR/SIPR. This role requires 7+ years of SQL Server development expertise, including deep knowledge of T-SQL, performance tuning, ETL processes, and security best practices. The ideal candidate will be proficient in SQL Server 2022+, possess strong problem-solving and communication skills, and be able to mentor junior developers. Experience with cloud databases (Azure SQL Database) is a plus. The position will be on-site at Robins Air Force Base, Georgia.
Responsibilities:
- Design, develop, and implement high-performance, scalable, and reliable SQL Server 2022 databases.
- Develop and maintain complex stored procedures, functions, triggers, views, and other database objects.
- Optimize SQL queries and database performance through indexing, partitioning, and other optimization techniques (see the sketch after this list).
- Design and implement data warehousing solutions, including ETL processes (Extract, Transform, Load).
- Implement and maintain database security measures to protect sensitive data.
- Troubleshoot and resolve database-related issues in production and development environments.
- Participate in code reviews and ensure adherence to coding standards and best practices.
- Mentor junior developers and provide technical guidance on SQL Server development.
- Collaborate with application developers, system administrators, and other stakeholders to deliver effective database solutions.
- Contribute to the development and maintenance of database documentation.
- Work with cloud-based database solutions (e.g., Azure SQL Database) as needed.
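As a rough illustration of the indexing and plan-inspection work in the optimization bullet above, the sketch below drives SQL Server from Python via pyodbc; the connection string, dbo.Orders table, and index are all assumptions, not details from the posting.

```python
import pyodbc  # assumes a reachable SQL Server instance and ODBC Driver 18

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=localhost;DATABASE=demo;"
    "Trusted_Connection=yes;TrustServerCertificate=yes",
    autocommit=True,
)
cur = conn.cursor()
query = "SELECT OrderId, Total FROM dbo.Orders WHERE CustomerId = 42"

# Capture the estimated plan as XML without executing the query, then look
# for an Index Seek versus a Table/Clustered Index Scan.
cur.execute("SET SHOWPLAN_XML ON")
cur.execute(query)
print(cur.fetchone()[0][:300])
cur.execute("SET SHOWPLAN_XML OFF")

# A covering index can turn the scan into a seek that satisfies the whole query.
cur.execute(
    "CREATE INDEX IX_Orders_CustomerId ON dbo.Orders (CustomerId) INCLUDE (OrderId, Total)"
)
```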
Required Skills/Experience:
- Must be a U.S. Citizen.
- Possess an active Secret Clearance.
- 7+ years of experience in software development, with a focus on SQL Server and T-SQL development.
- Deep understanding of SQL Server architecture, including database engine, storage engine, and query processor.
- Extensive experience with T-SQL, stored procedures, functions, triggers, and views.
- Proficiency in SQL Server 2022.
- Strong understanding of indexing strategies, query optimization, and performance tuning.
- Solid understanding of relational database design principles and normalization techniques.
- Experience designing and implementing complex database schemas.
- Proven ability to identify and resolve performance bottlenecks in SQL Server databases.
- Experience using performance monitoring tools and techniques.
- Experience designing and implementing ETL processes using SQL.
- Understanding of data warehousing concepts and methodologies.
- Strong understanding of database security principles and best practices.
- Experience implementing and maintaining database security measures.
Preferred Skills/Experience:
- Bachelor's degree in computer science or an IT-related discipline or equivalent experience.
- Experience with cloud-based database solutions (e.g., Azure SQL Database, AWS RDS).
- Experience with NoSQL databases.
- Experience with database administration tasks.
- Experience with Agile development methodologies.
- SQL Server certification (e.g., Microsoft Certified: Azure Data Engineer Associate).
- Experience with data visualization tools such as Power BI or Tableau.
- Experience with Python/PySpark.
Physical and/or Mental Qualifications:
- Excellent communication, interpersonal, and collaboration skills.
- Ability to effectively communicate technical concepts to both technical and non-technical audiences.
- Ability to work independently and as part of a team.
- Ability to prioritize tasks and manage time effectively.
- Ability to adapt to changing priorities and requirements.
EOE, including Disability/Vets.
Reasonable accommodation will be made for qualified individuals with a disability, where such accommodation will not impose an undue hardship during the application process and on the job.
Job Description
NoSQL DBA with MongoDB, Cassandra, Couchbase, Elasticsearch
We are seeking an experienced Database Engineer with strong expertise in MongoDB and working knowledge of additional NoSQL technologies including Elasticsearch, Cassandra, and Couchbase. The ideal candidate will be responsible for designing, implementing, and maintaining high-performance database solutions that support our enterprise applications and data infrastructure.
Primary Skills (MongoDB):
• 5+ years of hands-on experience with MongoDB in production environments
• Expert-level knowledge of MongoDB architecture, replication, and sharding
• Proven experience with MongoDB performance tuning and optimization
• Strong understanding of MongoDB Atlas and cloud-based deployments
• Proficiency in MongoDB query optimization and index design (see the sketch after this list)
• Experience with MongoDB backup, recovery, and disaster recovery strategies
• Knowledge of MongoDB security best practices and implementation
• Familiarity with MongoDB aggregation framework and data modeling
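A minimal sketch of that optimization loop with pymongo, assuming a local mongod; the demo collection and field names are hypothetical.

```python
from pymongo import ASCENDING, MongoClient  # assumes a local mongod

client = MongoClient("mongodb://localhost:27017")
orders = client.demo.orders
orders.drop()  # make the sketch re-runnable
orders.insert_many([{"customer_id": i % 1000, "total": i} for i in range(50_000)])

query = {"customer_id": 42}
print(orders.find(query).explain()["queryPlanner"]["winningPlan"])  # expect COLLSCAN

orders.create_index([("customer_id", ASCENDING)])
print(orders.find(query).explain()["queryPlanner"]["winningPlan"])  # expect IXSCAN + FETCH
```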
Secondary Skills (Nice to Have):
• 2+ years of experience with Elasticsearch for search and analytics
• Working knowledge of Cassandra for distributed database management
• Familiarity with Couchbase for caching and document storage
• Understanding of NoSQL database selection criteria and trade-offs
Technical Requirements:
• 7+ years of overall database administration and engineering experience
• Strong proficiency in scripting languages (Python, Bash, JavaScript)
• Experience with database monitoring tools and APM solutions
• Knowledge of Linux/Unix systems administration
• Understanding of containerization technologies (Docker, Kubernetes)
• Experience with CI/CD pipelines and infrastructure as code
• Familiarity with cloud platforms (AWS, Azure, or GCP)
KEY RESPONSIBILITIES
MongoDB Database Management:
• Design and implement MongoDB database architectures for scalability and high availability
• Perform capacity planning and resource optimization for MongoDB clusters
• Configure and maintain replica sets and sharded clusters
• Develop and enforce MongoDB coding standards and best practices
• Monitor database performance and implement tuning strategies
• Manage database security, including authentication, authorization, and encryption
• Execute backup and recovery procedures ensuring data integrity
• Troubleshoot complex database issues and performance bottlenecks
Multi-Database Support:
• Assist with Elasticsearch cluster configuration and index management
• Support Cassandra deployments for high-throughput workloads
• Provide guidance on Couchbase implementation for caching solutions
• Evaluate and recommend appropriate NoSQL solutions based on use cases
• Maintain documentation for all database platforms
Collaboration & Development:
• Work closely with development teams on database design and optimization
• Review database schemas and queries for performance improvements
• Participate in code reviews focusing on database interactions
• Provide technical guidance and mentorship to junior team members
Operations & Maintenance:
• Implement monitoring and alerting solutions for database health
• Perform regular maintenance tasks including patching and upgrades
• Develop disaster recovery plans and conduct failover testing
• Maintain database documentation and runbooks
• Participate in on-call rotation for production support
PREFERRED QUALIFICATIONS
• MongoDB Certified DBA or Developer certification
• Experience with data migration projects across different database platforms
• Knowledge of data modeling for both relational and NoSQL databases
• Understanding of microservices architecture
• Contributions to open-source database projects
Meet REVOLVE:
REVOLVE is the next-generation fashion retailer for Millennial and Generation Z consumers. As a trusted, premium lifestyle brand, and a go-to online source for discovery and inspiration, we deliver an engaging customer experience from a vast yet curated offering totaling over 45,000 apparel, footwear, accessories and beauty styles. Our dynamic platform connects a deeply engaged community of millions of consumers, thousands of global fashion influencers, and more than 500 emerging, established and owned brands. Through 16 years of continued investment in technology, data analytics, and innovative marketing and merchandising strategies, we have built a powerful platform and brand that we believe is connecting with the next generation of consumers and is redefining fashion retail for the 21st century. For more information please visit REVOLVE. At REVOLVE, the most successful team members have the thirst and creativity to make this the top e-commerce brand in the world. With a team of 1,000+ based out of Cerritos, California, we are a dynamic bunch, motivated by getting the company to the next level. It’s our goal to hire high-energy, diverse, bright, creative, and flexible individuals who thrive in a fast-paced work environment. In return, we promise to keep REVOLVE a company where inspired people will always thrive.
To take a behind the scenes look at the REVOLVE “corporate” lifestyle check out our Instagram @REVOLVEcareers or #lifeatrevolve.
Are you ready to set the standard for Premium apparel?
Main purpose of the Senior Data Science Analyst role:
Use a diverse skill set spanning math and computer science to solve complex, analytically challenging problems here at REVOLVE.
Major Responsibilities:
Essential Duties and Responsibilities include the following. Other duties may be assigned.
- Partner closely with business leaders in Marketing, Product, Operations, and the Buying team to plan out valuable data science projects
- Conduct complex analysis and build models to uncover key learnings from data, leading to appropriate strategy recommendations.
- Work closely with the DBA to improve BI’s infrastructure, architect the reporting system, and invest time in technical proofs of concept.
- Work closely with the business intelligence and tech team to define, automate and validate the extraction of new metrics from various data sources for use in future analysis
- Work alongside business stakeholders to apply our findings and models to website personalization, product recommendations, marketing optimization, fraud detection, demand forecasting, and CLV prediction.
Required Competencies:
To perform the job successfully, an individual should demonstrate the following competencies:
- Outstanding analytical skills, with strong academic background in statistics, math, science or technology.
- High comfort level with programming, ability to learn and adopt new technology with short turn-around time.
- Knowledge of quantitative methods in statistics and machine learning
- Intense intellectual curiosity – strong desire to always be learning
- Proven business acumen and a results-oriented mindset.
- Ability to demonstrate logical thinking and problem solving skills
- Strong attention to detail
Minimum Qualifications:
- Master's degree is required
- 3+ years of DS and ML experience in a strong analytical environment.
- Proficient in Python, NumPy and other packages
- Familiar with statistical and ML methodology: causal inference, logistic regression, tree-based models, clustering, model validation and interpretations.
- Experience with A/B testing and pseudo-A/B test setup and evaluation (see the sketch after this list)
- Advanced SQL experience, query optimization, data extract
- Ability to build, validate, and productionize models
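As a small, stdlib-only illustration of the A/B-testing item above, the sketch below runs a two-proportion z-test on hypothetical conversion counts.

```python
from math import erf, sqrt

def two_proportion_ztest(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Return (z, two-sided p-value) for H0: the two conversion rates are equal."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # 2 * P(Z > |z|)
    return z, p_value

# Hypothetical counts: 4.8% vs 5.6% conversion on 10k users per arm.
z, p = two_proportion_ztest(480, 10_000, 560, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 would reject equal rates
```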
Preferred Qualifications:
- Strong business acumen
- Experience in deploying end to end Machine Learning models
- 5+ years of DS and ML experience preferred
- Advanced SQL and Python, with query and coding optimization experience
- Experience with E-commerce marketing and product analytics is a plus
A successful candidate works well in a dynamic environment with minimal supervision. At REVOLVE we all roll up our sleeves to pitch in and do whatever it takes to get the job done. Each day is a little different; it’s what keeps us on our toes and excited to come to work every day.
A reasonable estimate of the current base salary range is $120,000 to $150,000 per year.
Visa Status: US Citizen or Green Card Only
Location: Irving, TX (Local Candidates Only)
Employment Type: Full-time / Direct Hire
Work Environment: Hybrid (Monday thru Thursday - in office / Friday - at home)
***MUST HAVE 10+ YEARS EXPERIENCE AS A DATA ENGINEER***
***US Citizen or Green Card Only***
The AWS Senior Data Engineer will own the planning, design, and implementation of data structures for this leading Hospitality Corporation in their AWS environment. This role will be responsible for incorporating all internal and external data sources into a robust, scalable, and comprehensive data model within AWS to support business intelligence and analytics needs throughout the company.
Responsibilities:
- Collaborate with cross-functional teams to understand and define business intelligence needs and translate them into data modeling solutions
- Develop, build, and maintain scalable data pipelines, data schema designs, and dimensional data models in Databricks and AWS for all system data sources, API integrations, and bespoke data ingestion files from external sources, covering both batch and real-time pipelines.
- Responsible for data cleansing, standardization, and quality control
- Create data models that will support comprehensive data insights, business intelligence tools, and other data science initiatives
- Create data models and ETL procedures with traceability, data lineage and source control
- Design and implement data integration and data quality framework
- Implement data monitoring best practices with trigger-based alerts for data processing KPIs and anomalies (see the sketch after this list)
- Investigate and remediate data problems, performing and documenting thorough and complete root cause analyses, and make recommendations for mitigation and prevention of future issues.
- Work with Business and IT to assess efficacy of all legacy data sources, making recommendations for migration, anonymization, archival and/or destruction.
- Continually seek to optimize performance through database indexing, query optimization, stored procedures, etc.
- Ensure compliance with data governance and data security requirements, including data life cycle management, purge and traceability.
- Create and manage documentation and change control mechanisms for all technical design, implementations and systems maintenance.
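A minimal sketch of the trigger-based alerting idea referenced above: compare a feed's daily row count against a trailing baseline and alert on large deviations. The feed name, counts, and threshold are assumptions.

```python
from statistics import mean, stdev

def check_row_count(feed: str, today: int, history: list[int], z_max: float = 3.0) -> float:
    """Alert if today's volume deviates more than z_max sigmas from the baseline."""
    mu, sigma = mean(history), stdev(history)
    z = (today - mu) / sigma if sigma else 0.0
    if abs(z) > z_max:
        # in production this would page on-call or post to an alert channel
        print(f"ALERT {feed}: count {today} is {z:+.1f} sigma from baseline {mu:.0f}")
    return z

# Hypothetical feed: a sudden drop in daily booking volume trips the alert.
check_row_count("bookings_daily", today=41_000,
                history=[52_100, 51_800, 52_400, 51_950, 52_200])
```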
Target Skills and Experience
- Bachelor's or graduate degree in computer science, information systems or related field preferred, or similar combination of education and experience
- At least 10 years' experience designing and managing data pipelines, schema modeling, and data processing systems.
- Experience with Databricks a plus (or similar tools like Microsoft Fabric, Snowflake, etc.) to drive scalable data solutions.
- Experience with SAP a plus
- Proficient in Python, with a track record of solving real-world data challenges.
- Advanced SQL skills, including experience with database design, query optimization, and stored procedures.
- Experience with Terraform or other infrastructure-as-code tools is a plus.
Purpose
The IT Database Engineer is responsible for designing, implementing, and supporting relational database platforms in both traditional data centers and Azure cloud environments. The role covers installation, configuration, performance tuning, high availability, backup and recovery, monitoring, and incident response for Microsoft SQL Server, MySQL, and PostgreSQL, with participation in an on-call rotation to support mission-critical workloads.
Key Responsibilities
- Install, configure, and upgrade MSSQL, MySQL, and PostgreSQL in data center and Azure environments (IaaS and/or PaaS as applicable).
- Perform day-to-day database administration, including user and role management, permissions, schema changes, and maintenance tasks.
- Monitor database health, performance, and capacity using native and third-party tools; define meaningful alerts and dashboards for proactive issue detection.
- Troubleshoot database incidents (performance issues, blocking/deadlocks, failed jobs, connectivity problems, resource constraints) and drive root-cause analysis and permanent fixes.
- Design, implement, and maintain backup and recovery strategies (full/diff/log, PITR, snapshots, Azure backup options) and regularly test restore procedures.
- Implement and support high availability and disaster recovery configurations (e.g., SQL Server Always On, failover clustering, log shipping, MySQL/Postgres replication, Azure availability sets/zones).
- Optimize database performance through indexing strategies, query tuning, statistics management, and configuration tuning at both OS and database levels.
- Implement and enforce security controls (authentication, authorization, encryption at rest/in transit, auditing) aligned with organizational and regulatory requirements.
- Support application and development teams with database design, query optimization, and controlled deployment of schema changes across environments.
- Maintain detailed documentation including runbooks, standards, topology diagrams, data flows, and operational procedures for both on-prem and Azure deployments.
- Participate in an on-call rotation, responding to after-hours incidents, and perform planned maintenance during maintenance windows.
- Automate routine tasks (provisioning, checks, patching, reporting) using scripts and tooling (e.g., T-SQL, PowerShell, Bash, Python, Azure CLI); a sketch follows this list.
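As a rough illustration of the automation item above, the sketch below wraps each engine's standard CLI health probe; hostnames are hypothetical and the script only checks reachability.

```python
import subprocess

# Hypothetical hosts; pg_isready, mysqladmin ping, and sqlcmd are the standard
# client tools for PostgreSQL, MySQL, and SQL Server respectively.
CHECKS = {
    "postgres": ["pg_isready", "-h", "db1.internal", "-p", "5432"],
    "mysql": ["mysqladmin", "ping", "-h", "db2.internal"],
    "mssql": ["sqlcmd", "-S", "db3.internal", "-Q", "SELECT 1"],
}

def probe_all() -> dict[str, bool]:
    """Return engine -> reachable; feed the result into alerting or a dashboard."""
    results = {}
    for name, cmd in CHECKS.items():
        try:
            proc = subprocess.run(cmd, capture_output=True, text=True, timeout=10)
            results[name] = proc.returncode == 0
        except (OSError, subprocess.TimeoutExpired):
            results[name] = False
    return results

print(probe_all())
```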
Required Qualifications
- Proven experience as a Database Engineer/DBA supporting MSSQL, MySQL, and PostgreSQL in production environments.
- Hands-on experience managing databases in traditional data centers (physical/virtual servers) and Azure (e.g., SQL Server on Azure VMs, Azure SQL Database, Azure Database for MySQL/PostgreSQL or similar).
- Strong understanding of database internals: storage structures, indexing, transactions, isolation levels, and locking.
- Demonstrated skills in performance troubleshooting and tuning using execution plans, wait statistics, and monitoring metrics.
- Practical experience with HA/DR solutions and backup/restore strategies, including testing and documentation of failover/recovery procedures.
- Proficiency with scripting/automation for database operations and integration with operational tooling.
- Familiarity with networking, OS, and virtualization concepts relevant to database performance and connectivity (subnets, firewalls, load balancers, storage latency).
- Solid understanding of security best practices for databases.
Preferred Qualifications
- Experience with Azure-native monitoring and management tools (e.g., Azure Monitor, Log Analytics, Alerts, Managed Identities, Key Vault).
- Experience with CI/CD and database change automation, including schema versioning and deployment pipelines.
- Exposure to large-scale or high-volume databases, partitioning, and scaling strategies (vertical/horizontal).
- Knowledge of regulatory and compliance requirements related to data (e.g., PCI, HIPAA, GDPR) and data protection techniques (masking, tokenization).
- Relevant certifications (e.g., Microsoft Azure, SQL Server, MySQL, PostgreSQL).
Soft Skills
- Strong analytical and problem-solving skills, especially under time pressure during incidents and on-call situations.
- Clear communication skills to work effectively with developers, infrastructure teams, security, and business stakeholders.
- High sense of ownership for data integrity, availability, and reliability, with a structured approach to documentation and process.
Working Conditions (travel, hours, environment)
- Limited travel required including air and car travel
- While performing the duties of this job, the employee is occasionally exposed to a warehouse environment and moving vehicles. The noise level in the work environment is typically quiet to moderate.
Physical/Sensory Requirements
Sedentary Work – Ability to exert 10 - 20 pounds of force occasionally, and/or negligible amount of force frequently to lift, carry, push, pull or otherwise move objects. Sedentary work involves sitting most of the time but may involve walking or standing for brief periods of time.
Benefits & Rewards
- Bonus opportunities at every level
- Non-traditional retail hours (we close at 7p!)
- Career advancement opportunities
- Relocation opportunities across the country
- 401k with discretionary company match
- Employee Stock Purchase Plan
- Referral Bonus Program
- 80 hrs. annualized paid vacation (full-time associates)
- 4 paid holidays per year (full-time hourly store associates only)
- 1 paid personal holiday of associate’s choice and Volunteer Time Off program
- Medical, Dental, Vision, Life and other Insurance Plans (subject to eligibility criteria)
Equal Employment Opportunity
Floor & Decor provides equal employment opportunities to all associates and applicants without regard to age, race, color, religion or creed, national origin or ancestry, sex (including pregnancy), sexual orientation, gender, gender identity, disability, veteran status, genetic information, ethnicity, citizenship, or any other category protected by law.
This policy applies to all areas of employment, including recruitment, testing, screening, hiring, selection for training, upgrading, transfer, demotion, layoff, discipline, termination, compensation, benefits and all other privileges, terms and conditions of employment. This policy and the law prohibit employment discrimination against any associate or applicant on the basis of any legally protected status outlined above.
Job Title: Senior Data Engineer / Analytics Engineer
Location: West Los Angeles, CA (Onsite)
Compensation: $180,000 base salary + 10% bonus
Overview
We are looking for a Senior Data Engineer / Analytics Engineer to help architect and build scalable data solutions that power business insights for sales and marketing teams. This role is ideal for someone who enjoys being both strategic and hands-on, designing modern data architectures while actively building pipelines, models, and dashboards.
The ideal candidate has deep experience in modern data stack technologies and has worked closely with high-volume sales and marketing organizations, particularly supporting Salesforce-driven environments.
Key Responsibilities
Data Architecture & Engineering
- Design and build scalable data pipelines and data models that support analytics and reporting across the organization.
- Architect and implement solutions using Snowflake, DBT, Python, and Fivetran within a modern data stack.
- Optimize Snowflake environments for cost and performance, including warehouse configuration, query optimization, and storage strategies (see the sketch after this list).
- Build and maintain robust data transformation pipelines using DBT for modeling, testing, and validation.
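To make the cost/performance item above concrete, here is a minimal sketch using the snowflake-connector-python package to surface the most scan-heavy recent queries; the account, credentials, and warehouse name are assumptions.

```python
import snowflake.connector  # assumes the snowflake-connector-python package

conn = snowflake.connector.connect(
    account="my_account", user="me", password="...",  # hypothetical credentials
    warehouse="ANALYTICS_WH",
)
cur = conn.cursor()

# ACCOUNT_USAGE views lag real time (up to ~45 min) but suit daily cost review.
cur.execute("""
    SELECT query_text, bytes_scanned, total_elapsed_time
    FROM snowflake.account_usage.query_history
    WHERE start_time > DATEADD('day', -1, CURRENT_TIMESTAMP())
    ORDER BY bytes_scanned DESC
    LIMIT 5
""")
for text, scanned, ms in cur.fetchall():
    print(f"{scanned / 1e9:8.2f} GB  {ms:>9} ms  {text[:60]!r}")
```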
Analytics & Business Intelligence
- Develop high-impact dashboards and reporting solutions using Power BI to support decision-making across the business.
- Partner with stakeholders to define KPIs, metrics, and data models that support sales and marketing performance tracking.
- Ensure data reliability, consistency, and accessibility across analytics platforms.
CRM Data & Sales Analytics
- Work extensively with Salesforce data, helping clean, structure, and optimize complex CRM datasets.
- Design scalable data models that support reporting on sales performance, marketing attribution, pipeline analytics, and revenue metrics.
- Implement solutions to improve data quality and usability across CRM-driven reporting.
Business Partnership
- Partner closely with Sales and Marketing teams in a high-volume sales environment to understand reporting needs and deliver actionable insights.
- Translate business questions into scalable data solutions and analytics frameworks.
- Communicate technical concepts clearly to non-technical stakeholders and collaborate effectively across teams.
Required Qualifications
- 5+ years of BI Engineering, Data Engineering, or Analytics Engineering experience.
- Proven experience acting as both a data architect and hands-on builder.
- Strong experience with:
- Snowflake (including cost and performance optimization)
- DBT for transformations, modeling, and data validations
- Python
- Power BI (must have)
- Experience working with Salesforce or similar CRM tools, including cleaning, structuring, and building scalable reporting solutions for complex CRM datasets.
- Experience supporting Sales and Marketing teams in high-volume sales environments.
- Strong communication skills and ability to work collaboratively with cross-functional stakeholders.
Preferred Qualifications
- Experience with Salesforce data architecture and CRM analytics.
- Background working with large-scale sales operations or marketing analytics teams.
- Experience building modern ELT data pipelines and scalable analytics frameworks.
Work Environment
- Onsite role in West Los Angeles
- Highly collaborative environment working closely with data, sales, marketing, and leadership teams.
About Wakefern
Wakefern Food Corp. is the largest retailer-owned cooperative in the United States and supports its co-operative members' retail operations, trading under the ShopRite®, Price Rite®, The Fresh Grocer®, Dearborn Markets®, and Gourmet Garage® banners.
Employing an innovative approach to wholesale business services, Wakefern focuses on helping the independent retailer compete in a big business world. Providing the tools entrepreneurs need to stay a step ahead of the competition, Wakefern’s co-operative members benefit from the company’s extensive portfolio of services, including innovative technology, private label development, and best-in-class procurement practices.
The ideal candidate will have a strong background in designing, developing, and implementing complex projects, with a focus on automating data processes and driving efficiency within the organization. This role requires close collaboration with application developers, data engineers, data analysts, and data scientists to ensure seamless data integration and automation across various platforms. The Data Integration & AI Engineer is responsible for identifying opportunities to automate repetitive data processes, reduce manual intervention, and improve overall data accessibility.
Essential Functions
- Participate in the development life cycle (requirements definition, project approval, design, development, and implementation) and maintenance of the systems.
- Implement and enforce data quality and governance standards to ensure accuracy and consistency.
- Provide input for project plans and timelines to align with business objectives.
- Monitor project progress, identify risks, and implement mitigation strategies.
- Work with cross-functional teams and ensure effective communication and collaboration.
- Provide regular updates to the management team.
- Follow the standards and procedures according to Architecture Review Board best practices, revising standards and procedures as requirements change and technological advancements are incorporated into the technology structure.
- Communicate and promote the code of ethics and business conduct.
- Ensure completion of required company compliance training programs.
- Be trained – either through formal education or through experience – in software / hardware technologies and development methodologies.
- Stay current through personal development and professional and industry organizations.
Responsibilities
- Design, build, and maintain automated data pipelines and ETL processes to ensure scalability, efficiency, and reliability across data operations.
- Develop and implement robust data integration solutions to streamline data flow between diverse systems and databases.
- Continuously optimize data workflows and automation processes to enhance performance, scalability, and maintainability.
- Design and develop end-to-end data solutions utilizing modern technologies, including scripting languages, databases, APIs, and cloud platforms.
- Ensure data solutions and data sources meet quality, security, and compliance standards.
- Monitor and troubleshoot automation workflows, proactively identifying and resolving issues to minimize downtime.
- Provide technical training, documentation, and ongoing support to end users of data automation systems.
- Prepare and maintain comprehensive technical documentation, including solution designs, specifications, and operational procedures.
Qualifications
- A bachelor's degree or higher in computer science, information systems, or a related field.
- Hands-on experience with cloud data platforms (e.g., GCP, Azure, etc.)
- Strong knowledge and skills in data automation technologies, such as Python, SQL, ETL/ELT tools, Kafka, APIs, cloud data pipelines, etc.
- Experience in GCP BigQuery, Dataflow, Pub/Sub, and Cloud Storage.
- Experience with workflow orchestration tools such as Cloud Composer or Airflow
- Proficiency in iPaaS (Integration Platform as a Service) platforms, such as Boomi, SAP BTP, etc.
- Develop and manage data integrations for AI agents, connecting them to internal and external APIs, databases, and knowledge sources to expand their capabilities.
- Build and maintain scalable Retrieval-Augmented Generation (RAG) pipelines, including the curation and indexing of knowledge bases in vector databases (e.g., Pinecone, Vertex AI Vector Search); see the sketch after this list.
- Leverage cloud-based AI/ML platforms (e.g., Vertex AI, Azure ML) to build, train, and deploy machine learning models at scale.
- Establish and enforce data quality and governance standards for AI/ML datasets, ensuring the accuracy, completeness, and integrity of data used for model training and validation.
- Collaborate closely with data scientists and machine learning engineers to understand data requirements and deliver optimized data solutions that support the entire machine learning lifecycle.
- Hands-on experience with IBM DataStage and Alteryx is a plus.
- Strong understanding of database design principles, including normalization, indexing, partitioning, and query optimization.
- Ability to design and maintain efficient, scalable, and well-structured database schemas to support both analytical and transactional workloads.
- Familiarity with BI visualization tools such as MicroStrategy, Power BI, Looker, or similar.
- Familiarity with data modeling tools.
- Familiarity with DevOps practices for data (CI/CD pipelines)
- Proficiency in project management software (e.g., JIRA, Clarizen, etc.)
- Strong knowledge and skills in data management, data quality, and data governance.
- Strong communication, collaboration, and problem-solving skills.
- Ability to work on multiple projects and prioritize tasks effectively.
- Ability to work independently and in a team environment.
- Ability to learn new technologies and tools quickly.
- The ability to handle stressful situations.
- Highly developed business acumen.
- Strong critical thinking and decision-making skills.
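As a deliberately tiny illustration of the RAG item flagged above, the sketch below shows only the retrieval step, with a hash-based stand-in for a real embedding model so it runs anywhere; a production pipeline would embed with a model and store vectors in Pinecone, Vertex AI Vector Search, or similar.

```python
import hashlib
import math

def fake_embed(text: str, dim: int = 64) -> list[float]:
    """Stand-in embedding: deterministic pseudo-vector from the text's words."""
    vec = [0.0] * dim
    for word in text.lower().split():
        h = int(hashlib.md5(word.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

chunks = ["store hours and holiday schedule", "returns and refund policy",
          "private label product catalog"]
index = [(c, fake_embed(c)) for c in chunks]        # the stand-in "vector DB"

query = fake_embed("what is the refund policy")
best = max(index, key=lambda item: cosine(query, item[1]))
print(best[0])  # retrieved context to prepend to the LLM prompt
```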
Working Conditions & Physical Demands
This position requires in-person office presence at least 4x a week.
Compensation and Benefits
The salary range for this position is $75,868 - $150,644. Placement in the range depends on several factors, including experience, skills, education, geography, and budget considerations.
Wakefern is proud to offer a comprehensive benefits package designed to support the health, well-being, and professional development of our Associates. Benefits include medical, dental, and vision coverage, life and disability insurance, a 401(k) retirement plan with company match & annual company contribution, paid time off, holidays, and parental leave.
Associates also enjoy access to wellness and family support programs, fitness reimbursement, educational and training opportunities through our corporate university, and a collaborative, team-oriented work environment. Many of these benefits are fully or partially funded by the company, with some subject to eligibility requirements.
We Are Hiring: Databricks Lead Data Engineer – Director Equivalent Role
Location: Atlanta, USA
Work Model: Hybrid – 3 to 4 days in office per week (mandatory)
Eligibility: US Citizens and Green Card (GC) holders only
Paves Technologies is seeking a highly experienced Databricks Lead Data Engineer (Director Equivalent Role) to drive enterprise-scale data architecture, governance, and advanced analytics initiatives on Azure Cloud. This is a senior leadership role requiring deep Databricks expertise, strong data modeling capabilities, and hands-on architectural ownership across PySpark-based distributed systems.
Role Overview
The ideal candidate will bring 10–12+ years of overall data engineering experience, including strong hands-on expertise with Azure Databricks, PySpark, Python, and Azure Cloud data services. You will define architecture standards, lead modernization initiatives, and implement scalable Medallion Architecture (Bronze, Silver, Gold layers) to support enterprise analytics and business intelligence.
Key Responsibilities
- Lead end-to-end architecture and implementation of enterprise-scale data platforms using Azure Databricks on Azure Cloud.
- Design and implement Medallion Architecture (Bronze, Silver, Gold layers) using Delta Lake best practices.
- Build scalable PySpark-based ETL/ELT pipelines across ingestion (Bronze), transformation (Silver), and curated analytics (Gold) layers.
- Develop advanced data transformations using Python, PySpark, Spark SQL, and advanced SQL constructs.
- Architect robust data models (dimensional, star schema, normalized models) aligned to analytics and reporting needs.
- Drive adoption of advanced Databricks capabilities including Unity Catalog, Declarative Pipelines, Delta Lake optimization, and governance frameworks.
- Establish best practices for partitioning strategies, file compaction, Z-ordering, caching, broadcast joins, and query optimization (see the sketch after this list).
- Define and standardize reusable Azure Cloud data platform tools, templates, CI/CD frameworks, and infrastructure automation.
- Work across Azure ecosystem components such as Azure Data Factory (ADF), Azure Data Lake Storage (ADLS), Azure DevOps, networking, and security services.
- Ensure high standards for data quality, RBAC, lineage tracking, governance, and production stability.
- Provide architectural leadership and mentorship to data engineering teams.
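To ground the optimization bullet above, here is a minimal PySpark sketch of two of the listed levers, broadcast joins and partitioned writes, assuming a local Spark session; the path and column names are hypothetical, and Delta-specific steps such as OPTIMIZE ... ZORDER BY and file compaction are not shown.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("opt-sketch").getOrCreate()

facts = spark.range(1_000_000).withColumn("region_id", (F.col("id") % 50).cast("int"))
dims = spark.createDataFrame([(i, f"region-{i}") for i in range(50)],
                             ["region_id", "name"])

# Broadcasting the small dimension avoids shuffling the large fact table.
joined = facts.join(F.broadcast(dims), "region_id")

# Partitioned writes keep files aligned with the dominant filter column.
joined.write.mode("overwrite").partitionBy("region_id").parquet("/tmp/facts_by_region")
print(joined.count())
```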
Required Experience & Skills
- 10–12+ years of overall experience in Data Engineering.
- Minimum 3+ years of strong hands-on Databricks experience.
- Mandatory Certifications:
- Databricks Certified Data Engineer Associate
- Databricks Certified Data Engineer Professional
- Deep hands-on expertise in PySpark, Python programming, and distributed Spark processing.
- Strong experience designing and implementing Medallion Architecture (Bronze/Silver/Gold layers).
- Advanced knowledge of Data Modeling, Data Analysis, and complex SQL (window functions, CTEs, execution plan tuning).
- Strong understanding of Delta Lake architecture, schema evolution, partition strategies, performance optimization, and data governance.
- Well-versed in enterprise Azure Cloud data platforms, reusable accelerators, CI/CD templates, and governance standards.
- Proven experience architecting scalable, secure, cloud-native data solutions.
- Strong leadership, stakeholder management, and executive communication skills.
How to Apply
If you are interested in this position and have the required skills, please send across your resume.