LHH Recruitment Solutions has partnered with a growing organization that is seeking a motivated Manufacturing Data & Sales Analyst to join their team: a data-driven analytics professional who thrives at the intersection of manufacturing operations, business intelligence, and executive decision support. This is a high-impact role for someone who enjoys building insight from the ground up: designing dashboards, automating reporting, owning data integrity, and translating complex information into clear, actionable business outcomes.
Why This Role Stands Out:
- High visibility and direct partnership with senior leadership.
- Opportunity to own and evolve enterprise-level analytics and reporting.
- Manufacturing environment where data truly drives strategy.
- Long-term growth potential in a stable, well-capitalized organization.
Key Responsibilities:
Data, Analytics & Reporting:
- Design, build, and continuously enhance dashboards, scorecards, and KPI reporting to support operational and commercial performance.
- Translate raw data into meaningful insights that influence decision-making at the executive level.
- Automate recurring reports and analytics processes to improve efficiency, accuracy, and scalability.
- Analyze trends related to revenue, production performance, forecasting, and product initiatives.
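The reporting-automation work described above can be sketched in miniature: a small script that aggregates raw records into a recurring KPI summary. This is only an illustrative sketch; the column names ("region", "revenue", "units") are assumptions, not details from the posting.

```python
# Hypothetical sketch of automating a recurring revenue/production KPI summary.
# Column names ("region", "revenue", "units") are illustrative assumptions.
import csv
import io
from collections import defaultdict

def summarize_kpis(csv_text):
    """Aggregate revenue and units produced per region from raw CSV rows."""
    totals = defaultdict(lambda: {"revenue": 0.0, "units": 0})
    for row in csv.DictReader(io.StringIO(csv_text)):
        region = row["region"]
        totals[region]["revenue"] += float(row["revenue"])
        totals[region]["units"] += int(row["units"])
    return dict(totals)

raw = """region,revenue,units
East,1200.50,30
West,900.00,25
East,800.25,20
"""
print(summarize_kpis(raw))
# {'East': {'revenue': 2000.75, 'units': 50}, 'West': {'revenue': 900.0, 'units': 25}}
```

In practice a job like this would schedule such a summary (e.g. via a workflow tool) and feed it into a dashboard rather than print it.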
Manufacturing & Cross-Functional Partnership:
- Collaborate closely with Operations, Finance, IT, and Commercial teams to align data, metrics, and performance goals.
- Support forecasting, planning cycles, and performance reviews with reliable, actionable analytics.
- Identify risks, opportunities, and performance gaps within data sets and recommend solutions.
Systems & Data Ownership:
- Act as the primary owner of manufacturing and sales-related data systems, ensuring usability, accuracy, and value.
- Lead continuous improvement of reporting tools and system integrations.
- Partner with internal and external stakeholders to enhance system reporting capabilities.
- Champion data governance, consistency, and best practices across the organization.
Qualifications and Skills:
- Bachelor’s Degree in Data Science, Analytics, Business Intelligence, or a related field
- Proven experience building and maintaining dashboards, scorecards, and analytics tools.
- Background supporting a manufacturing environment.
- Strong ability to own data end-to-end—from extraction to interpretation to executive presentation.
- Experience automating reporting and analytics processes.
- Advanced analytical, problem-solving, and critical-thinking skills.
- Ability to clearly communicate insights to both technical and non-technical audiences.
- Advanced proficiency with Excel, reporting platforms, and Microsoft Office Suite.
- Advanced proficiency in SQL, PowerBI, and/or Tableau.
- Experience with IQMS is preferred.
- Strategic mindset with exceptional attention to detail.
Compensation Range: $90,000 - $120,000 + 15% Bonus
Benefits Offered: 2 weeks of vacation, paid sick leave where applicable by state law, Medical Insurance, Dental Insurance, Vision Insurance, 401K, and Life Insurance.
If you are a passionate Manufacturing Data & Sales Analyst looking for a new and rewarding career, please apply today! You don’t want to miss out on this opportunity!
LHH is a leader in permanent recruitment—and in the placement of top talent. Our areas of specialty include office administration, customer service, human resources, engineering, and supply chain and logistics. Please feel free to check us out and apply for other opportunities if this role isn’t a perfect match.
Equal Opportunity Employer/Veterans/Disabled
To read our Candidate Privacy Information Statement, which explains how we will use your information, please visit
About Cygnus Professionals, Inc.
Cygnus is a Princeton, NJ-headquartered global business IT consulting and software services firm with offices in the USA and Asia. Cygnus enables innovation and helps our clients accelerate time to market and grow their business. For over 15 years, we have taken great pride in our deep relationships with our clients.
For further information about CYGNUS, please visit our website.
Title: Data Architect
Location: Princeton, New Jersey – Onsite
W2 Contract
Job Summary
We are seeking an experienced Data Architect to design, build, and maintain scalable data architecture solutions supporting enterprise analytics, data integration, and digital transformation initiatives. The ideal candidate will work closely with business stakeholders, data engineers, and application teams to design robust data models, data pipelines, and enterprise data platforms that support advanced analytics and reporting.
Key Responsibilities
- Design and implement enterprise data architecture frameworks and best practices.
- Develop logical and physical data models for enterprise data platforms.
- Architect data lakes, data warehouses, and data integration solutions across cloud and on-prem environments.
- Collaborate with data engineers and application teams to build scalable data pipelines and ETL/ELT processes.
- Ensure data governance, data quality, security, and compliance standards are implemented across the data ecosystem.
- Evaluate and recommend data technologies, tools, and frameworks aligned with enterprise strategy.
- Provide architectural guidance for cloud-based data platforms (AWS/Azure/GCP).
- Optimize performance for large-scale data processing and analytics workloads.
- Support business intelligence, reporting, and advanced analytics initiatives.
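The logical-to-physical data modeling responsibility above can be illustrated with a minimal star schema: one fact table keyed to a dimension table. This sketch uses SQLite purely for portability; the table and column names are assumptions for illustration, not part of the posting.

```python
# Illustrative physical star-schema sketch (fact + dimension) using SQLite.
# Table and column names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_product (
    product_id   INTEGER PRIMARY KEY,
    product_name TEXT NOT NULL
);
CREATE TABLE fact_sales (
    sale_id    INTEGER PRIMARY KEY,
    product_id INTEGER NOT NULL REFERENCES dim_product(product_id),
    sale_date  TEXT NOT NULL,
    amount     REAL NOT NULL
);
""")
conn.execute("INSERT INTO dim_product VALUES (1, 'Widget')")
conn.execute("INSERT INTO fact_sales VALUES (100, 1, '2024-01-15', 49.99)")

# A typical BI-style rollup over the star schema.
total = conn.execute("""
    SELECT p.product_name, SUM(f.amount)
    FROM fact_sales f JOIN dim_product p USING (product_id)
    GROUP BY p.product_name
""").fetchone()
print(total)  # ('Widget', 49.99)
```

In a warehouse such as Snowflake or Redshift the same conceptual model would be scaled out with surrogate keys, partitioning, and distribution strategies.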
Required Qualifications
- 10+ years of experience in data architecture, data engineering, or enterprise data management.
- Strong experience with data modeling (conceptual, logical, physical).
- Expertise with data warehouse and data lake architectures.
- Hands-on experience with ETL/ELT tools and data integration platforms.
- Experience with SQL and large-scale data platforms (Snowflake, Redshift, BigQuery, etc.).
- Experience working with cloud data platforms (AWS, Azure, or GCP).
- Strong understanding of data governance, data quality, and metadata management.
- Experience with big data technologies (Spark, Hadoop, Kafka) is a plus.
Preferred Skills
- Experience in Healthcare, Pharmaceutical, or Life Sciences domain.
- Knowledge of Master Data Management (MDM) and data catalog tools.
- Familiarity with BI tools such as Tableau, Power BI, or Looker.
- Strong communication skills to interact with business and technical teams.
Education
- Bachelor’s or Master’s degree in Computer Science, Information Systems, Data Science, or related field.
Cygnus Belief
We believe in our commitment to diversity & inclusion.
Equal Employment Opportunity Statement
Cygnus is an Equal Opportunity Employer. We ensure that no one should be discriminated against because of their differences, such as age, disability, ethnicity, gender, gender identity and expression, religion, or sexual orientation.
All our employment decisions are taken without looking into age, race, creed, color, religion, sex, nationality, disability status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status, or any other aspects of employment protected by federal, state, or local law. Applicants for employment in the US must have work authorization.
**Seeking a Data Quality Control Specialist in Las Vegas, NV**
Pay: $28-$35/hr
Schedule: Full time, onsite, 40 hrs a week
Las Vegas, NV | On-site
Seeking a detail-driven Data Quality Control Specialist to support the accuracy, integrity, and compliance of clinical trial documentation across multiple studies. This role is ideal for an experienced clinical research professional who thrives in data review, quality oversight, and audit readiness.
What You’ll Do:
- Coordinate and oversee clinical data across various phases of clinical trials, ensuring accuracy and completeness
- Perform quality control (QC) reviews of source documents, medical records, eSource, and essential trial documentation
- Identify and communicate data discrepancies, protocol deviations, and documentation issues to PIs and Study Coordinators
- Collaborate with clinical teams to ensure adherence to SOPs, Good Documentation Practices (GDP), and GCP guidelines
- Support audit and inspection readiness, including internal QC efforts and inspection prep
- Monitor key data quality KPIs and assist in driving continuous quality improvement initiatives
- Partner cross-functionally to uphold data integrity, regulatory compliance, and site quality standards
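The QC-review duties above amount to systematic checks over trial records. As a hedged illustration only, here is the shape of such an automated pass; the field names ("subject_id", "visit_date", "consent_signed") are assumptions, not the site's actual eSource schema.

```python
# Hypothetical sketch of an automated QC pass over clinical trial records.
# Field names are illustrative assumptions, not a real eSource schema.
REQUIRED_FIELDS = ("subject_id", "visit_date", "consent_signed")

def qc_review(records):
    """Return a list of (record index, issue) discrepancies for follow-up."""
    issues = []
    for i, rec in enumerate(records):
        for field in REQUIRED_FIELDS:
            if not rec.get(field):
                issues.append((i, f"missing {field}"))
        if rec.get("consent_signed") == "no":
            issues.append((i, "consent not documented"))
    return issues

records = [
    {"subject_id": "S-001", "visit_date": "2024-03-01", "consent_signed": "yes"},
    {"subject_id": "S-002", "visit_date": "", "consent_signed": "no"},
]
print(qc_review(records))
# [(1, 'missing visit_date'), (1, 'consent not documented')]
```

Real QC goes far beyond completeness checks (source-to-CRF verification, GCP adherence), but flagged discrepancies like these are what get escalated to PIs and Study Coordinators.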
What We’re Looking For:
- Bachelor’s degree in Clinical Research, Health Sciences, or related field (or equivalent experience)
- 3+ years of experience in clinical research, data management, QA/QC, or a related role
- Strong understanding of GCP, GDP, and regulatory requirements
- Experience reviewing clinical research documentation (source, CRFs/eCRFs, medical records)
- Familiarity with eSource platforms (CRIO strongly preferred)
- Detail-oriented, organized, and process-driven with strong communication skills
- Comfortable collaborating with coordinators, investigators, and cross-functional teams
Nice to Have:
- Site-level clinical research experience (CRC, Senior CRC, Data or Regulatory focused roles)
- Audit or inspection preparation experience
- Passion for data integrity and clinical trial quality
Title: Data Entry Clerk
Client: Global leader in Technology/ Electronics
Duration: 6-month contract with a high chance of extension or conversion to permanent.
Location: Englewood Cliffs, NJ (Onsite)
Pay: $16-18/hr W2 + Benefits/PTO
Top Skills / Experience
- Required Education/Experience: High school diploma or GED with 5 years related experience or bachelor’s degree with 1 year of experience.
- Basic MS office (Excel, Word, and PowerPoint)
- Clear verbal and written communication
Key Responsibilities:
Access Management
- Create and manage access points for third-party servicers in the Global Service Portal and STG technician portal.
- Restore, reset, and activate user logins for the CE service network.
- Collaborate with newly authorized service providers to ensure their onboarding needs are met.
Financial And Administrative Support
- Review and submit accounting approvals and invoices for Field Service Operations functions.
- Verify budget accuracy and ensure proper system data entry.
- Review and maintain accurate documentation.
Field Service Assessments
- Support scheduling and track progress for annual Field Service Network Assessments.
- Submit assessment results data to the CS Portal.
- Review and analyze the annual Field Service Assessment survey.
Additional Projects
- Assist the Admin team with ad-hoc projects as needed.
Qualifications:
- High school diploma or GED with 5 years related experience or bachelor’s degree with 1 year of experience
- Strong organizational and multitasking skills.
- Detail oriented in data entry and system management.
- Excellent communication and interpersonal skills.
Preferred Skills
- Familiarity with service portals or similar systems.
- Basic knowledge of Microsoft Excel, Word, and PowerPoint.
- Basic understanding of financial processes and budgeting/invoicing.
- Extreme attention to detail
- Bachelor's degree preferred, but not required.
Location: Atlanta, Georgia
Full/Part Time: Full-Time
Regular/Temporary: Regular
About Us
Georgia Tech is a top-ranked public research university situated in the heart of Atlanta, a diverse and vibrant city with numerous economic and cultural strengths. The Institute serves more than 45,000 students through top-ranked undergraduate, graduate, and executive programs in engineering, computing, science, business, design, and liberal arts. Georgia Tech's faculty attracted more than $1.4 billion in research awards this past year in fields ranging from biomedical technology to artificial intelligence, energy, sustainability, semiconductors, neuroscience, and national security. Georgia Tech ranks among the nation's top 20 universities for research and development spending and No. 1 among institutions without a medical school.
Georgia Tech's Mission and Values
Georgia Tech's mission is to develop leaders who advance technology and improve the human condition. The Institute has nine key values that are foundational to everything we do:
1. Students are our top priority.
2. We strive for excellence.
3. We thrive on diversity.
4. We celebrate collaboration.
5. We champion innovation.
6. We safeguard freedom of inquiry and expression.
7. We nurture the wellbeing of our community.
8. We act ethically.
9. We are responsible stewards.
Over the next decade, Georgia Tech will become an example of inclusive innovation, a leading technological research university of unmatched scale, relentlessly committed to serving the public good; breaking new ground in addressing the biggest local, national, and global challenges and opportunities of our time; making technology broadly accessible; and developing exceptional, principled leaders from all backgrounds ready to produce novel ideas and create solutions with real human impact.
Job Summary
The Manager of Data is responsible for overseeing the collection, management, and analysis of institutional data to support decision-making and strategic planning. This role involves leading a team of data analysts and ensuring data integrity, security, and compliance with relevant regulations. Additionally, the manager collaborates with various departments to develop data governance policies and implement effective data management practices that enhance the institution's ability to leverage data for improved outcomes.
Responsibilities
Job Duty 1 -
Oversee the development and implementation of data management strategies to ensure the accurate collection, storage, and retrieval of institutional data.
Job Duty 2 -
Lead a team of data analysts in conducting data analysis and reporting to support institutional decision-making and strategic initiatives.
Job Duty 3 -
Establish and enforce data governance policies to ensure data quality, integrity, and compliance with relevant regulations and standards.
Job Duty 4 -
Monitor data management systems and tools, ensuring they are maintained, updated, and aligned with best practices in data security and privacy.
Job Duty 5 -
Provide training and support to staff on data management practices, tools, and analytical techniques to foster a data-driven culture within the institution.
Job Duty 6 -
Conduct regular audits of data processes and systems to identify areas for improvement and implement corrective actions as needed.
Job Duty 7 -
Prepare and present comprehensive reports on data trends, analysis findings, and management initiatives to senior leadership and relevant stakeholders.
Job Duty 8 -
Stay informed about emerging data management technologies and methodologies to continually enhance the institution's data management capabilities.
Job Duty 9 -
Collaborate with academic and administrative departments to identify data needs and develop solutions that enhance data accessibility and usability.
Job Duty 10 -
Perform other duties as assigned.
Required Qualifications
Educational Requirements
Bachelor's degree in a related discipline, or equivalent related experience.
Required Experience
5+ years of relevant experience; 3+ years of supervisory experience.
Preferred Qualifications
Preferred Educational Qualifications
- Master's degree in Computer Science, Information Technology, Information Systems, Data Science, Business Administration, or a related discipline, or equivalent related experience.
- Certified Data Management Professional certification.
- Experience designing, implementing and operating Security Information Management solutions such as SIMS or ThreatSwitch.
- Advanced knowledge of SQL, database design and data modeling expertise.
- Experience in managing and securing enterprise security database systems containing sensitive and regulated data.
- Experience in cross-departmental collaboration during security investigations, assessments, and compliance reviews.
USG Core Values
The University System of Georgia comprises our 25 institutions of higher education and learning as well as the System Office. Our USG Statement of Core Values is Integrity, Excellence, Accountability, and Respect. These values serve as the foundation for all that we do as an organization, and each USG community member is responsible for demonstrating and upholding these standards. More details on the USG Statement of Core Values and Code of Conduct are available in USG Board Policy 8.2.18.1.2 and can be found on-line at policymanual/section8/C224/#p8.2.18_personnel_conduct.
Additionally, USG supports Freedom of Expression as stated in Board Policy 6.5 Freedom of Expression and Academic Freedom found on-line at policymanual/section6/C2653.
Equal Employment Opportunity
The Georgia Institute of Technology (Georgia Tech) is an Equal Employment Opportunity Employer. The Institute is committed to maintaining a fair and respectful environment for all. To that end, and in accordance with federal and state law, Board of Regents policy, and Institute policy, Georgia Tech provides equal opportunity to all faculty, staff, students, and all other members of the Georgia Tech community, including applicants for admission and/or employment, contractors, volunteers, and participants in institutional programs, activities, or services. Georgia Tech complies with all applicable laws and regulations governing equal opportunity in the workplace and in educational activities.
Equal opportunity and decisions based on merit are fundamental values of the University System of Georgia ("USG") and Georgia Tech. Georgia Tech prohibits discrimination, including discriminatory harassment, on the basis of an individual's race, ethnicity, ancestry, color, religion, sex (including pregnancy), national origin, age, disability, genetics, or veteran status in its programs, activities, employment, and admissions. Further, Georgia Tech prohibits citizenship status, immigration status, and national origin discrimination in hiring, firing, and recruitment, except where such restrictions are required in order to comply with law, regulation, executive order, or Attorney General directive, or where they are required by Federal, State, or local government contract.
Other Information
This is a supervisory position. This position does not have any financial responsibilities. This position will have some driving. This role is considered a position of trust. This position does not require a purchasing card (P-Card). This position will have some traveling. This position does not require security clearance or the ability to obtain one. This position is located in Atlanta, GA. The salary range depends on the candidate's experience and skills and ranges from $109,136 - $159,284. You must be a US citizen to be considered for this role.
Other Information
The Georgia Tech Research Institute (GTRI) is the nonprofit, applied research division of the Georgia Institute of Technology (Georgia Tech). This position is in the Research Security Department (RS) of GTRI.
Background Check
Successful candidate must be able to pass a background check. Please visit employment/pre-employment-screening
Job Title – Lead Data Engineer
Please note this role is not able to offer visa transfer or sponsorship now or in the future
About the role
As a Lead Data Engineer, you will make an impact by designing, building, and operating scalable, cloud‑native data platforms supporting batch and streaming use cases, with strong focus on governance, performance, and reliability. You will be a valued member of the Data Engineering team and work collaboratively with cross‑functional engineering, cloud, and architecture stakeholders.
In this role, you will:
- Design, build, and operate scalable cloud‑native data platforms supporting batch and streaming workloads with strong governance, performance, and reliability.
- Develop and operate data systems on AWS, Azure, and GCP, designing cloud‑native, scalable, and cost‑efficient data solutions.
- Build modern data architectures including data lakes, data lakehouses, and data hubs, with strong understanding of ingestion patterns, data governance, data modeling, observability, and platform best practices.
- Develop data ingestion and collection pipelines using Kafka and AWS Glue; work with modern storage formats such as Apache Iceberg and Parquet.
- Design and develop real‑time streaming pipelines using Kafka, Flink, or similar streaming frameworks, with understanding of event‑driven architectures and low‑latency data processing.
- Perform data transformation and modeling using SQL‑based frameworks and orchestration tools such as dbt, AWS Glue, and Airflow, including Slowly Changing Dimensions (SCD) and schema evolution.
- Use Apache Spark extensively for large‑scale data transformations across batch and streaming workloads.
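The Slowly Changing Dimension (SCD) work named above has a simple core: when a tracked attribute changes, close out the current dimension row and insert a new current version (Type 2). As a hedged sketch only, here is that logic using SQLite in place of a warehouse; the schema and names are assumptions for illustration.

```python
# Minimal SCD Type 2 sketch: expire the current row, insert the new version.
# Uses SQLite for portability; schema and names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE dim_customer (
    customer_id INTEGER,
    city        TEXT,
    valid_from  TEXT,
    valid_to    TEXT,     -- NULL marks the open-ended current row
    is_current  INTEGER
)""")
conn.execute("INSERT INTO dim_customer VALUES (1, 'Austin', '2023-01-01', NULL, 1)")

def apply_scd2(conn, customer_id, new_city, change_date):
    """Close out the current row and insert a new current version."""
    conn.execute(
        "UPDATE dim_customer SET valid_to = ?, is_current = 0 "
        "WHERE customer_id = ? AND is_current = 1",
        (change_date, customer_id))
    conn.execute(
        "INSERT INTO dim_customer VALUES (?, ?, ?, NULL, 1)",
        (customer_id, new_city, change_date))

apply_scd2(conn, 1, "Atlanta", "2024-06-01")
rows = conn.execute(
    "SELECT city, is_current FROM dim_customer ORDER BY valid_from").fetchall()
print(rows)  # [('Austin', 0), ('Atlanta', 1)]
```

Tools named in the posting (dbt snapshots, AWS Glue, Airflow-orchestrated merges) implement this same expire-and-insert pattern declaratively at warehouse scale.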
Work model
We believe hybrid work is the way forward as we strive to provide flexibility wherever possible. Based on this role’s business requirements, this is a hybrid position requiring 4 days a week in a client or Cognizant office in Atlanta, GA. Regardless of your working arrangement, we are here to support a healthy work-life balance through our various wellbeing programs.
The working arrangements for this role are accurate as of the date of posting. This may change based on the project you’re engaged in, as well as business and client requirements. Rest assured; we will always be clear about role expectations.
What you need to have to be considered
- Hands‑on experience developing and operating data systems on AWS, Azure, and GCP.
- Proven ability to design cloud‑native, scalable, and cost‑efficient data solutions.
- Experience building data lakes, data lakehouses, and data hubs with strong understanding of ingestion patterns, governance, modeling, observability, and platform best practices.
- Expertise in data ingestion and collection using Kafka and AWS Glue, with experience in Apache Iceberg and Parquet.
- Strong experience designing and developing real‑time streaming pipelines using Kafka, Flink, or similar streaming frameworks.
- Deep expertise in data transformation and modeling using SQL‑based frameworks and orchestration tools including dbt, AWS Glue, and Airflow, with knowledge of SCD and schema evolution.
- Extensive experience using Apache Spark for large‑scale batch and streaming data transformations.
These will help you stand out
- Experience with event‑driven architectures and low‑latency data processing.
- Strong understanding of schema evolution, SCD modeling, and modern data modeling concepts.
- Experience with Apache Iceberg, Parquet, and modern ingestion/storage patterns.
- Strong knowledge of observability, governance, and platform best practices.
- Ability to partner effectively with cloud, architecture, and engineering teams.
Salary and Other Compensation:
Applications will be accepted until March 17, 2025.
The annual salary for this position is between $81,000 - $135,000, depending on experience and other qualifications of the successful candidate.
This position is also eligible for Cognizant’s discretionary annual incentive program, based on performance and subject to the terms of Cognizant’s applicable plans.
Benefits: Cognizant offers the following benefits for this position, subject to applicable eligibility requirements:
- Medical/Dental/Vision/Life Insurance
- Paid holidays plus Paid Time Off
- 401(k) plan and contributions
- Long‑term/Short‑term Disability
- Paid Parental Leave
- Employee Stock Purchase Plan
Disclaimer: The salary, other compensation, and benefits information is accurate as of the date of this posting. Cognizant reserves the right to modify this information at any time, subject to applicable law.
OZ – Databricks Architect/ Senior Data Engineer
Note: Only applications from U.S. citizens or lawful permanent residents (Green Card Holders) will be considered.
We believe work should be innately rewarding and a team-building venture. Working with our teammates and clients should be an enjoyable journey where we can learn, grow as professionals, and achieve amazing results. Our core values revolve around this philosophy. We are relentlessly committed to helping our clients achieve their business goals, leapfrog the competition, and become leaders in their industry. What drives us forward is the culture of creativity combined with a disciplined approach, passion for learning & innovation, and a ‘can-do’ attitude!
What We're Looking For:
We are seeking a highly experienced Databricks professional with deep expertise in data engineering, distributed computing, and cloud-based data platforms. The ideal candidate is both an architect and a hands-on engineer who can design scalable data solutions while actively contributing to development, optimization, and deployment.
This role requires strong technical leadership, a deep understanding of modern data architectures, and the ability to implement best practices in DataOps, performance optimization, and data governance.
Experience with modern AI/GenAI-enabled data platforms and real-time data processing environments is highly desirable.
Position Overview:
The Databricks Senior Data Engineer will play a critical role in designing, implementing, and optimizing enterprise-scale data platforms using the Databricks Lakehouse architecture. This role combines architecture leadership with hands-on engineering, focusing on building scalable, secure, and high-performance data pipelines and platforms. The ideal candidate will establish coding standards, define data architecture frameworks such as the Medallion Architecture, and guide the end-to-end development lifecycle of modern data solutions.
This individual will collaborate with cross-functional stakeholders, including data engineers, BI developers, analysts, and business leaders, to deliver robust data platforms that enable advanced analytics, reporting, and AI-driven decision-making.
Key Responsibilities:
- Architecture & Design: Architect and design scalable, reliable data platforms and complex ETL/ELT and streaming workflows for the Databricks Lakehouse Platform (Delta Lake, Spark).
- Hands-On Development: Write, test, and optimize code in Python, PySpark, and SQL for data ingestion, transformation, and processing.
- DataOps & Automation: Implement CI/CD, monitoring, and automation (e.g., with Azure DevOps, DBX) for data pipelines.
- Stakeholder Collaboration: Work with BI developers, analysts, and business users to define requirements and deliver data-driven solutions.
- Performance Optimization: Tune delta tables, Spark jobs, and SQL queries for maximum efficiency and scalability.
- GenAI Applications Development: Experience developing GenAI applications is a strong plus.
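The Medallion Architecture mentioned in the overview layers data as bronze (raw), silver (cleansed), and gold (business aggregates). Real implementations use Spark and Delta Lake; this toy plain-Python sketch, with entirely made-up record names, only shows the layering idea.

```python
# Toy bronze -> silver -> gold (Medallion) layering sketch.
# Plain Python stand-in for Spark/Delta; all names are assumptions.
bronze = [  # raw ingested events, kept untouched
    {"order_id": "1", "amount": "10.00", "status": "complete"},
    {"order_id": "2", "amount": "oops",  "status": "complete"},
    {"order_id": "3", "amount": "5.50",  "status": "cancelled"},
]

def to_silver(rows):
    """Cleanse and type-cast; drop malformed records."""
    out = []
    for r in rows:
        try:
            out.append({"order_id": int(r["order_id"]),
                        "amount": float(r["amount"]),
                        "status": r["status"]})
        except ValueError:
            continue  # a real pipeline would quarantine, not silently drop
    return out

def to_gold(rows):
    """Business-level aggregate: total completed revenue."""
    return sum(r["amount"] for r in rows if r["status"] == "complete")

silver = to_silver(bronze)
print(to_gold(silver))  # 10.0  (order 2 is malformed, order 3 cancelled)
```

The design point is that each layer is reproducible from the one below it, which is what makes governance and reprocessing tractable on a Lakehouse.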
Requirements:
- 8+ years of experience in data engineering, with strong hands-on expertise in Databricks and Apache Spark.
- Proven experience designing and implementing scalable ETL/ELT pipelines in cloud environments.
- Strong programming skills in Python and SQL; experience with PySpark required.
- Hands-on experience with Databricks Lakehouse, Delta Lake, and distributed data processing.
- Experience working with cloud platforms such as Microsoft Azure, AWS, or GCP (Azure preferred).
- Experience with CI/CD pipelines, Git, and DevOps practices for data engineering.
- Strong understanding of data architecture, data modeling, and performance optimization.
- Experience working with cross-functional teams to deliver enterprise data solutions.
- Tackles complex data challenges, ensuring data quality and reliable delivery.
Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related field.
- Experience designing enterprise-scale data platforms and modern data architectures.
- Experience with data integration tools such as Azure Data Factory or similar platforms.
- Familiarity with cloud data warehouses such as Databricks, Snowflake, or Azure Fabric.
- Experience supporting analytics, reporting, or AI/ML workloads is highly desirable.
- Databricks, Azure, or cloud certifications are preferred.
- Strong problem-solving, communication, and technical leadership skills.
Technical Proficiency in:
- Databricks, Apache Spark, PySpark, Delta Lake
- Python, SQL, Scala (preferred)
- Cloud platforms: Azure (preferred), AWS, or GCP
- Azure Data Factory, Kafka, and modern data integration tools
- Data warehousing: Databricks, Snowflake, or Azure Fabric
- DevOps tools: Git, Azure DevOps, CI/CD pipelines
- Data architecture, ETL/ELT design, and performance optimization
What You’re Looking For:
Join a fast-growing organization that thrives on innovation and collaboration. You’ll work alongside talented, motivated colleagues in a global environment, helping clients solve their most critical business challenges. At OZ, your contributions matter – you’ll have the chance to be a key player in our growth and success. If you’re driven, bold, and eager to push boundaries, we invite you to join a company where you can truly make a difference.
About Us:
OZ is a 28-year-old global technology consulting, services, and solutions leader specializing in creating business-focused solutions for our clients by leveraging disruptive digital technologies and innovation.
OZ is committed to creating a continuum between work and life by allowing people to work remotely. We offer competitive compensation and a comprehensive benefits package. You’ll enjoy our work style within an incredible culture. We’ll give you the tools you need to succeed so you can grow and develop with us and become part of a team that lives by its core values.
Junior Data Engineer
Location: East Windsor, New Jersey
E-Verified | Visa Sponsorship Available
About Us:
BeaconFire, based in Central NJ, is a fast-growing company specializing in Software Development, Web Development, and Business Intelligence. We're looking for self-motivated and strong communicators to join our team as a Junior Data Engineer!
If you're passionate about data and eager to learn, this is your opportunity to grow in a collaborative and innovative environment.
Qualifications We’re Looking For:
- Passion for data and a strong desire to learn and grow.
- Master’s Degree in Computer Science, Information Technology, Data Analytics, Data Science, or a related field.
- Intermediate Python skills (Experience with NumPy, Pandas, etc. is a plus!)
- Experience with relational databases like SQL Server, Oracle, or MySQL.
- Strong written and verbal communication skills.
- Ability to work independently and collaboratively within a team.
Your Responsibilities:
- Collaborate with analytics teams to deliver reliable, scalable data solutions.
- Design and implement ETL/ELT processes to meet business data demands.
- Perform data extraction, manipulation, and production from database tables.
- Build utilities, user-defined functions, and frameworks to optimize data flows.
- Create automated unit tests and participate in integration testing.
- Troubleshoot and resolve operational and performance-related issues.
- Work with architecture and engineering teams to implement high-quality solutions and follow best practices.
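The pairing of ETL work with automated unit tests described above can be sketched briefly; the transform and its field names are illustrative assumptions, not BeaconFire's actual codebase.

```python
# Hedged sketch: a small ETL transform plus the automated unit test that
# guards it. Function and field names are hypothetical.
def transform(rows):
    """Normalize names and keep only active records."""
    return [
        {"name": r["name"].strip().title(), "active": True}
        for r in rows if r.get("active")
    ]

def test_transform_filters_and_normalizes():
    rows = [{"name": "  ada lovelace ", "active": True},
            {"name": "ghost", "active": False}]
    assert transform(rows) == [{"name": "Ada Lovelace", "active": True}]

test_transform_filters_and_normalizes()
print("all tests passed")
```

In a team setting these tests would run under a framework like pytest in CI, so pipeline changes are checked before they reach production data.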
Why Join BeaconFire?
- E-Verified employer
- Work Visa Sponsorship Available
- Career growth in data engineering and BI
- Supportive and collaborative work culture
- Exposure to real-world, enterprise-level projects
Ready to launch your career in Data Engineering?
Apply now and let’s build something amazing together!
Data Analytics Internship
Los Angeles, CA, USA (Hybrid role)
Part-Time, $17.87/hr, Mid-April 2026 to Mid-August 2026.
DailyLook, a subsidiary of Victoria’s Secret & Co. (NYSE: VSCO) since being acquired in December 2022, is seeking a Data Analytics Intern. This internship offers the opportunity to work across 2 key teams at DailyLook: Demand Planning and Data Growth. The intern will be at the core of the business, leveraging data and analytics to support strategic initiatives and help drive data-informed improvements across operations, inventory planning, and growth initiatives. This is a great chance to gain hands-on experience working with real business data while contributing to impactful decisions!
Qualifications for the Position
- A degree (or a junior, senior, or graduate student pursuing a degree) in data science, statistics, computer science, economics (quantitative track), applied analytics, mathematics, or business analytics.
- GPA 3.3+ preferred
- Coursework or experience in statistical analysis, data analytics, or machine learning.
- Experience with database systems, SQL, and Python.
- Familiarity with BI tools such as Looker or Tableau.
- Exemplary interpersonal communication skills, both verbal and written
- Highly motivated, collaborative
- Experience in a startup or the retail industry is an extra plus!
- An intellectually curious team player with a no-compromises approach to work quality, attention to detail, organization, and the ability to manage multiple priorities and projects in a fast-paced environment
- Self-motivated, detail-oriented, hands-on go-getter who can build processes and suggest overhauls where needed, take initiative, work independently and proactively, multi-task, and remain flexible with changing priorities
- “I’ll find a way!” mindset where you can leverage your autonomy within your role to think outside the box.
- Demonstrated ability to communicate and collaborate effectively across global teams by adapting to diverse cultural norms, respecting time zone differences, and leveraging digital collaboration tools to maintain alignment and productivity
- Skilled in building trust and fostering inclusive communication styles that support clarity, empathy, and shared goals in international work environments
- Ability and willingness to work on-site at our office in Downtown LA at least once a week.
Responsibilities
- Reports to the Planning Team.
- Maintain and migrate existing demand planning and inventory reports to the current BI tool.
- Build and update weekly and monthly dashboards covering product performance, box performance, and styling metrics
- Assist in developing demand planning assumptions and forecasting frameworks (style demand, size curves, inventory flow)
- Build basic planning tools in Google Sheets / BI tools to support size curve projections and product lifecycle tracking
- Conduct assortment and scenario analysis to support predictive demand planning
- Analyze inventory health, sell-through trends, and replenishment opportunities
- Identify optimization opportunities within the current planning workflow and BI infrastructure
- Document demand planning processes and support improvements to internal planning tools.
- Support the team in analyzing marketing and subscription performance, including acquisition, traffic/funnel, CRM, engagement, etc.
- Support migration and setup of analytics tools and platforms to improve tracking of user behavior and marketing performance
- Assist with dashboard updates, reporting, and basic data checks to ensure data quality
- Help monitor A/B tests and experiments for CRM campaigns and website initiatives
- Conduct ad-hoc analyses to provide insights and recommendations for the team
- Document data workflows & the new data infrastructure.
Compensation & Benefits
The pay for this position is $17.87 an hour. This is a non-exempt, part-time position.
DailyLook is proud to provide equal opportunity to all employees and qualified applicants without regard to race, color, religion, national origin or citizenship, age, sex, marital status, ancestry, legally protected physical or mental disability, veteran status, gender identity, sexual orientation or any other basis protected under applicable law.
By applying for this position, you authorize DailyLook to check all references listed on your application and/or resume.
This is a great opportunity for anyone with construction, fabrication, or trade experience (or just a strong work ethic and willingness to learn) to launch a stable career with growth potential.
What You’ll Do as a Field Technician – Entry-Level (Construction / Data Centers)
As a Field Technician, you’ll:
- Install, assemble, and modify containment systems that improve cooling efficiency in data centers
- Perform specialized cleaning and decontamination of equipment and areas to keep facilities running at peak performance
- Assist with deliveries, organize materials, and maintain tools and equipment
- Follow direction from supervisors to complete tasks safely, accurately, and on time
- Identify and report potential risks, always prioritizing safety
- Represent the company professionally with clients and team members
What We’re Looking For in a Field Technician – Entry-Level (Construction / Data Centers)
- 0–2 years of construction, technician, or trade experience (data center experience is a plus)
- U.S. citizen (by birth or naturalization), 18+ years old
- Reliable transportation to job sites
- Able to pass a background check and drug screen
- Comfortable working at heights, around noise, and in temperatures ranging from 0°F to over 100°F
- Physically able to lift 50 lbs and stay on your feet most of the day
- Positive attitude, strong work ethic, and good communication skills
Schedule & Pay for Field Technician – Entry-Level (Construction / Data Centers)
- Monday–Friday, 6:00 AM to 3:00 PM (overtime available)
- Full-time, on-site role
- Competitive hourly pay with overtime opportunities
- Full training, safety gear (PPE), and on-the-job mentorship provided
Why Join Us?
- Be part of the growing data center industry
- Gain hands-on technical skills with full training
- Work with a supportive team in a professional environment
- Build a career with opportunities for advancement
Apply today and start your career in data center construction with a growing technology company!