Colab Python Download Jobs in USA
1,610 positions found — Page 90
A family-owned business since 1924, A. Duie Pyle provides a range of integrated transportation and distribution solutions throughout the Northeast and Mid-Atlantic. Supported by our vast network of Less-Than-Truckload (LTL) service centers, warehouse facilities, and dedicated locations, we have the ability to offer flexible and seamless integrated solutions tailored to our customers' needs.
Simply put, when it comes to integrated supply chain solutions, Pyle People Deliver. Our promise is to provide outstanding service, which remains our first and foremost mission.
Position Summary:
We are seeking a Senior Data Architect to lead the design, modernization, and operational excellence of our enterprise data platform. This role blends hands-on data architecture with cloud and on-prem platform engineering, reliability, and DevOps practices.
The ideal candidate brings deep experience designing scalable data solutions, modernizing database environments, implementing automation and CI/CD pipelines, and driving platform reliability across mission-critical systems. This role requires both strategic architectural thinking and hands-on implementation across cloud services, relational databases, automation tooling, and enterprise system integrations.
The responsibilities of the position include, but are not limited to:
Data Platform Architecture & Modernization
- Architecting scalable, secure, and highly available data platforms across cloud/hybrid environments
- Designing and overseeing database modernization initiatives (e.g., on-prem SQL Server to managed services such as RDS or equivalent)
- Defining data storage strategies across relational and operational systems
- Establishing standards for availability, resilience, performance optimization, and cost efficiency
- Producing architectural diagrams and documentation to guide implementation and long-term platform strategy
Data Ingestion & Integration
- Designing and implementing scalable ingestion pipelines across enterprise systems
- Developing ingestion and transformation logic using SQL and Python
- Supporting integration patterns across APIs, batch systems, and event-driven architectures
- Designing monitoring and alerting mechanisms to ensure ingestion reliability and observability
- Enabling data availability for analytics and operational reporting without compromising system performance
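In miniature, the ingestion-with-monitoring work described above might look like the following sketch (SQLite stands in for the production database; the table, columns, and quality rule are invented for illustration):

```python
import sqlite3
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ingest")

def ingest_rows(conn, rows):
    """Validate and load source rows into a staging table, logging rejects."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS stg_shipments (id INTEGER PRIMARY KEY, weight_lbs REAL)"
    )
    loaded, rejected = 0, 0
    for row in rows:
        # Basic quality gate: reject rows with missing or non-positive weights
        if row.get("weight_lbs") is None or row["weight_lbs"] <= 0:
            rejected += 1
            log.warning("rejected row %s", row)
            continue
        conn.execute(
            "INSERT OR REPLACE INTO stg_shipments (id, weight_lbs) VALUES (?, ?)",
            (row["id"], row["weight_lbs"]),
        )
        loaded += 1
    conn.commit()
    return loaded, rejected

conn = sqlite3.connect(":memory:")
stats = ingest_rows(conn, [
    {"id": 1, "weight_lbs": 410.0},
    {"id": 2, "weight_lbs": None},   # fails the quality gate
    {"id": 3, "weight_lbs": 95.5},
])
```

Returning load/reject counts is what makes the pipeline observable: a scheduler can alert when the reject ratio crosses a threshold.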
Cloud & Infrastructure Engineering
- Architecting and managing cloud-based data services
- Designing monitoring frameworks using tools such as CloudWatch, New Relic, or equivalent
- Optimizing cloud infrastructure costs while maintaining performance and reliability
- Supporting secure access patterns, identity management, and operational governance
DevOps & Platform Reliability
- Implementing CI/CD pipelines for data and database deployments (Azure DevOps or similar)
- Establishing version control and automated deployment standards for data environments
- Improving SDLC processes for database and data platform releases
- Ensuring high system availability (99.9%+ targets) and proactive incident management
- Supporting incident response processes and RCA for data-related systems and outages
Database Architecture & Performance Optimization
- Designing relational database schemas for scalability and performance
- Defining and implementing indexing, partitioning, and query optimization standards
- Implementing backup, disaster recovery, business continuity and high availability strategies
- Guiding database tuning and performance monitoring practices
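As a toy illustration of the indexing and query-optimization standards above, the sketch below shows an index turning a full table scan into an index search (SQLite is used for portability; the same principle applies to SQL Server or RDS, and the table is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(i, i % 100, float(i)) for i in range(1000)],
)

def plan(sql):
    # EXPLAIN QUERY PLAN rows carry the human-readable detail in column 3
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM orders WHERE customer_id = 42"
before = plan(query)   # full scan: every row is examined
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = plan(query)    # index search: only matching rows are touched
```

Capturing plans like this in CI is one cheap way to enforce a "no unexpected scans on hot paths" standard.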
Governance & Technical Leadership
- Establishing data architecture standards and naming conventions
- Driving platform documentation and operational best practices
- Partnering with application, infrastructure, and analytics teams
- Serving as technical authority across data centric initiatives
- Mentoring engineers through design reviews and architecture governance
To be qualified for this position, you must possess the following:
- 8+ years of experience in data architecture, cloud engineering, or platform-focused roles
- Strong experience designing, implementing, and maintaining data solutions across on-premises and cloud platforms (Snowflake/Databricks/MS Fabric, and SQL Server)
- Advanced SQL proficiency and strong Python coding skills
- Proven experience modernizing enterprise database environments
- Experience implementing CI/CD pipelines for data platforms, preferably Azure DevOps
- Strong understanding of database performance tuning and availability design
- Experience designing systems for high availability and operational reliability
The following skills are preferred, but not required:
- Experience with CDC, streaming, or event-driven ingestion architectures is a plus
- Familiarity with enterprise CRM (Salesforce, home-grown) or billing platforms (Great Plains, Dynamics) and with integrating these as data sources into a cloud DWH
- Experience with Elasticsearch or similar search/indexing platforms
- Knowledge of cost optimization in data cloud environments – across storage, usage and data accessibility
- Experience working in highly regulated or operationally critical industries, influencing data governance principles and industry best practices
For a full job description associated with this posting, please contact A. Duie Pyle’s Human Resources department. This job posting is intended solely for external advertising purposes and does not represent a comprehensive list of all job-related duties and qualifications.
We are an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity or expression, pregnancy, age, national origin, disability status, genetic information, protected veteran status, or any other characteristic protected by law.
Job Description:
This position requires office presence of a minimum of 5 days per week and is only located in the location(s) posted. No relocation is offered.
AT&T will not hire any applicants for this position who require employer sponsorship now or in the future.
Join AT&T and reimagine the communications and technologies that connect the world. The Chief Information Office is responsible for advancing information technology performance and delivering solutions with a focus on maximizing ROI, increasing efficiency and enhancing the experience of end users. Guided by experienced leaders, Corporate Systems seamlessly integrates with advanced Technology and Operations to drive our enterprise forward. Our Systems Reliability and Software Delivery teams are unwavering in their commitment to excellence, ensuring every solution is robust and efficient. When you step into a career with AT&T, you won't just imagine the future; you'll create it.
What you’ll do:
Design and deliver enterprise-grade solutions for Release and Change Management within IT Service Management. Drive automation, data-driven and AI-enabled integration, and process optimization to enhance resilience, accelerate delivery, and ensure service reliability. Focus on transitioning from fragmented, application-specific practices to mature enterprise models by embedding AI-powered capabilities such as predictive risk analysis, automated change validation, and intelligent scheduling across RM/CM processes.
- Architecture & Design: Build scalable, high-availability systems for standardized RM/CM processes. Integrate automation, AI, and data analytics for efficiency and consistency.
- Agile & DevOps Enablement: Embed SAFe, Agile, DevOps, and CI/CD principles. Optimize pipelines for rapid, secure deployments.
- Data and AI-Enabled Release Management Strategy: Transition from fragmented, app-driven practices to a mature enterprise model by embedding data and AI-powered capabilities across solution areas—predictive analytics, automated risk assessment, and intelligent change validation.
- Requirements & Integration: Translate business needs into technical specifications. Ensure seamless integration with ITSM platforms, cloud services, and infrastructure.
- Sprint-Based Delivery: Use iterative sprint approach to develop standardized policies and tools, pilot quickly, and scale across the enterprise.
- Technical Leadership: Guide cross-functional teams, foster innovation, and align solutions with business objectives.
- Performance & Reliability: Analyze system performance, troubleshoot proactively, and minimize incidents. Benchmark metrics and build dashboards to track progress.
- Governance & Compliance: Standardize RM/CM processes with secure, auditable, and repeatable patterns.
- Continuous Improvement: Use post-release analytics to refine processes and adopt emerging trends.
- Documentation & Training: Maintain clear documentation and deliver training on AI-enabled RM/CM strategy to enterprise partners.
- Mentoring & Culture: Mentor team members and drive a culture of continuous improvement
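The predictive risk analysis and automated change validation mentioned above could begin as a simple rule-based score before any ML is involved; the factors, weights, and thresholds below are purely illustrative, not an actual AT&T model:

```python
# Hypothetical risk factors and weights; a production model would be
# trained on historical change and incident data.
RISK_WEIGHTS = {
    "touches_production_db": 0.4,
    "no_rollback_plan": 0.3,
    "off_hours_window": 0.1,
    "multi_team_dependency": 0.2,
}

def change_risk_score(change):
    """Score a change request 0..1 from its boolean risk factors."""
    return round(sum(w for k, w in RISK_WEIGHTS.items() if change.get(k)), 2)

def risk_band(score):
    """Map a score onto the gating action applied in the pipeline."""
    if score >= 0.6:
        return "high"    # require CAB review plus automated validation gates
    if score >= 0.3:
        return "medium"  # standard pipeline checks
    return "low"         # eligible for auto-approval

risky = change_risk_score({"touches_production_db": True, "no_rollback_plan": True})
```

A scheme like this gives low-risk changes a fast, auditable auto-approval path while concentrating human review on the risky tail.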
What you’ll need:
- Expert-level knowledge of the SDLC for SAFe Agile and DevOps environments; best-in-class Release and Change Management frameworks and IT Service Management.
- Data and AI Skillset: Advanced data analytics, KPI metrics, and prompt engineering expertise; guiding development of agentic AI workflows and Gen AI use cases; Power BI, Python
- Governance and Communication: Establishing process framework, Implementing solutions and tools, Building standardized playbooks, and leading governance boards for ATS-wide implementation
What you’ll bring:
Required
- 7+ years in Systems Engineering, ITSM, RM/CM
- Expertise in SAFe, Agile, DevOps, CI/CD, Data Analytics, building Gen AI use cases
- Experience with AI technologies, Python, SQL, data analytics, Power BI and ITSM tools (e.g., ServiceNow)
- Modern Enterprise Release Management/Change Management and ITSM
- Advanced expertise in Excel, PowerPoint, Power BI
Preferred
- BS/BA in Computer Science
- Preferred Tools (Modern Release Management processes for Agile and DevOps environments)
- Jira Align, JSM, Jira Cloud, Git for enterprise RM/CM
- Relevant certifications (SAFe, Agile, DevOps, AI/ML)
Our Principal System Engineer earns between $141,300 and $237,400 USD annually, not to mention all the other amazing rewards that working at AT&T offers. Individual starting salary within this range may depend on geography, experience, expertise, and education/training.
Joining our team comes with amazing perks and benefits:
- Medical/Dental/Vision coverage
- 401(k) plan
- Tuition reimbursement program
- Paid Time Off and Holidays (based on date of hire, at least 23 days of vacation each year and 9 company-designated holidays)
- Paid Parental Leave
- Paid Caregiver Leave
- Additional sick leave beyond what state and local law require may be available but is unprotected
- Adoption Reimbursement
- Disability Benefits (short term and long term)
- Life and Accidental Death Insurance
- Supplemental benefit programs: critical illness/accident hospital indemnity/group legal
- Employee Assistance Programs (EAP)
- Extensive employee wellness programs
- Employee discounts up to 50% off on eligible AT&T mobility plans and accessories, AT&T internet (and fiber where available), and AT&T phone service
About Wondercide
Wondercide was founded 15 years ago by Stephanie Boone when her dog Luna became ill from what her vet suspected were conventional flea and tick treatments and monthly pest control services. Stephanie knew there had to be a better way and set out on a mission to invent a plant-powered alternative. Today, Wondercide offers a comprehensive line of plant-powered pest control solutions for your pets, yard, home, and family with 50,000+ 5-star reviews on Amazon.
Wondercide, based in Austin, TX, is a privately held, high-growth, and digitally native consumer packaged goods company that has an omnichannel presence and is expanding into specialty brick-and-mortar and beyond. The company is a vertically integrated organization where sales, marketing, creative content, customer service, innovation, procurement, mixing, production, fulfillment, and more are all done in-house. This allows the team to control their own destiny from a multi-year roadmap down to quality of execution via operational excellence.
We are a close-knit, highly collaborative team of ‘doers’ who operate in an entrepreneurial and KPI-driven environment. Grit, Action, Curiosity, Ownership, and Insight are the five operating values we embody in our day-to-day work.
At Wondercide, we’re driven by a Fierce Love® for families. We wake up every day inspired by our mission to protect families of every kind, everywhere, from pests with safe, effective pest control solutions. We work with Mother Nature to deliver plant-powered products that promote well-being. We do this so families can live long, happy, and healthy lives together. We believe in doing whatever it takes to protect those we love…and that when you know better, you can do better. Our promise to customers: they’ll never have to go it alone. We’re in this together, and we’ll be there to support each step of the way.
About the Role
This role reports to the Chief Growth Officer and plays a critical part in commercial data visibility, performance reporting, and insight generation across Sales, Finance, Operations, Brand, and Growth.
You will build scalable reporting systems, automate workflows, and create dashboards that provide clear performance visibility across B2B, D2C, retail, and POS channels.
This is a hands-on analytics role focused on delivering accurate reporting, identifying key drivers, and supporting better business decisions.
What You’ll Do
Analytics & Reporting
- Write and maintain complex SQL queries and scripts to extract, transform, and analyze data from multiple systems.
- Build scalable, automated reporting models using advanced Excel/Google Sheets formulas.
- Develop and maintain dashboards in Hex (or similar BI tools), leveraging some SQL and Python as necessary.
- Create executive-ready visualizations and performance reporting frameworks.
- Ensure data accuracy, consistency, and integrity across systems.
S&OP and Forecast Visibility
- Maintain forecasting dashboards and reporting frameworks that support sales, financial, and operational planning.
- Analyze forecast variance, accuracy, and bias to surface key drivers and risks.
- Support S&OP by ensuring inputs are consolidated, validated, and clearly visualized.
- Analyze performance across omnichannel business (Retail, B2B, D2C, POS).
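The forecast variance, accuracy, and bias analysis described above reduces to a few standard formulas; a minimal sketch with invented numbers:

```python
def forecast_metrics(actuals, forecasts):
    """Return signed bias and MAPE for paired actual/forecast series."""
    assert len(actuals) == len(forecasts) and actuals
    n = len(actuals)
    # Bias: average signed error; positive means the forecast runs high
    bias = sum(f - a for a, f in zip(actuals, forecasts)) / n
    # MAPE: mean absolute percentage error relative to actuals
    mape = sum(abs(f - a) / a for a, f in zip(actuals, forecasts)) / n
    return {"bias": round(bias, 2), "mape_pct": round(100 * mape, 1)}

# Three periods of unit sales vs. the plan (illustrative values)
m = forecast_metrics(actuals=[100, 200, 400], forecasts=[110, 190, 440])
```

Tracking bias separately from MAPE matters for S&OP: a forecast can look accurate on average while consistently over- or under-calling demand.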
Syndicated Data & Category Support
- Support analysis of syndicated data sources (e.g., Nielsen or similar) to provide category performance visibility and competitive benchmarking.
- Maintain category scorecards, including distribution, velocity, pricing, and promotional metrics.
- Surface key trends and competitive movements to support retail sales strategy.
- Provide analytical support for buyer meetings and line reviews.
Data Systems & Process Optimization
- Partner with IT and data engineering resources to improve data pipelines and system integrations.
- Define data acquisition and integration logic to ensure scalability and reliability.
- Improve workflows through automation, documentation, and streamlined reporting structures.
- Document models, queries, and reporting logic for long-term scalability.
Cross-Functional Leadership
- Partner with Sales, Finance, Operations, Brand, and Product teams to answer strategic business questions.
- Translate complex data into clear insights that influence decision-making.
- Provide data-driven recommendations with clearly stated assumptions and confidence levels.
- Manage multiple priorities in a fast-paced, growth-focused environment.
What We’re Looking For
- 3+ years of experience in data analysis, business analytics, or a related field.
- Strong SQL skills and experience writing complex queries.
- Advanced Excel or Google Sheets skills.
- Experience with BI/dashboarding tools (Hex, Tableau, Power BI, Looker, or similar).
- Experience supporting commercial forecasting processes (variance analysis, accuracy tracking).
- Experience working with syndicated retail data (Nielsen, IRI, SPINS, etc.).
- Python experience (NumPy, Pandas) preferred.
- Strong analytical mindset with the ability to identify meaningful performance drivers.
- Excellent communication skills and ability to influence cross-functional stakeholders.
- Ability to work independently while collaborating effectively across teams.
Preferred Experience
- Omnichannel CPG, retail, manufacturing, or consumer goods environment.
- Experience supporting S&OP or demand planning processes.
- Pet industry or pest control experience is a plus.
Why This Role Matters
This role directly impacts how Wondercide plans, prioritizes, and grows. Your work will shape forecasting accuracy, inventory strategy, channel performance, and leadership decision-making.
If you’re excited about building scalable analytics systems that drive real business outcomes, we’d love to meet you.
What’s in it for you?
- We mentioned changing the world, right? Need more? You got it!
- Work with a dream team that will support you and help you succeed
- Good pay and benefits, including low healthcare premiums, 100% of vision and dental covered, paid volunteer time off, and extended maternity and paternity leave
- Bonus program that is based on business performance
- Performance-based review process, giving you direct influence on your performance/merit increase
- Company-wide Thankful Thursdays and Thrilling Thursdays.
- Fun swag and free Wondercide gear/products
This position is based in Round Rock, TX, at the new Wondercide headquarters, with a hybrid option available. This is not a remote position.
Here at Wondercide, we celebrate, support, and thrive on diversity and inclusion. We’re a proud Equal Opportunity/Affirmative Action Employer. If you’re interested in joining the Wondercide Pack, apply today!
IDR is seeking a Clinical Data Engineer to join one of our top clients for a remote opportunity. This role involves developing and maintaining scalable data pipelines within healthcare environments, focusing on enabling advanced analytics and machine learning applications for healthcare providers. The company specializes in healthcare data solutions, leveraging innovative technologies to improve clinical and operational outcomes.
Position Overview for the Clinical Data Engineer:
- Build and optimize scalable, near real-time data pipelines tailored for healthcare data systems
- Collaborate with data scientists, clinicians, and stakeholders to deliver high-performance, compliant data solutions
- Work extensively with Epic healthcare data, HL7, and FHIR interoperability standards
- Develop and maintain data pipelines using SQL, Python, and Snowflake with a focus on data accuracy and robustness
- Support advanced analytics, predictive modeling, and machine learning use cases in a healthcare setting
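As a sketch of the structured-data side of such pipelines, the snippet below flattens a trimmed FHIR R4 Observation into a warehouse-style row (the field paths follow the FHIR specification; the values and output schema are invented):

```python
import json

# A trimmed FHIR R4 Observation payload; LOINC 8867-4 is heart rate
raw = json.dumps({
    "resourceType": "Observation",
    "code": {"coding": [{"system": "http://loinc.org", "code": "8867-4"}]},
    "subject": {"reference": "Patient/123"},
    "valueQuantity": {"value": 72, "unit": "beats/minute"},
})

def flatten_observation(payload):
    """Flatten a FHIR Observation into a row suitable for a warehouse table."""
    obs = json.loads(payload)
    if obs.get("resourceType") != "Observation":
        raise ValueError("expected an Observation resource")
    return {
        "patient_id": obs["subject"]["reference"].split("/")[-1],
        "loinc_code": obs["code"]["coding"][0]["code"],
        "value": obs["valueQuantity"]["value"],
        "unit": obs["valueQuantity"]["unit"],
    }

row = flatten_observation(raw)
```

In a Snowflake pipeline the same flattening is often pushed into SQL over a VARIANT column, but validating resource types in Python first keeps malformed HL7/FHIR payloads out of downstream models.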
Requirements for the Clinical Data Engineer:
- 5+ years' experience within healthcare data engineering or healthcare analytics environments
- 5+ years' experience in SQL & Python
- 2+ years' experience developing Snowflake stored procedures and optimizing data transformations
- Experience working with both structured & unstructured data (JSON, PDFs, clinical event streams)
- Experience implementing robust error handling and monitoring within API-driven data pipelines
What's in it for you?
- Competitive compensation package
- Full Benefits; Medical, Vision, Dental, and more!
- Opportunity to get in with an industry leading organization
Why IDR?
- 25+ Years of Proven Industry Experience in 4 major markets
- Employee Stock Ownership Program
- Dedicated Engagement Manager who is committed to you and your success
- Medical, Dental, Vision, and Life Insurance
- ClearlyRated's Best of Staffing® Client and Talent Award winner 12 years in a row
Project description
We are seeking a hands-on Lab & Execution Engineer to support ADAS subsystem stability in an automotive development environment. This role is responsible for lab bring-up, HIL maintenance, test execution, troubleshooting, and ensuring bench stability aligned with production-level software releases.
The ideal candidate has strong experience in automotive test environments, hardware troubleshooting, vehicle networking, and automation support.
Responsibilities
Lab Bring-up & Commissioning
Set up, wire, and configure automotive test benches (HIL/SIL) from scratch
Integrate ECUs and manage power distribution and grounding
Perform bench validation prior to execution activities
Support new hardware integration and lab expansions
HIL Maintenance & Troubleshooting
Maintain and troubleshoot ADAS HIL environments (dSPACE SCALEXIO/AutoBox or NI PXI)
Configure and validate I/O channels
Support real-time debugging and signal validation
Identify root cause of hardware vs software failures
Vehicle Networking & Diagnostics
Monitor and debug CAN, CAN-FD, LIN, and Automotive Ethernet networks
Use Vector CANoe/CANalyzer to monitor bus health and analyze traffic
Support ECU flashing, reprogramming, and diagnostics activities
Test Execution & Failure Triage
Execute automated test suites in lab environments
Perform initial root cause analysis of test failures
Differentiate between:
Hardware connectivity issues
Firmware/software mismatches
Automation script defects
Document findings and escalate appropriately
Lab Automation & Efficiency
Develop and maintain Python utility scripts
Support and enhance existing automation frameworks
Improve lab efficiency through scripting and process optimization
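A typical Python utility of the kind described above might extract frames for one CAN ID from bench log files; the log line format below is invented for illustration (a real lab would more likely work with Vector tooling or the python-can library):

```python
import re

# Hypothetical bench log line: "<timestamp> <channel> <CAN id hex> <dlc> <data bytes>"
LINE = re.compile(
    r"^(?P<ts>\d+\.\d+)\s+(?P<ch>\d+)\s+(?P<id>[0-9A-Fa-f]+)\s+(?P<dlc>\d)\s+(?P<data>(?:[0-9A-Fa-f]{2}\s?)+)$"
)

def parse_frames(lines, want_id):
    """Extract frames with a given CAN ID from raw bench log lines."""
    frames = []
    for line in lines:
        m = LINE.match(line.strip())
        if not m:
            continue  # skip malformed lines rather than aborting a long run
        if int(m["id"], 16) == want_id:
            frames.append({
                "ts": float(m["ts"]),
                "data": bytes.fromhex(m["data"].replace(" ", "")),
            })
    return frames

log = [
    "0.100 1 1A0 4 DE AD BE EF",
    "0.110 1 2B0 2 01 02",
    "garbage line",
    "0.120 1 1A0 4 00 11 22 33",
]
hits = parse_frames(log, want_id=0x1A0)
```

Utilities like this speed up failure triage by letting an engineer diff the bus traffic around a failing test step instead of scrolling raw logs.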
Environment & Configuration Management
Manage firmware/software versions across multiple test benches
Ensure parity between lab configuration and production vehicle software
Prevent configuration drift and maintain documentation
Instrumentation & Electrical Troubleshooting
Use digital multimeters, oscilloscopes, and programmable power supplies
Diagnose power, grounding, and signal integrity issues
Support low-level electrical debugging on ECUs and test hardware
Skills
Must have
3+ years of experience managing automotive lab and testing environments
ADAS subsystem test experience
Minimum 3 years of experience in software/hardware testing with focus on execution and hardware troubleshooting
Hands-on experience with HIL platforms (dSPACE or NI preferred)
Strong understanding of automotive communication protocols (CAN, CAN-FD, LIN, Ethernet)
Proficiency in Python scripting
Experience working with automotive ECUs and embedded systems
Job Title: Ignition SCADA Developer / Support Engineer
Department: OT / Industrial Automation
Detroit, MI
Full Time
Onsite
Role Overview
We are seeking an Ignition SCADA Developer / Support Engineer to join our Industrial Automation team, supporting real-time industrial dashboards and operator interfaces built with Ignition by Inductive Automation.
The role spans HMI/SCADA development, database integration, and documentation, and requires familiarity with PLC systems and OT networking. It covers both hands-on technical development and post-deployment support.
Job Description
1. Dashboard & HMI Development
- Design and build high-performance, scalable real-time dashboards using Ignition's Perspective modules.
- Create responsive web-based HMIs for Desktop.
- Utilize templates, tag bindings, scripting, and UDTs for modular and reusable design.
- Develop alarm dashboards, KPI visualizations, production monitoring screens, and operator control interfaces.
2. SCADA Configuration & Deployment
- Set up and configure Ignition Gateways (single and redundant systems), projects, and modules.
- Manage deployment pipelines for Ignition projects in development, staging, and production environments.
- Collaborate with IT/OT to configure OPC-UA, MQTT, and tag providers across distributed systems.
- Implement project versioning, backups, and rollback strategies using Git or Ignition’s project tools.
3. Database & Data Modeling
- Design, query, and optimize SQL databases (PostgreSQL, MSSQL, MySQL) for process data and reports.
- Build dynamic datasets from historical tag data, transactional systems, and ERP/MES interfaces.
4. Scripting & Logic
- Write Python (Jython) scripts for dynamic behavior and data processing.
- Develop Gateway Event and Tag Change Scripts.
- Use Ignition Expression Language and Python for custom logic, bindings, and calculations.
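As an illustration of the tag scripting above, the helper below performs a linear raw-to-engineering-units scaling; it is written as plain Python so it can be unit-tested outside the Gateway (Ignition scripts run in Jython, and the tag path and scaling constants here are invented):

```python
def scale_raw(raw, raw_min=0, raw_max=32767, eng_min=0.0, eng_max=150.0):
    """Linearly scale a raw PLC integer to engineering units, clamped to range."""
    raw = max(raw_min, min(raw_max, raw))
    span = (raw - raw_min) / float(raw_max - raw_min)
    return round(eng_min + span * (eng_max - eng_min), 2)

# Inside Ignition, a Tag Change Script could apply it roughly like this
# (sketch only; "[default]Line1/FlowRate_Eng" is a hypothetical tag path):
#
# def valueChanged(tag, tagPath, previousValue, currentValue, initialChange, missedEvents):
#     eng = scale_raw(currentValue.value)
#     system.tag.writeBlocking(["[default]Line1/FlowRate_Eng"], [eng])

flow = scale_raw(16384)  # mid-scale raw count
```

Keeping logic like this in testable functions, with only a thin Gateway-event wrapper, makes the scripted behavior much easier to maintain and roll back.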
5. Document & Report Generation
- Design and generate project Documentation for HMI and SCADA
- Schedule and deliver reports via email, file export, or shared drives.
- Create compliance reports (batch, downtime, traceability, OEE) integrated with MES or third-party systems.
6. System Support & Maintenance
- Monitor SCADA performance, logs, tag usage, and database performance.
- Troubleshoot and resolve runtime errors, deployment issues, and integration bugs.
- Support Ignition platform.
- Create user guides, SOPs, and technical documentation for all developed solutions
Technical Skills
- Strong expertise in:
- OPC-UA, MQTT, and Modbus protocols
- PLC Integration (Rockwell, Siemens, or equivalent)
- Ignition Gateway configuration and deployment
- Solid understanding of:
- OT network topologies and SCADA architecture
- HMI/SCADA security best practices
- Data historian and time-series data management
Tools & Platforms
- Ignition by Inductive Automation (Core modules, Perspective, Reporting)
- Database Systems: PostgreSQL, SQL Server, MySQL
- Version Control: Git, Bitbucket, GitHub
Job Title : GIS Specialist (Hybrid/Exp in utilities/oil/gas/power Industry preferred)
Job Description :
Seeking a GIS Analyst who will develop an understanding of the current data state, workflows, and processes, and develop solutions for integrations, transformations, and deliverables.
This will include GIS data analysis, data mining, technical support, and database maintenance to meet internal and external customer requirements.
- Requires demonstrated ability to solve complex problems and recommend the best track for data development and processing.
- Project work will involve process improvement, quality control, data creation from spatial and tabular sources, conversion, migration, and maintenance.
- Bachelor’s degree in geography (GIS), engineering, computer science, or related field and 3+ years experience in industry standard GIS.
- A GIS certificate and 3 years of related work experience may substitute for a degree in a related discipline
- 1+ years in a utility or pipeline GIS
Job Responsibilities:
- Proficiency with linear referencing techniques and concepts is highly recommended
- Knowledge of the Utility and Pipeline Data model (UPDM) is highly recommended.
- Technical project tasks, including database design, advanced GIS analysis and modeling
- Performs data mining activities to meet customer requirements/specifications
- Provides specialized queries, maps and reports to meet customer requirements/specifications
- Performs application testing and documentation of defects
- Interfaces with users; documents requested/needed changes
- Identifies new GIS technologies/processes/applications to improve inter-/intra- departmental functions
- Creates and maintains existing automated processes using Model Builder/Python scripting or other tools
- Processes, prepares, and converts data for entry into GIS from a variety of data formats
- Analyzes current business processes and recommends best-practice solutions
- Performs QA/QC on versioned data pushed to the production environment
Knowledge, Skills & Abilities
- Esri ArcGIS Pro – advanced proficiency
- Esri ArcGIS 10.2x – advanced proficiency
- Linear referencing - advanced proficiency
- MS Office suite (Access, Excel, Word, PowerPoint, Visio) – advanced proficiency
- FME by Safe Software – intermediate proficiency
- Esri ArcGIS Enterprise – intermediate proficiency
- Utility Network – intermediate proficiency
- Model Builder – intermediate proficiency
- Python – intermediate proficiency
- SQL RDBMS – intermediate proficiency
- AutoCAD/CADD – basic proficiency
- Visual Basic/VBA – basic proficiency
- SharePoint – basic proficiency
- Excellent verbal and written communication skills
- Excellent geoprocessing and spatial analysis skills
- Strong requirements review, analytical, and problem solving skills
- Application testing script development and performance of testing
- Ability to quickly learn and apply new technologies
- Ability to function independently and as a team member
- Ability to handle multiple assignments and changing priorities
- Ability to work effectively with limited direct supervision
Travel: up to 5%
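The linear referencing concepts called out above boil down to locating positions by measure along a line; the pure-Python sketch below shows the core interpolation (production work would use ArcGIS Pro route and event tools, and the coordinates are invented):

```python
import math

def locate_measure(polyline, measure):
    """Return the (x, y) point at a given distance (measure) along a polyline.
    Measures beyond the line's length clamp to the end point."""
    if measure <= 0:
        return polyline[0]
    walked = 0.0
    for (x1, y1), (x2, y2) in zip(polyline, polyline[1:]):
        seg = math.hypot(x2 - x1, y2 - y1)
        if walked + seg >= measure:
            # Interpolate within this segment
            t = (measure - walked) / seg
            return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
        walked += seg
    return polyline[-1]

# A simple L-shaped pipeline centerline in ground units
line = [(0.0, 0.0), (10.0, 0.0), (10.0, 5.0)]
midpoint = locate_measure(line, 12.0)
```

This measure-to-coordinate mapping is the primitive behind pipeline stationing and event placement in models like UPDM.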
Remote working/work at home options are available for this role.
Job Title: Sr. Data Architect
Skills: AWS, Snowflake, python, Data warehousing
Experience: 12+ years
Location: Raleigh, NC/ Greensboro, NC (Hybrid)
Duration: Fulltime
We at Coforge are hiring a Snowflake Certified Data Architect with the following skillset:
• Minimum 12 years of experience in Data Engineering and Architecture.
• Proven expertise in Snowflake data platform – architecture, performance tuning, and security.
• Strong hands-on experience with AWS services (S3, Lambda, Glue, Redshift, etc.).
• Proficiency in PySpark and Python for data processing and transformation.
• Experience with DBT or Coalesce for data transformation and orchestration.
• Deep understanding of Data Warehousing concepts, especially Data Vault modeling.
• Demonstrated experience in leading and managing offshore teams.
• Excellent communication and stakeholder management skills.
Preferred Qualifications:
• Snowflake certification(s) (e.g., SnowPro Core or Advanced Architect).
• Experience with CI/CD pipelines and DevOps practices in data engineering.
• Familiarity with data cataloguing and governance tools.
As a Data Science/Data Engineer Intern, you will work on cutting-edge analytical and data engineering projects that drive measurable business impact across pricing, underwriting, marketing, and claims.
This internship is ideal for a technically curious, motivated problem-solver who wants hands-on data science experience.
RESPONSIBILITIES
- Support the design, construction, and optimization of robust data pipelines to enable machine learning and analytical modeling.
- Contribute to the design and implementation of data and ML workflows using orchestration tools such as Dagster, Airflow, or similar frameworks.
- Help implement data quality checks, validation routines, and monitoring for automated data workflows.
- Assist in organizing and managing internal GitHub repositories to standardize ML project structures and best practices.
- Collaborate with data scientists and engineers to automate the ingestion, transformation, and delivery of data for model development.
- Contribute to initiatives migrating analytical processes into cloud-based data lake architectures and modern platforms such as AWS or Snowflake.
- Develop reusable and well-tested code to support analytical pipelines and internal tools using Python and SQL.
- Conduct data mining, cleansing, and preparation tasks to build high-quality analytical datasets.
- Participate in model development, including data profiling, model training, validation, and interpretation.
- Build and evaluate predictive models that enhance profitability through improved segmentation and estimation of insurance risk.
- Assist in studies evaluating new business models for customer segmentation, retention, and lifetime value.
- Collaborate with business leaders to translate insights into operational improvements and cost efficiencies.
QUALIFICATIONS
- Currently pursuing or recently completed a Master’s in Data Science, Computer Science, Statistics, Economics, or related field.
- Proficiency in Python (Pandas, NumPy, Scikit-learn, XGBoost, or PyTorch) and SQL.
- Understanding of data engineering concepts, ETL/ELT workflows, and machine learning deployment.
- Exposure to workflow orchestration tools (e.g., Airflow, Dagster, Prefect) and Git/GitHub for collaborative development.
- Familiarity with Docker, CI/CD pipelines, and infrastructure-as-code tools such as Terraform preferred.
- Knowledge of AWS cloud services such as S3, Lambda, EC2, or SageMaker a plus.
- Experience with common modeling techniques (e.g., GLM, tree-based models, Bayesian statistics, NLP, deep learning) through coursework or projects.
- Strong analytical, communication, and problem-solving skills.
- A self-starter mindset, with attention to detail and enthusiasm for learning new technologies.
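As a small illustration of the modeling techniques listed above, here is a tree-based classifier in scikit-learn. The data is entirely synthetic and generated for the example; it stands in for the kind of risk-segmentation dataset the role describes:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic data: two features drive a binary "high-risk" label, plus noise
X = rng.normal(size=(1000, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=1000) > 0).astype(int)

# Hold out a test set, fit a tree-based ensemble, and score it
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
acc = accuracy_score(y_test, model.predict(X_test))
print(f"hold-out accuracy: {acc:.2f}")
```

In practice the same pattern extends to the other techniques named (GLMs, gradient boosting such as XGBoost): swap the estimator, keep the train/validate split and metric.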
SALARY RANGE
The pay rate for this position is $35 per hour.
ABOUT THE COMPANY
The Plymouth Rock Company and its affiliated group of companies write and manage over $2 billion in personal and commercial auto and homeowner’s insurance throughout the Northeast and mid-Atlantic, where we have built an unparalleled reputation for service. We continuously invest in technology, our employees thrive in our empowering environment, and our customers are among the most loyal in the industry. The Plymouth Rock group of companies employs more than 1,900 people and is headquartered in Boston, Massachusetts. Plymouth Rock Assurance Corporation holds an A.M. Best rating of “A-/Excellent.”
Title: Quantitative Researcher
Location: Onsite in Redmond, WA, or Burlingame, CA
Duration: 12 months + potential to extend
Compensation: $50 - $70/hr
Work Requirements: US Citizen, GC Holders or Authorized to Work in the U.S.
ABOUT THIS FEATURED OPPORTUNITY
Our Audio team pioneers research and development at the intersection of sound, embedded systems, and human experience—enabling new ways for people to connect, communicate, and collaborate.
As a UX Researcher, you'll be responsible for uncovering the user insights and feedback that guide our early research and advanced development toward experiences that are useful, usable, and desirable. You will contribute to an ambitious new project focused on AI wearables—advancing the Superhuman Communication & Connection initiative. This role is ideal for UX researchers interested in speech communication, human-AI interaction, and real-world impact.
By joining our team, you will have the opportunity to work on breakthrough technologies that redefine how people connect and communicate, collaborate with world-class researchers and engineers in a fast-paced, mission-driven environment, and shape the future of AI wearables and superhuman audio experiences.
THE OPPORTUNITY FOR YOU
• Design & conduct research to guide the development of novel AI glasses user experiences, with a focus on speech communication and human-AI interaction, leveraging primarily quantitative UX research approaches and tools such as lab-based experimentation, statistical analysis, and surveys, supplemented by qualitative methods as needed
• Evaluate and iterate on audio and communication features for AI wearables by understanding how users perceive, interact with, and benefit from device-based experiences
• Translate the research insights into interpretable findings, measurable recommendations, and actionable insights for cross-functional partners and stakeholders
• Work closely with partners and stakeholders to influence development and integration of future AI wearables experiences and solutions
• Work closely with other UX researchers to help design research plans and specific studies
• Independently complete research, including data collection, data analysis, reporting, collaboration with teams to act on research results, and knowledge consolidation
KEY SUCCESS FACTORS
• MA/MS in Cognitive Psychology, Linguistics, Human Computer Interaction, Human Factors, Speech and Hearing Sciences, Neuroscience, or a related UX field
• 2+ years of experience conducting applied research on consumer products in industry, with an emphasis on both lab-based and remote quantitative methods
• Proficiency in descriptive statistics, inferential statistics, surveys, and experimental design
• Experience with qualitative research methods (e.g., interviews, diary studies)
• 1+ years of working in cross-functional teams
PREFERRED QUALIFICATIONS
• PhD in Cognitive Psychology, Linguistics, Human Computer Interaction, Human Factors, Speech and Hearing Sciences, Neuroscience, or a related UX field
• Experience synthesizing qualitative insights to inform product direction
• 2+ years of experience conducting attitudinal or behavioral research with wearable technology (AI glasses, headphones, smart watches, etc.)
• Advanced proficiency in statistical analysis and data visualization (e.g., R, Python)
• Strong oral and written communication skills with experience in tailoring messages to various stakeholders
Top 3 must-have HARD skills:
Strong foundation in mixed-methods research (leaning quantitative), especially experimental design and inferential statistics
Proficiency in statistical programming and data visualization (R, Python)
Prior experience conducting lab-based perceptual or behavioral studies in a speech communication or human-AI interaction field (e.g., linguistics, speech sciences, voice-based AI)
Good to have skills:
Experience working on speech communication in research or applied settings
Sufficient speech, AI, and tech awareness to understand our context without extensive hand-holding (at least one of: prior experience in speech input for voice AI systems, speech production research, speech communication research)
Prior experience in qualitative methods, including interviews, diary studies, and/or usability tests
About INSPYR Solutions
Technology is our focus and quality is our commitment. As a national expert in delivering flexible technology and talent solutions, we strategically align industry and technical expertise with our clients' business objectives and cultural needs. Our solutions are tailored to each client and include a wide variety of professional services, project, and talent solutions. By always striving for excellence and focusing on the human aspect of our business, we work seamlessly with our talent and clients to match the right solutions to the right opportunities. Learn more about us at .
INSPYR Solutions provides Equal Employment Opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, sex, national origin, age, disability, or genetics. In addition to federal law requirements, INSPYR Solutions complies with applicable state and local laws governing nondiscrimination in employment in every location in which the company has facilities.
Information collected and processed through your application with INSPYR Solutions (including any job applications you choose to submit) is subject to INSPYR Solutions’ Privacy Policy and INSPYR Solutions’ AI and Automated Employment Decision Tool Policy. By submitting an application, you are consenting to being contacted by INSPYR Solutions through phone, email, or text.