About Pinterest:
Millions of people around the world come to our platform to find creative ideas, dream about new possibilities and plan for memories that will last a lifetime. At Pinterest, we're on a mission to bring everyone the inspiration to create a life they love, and that starts with the people behind the product.
Discover a career where you ignite innovation for millions, transform passion into growth opportunities, celebrate each other's unique experiences and embrace the flexibility to do your best work. Creating a career you love? It's Possible.
At Pinterest, AI isn't just a feature, it's a powerful partner that augments our creativity and amplifies our impact, and we're looking for candidates who are excited to be a part of that. To get a complete picture of your experience and abilities, we'll explore your foundational skills and how you collaborate with AI.
Through our interview process, what matters most is that you can always explain your approach, showing us not just what you know, but how you think. You can read more about our AI interview philosophy and how we use AI in our recruiting process here.
About tvScientific
tvScientific is the first and only CTV advertising platform purpose-built for performance marketers. We leverage massive data and cutting-edge science to automate and optimize TV advertising to drive business outcomes. Our solution combines media buying, optimization, measurement, and attribution in one, efficient platform. Our platform is built by industry leaders with a long history in programmatic advertising, digital media, and ad verification who have now purpose-built a CTV performance platform advertisers can trust to grow their business.
We are seeking a Machine Learning Engineer to build out our simulation and AI capabilities. You'll design and implement systems that model the CTV advertising ecosystem - auction dynamics, bidding strategies, campaign outcomes, and counterfactual scenarios - and develop AI-driven tools that accelerate how we build, test, and deploy ML systems.
What you'll do:
- Design and build simulation environments that model CTV auction mechanics, inventory supply, and advertiser competition
- Develop counterfactual and what-if frameworks for evaluating bidding strategies, budget allocation, and pacing algorithms offline
- Build AI agents that explore strategy spaces, generate hypotheses, and automate experimentation within simulated environments
- Use LLMs and generative AI to accelerate internal ML workflows - synthetic data generation, code generation, automated analysis, and rapid prototyping
- Use simulation to de-risk ML model deployments - validate new bidding and optimization strategies before they touch live traffic
- Define the technical direction for simulation and AI infrastructure and mentor engineers on the team
What we're looking for:
- Strong production Python skills and experience building simulation or modeling systems
- Deep understanding of probabilistic modeling, stochastic processes, or agent-based simulation
- Hands-on experience with modern AI tools: LLMs, code generation, agentic workflows - and good judgment about when they help vs. when they don't
- Adtech experience: you understand auction theory, RTB mechanics, and the dynamics of programmatic advertising
- Ability to translate business questions ("what happens if we change our bid strategy?") into rigorous simulation frameworks
- Clear written communication: you'll be defining new technical directions and need to bring others along
- Ownership: you scope, design, and ship systems end-to-end with minimal direction
Nice-to-haves:
- Causal inference - uplift modeling, synthetic controls, difference-in-differences, or incrementality testing
- Experience with discrete event simulation, Monte Carlo methods, or digital twins
- Reinforcement learning - using simulated environments for policy learning and evaluation
- Experience building agentic AI systems or multi-agent simulations
- Big data experience with Scala and Spark
- Systems programming experience in Zig or similar (C, C++, Rust)
- MLOps experience - model deployment, monitoring, and pipeline orchestration on AWS
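The counterfactual evaluation described above can be sketched with a toy Monte Carlo auction simulator. Everything here (uniform competitor bids, the value range, the bid-shading factor) is an illustrative assumption, not tvScientific's actual model:

```python
import random

def simulate_auctions(bid_fn, n_rounds=10_000, n_competitors=5, seed=42):
    """Run a toy second-price auction and report win rate and average cost.

    bid_fn maps an estimated impression value to a bid; competitor bids
    are drawn uniformly as a stand-in for real inventory dynamics.
    """
    rng = random.Random(seed)
    wins, total_cost, total_value = 0, 0.0, 0.0
    for _ in range(n_rounds):
        value = rng.uniform(0.5, 2.0)          # advertiser's value for the impression
        our_bid = bid_fn(value)
        competitor_bids = [rng.uniform(0.2, 1.8) for _ in range(n_competitors)]
        highest_other = max(competitor_bids)
        if our_bid > highest_other:            # second-price: winner pays next-highest bid
            wins += 1
            total_cost += highest_other
            total_value += value
    return {"win_rate": wins / n_rounds,
            "avg_cost": total_cost / max(wins, 1),
            "surplus": total_value - total_cost}

# Counterfactual comparison: truthful bidding vs. shading bids by 20%,
# evaluated offline against the same simulated auction stream.
truthful = simulate_auctions(lambda v: v)
shaded = simulate_auctions(lambda v: 0.8 * v)
```

Because both strategies are replayed against the same seeded random stream, the comparison isolates the effect of the bid change, which is the core idea behind evaluating strategies offline before they touch live traffic.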
In-Office Requirement Statement:
- We recognize that the ideal environment for work is situational and may differ across departments. What this looks like day-to-day can vary based on the needs of each organization or role.
Relocation Statement:
- This position is not eligible for relocation assistance. Visit our PinFlex page to learn more about our working model.
#LI-SM4
#LI-REMOTE
At Pinterest we believe the workplace should be equitable, inclusive, and inspiring for every employee. In an effort to provide greater transparency, we are sharing the base salary range for this position. The position is also eligible for equity. Final salary is based on a number of factors including location, travel, relevant prior experience, or particular skills and expertise.
Information regarding the culture at Pinterest and benefits available for this position can be found here.
US based applicants only: $123,696—$254,667 USD
Our Commitment to Inclusion:
Pinterest is an equal opportunity employer and makes employment decisions on the basis of merit. We want to have the best qualified people in every job. All qualified applicants will receive consideration for employment without regard to race, color, ancestry, national origin, religion or religious creed, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, age, marital status, status as a protected veteran, physical or mental disability, medical condition, genetic information or characteristics (or those of a family member) or any other consideration made unlawful by applicable federal, state or local laws. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. If you require a medical or religious accommodation during the job application process, please complete this form for support.
Duration: 11 Months (Contract to hire)
Location: Columbia, SC
Onsite Requirements: Partially onsite 3 days per week (Tue, Wed, Thurs) and as needed.
Standard work hours: 8:00 AM - 5:00 PM
**Credit check will be required**
Job Summary:
Day to Day:
- A typical day will involve a mix of hands-on coding, architectural design, and research.
- The engineer will spend a significant portion of their time in Python, building and optimizing agentic AI systems using frameworks like LangChain.
- This includes integrating these agents with our backend services and deploying them using CI/CD pipelines into our cloud environment.
- They will also be responsible for researching and testing new agentic models and frameworks, monitoring agent behavior in production, and collaborating with data scientists and business stakeholders to refine requirements and ensure the ethical deployment of AI solutions.
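As a rough illustration of the agent loop such a role works on, here is a framework-free sketch. `plan` stands in for an LLM call and `calculator` is a hypothetical tool, so this shows the general shape of an agentic loop rather than LangChain's actual API:

```python
# Minimal, framework-free sketch of an agentic tool-use loop.
# In practice a library such as LangChain manages this; `plan` is a
# stand-in for an LLM planning call (hypothetical logic throughout).

def calculator(expression: str) -> str:
    """A toy tool: evaluate a simple arithmetic expression."""
    return str(eval(expression, {"__builtins__": {}}, {}))

TOOLS = {"calculator": calculator}

def plan(task: str, observations: list) -> dict:
    """Stand-in for an LLM planning step: decide the next action."""
    if not observations:
        return {"action": "calculator", "input": task}
    return {"action": "finish", "answer": observations[-1]}

def run_agent(task: str, max_steps: int = 5) -> str:
    """Loop: plan an action, call the chosen tool, feed the result back."""
    observations = []
    for _ in range(max_steps):
        step = plan(task, observations)
        if step["action"] == "finish":
            return step["answer"]
        tool = TOOLS[step["action"]]
        observations.append(tool(step["input"]))   # observation returns to the planner
    return observations[-1] if observations else ""
```

The reason-act-observe cycle shown here is what agent frameworks formalize; production systems add prompt templates, tool schemas, and guardrails around the same loop.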
Team: The team is an innovative, collaborative, and empowering environment. We are building the next generation of AI solutions for the enterprise in a fast-paced, project-oriented setting. This is a multi-platformed environment that values creativity, continuous learning, and a customer-focused mindset. The new engineer will play a crucial role in shaping our AI strategy and building foundational tools and accelerators that will drive innovation across the company.
Job Requirements:
**This is a new role to establish a core competency in agentic AI systems. This engineer will be pivotal in designing and deploying advanced AI agents and will build the foundational frameworks for future AI use cases across the organization.**
Required Experience:
Required Software and Tools (Hands on experience required):
- Python
- JavaScript/TypeScript
- AI Tools and Libraries (e.g. LangGraph, LangChain, Deep Agents, Claude Skills, etc.)
- AI Models (e.g. Claude, OpenAI, etc.)
- AI Concepts (e.g. Prompt Engineering, RAG, Agentic AI, etc.)
- Distributed SDLC/DevOps (e.g. GitHub, pipelines, VS Code, testing frameworks, etc.)
- Platforms (Container Platforms, Cloud Platforms, Document Databases, AWS)
- API Design
Python & AI/ML Libraries:
- Deep hands-on experience in Python for AI/ML development.
- Generative AI Development: Proven experience developing Gen AI or AI/ML solutions, from use case conceptualization to production deployment.
- Infrastructure & DevOps: Strong understanding of cloud environments (AWS preferred), LLM hosting, CI/CD pipelines, Docker, and Kubernetes.
- Agentic AI Concepts: Knowledge of agentic/autonomous systems (e.g., reasoning, planning, tool use).
Minimum Required Education: Bachelor's degree in Computer Science, Information Technology, or another job-related field; or 4 years of relevant experience; or an Associate's degree plus 2 years of relevant experience.
Minimum Required Work Experience: 6 years of application development, systems testing, or other job-related experience.
Required Technologies: 3-6 years of hands-on experience in Artificial Intelligence, Machine Learning, or related fields.
Nice to have/Preferred skills:
- Proficiency in Python development and FastAPI/Flask frameworks, along with SQL.
- Familiarity with agentic AI frameworks and concepts such as LangChain, LangGraph, AutoGen, Model Context Protocol (MCP), Chain of Thought prompting, knowledge stores, and embeddings.
- Experience developing autonomous agents using cloud-based AI services.
- Experience with prompt engineering techniques and model fine-tuning.
- Strong understanding of reinforcement learning, planning algorithms, and multi-agent systems.
- Experience working across cloud platforms (AWS, Azure, GCP) and deploying AI solutions at scale.
Position title:
Bellwether Postdoctoral Scholar
Salary range:
The UC postdoc salary scales set the minimum pay, determined by experience level at appointment. See the current salary scale table for this position. The current minimum salary range for this position is $69,073-$74,281. Salaries above the minimum may be offered when necessary to meet competitive conditions. A reasonable estimate for this position is $10,000 higher than the posted minimum, dependent on experience level at appointment.
Percent time:
100%
Anticipated start:
As soon as July 2026. Exact start date contingent on completion of degree and is also negotiable.
Review timeline:
Review will begin in March and finish in April.
Position duration:
2 years.
Application Window
Open date: February 13, 2026
Next review date: Friday, Mar 20, 2026 at 11:59pm (Pacific Time)
Apply by this date to ensure full consideration by the committee.
Final date: Friday, Mar 20, 2026 at 11:59pm (Pacific Time)
Applications will continue to be accepted until this date.
Position description
The School of Information at the University of California, Berkeley invites applications for up to three new full-time Bellwether Postdoctoral Scholars to start as soon as July 2026. The exact start date is negotiable. These positions are available for two years, and are non-renewable. J-1 visa sponsorship is available for this position.
These postdoctoral positions are for academics in the early stages of their career who demonstrate exceptional potential as a scholar and researcher. Applicants should either have completed a doctoral degree, or be able to convincingly demonstrate that they will complete the degree before they intend to start this postdoctoral position (e.g. by documenting a scheduled viva/final defense).
We are seeking applicants with active research plans in any of the following areas:
BPS 1) We seek applicants pursuing a research agenda at the intersection of computer science and applied economics, with interdisciplinary training and interests in both topics. The successful applicant will work on projects that address pressing policy issues, using a mix of quantitative and computational methods (e.g., econometrics, data science, AI/ML). Examples of active projects include, but are not limited to, developing theory and methods for robust and equitable decision making in social settings; the use of machine learning and digital data to guide resource allocation and related policies in low-income countries; and creating and validating new techniques for monitoring living standards and well-being in high-stakes policy environments. This position will be supervised by Joshua Blumenstock.
BPS 2) We seek applicants with interdisciplinary training and interests pursuing a research agenda at the intersection of information science, computational social science, and public-interest research. The successful applicant will work on projects that examine how sociotechnical information systems shape high-stakes decision-making across digital and institutional contexts to address pressing issues in information access, trustworthiness, and credibility, using a mix of computational, quantitative, and qualitative methods (e.g., natural language processing, digital trace data, surveys, and interviews). Examples of active projects include, but are not limited to, studying online communities as informal information infrastructures; analyzing how search engines and digital platforms structure the visibility and credibility of information; developing methods to monitor and contextualize misinformation and uncertainty in sensitive or politicized domains; and advancing conceptual frameworks for understanding information ecosystems as structural determinants of equity, autonomy, and well-being, including but not limited to health-related contexts. This position will be supervised by Coye Cheshire.
BPS 3) We seek applicants with active research plans in climate and sustainability informatics, leveraging information and/or information tools to empower individuals, communities, and organizations in tackling the challenges of climate change and biodiversity conservation. We welcome applicants with strong backgrounds in one or more of the following areas: remote sensing, ML, NLP, HCI, participatory design, design research, biosensory computing. The successful applicant will become a core member of the IceBerk Lab, and be supervised by John Chuang, with possible co-supervision by another IceBerk faculty member where appropriate.
BPS 4) The Cultural Analytics group seeks postdoc applicants to conduct data-driven research across archival heritage and born-digital media. Current projects include, but are not limited to: (i) the study of narrative, belief and resonance, where the goal is to understand how narrative is mutually constitutive of beliefs, and how narrative resonates in and across communities of belief; (ii) extracting narrative elements from literary works, with a strong focus on complex corpora such as the Icelandic sagas to understand composition and social modeling in late medieval fiction; (iii) further developing the approach of archetyptonics along with the SOCKS project at University of Vermont's Complex Systems Center; and (iv) refining a search engine for popular dance, where the search term is the dancer's sequence of poses, here focusing on Kpop dance. Ideal candidates bridge Computational Humanities/Social Science Computing (ML, Networks, and/or Computer Vision) with a qualitative theoretical background. You will be supervised by Tim Tangherlini (with potential I-School co-supervision), and be associated with the Berkeley Institute for Data Science (BIDS) and the AI Futures Lab. We welcome applicants with active research plans ready to contribute to a vibrant, interdisciplinary environment.
BPS 5) The goal of this postdoctoral position is to contribute to the development of an empirically-backed theoretical understanding of how people understand and make sense of the combination of graphic and textual information. We seek a scholar with expertise in some combination of information visualization, the psychology of reading and/or diagram interpretation, and cognitive science or neuroscience more generally to investigate human conception at the intersection of language and information visualization. Expertise in conducting and analyzing eye gaze is a requirement of the position. Expertise or interest in multimodal information, both cognitively and in large vision and language models is a plus. The mentor for this position is Professor Marti Hearst.
BPS 6) Seeking postdoc applicants with a passion for and commitment to equity-driven co-design with local marginalized Indigenous communities. A successful applicant will work on projects that weave together Indigenous knowledge, experiences, and values that address public-facing outcomes, such as informal science education programs and exhibits at local museums and cultural centers. The applicant will help develop theory and methods for world-building equity that integrate marginalized communities' cultural and social struggles. We are seeking applicants with the following attributes: strong background in co-design with marginalized communities, design research, qualitative methods, and experience building mixed reality systems. Knowledge of Indigenous research methods is a plus. This position will be supervised by Kimiko Ryokai.
The Bellwether Postdoctoral Scholar program is designed to allow exceptionally promising young researchers the time to develop their own research while collaborating with leading established faculty. It is designed to accelerate careers, and to maximize the ability of Bellwether Postdoctoral Scholars to build independent research trajectories. To accomplish this, a portion (30-40%) of each post-doc's time will be reserved for their own independent research and publication efforts, including publishing results from their dissertation.
Additionally, all Bellwether Postdoctoral Scholars will work with a mentor or mentors on research projects in the areas listed above (60-70%), all of which are either already active or will be at the time of the start of the post-doc. All have significant publication opportunities planned.
These postdoctoral positions are research-focused and do not include teaching. However, all post-docs will be given opportunities for guest lecturing and will be expected to give public talks about their research. Post-docs will also contribute to planning and hosting public talks for others, and will be expected to be active participants in I School academic events such as research talks.
Each postdoctoral scholar will have access to up to $5,000 annually for research expenses and travel to professional conferences and research opportunities. A laptop computer will also be provided for the duration of the post-doc.
For all of the above positions, we only seek candidates with excellent research and leadership abilities and a commitment to contributing to the UC Berkeley I School and the field of information more broadly while accelerating their career.
The Berkeley School of Information (I School) is a global bellwether in a world awash in information and data, boldly leading the way with education and fundamental research that translates into new knowledge, practices, policies, and solutions. I School scholars and practitioners thrive in the intersections where people, organizations, and societies interact with information, technology, and data. Faculty comprise a mix of disciplines, including information, computer science, economics, political science, law, sociology, design, media studies, and more.
The I School offers three professional master's degrees and an academic doctoral degree. The MIMS program trains students for careers as information professionals and emphasizes small classes and project-based learning. The MIDS program trains data scientists to manage and analyze the coming onslaught of big data, in a unique high-touch online degree. The MICS program prepares cybersecurity leaders with the technical skills and contextual knowledge necessary to develop solutions for complex cybersecurity challenges. The Ph.D. program equips scholars to develop solutions and shape policies that influence how people seek, use, and share information. Our cohorts and classes are small enough to support intense student engagement; and we encourage collaboration among the students, faculty, and staff in the I School community. Our alumni have careers in diverse fields, such as data science, user experience design and research, product management, engineering, information policy, cybersecurity, and more.
UC Berkeley has an excellent benefits package as well as a number of policies and programs to support employees as they balance work and family, if applicable.
Qualifications
Basic qualifications (required at time of application)
PhD (or equivalent international degree), or enrolled in a PhD or equivalent international degree-granting program at the time of application.
Additional qualifications (required at time of start)
PhD (or equivalent international degree) required by start date.
No more than three years of postdoctoral research experience.
Application Requirements
Document requirements
Curriculum Vitae - Your most recently updated C.V.
Cover Letter - 1-2 pages. Required elements of your cover letter include:
which position(s) you are applying for (e.g. BPS1 or BPS5);
when you would be available to start your postdoctoral work;
a clear articulation of your fit with the UC Berkeley I School, addressing how your expertise overlaps with, enhances, or expands upon the research area indicated for your position(s) of interest. Please include names of any mentors that you would like to work with beyond the project supervisor.
Statement of Research - 2-3 pages. Includes a description of the focus of your planned independent research and publications during the post-doc, what resources you would need to do that work, and an explanation of how the research builds on and goes beyond work you have already done.
Writing Sample - Preferably a pre- or post-print of a first-authored publication.
Reference requirements
- 3-5 required (contact information only)
We may contact your references at any stage in the hiring process unless you request otherwise. Please only provide contact information and do not request letters be sent at the time of application. Letters will be solicited for all finalists.
Apply link:
JPF05222
Help contact:
About UC Berkeley
UC Berkeley is committed to diversity, equity, inclusion, and belonging in our public mission of research, teaching, and service, consistent with UC Regents Policy 4400 and University of California Academic Personnel policy (APM 210 1-d). These values are embedded in our Principles of Community, which reflect our passion for critical inquiry, debate, discovery and innovation, and our deep commitment to contributing to a better world. Every member of the UC Berkeley community has a role in sustaining a safe, caring and humane environment in which these values can thrive.
The University of California, Berkeley is an Equal Opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, age, or protected veteran status.
For more information, please refer to the University of California's Affirmative Action and Nondiscrimination in Employment Policy and the University of California's Anti-Discrimination Policy.
In searches when letters of reference are required all letters will be treated as confidential per University of California policy and California state law. Please refer potential referees, including when letters are provided via a third party (i.e., dossier service or career center), to the UC Berkeley statement of confidentiality prior to submitting their letter.
As a University employee, you will be required to comply with all applicable University policies and/or collective bargaining agreements, as may be amended from time to time. Federal, state, or local government directives may impose additional requirements.
Unless stated otherwise, unambiguously, in the position description, this position does not include sponsorship of a new consular H-1B visa petition that would require payment of the $100,000 supplemental fee.
As a condition of employment, the finalist will be required to disclose if they are subject to any final administrative or judicial decisions within the last seven years determining that they committed any misconduct.
- "Misconduct" means any violation of the policies or laws governing conduct at the applicant's previous place of employment, including, but not limited to, violations of policies or laws prohibiting sexual harassment, sexual assault, or other forms of harassment or discrimination, as defined by the employer.
- UC Sexual Violence and Sexual Harassment Policy
- UC Anti-Discrimination Policy
- APM - 035: Affirmative Action and Nondiscrimination in Employment
Job location
Berkeley, CA
Digital Product Manager – Personalization Intelligence
BJ’s Wholesale Club is seeking a Product Manager – Personalization Intelligence to lead the next evolution of our data-driven personalization strategy. This is a high-impact transformation role responsible for scaling intelligent, model-driven personalization across all member touchpoints — including site, app, email, push, SMS, and emerging channels.
You will define and drive the roadmap that powers how millions of members experience BJ’s — delivering measurable incremental revenue, stronger loyalty, and deeper engagement through advanced personalization capabilities.
This role sits at the intersection of product, data science, engineering, marketing, and digital — translating business strategy into scalable machine learning–powered solutions.
What You’ll Own
Personalization Strategy & Roadmap
- Define and execute the product roadmap for Personalization Intelligence across all customer touchpoints.
- Drive clarity in business goals, measurable outcomes, and prioritization tied to incremental revenue and engagement.
- Lead the transformation from campaign-based targeting to intelligent, model-driven personalization at scale.
ML-Powered Personalization Capabilities
- Partner closely with Data Science to design, build, and scale recommendation systems, propensity and propensity-to-buy models, and predictive engagement and churn models
- Own the end-to-end ML model lifecycle from ideation and business case through training, testing, deployment, and ongoing optimization
- Translate model outputs into actionable, testable personalization strategies.
Experimentation & Measurement
- Define clear hypotheses and testing frameworks to measure incremental lift.
- Collaborate with analytics to establish robust tracking, experimentation design, and performance reporting.
- Monitor and interpret key ML performance metrics and business KPIs.
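The incremental-lift measurement described above can be sketched as a simple treatment-vs-holdout comparison; the conversion numbers below are made up for illustration:

```python
def incremental_lift(treat_conv, treat_n, ctrl_conv, ctrl_n):
    """Incremental lift of a personalization treatment over a holdout control.

    Returns absolute lift in conversion rate and relative lift.
    Inputs are conversion counts and group sizes; the figures used
    below are illustrative, not real data.
    """
    treat_rate = treat_conv / treat_n
    ctrl_rate = ctrl_conv / ctrl_n
    abs_lift = treat_rate - ctrl_rate
    rel_lift = abs_lift / ctrl_rate if ctrl_rate else float("nan")
    return abs_lift, rel_lift

# Hypothetical experiment: 10,000 members per arm, 550 vs. 500 conversions.
abs_lift, rel_lift = incremental_lift(treat_conv=550, treat_n=10_000,
                                      ctrl_conv=500, ctrl_n=10_000)
```

In practice the point estimate would be paired with a significance test or confidence interval before declaring a win, but the holdout-based lift calculation is the core of the testing framework.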
Qualifications:
- 4+ years of Product Management experience
- Demonstrated experience delivering personalization, recommendation systems, propensity/propensity-to-buy models, and other predictive models
- Retail or e-commerce experience strongly preferred
- Strong communication skills and experience working with Stakeholders (data science, engineering, business)
- Strong product discovery, prioritization, and stakeholder management skills
About Wakefern
Wakefern Food Corp. is the largest retailer-owned cooperative in the United States and supports its co-operative members' retail operations, trading under the ShopRite®, Price Rite®, The Fresh Grocer®, Dearborn Markets®, and Gourmet Garage® banners.
Employing an innovative approach to wholesale business services, Wakefern focuses on helping the independent retailer compete in a big business world. Providing the tools entrepreneurs need to stay a step ahead of the competition, Wakefern’s co-operative members benefit from the company’s extensive portfolio of services, including innovative technology, private label development, and best-in-class procurement practices.
The ideal candidate will have a strong background in designing, developing, and implementing complex projects, with a focus on automating data processes and driving efficiency within the organization. This role requires close collaboration with application developers, data engineers, data analysts, and data scientists to ensure seamless data integration and automation across various platforms. The Data Integration & AI Engineer is responsible for identifying opportunities to automate repetitive data processes, reduce manual intervention, and improve overall data accessibility.
Essential Functions
- Participate in the development life cycle (requirements definition, project approval, design, development, and implementation) and maintenance of the systems.
- Implement and enforce data quality and governance standards to ensure accuracy and consistency.
- Provide input for project plans and timelines to align with business objectives.
- Monitor project progress, identify risks, and implement mitigation strategies.
- Work with cross-functional teams and ensure effective communication and collaboration.
- Provide regular updates to the management team.
- Follow the standards and procedures according to Architecture Review Board best practices, revising standards and procedures as requirements change and technological advancements are incorporated into the >tech_ structure.
- Communicates and promotes the code of ethics and business conduct.
- Ensures completion of required company compliance training programs.
- Is trained – either through formal education or through experience – in software / hardware technologies and development methodologies.
- Stays current through personal development and professional and industry organizations.
Responsibilities
- Design, build, and maintain automated data pipelines and ETL processes to ensure scalability, efficiency, and reliability across data operations.
- Develop and implement robust data integration solutions to streamline data flow between diverse systems and databases.
- Continuously optimize data workflows and automation processes to enhance performance, scalability, and maintainability.
- Design and develop end-to-end data solutions utilizing modern technologies, including scripting languages, databases, APIs, and cloud platforms.
- Ensure data solutions and data sources meet quality, security, and compliance standards.
- Monitor and troubleshoot automation workflows, proactively identifying and resolving issues to minimize downtime.
- Provide technical training, documentation, and ongoing support to end users of data automation systems.
- Prepare and maintain comprehensive technical documentation, including solution designs, specifications, and operational procedures.
Qualifications
- A bachelor's degree or higher in computer science, information systems, or a related field.
- Hands-on experience with cloud data platforms (e.g., GCP, Azure, etc.)
- Strong knowledge and skills in data automation technologies, such as Python, SQL, ETL/ELT tools, Kafka, APIs, cloud data pipelines, etc.
- Experience in GCP BigQuery, Dataflow, Pub/Sub, and Cloud storage.
- Experience with workflow orchestration tools such as Cloud Composer or Airflow
- Proficiency in iPaaS (Integration Platform as a Service) platforms, such as Boomi, SAP BTP, etc.
- Develop and manage data integrations for AI agents, connecting them to internal and external APIs, databases, and knowledge sources to expand their capabilities.
- Build and maintain scalable Retrieval-Augmented Generation (RAG) pipelines, including the curation and indexing of knowledge bases in vector databases (e.g., Pinecone, Vertex AI Vector Search).
- Leverage cloud-based AI/ML platforms (e.g., Vertex AI, Azure ML) to build, train, and deploy machine learning models at scale.
- Establish and enforce data quality and governance standards for AI/ML datasets, ensuring the accuracy, completeness, and integrity of data used for model training and validation.
- Collaborate closely with data scientists and machine learning engineers to understand data requirements and deliver optimized data solutions that support the entire machine learning lifecycle.
- Hands-on experience with IBM DataStage and Alteryx is a plus.
- Strong understanding of database design principles, including normalization, indexing, partitioning, and query optimization.
- Ability to design and maintain efficient, scalable, and well-structured database schemas to support both analytical and transactional workloads.
- Familiarity with BI visualization tools such as MicroStrategy, Power BI, Looker, or similar.
- Familiarity with data modeling tools.
- Familiarity with DevOps practices for data (CI/CD pipelines)
- Proficiency in project management software (e.g., JIRA, Clarizen)
- Strong knowledge and skills in data management, data quality, and data governance.
- Strong communication, collaboration, and problem-solving skills.
- Ability to work on multiple projects and prioritize tasks effectively.
- Ability to work independently and in a team environment.
- Ability to learn new technologies and tools quickly.
- Ability to handle stressful situations.
- Highly developed business acumen.
- Strong critical thinking and decision-making skills.
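The Retrieval-Augmented Generation responsibility above (curating and indexing a knowledge base, then retrieving relevant chunks at query time) can be sketched with a toy example. This is a deliberately simplified, stdlib-only sketch: a bag-of-words stand-in replaces a learned embedding model, and a plain list replaces a vector database such as Pinecone or Vertex AI Vector Search; all chunk text is invented for illustration.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding'; production RAG uses a learned model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# "Index": curated knowledge-base chunks (a vector DB would hold these at scale).
chunks = [
    "Refunds are processed within 5 business days of approval.",
    "Our loyalty program awards one point per dollar spent.",
    "Support hours are 9am to 5pm Eastern, Monday through Friday.",
]
index = [(c, embed(c)) for c in chunks]

def retrieve(query, k=1):
    """Return the k chunks most similar to the query."""
    q = embed(query)
    return [c for c, _ in sorted(index, key=lambda p: -cosine(q, p[1]))[:k]]

query = "How long do refunds take?"
context = retrieve(query)[0]
prompt = f"Answer using this context:\n{context}\n\nQuestion: {query}"
print(context)
```

The retrieved chunk is then prepended to the user question before it reaches the language model; curation and indexing quality largely determine answer quality.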
Working Conditions & Physical Demands
This position requires in-person office presence at least four days a week.
Compensation and Benefits
The salary range for this position is $75,868 - $150,644. Placement in the range depends on several factors, including experience, skills, education, geography, and budget considerations.
Wakefern is proud to offer a comprehensive benefits package designed to support the health, well-being, and professional development of our Associates. Benefits include medical, dental, and vision coverage, life and disability insurance, a 401(k) retirement plan with company match & annual company contribution, paid time off, holidays, and parental leave.
Associates also enjoy access to wellness and family support programs, fitness reimbursement, educational and training opportunities through our corporate university, and a collaborative, team-oriented work environment. Many of these benefits are fully or partially funded by the company, with some subject to eligibility requirements.
The pay range for this role is $150,000 - $200,000/yr USD.
WHO WE ARE:
Headquartered in Southern California, Skechers—the Comfort Technology Company®—has spent over 30 years helping men, women, and kids everywhere look and feel good. Comfort innovation is at the core of everything we do, driving the development of stylish, high-quality products at a great value. From our diverse footwear collections to our expanding range of apparel and accessories, Skechers is a complete lifestyle brand.
ABOUT THE ROLE:
Skechers Digital Team is seeking a Digital Data Architect reporting to the Director, Digital Architecture, Consumer Domain. This role is responsible for designing and governing Skechers’ Consumer Data 360 ecosystem, enabling identity resolution, high-quality data foundations, personalization, loyalty intelligence, and machine learning capabilities across digital and retail channels.
The ideal candidate will be a strong technical leader, have hands-on full-stack technical knowledge in enterprise technologies related to Skechers’ consumer domain, and have the ability to work in a fast-paced agile environment. You should have knowledge of consumer programs from an architecture/industry perspective, and you should have strong hands-on experience designing solutions on the Salesforce Core Platform (including configuration, integration, and data model best practices).
You will work cross-functionally with Digital Engineering, Data Engineering, Data Science, Loyalty, and Marketing teams to architect scalable, secure, and high-performance data platforms that support advanced personalization and recommender systems.
WHAT YOU’LL DO:
- Responsible for the full technical life cycle of consumer platform capabilities, which includes:
- Capability roadmap and technical architecture in alignment to consumer experience
- Technical planning, design, and execution
- Operations, analytics/reporting, and adoption
- Define and evolve Skechers’ Consumer Data 360 architecture, including identity resolution (deterministic and probabilistic matching) and unified customer profiles.
- Architect scalable data models and pipelines across CDP, CRM, e-commerce, marketing automation, data lake, and warehouse platforms.
- Establish enterprise data quality frameworks including validation, deduplication, anomaly detection, and observability.
- Optimize SQL workloads and large-scale distributed queries through performance tuning, partitioning, indexing, and workload management strategies.
- Design and oversee ML pipelines supporting personalization, churn modeling, and recommender systems.
- Partner with Data Science teams to productionize models using distributed platforms such as Databricks (Spark, Delta Lake, MLflow preferred).
- Ensure secure data governance, access control (RBAC/ABAC), and compliance with GDPR, CCPA, and related privacy regulations.
- Provide architectural oversight ensuring performance, scalability, resilience, and maintainability.
- Collaborate with stakeholders to translate business objectives (LTV growth, personalization lift, engagement) into scalable data solutions.
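The identity-resolution responsibility above distinguishes deterministic matching (exact match on a normalized key) from probabilistic matching (scored similarity with a threshold). The sketch below is a minimal, stdlib-only illustration with invented profile data; real resolution engines weigh many more signals (address, device IDs, purchase history) and use trained match models rather than a single string-similarity score.

```python
from difflib import SequenceMatcher

# Hypothetical unified customer profiles.
profiles = [
    {"id": 1, "email": "jane.doe@example.com", "name": "Jane Doe"},
    {"id": 2, "email": "jsmith@example.com", "name": "John Smith"},
]

def resolve(record, threshold=0.85):
    """Deterministic pass on normalized email, then a probabilistic
    name-similarity fallback."""
    email = record["email"].strip().lower()
    for p in profiles:  # deterministic: exact match on a normalized key
        if p["email"] == email:
            return p["id"]
    best_id, best_score = None, 0.0
    for p in profiles:  # probabilistic: fuzzy name match with a threshold
        score = SequenceMatcher(
            None, record["name"].lower(), p["name"].lower()
        ).ratio()
        if score > best_score:
            best_id, best_score = p["id"], score
    return best_id if best_score >= threshold else None

print(resolve({"email": "JANE.DOE@example.com ", "name": "J. Doe"}))  # → 1
print(resolve({"email": "jon.smith@mail.com", "name": "Jon Smith"}))
```

The second lookup has no exact email match, so it falls through to the fuzzy pass; tuning the threshold trades false merges against fragmented profiles.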
REQUIREMENTS:
- Computer Science, Data Engineering, or related degree or equivalent experience.
- 12+ years of experience architecting enterprise data platforms in cloud environments.
- 9+ years of experience in data engineering with a focus on consumer data.
- 6+ years of experience working with Salesforce platforms, including data models and enterprise integrations.
- Strong experience with Data 360 and identity resolution architectures.
- Proven expertise in SQL performance tuning and large-scale data modeling.
- Hands-on experience implementing ML pipelines and recommender systems in production environments.
- Experience with cloud technologies (AWS, GCP, or Azure).
- Experience with integration patterns (API, ETL, event streaming).
- Experience providing technical leadership and guidance across multiple projects and development teams.
- Experience translating business requirements into detailed technical specifications and working with development teams through implementation, including issue resolution and stakeholder communication.
- Strong project management skills including scope assessment, estimation, and clear technical communication with both business users and technical teams.
- Must hold at least one of the following Salesforce Certifications (Platform App Builder, Platform Developer I, JavaScript Developer I).
- Experience with Databricks or similar distributed data/ML platforms preferred.
AI Innovation Architect
Location: Hybrid; Ashburn, VA; Springfield, VA; Washington, D.C.
Clearance: U.S. Citizen; Must have an active Top-Secret Clearance or DHS Public Trust Clearance.
InDev is seeking a senior strategic and technical AI Architect responsible for designing, building, and deploying artificial intelligence solutions that support mission outcomes across the homeland security market. In this role you will bridge advanced AI capabilities - including machine learning, natural language processing, and data engineering - with operational requirements, ensuring solutions are secure, scalable, and aligned with the homeland security mission.
YOUR FUTURE DUTIES AND RESPONSIBILITIES
- Define overall system architecture, selecting and governing Artificial Intelligence / Machine Learning (AI/ML) and platform technologies, and ensuring solutions are scalable, secure, and production-ready
- Lead end-to-end technical design, development, and implementation of an agentic AI system to orchestrate user queries across enterprise data sources
- Partner closely with development, DevOps and data engineering teams to translate project requirements into an extensible AI architecture
- Create and promote AI strategies that align with business objectives
- Develop and coordinate POCs to test new technologies
- Evaluate and select appropriate AI tools, frameworks, and platforms (e.g., AWS, Azure, Google Cloud) to drive innovation
QUALIFICATIONS
- U.S. Citizen; Active Top-Secret Clearance or DHS Public Trust Clearance
- 8+ years of experience delivering AI solutions across federal agencies
- Bachelor’s degree in Computer Science, Engineering, or Data Science
- Deep understanding of machine learning (ML), deep learning, Natural Language Processing (NLP), and neural networks
- Experience with cloud platforms (AWS, Google Cloud, Azure) and container orchestration tools like Kubernetes and Docker
- Ability to identify high-impact AI use cases and translate them into technical requirements
- Experience designing, building, and deploying advanced AI systems including Generative AI, AI Agents, LLMs, Reinforcement Learning, and computer vision models
- Ability to apply cloud and engineering expertise across AWS, GCP, Kubernetes, Docker, Terraform, Helm, Linux, and AI services, such as SageMaker, Vertex AI, Bedrock, or Gemini
- Experience with Python, agent frameworks, data engineering, APIs/microservices, vector databases, SQL engines, distributed systems, cloud services, RAG
- Experience developing and maintaining AI/ML roadmaps, performing Analysis of Alternatives, and making defensible technical tradeoff decisions
- Experience leading multidisciplinary teams, including data scientists, engineers, and business stakeholders
- Excellent written and oral communication skills
- Ability to tailor and present information across multiple stakeholders
NICE TO HAVES
- Experience integrating AI solutions with SaaS/PaaS platforms (e.g., ServiceNow, Salesforce, etc.)
- Experience implementing virtual agents within SaaS/PaaS platforms (e.g., ServiceNow Virtual Agent, Salesforce Agentforce, etc.)
- Experience with Google Gemini
ABOUT US
At InDev, we’re not just a company; we’re a trailblazing force transforming the way data shapes the future. As a dynamic player in the federal government sector, we’re on a mission to empower agencies with cutting-edge data solutions that drive innovation, efficiency, and progress. Our team thrives on collaboration, innovation, and embracing challenges head-on to create a meaningful impact on the world around us.
WHY INDEV
- Innovative Environment: Join a team that thrives on creativity and innovation, where your ideas are not only heard but encouraged.
- Meaningful Impact: Contribute to projects that directly impact federal agencies, driving positive change on a national scale.
- Dynamic Collaboration: Work alongside diverse experts who are passionate about pushing boundaries and making a difference.
- Agile Mindset: Embrace Agile methodologies that encourage flexibility, adaptability, and rapid growth.
- Learning Culture: Enjoy ongoing learning opportunities and professional development to expand your skill set.
- Cutting-edge Tech: Engage with the latest technologies and tools in the data integration landscape.
If you’re ready to embark on a journey of innovation, collaboration, and impact, InDev welcomes you to join our team. Let’s shape the future together.
We are seeking a Senior Robotic Simulation Engineer with deep, hands-on expertise in building high-fidelity simulation environments and synthetic data pipelines using NVIDIA Isaac Sim or similar.
In this role, you will directly contribute to the development of scalable, realistic virtual worlds that power the training and validation of robotic foundation models for perception, planning, and manipulation.
At GM’s manufacturing technology development team, you’ll work closely with robotics and AI teams within the Manufacturing Technology Development (MTD) team to create simulation scenes that replicate complex manufacturing environments, generate diverse synthetic datasets, and build robust data workflows that accelerate model development and deployment.
Key Responsibilities
- Build and customize simulation scenes in Isaac Sim that accurately reflect real-world robotic tasks and factory layouts
- Develop synthetic data generation pipelines, including randomized object placement, sensor simulation, and multi-modal annotations (RGB, depth, segmentation, point clouds)
- Implement and maintain data processing workflows, ensuring data quality, traceability, and compatibility with ML training pipelines
- Optimize simulation performance and realism, tuning physics parameters, asset fidelity, and rendering configurations for scalable experimentation
- Collaborate with robotics engineers and ML scientists to align simulation outputs with model requirements and support sim2real transfer
- Conduct hands-on testing and debugging, iterating on simulation setups and synthetic data strategies to improve model robustness
- Document workflows and contribute to best practices, enabling reproducibility and knowledge sharing across teams
Required Qualifications
- MSc or PhD in Robotics, Computer Graphics, Computer Vision, or a related field
- 3+ years of hands-on experience in robotics simulation, synthetic data generation, or virtual environment development
- Proficiency with Isaac Sim, Omniverse, or similar simulation platforms
- Strong programming skills in Python and C++, with experience in simulation APIs and data annotation tools
- Familiarity with 3D vision, sensor modeling, and domain randomization techniques
- Experience integrating simulation outputs into ML pipelines for training and evaluation
Preferred Qualifications
- Experience with robotics frameworks (e.g., ROS/ROS2, MoveIt, Nav2)
- Experience with robotics simulation platforms (e.g., Isaac Sim, MuJoCo, Gazebo) or game engines (e.g., Unreal, Unity)
- Background in industrial automation, autonomous vehicles, or robotic manipulation
- Publications or contributions in simulation, synthetic data, or robotics venues (e.g., ICRA, RSS, CVPR)
- Familiarity with CI/CD pipelines and modern software development practices such as Bash, GitHub, Bazel, Docker
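The domain-randomization technique named in the qualifications can be sketched in a platform-agnostic way. Isaac Sim exposes its own replicator APIs for this; the code below is a hypothetical, stdlib-only stand-in (asset names, ranges, and fields are invented) showing the core idea: sample scene parameters from distributions, with a fixed seed so datasets are reproducible.

```python
import random

def randomize_scene(num_objects, rng):
    """One domain-randomized scene: asset choice, pose, scale, and lighting
    are sampled so synthetic data covers more variation than any fixed layout."""
    return {
        "light_intensity": rng.uniform(200.0, 1200.0),
        "objects": [
            {
                "asset": rng.choice(["bolt", "bracket", "panel"]),
                "position": [rng.uniform(-0.5, 0.5), rng.uniform(-0.5, 0.5), 0.0],
                "yaw_deg": rng.uniform(0.0, 360.0),
                "scale": rng.uniform(0.9, 1.1),
            }
            for _ in range(num_objects)
        ],
    }

rng = random.Random(42)  # seeded for reproducible dataset generation
dataset = [randomize_scene(num_objects=3, rng=rng) for _ in range(100)]
print(len(dataset), len(dataset[0]["objects"]))  # → 100 3
```

In a real pipeline each sampled scene would be rendered through the simulator's sensors and written out with its annotations (RGB, depth, segmentation), but the sampling loop is where randomization lives.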
About Quantiphi:
Quantiphi is an award-winning, AI-First digital engineering and consulting company focused on delivering high-impact Services and Solutions that help organizations solve what truly matters. We partner with enterprises to reimagine their businesses through intelligent, scalable, and transformative AI driving measurable outcomes at the very core of their operations.
Since our founding in 2013, Quantiphi has tackled some of the world’s most complex business challenges by combining deep industry expertise, disciplined cloud and data engineering practices, and cutting-edge applied AI research. Our work is rooted in delivering accelerated, quantifiable business value, not just technology for technology’s sake.
Headquartered in Boston, Quantiphi is a global organization with 4,000+ professionals serving clients across key industry verticals, including BFSI, Healthcare & Life Sciences, CPG, MFG, and TME. As an Elite and Premier partner to leading cloud and AI platforms such as NVIDIA, Google Cloud, AWS, and Snowflake, we build and deliver enterprise-grade AI services and solutions that create real-world impact.
We’ve been recognized with:
- 17x Google Cloud Partner of the Year awards in the last 8 years.
- 3x AWS AI/ML award wins.
- 3x NVIDIA Partner of the Year titles.
- 2x Snowflake Partner of the Year awards.
- We have also garnered top analyst recognitions from Gartner, ISG, and Everest Group.
- We offer first-in-class industry solutions across Healthcare, Financial Services, Consumer Goods, Manufacturing, and more, powered by cutting-edge Generative AI and Agentic AI accelerators.
- We have been certified as a Great Place to Work for the third year in a row: 2021, 2022, and 2023.
Be part of a trailblazing team that’s shaping the future of AI, ML, and cloud innovation.
Your next big opportunity starts here!
For more details, visit: Website or LinkedIn Page.
Role - Account Manager
Work Location: Nashville, TN
Job Overview:
As a Strategic Account Manager at Quantiphi, you will drive growth and expand our footprint within two of our most strategic healthcare enterprise accounts, spanning both a large national health plan (payer) and a leading multi-state health system (provider).
This role requires deep expertise across the healthcare ecosystem, including payer and provider operations, and the ability to position AI, data, cloud, and digital transformation solutions that deliver measurable clinical, operational, and financial outcomes.
You will operate in a highly entrepreneurial environment, partnering closely with executive stakeholders to accelerate innovation, grow revenue, and build long-term strategic relationships within complex, regulated healthcare organizations.
Key Responsibilities:
- Own and manage the full sales cycle across assigned healthcare enterprise accounts, from opportunity identification through contract negotiation and delivery oversight.
- Build and strengthen executive-level relationships (C-Suite, VP, Director) across IT, digital, analytics, operations, clinical, and procurement functions.
- Develop and execute multi-year strategic account plans aligned to payer and provider priorities such as cost optimization, member/patient experience, care delivery transformation, and AI adoption.
- Identify and pursue new opportunities by positioning Quantiphi’s AI, data engineering, analytics, and cloud capabilities to address healthcare-specific challenges.
- Expand existing accounts through cross-sell and up-sell motions across services, solutions, and managed offerings.
- Serve as a trusted advisor to leadership teams on modernization strategies, emerging healthcare technologies, and innovation roadmaps.
- Lead collaborative workshops to define business problems, quantify value, and design scalable solution approaches.
- Maintain deep understanding of each organization’s operating model, technology landscape (EHRs, claims systems, contact centers, analytics platforms), regulatory requirements, and transformation priorities.
- Partner closely with delivery and solution teams to ensure successful execution, customer satisfaction, and long-term growth.
- Contribute to healthcare go-to-market strategy, including industry messaging, accelerators, and account-based growth initiatives.
Basic Qualifications:
- Bachelor’s degree in Business, Engineering, Computer Science, or related field (or equivalent experience).
- 7+ years of experience in enterprise sales or account management within Healthcare & Life Sciences.
- Direct experience managing or selling into large healthcare payers, health plans, or provider/health system organizations.
- Proven success selling AI, data analytics, cloud, or digital transformation solutions (GCP, AWS, Azure).
- Strong background in enterprise account growth, strategic planning, and complex deal cycles.
- Experience negotiating MSAs, SoWs, and large multi-year engagements.
- Ability to engage and influence senior healthcare executives and clinical/operational stakeholders.
- Willingness to travel to customer sites up to 50%.
Other Qualifications:
- Familiarity with healthcare use cases such as:
- Claims and payment modernization.
- Care management and population health analytics.
- Revenue cycle optimization.
- Contact center and member/patient experience transformation.
- Clinical and operational AI.
- Understanding of healthcare data ecosystems (FHIR/HL7, EHRs, claims platforms, interoperability standards).
- Knowledge of healthcare regulatory and compliance environments (HIPAA, PHI governance, security standards).
- Working knowledge of AI/ML and cloud-native architectures.
What is in it for you:
- Be part of the fastest-growing AI-first digital transformation and engineering company in the world.
- Be a leader of an energetic team of highly dynamic and talented individuals.
- Exposure to working with Fortune 500 companies and innovative market disruptors.
- Exposure to the latest technologies related to artificial intelligence and machine learning, data and cloud.
AI Research Scientist | Machine Learning | Deep Learning | Natural Language Processing | LLM | Hybrid | San Jose, CA
Title: AI Research Scientist
Location: San Jose, CA
Responsibilities:
- Design, execute, and analyze machine learning experiments, establishing strong baselines and selecting appropriate evaluation metrics.
- Stay up to date with the latest AI research; identify, adapt, and validate novel techniques for company-specific use cases.
- Define rigorous evaluation protocols, including offline metrics, user studies, and adversarial (red team) testing to ensure statistical soundness.
- Specify data and annotation requirements; develop annotation guidelines and oversee quality control processes.
- Collaborate closely with domain experts, product managers, and engineering teams to refine problem statements and operational constraints.
- Develop reusable research assets such as datasets, modular code components, evaluation suites, and comprehensive documentation.
- Work alongside ML Engineers to optimize training and inference pipelines, ensuring seamless integration into production systems.
- Contribute to academic publications and represent the company in research communities, as needed.
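The "statistically sound evaluations" responsibility above usually means checking that an observed metric gap between two models is not resampling noise. One common tool is a paired bootstrap confidence interval; the sketch below uses only the stdlib, and the per-example scores are invented for illustration.

```python
import random

def bootstrap_diff_ci(scores_a, scores_b, n_resamples=2000, seed=0):
    """95% bootstrap CI for the difference in mean score between two models,
    resampling paired examples; if 0 lies outside the CI, the gap is unlikely
    to be resampling noise."""
    rng = random.Random(seed)
    n = len(scores_a)
    diffs = []
    for _ in range(n_resamples):
        idx = [rng.randrange(n) for _ in range(n)]  # resample examples, paired
        da = sum(scores_a[i] for i in idx) / n
        db = sum(scores_b[i] for i in idx) / n
        diffs.append(da - db)
    diffs.sort()
    return diffs[int(0.025 * n_resamples)], diffs[int(0.975 * n_resamples)]

# Hypothetical per-example accuracies (1 = correct) for two model variants.
model_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1]
model_b = [1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1]
low, high = bootstrap_diff_ci(model_a, model_b)
print(round(low, 2), round(high, 2))
```

Pairing the resamples (same indices for both models) controls for example difficulty, which tightens the interval relative to resampling each model independently.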
Educational Qualifications:
- Ph.D. in Computer Science, Artificial Intelligence, Machine Learning, or a related field is strongly preferred.
- Candidates with a master’s degree and exceptional research or industry experience will also be considered.
Industry Experience:
- 3–5 years of experience in AI/ML research roles, ideally in applied or product-focused environments.
- Demonstrated success in delivering research-driven solutions that have been deployed in production.
- Experience collaborating in cross-functional teams across research, engineering, and product.
- Publications in top-tier AI/ML conferences (e.g., NeurIPS, ICML, ACL, CVPR) are a plus.
Technical Skills:
- Strong foundational knowledge in machine learning and deep learning algorithms.
- Hands-on experience with PEFT/LoRA, adapters, fine-tuning techniques, and RLHF/RLAIF (e.g., PPO, DPO, GRPO).
- Ability to read, implement, and adapt state-of-the-art research papers to real-world use cases.
- Proficiency in hypothesis-driven experimentation, ablation studies, and statistically sound evaluations.
- Advanced programming skills in Python (preferred), C++, or Java.
- Experience with deep learning frameworks and libraries such as PyTorch, Hugging Face Transformers, and NumPy.
- Strong mathematical foundations in probability, linear algebra, and calculus.
- Domain expertise in one or more areas: natural language processing (NLP), symbolic reasoning, speech processing, etc.
- Ability to translate research insights into roadmaps, technical specifications, and product improvements.
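The PEFT/LoRA skill listed above rests on one idea: keep the pretrained weight matrix frozen and fine-tune only a low-rank additive update, scaled by alpha/r. The sketch below is a toy forward pass with nested lists (all shapes and values are illustrative, and row-vector convention is used); real implementations live in libraries such as Hugging Face PEFT and operate on tensors.

```python
def matmul(A, B):
    """Plain nested-list matrix multiply."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def lora_forward(x, W, A, B, alpha, r):
    """LoRA forward pass: frozen weight W plus a trainable low-rank update
    (x @ A @ B) scaled by alpha/r; only A and B are fine-tuned."""
    base = matmul(x, W)                    # x @ W  (frozen base path)
    low_rank = matmul(matmul(x, A), B)     # x @ A @ B  (rank-r update path)
    scale = alpha / r
    return [[b + scale * l for b, l in zip(br, lr)] for br, lr in zip(base, low_rank)]

# Toy shapes: d_in = d_out = 2, rank r = 1.
W = [[1.0, 0.0], [0.0, 1.0]]  # frozen base weight (identity here)
A = [[1.0], [1.0]]            # d_in x r, trainable
B = [[0.5, -0.5]]             # r x d_out, trainable
x = [[2.0, 3.0]]
print(lora_forward(x, W, A, B, alpha=1.0, r=1))  # → [[4.5, 0.5]]
```

Because only A and B (d_in*r + r*d_out parameters) are trained, a rank-8 adapter on a 4096x4096 layer updates roughly 65k parameters instead of 16.8M, which is what makes PEFT cheap.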
Remote working/work at home options are available for this role.