Colab Python Notebook Jobs Salary Jobs in USA
1,133 positions found — Page 2
Job Description
LeadStack Inc. is an award winning, one of the nation's fastest growing, certified minority owned (MBE) staffing services provider of contingent workforce. As a recognized industry leader in contingent workforce solutions and Certified as a Great Place to Work, we're proud to partner with some of the most admired Fortune 500 brands in the world.
Data Engineer Level 2
Location: Blue Ash, OH 45241
Duration: 6 months (Contractor)
Pay Rate: $55–$75/hr (W2)
Interview Process: In-person interviews required at Blue Ash, OH location.
Overview
Join our team to build modern data solutions in Azure! We're seeking a skilled Data Engineer with hands-on expertise in Databricks, Spark, Python, and cloud DataOps. You'll design scalable data pipelines, automate infrastructure with Terraform/GitHub Actions, and treat data as an enterprise asset—collaborating on CI/CD, governance, and optimization for reliable, secure analytics.
Key Responsibilities
- Analyze, design, and develop Azure-based data products, pipelines, and architecture using Databricks, Spark, PySpark, Python, and SQL.
- Optimize Spark/PySpark pipelines for performance (e.g., data skew, partitioning, caching, shuffles).
- Build and maintain Delta Lake tables/models for analytical/operational use cases, including Delta Live Tables (DLT) or Databricks SQL.
- Provision cloud/Databricks resources via Terraform (IaC) and manage GitHub-based CI/CD workflows with GitHub Actions.
- Implement Git workflows for notebooks/jobs; troubleshoot clusters, jobs, and pipelines for reliability.
- Collaborate on data governance (e.g., Purview, Unity Catalog), lineage, cataloging, and enterprise standards.
- Deploy Azure fixes/upgrades; mentor on best practices; create diagrams/specs; support stakeholders and data strategy.
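The skew-handling work listed above (data skew, partitioning, shuffles) is often addressed with key salting. Below is a minimal pure-Python sketch of the idea only, not the team's actual PySpark code; the key names and salt count are invented for illustration:

```python
import random
from collections import Counter

def salt_key(key, n_salts=4):
    # Fact side: append a random salt so one hot key spreads over n_salts join keys
    return f"{key}#{random.randrange(n_salts)}"

def replicate_dim_key(key, n_salts=4):
    # Dimension side: replicate each key once per salt so every salted key still matches
    return [f"{key}#{i}" for i in range(n_salts)]

# A skewed fact table: one key dominates and would overload a single partition
fact_keys = ["user_1"] * 1000 + ["user_2"] * 10
salted = Counter(salt_key(k) for k in fact_keys)

hot_buckets = {k for k in salted if k.startswith("user_1#")}
assert 1 <= len(hot_buckets) <= 4                       # hot key now spread out
assert hot_buckets <= set(replicate_dim_key("user_1"))  # join keys still align
```

In Spark the same pattern is applied with column expressions before the join, so the shuffle distributes the hot key across executors instead of one.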
Requirements
- 5+ years as Data Engineer.
- Strong hands-on with Azure Databricks, Spark/PySpark, Python, SQL, and databases.
- Experience with Delta Live Tables (DLT), Databricks SQL, Azure Functions, messaging/orchestration tools.
- Proficiency in Terraform (IaC), GitHub/GitHub Actions (CI/CD, version control).
- Azure cloud data services integration; monitoring/optimizing Databricks clusters/workflows.
- Knowledge of distributed computing (partitions, joins, shuffles); data governance tools (Purview, Unity Catalog).
- SDLC familiarity; ability to manage priorities independently.
To know more about current opportunities at LeadStack, please visit us online. Should you have any questions, feel free to call me at (513) 318-4502 or send an email.
About Pinterest:
Millions of people around the world come to our platform to find creative ideas, dream about new possibilities and plan for memories that will last a lifetime. At Pinterest, we're on a mission to bring everyone the inspiration to create a life they love, and that starts with the people behind the product.
Discover a career where you ignite innovation for millions, transform passion into growth opportunities, celebrate each other's unique experiences, and embrace the flexibility to do your best work. Creating a career you love? It's Possible.
At Pinterest, AI isn't just a feature, it's a powerful partner that augments our creativity and amplifies our impact, and we're looking for candidates who are excited to be a part of that. To get a complete picture of your experience and abilities, we'll explore your foundational skills and how you collaborate with AI.
Through our interview process, what matters most is that you can always explain your approach, showing us not just what you know, but how you think. You can read more about our AI interview philosophy and how we use AI in our recruiting process here.
About tvScientific
tvScientific is the first and only CTV advertising platform purpose-built for performance marketers. We leverage massive data and cutting-edge science to automate and optimize TV advertising to drive business outcomes. Our solution combines media buying, optimization, measurement, and attribution in one, efficient platform. Our platform is built by industry leaders with a long history in programmatic advertising, digital media, and ad verification who have now purpose-built a CTV performance platform advertisers can trust to grow their business.
As a Sr. Machine Learning Engineer at tvScientific, you'll build the ML and AI systems behind our Connected TV ad-buying platform: real-time bidding, campaign optimization, and incrementality measurement at scale. We're an adtech company solving a hard problem: making CTV advertising actually measurable. Our platform helps advertisers buy ads across the CTV ecosystem (Hulu, Pluto TV, Disney+, HBO Max, and hundreds of FAST channels) and prove that those ads drove real business outcomes.
What you'll do:
- Write production Python that powers real-time bidding, model training, and campaign optimization
- Train, deploy, and monitor ML models that decide which ads to show, when, and at what price, handling millions of bid decisions per second
- Build and improve our incrementality measurement systems, helping advertisers understand the true causal lift of their CTV spend
- Design and implement new ML products across the ad-buying lifecycle: audience targeting, bid optimization, pacing, and attribution
- Use LLMs and generative AI to build internal tools that accelerate how we develop, test, and ship ML systems
- Serve as a technical lead and mentor on a distributed engineering team
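Bid and creative selection of the kind described above is often framed as a bandit problem. The following is a minimal Thompson-sampling sketch with invented creative names and conversion counts, not tvScientific's actual bidder:

```python
import random

def thompson_pick(stats):
    """Pick the creative whose sampled Beta draw is highest.
    stats maps arm -> (conversions, non_conversions) observed so far."""
    draws = {arm: random.betavariate(s + 1, f + 1) for arm, (s, f) in stats.items()}
    return max(draws, key=draws.get)

random.seed(7)  # deterministic for the demo
stats = {
    "creative_a": (50, 950),   # ~5% conversion rate
    "creative_b": (90, 910),   # ~9% conversion rate
    "creative_c": (10, 990),   # ~1% conversion rate
}
picks = [thompson_pick(stats) for _ in range(1000)]
# The best-performing arm should be chosen most often
assert picks.count("creative_b") > picks.count("creative_a")
```

Sampling from the posterior (rather than always exploiting the best empirical rate) keeps some exploration alive, which matters when conversion data is sparse.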
What we're looking for:
- Strong production Python skills: you write code that runs in prod, not just notebooks
- Solid statistics and ML fundamentals: you can reason about experiment design, model evaluation, and when simpler approaches beat complex ones
- Familiarity with modern AI tools and good judgment about where they add value
- Adtech or CTV experience: familiarity with RTB, programmatic advertising, supply-path optimization
- Clear written communication: we're a distributed team and writing is how decisions get made
- Comfort with ambiguity: you'll own problems end-to-end in a fast-moving environment, from scoping to shipping
Nice-to-haves:
- Teaching experience
- Causal inference: uplift modeling, synthetic controls, difference-in-differences, or incrementality testing
- Big data experience with Scala and Spark
- Systems programming experience in Zig or similar (C, C++, Rust)
- Reinforcement learning or bandit algorithms in production
- Experience building agentic AI systems or LLM-powered workflows
- MLOps experience: model deployment, monitoring, and pipeline orchestration on AWS
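One of the causal-inference techniques listed above, difference-in-differences, reduces to a one-line estimator. A toy sketch with invented conversion rates:

```python
def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Difference-in-differences: the treated group's change minus the
    control group's change, netting out the shared time trend."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Invented conversion rates before/after a CTV campaign flight
lift = did_estimate(treat_pre=0.020, treat_post=0.032,
                    ctrl_pre=0.021, ctrl_post=0.024)
assert abs(lift - 0.009) < 1e-9  # 1.2pp raw change, minus 0.3pp trend
```

The key assumption is parallel trends: absent the campaign, both groups would have moved by the same amount.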
In-Office Requirement Statement:
- We recognize that the ideal environment for work is situational and may differ across departments. What this looks like day-to-day can vary based on the needs of each organization or role.
Relocation Statement:
- This position is not eligible for relocation assistance. Visit our PinFlex page to learn more about our working model.
#LI-SM4
#LI-REMOTE
At Pinterest we believe the workplace should be equitable, inclusive, and inspiring for every employee. In an effort to provide greater transparency, we are sharing the base salary range for this position. The position is also eligible for equity. Final salary is based on a number of factors including location, travel, relevant prior experience, or particular skills and expertise.
Information regarding the culture at Pinterest and benefits available for this position can be found here.
US based applicants only: $155,584—$320,320 USD
Our Commitment to Inclusion:
Pinterest is an equal opportunity employer and makes employment decisions on the basis of merit. We want to have the best qualified people in every job. All qualified applicants will receive consideration for employment without regard to race, color, ancestry, national origin, religion or religious creed, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, age, marital status, status as a protected veteran, physical or mental disability, medical condition, genetic information or characteristics (or those of a family member) or any other consideration made unlawful by applicable federal, state or local laws. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. If you require a medical or religious accommodation during the job application process, please complete this form for support.
Immediate need for a talented Bioinformatics Research Associate II. This is a 12+ month contract opportunity with long-term potential and is located in Waltham, MA (Onsite). Please review the job description below and contact me ASAP if you are interested.
Job ID:26-08726
Pay Range: $40 - $50/hour. Employee benefits include, but are not limited to, health insurance (medical, dental, vision), 401(k) plan, and paid sick leave (depending on work location).
Key Responsibilities:
- Manager Notes:
- DIL: bioinformatics support for the next-gen sequencing group. Some team members work in the lab; others write code and analytical pipelines, work with Client coding and development systems, collaborate with the lab team, and partner with quality teams to ensure metrics are met. The role is coding-heavy.
- Should have experience with at least one or two of the following (e.g., FastQC, Bowtie2, SAMtools, NCBI BLAST+, Nextflow), plus NGS pipeline development.
- Hours are 9-5 with some wiggle room to come in earlier and leave earlier. The role is onsite, but a day or two of remote work here and there can be requested. A strong candidate would have coding experience, published papers involving code, and next-gen sequencing analysis experience; this is a gene therapy group, so experience in gene therapy or biology would be a great advantage.
- Relevant experience is more important than a degree for the role.
- Candidates with zero coding experience will not be considered; a resume with no mention of the tools listed above is a hard pass.
- Support computational needs for the development and validation of NGS-based assays.
- Work closely with a multi-disciplinary team of scientists and engineers to implement genomic analytical solutions for programs spanning precandidate selection through late phase clinical development.
- Develop, execute, and maintain NGS analysis pipelines for execution in cloud-based computational environments.
- Keep records of development work and testing in a GxP environment utilizing electronic notebook solutions.
- Represent the group at internal meetings.
Key Requirements and Technology Experience:
- Key Skills: should have experience with at least one or two of the following (e.g., FastQC, Bowtie2, SAMtools, NCBI BLAST+, Nextflow).
- Minimum of 1 year of experience with NGS, spanning knowledge and hands-on dry-lab experience.
- Scripting experience in coding languages (e.g., bash, awk, Python, R, etc.).
- A strong candidate would have coding experience, published papers involving code, next-gen sequencing analysis experience, and gene therapy experience.
- Degree in a relevant computer science discipline with a minimum of 3 years of relevant industry experience.
- Expertise in bioinformatics with a working understanding of genomic analysis solutions (e.g., FastQC, Bowtie2, SAMtools, NCBI BLAST+, Nextflow, etc.).
- Understanding of NGS platforms, specifically those utilizing the sequencing-by-synthesis technique (i.e., Illumina platforms).
- Ability to work independently and adapt under aggressive and/or changing timelines.
- Familiarity with the software development lifecycle (e.g., Git).
- Automated unit testing for test-driven design (TDD).
- Familiarity with basic molecular biology techniques (e.g., ligation, PCR, and qPCR) as well as nucleic acid extraction and analysis techniques (e.g., Nanodrop, DNA fragment analyzers, ddPCR, etc.).
- Knowledge of and experience with other sequencing platforms (e.g., SMRT sequencing).
- Prior experience in leading the internalization of custom NGS analysis pipelines is highly preferred.
- Wet-lab method development experience to support NGS workflows.
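The scripting skills called out above can be illustrated with a minimal FASTQ quality parser. This is a toy sketch with an invented record; real pipelines would lean on established tools like FastQC or Biopython:

```python
def mean_phred(qual_line, offset=33):
    """Mean Phred quality of one FASTQ quality string (Phred+33 encoding)."""
    scores = [ord(c) - offset for c in qual_line]
    return sum(scores) / len(scores)

def iter_fastq(lines):
    """Yield (read_id, sequence, quality) from FASTQ text, four lines per record."""
    it = iter(lines)
    for header in it:
        seq, _, qual = next(it), next(it), next(it)
        yield header[1:].strip(), seq.strip(), qual.strip()

record = ["@read1", "ACGT", "+", "IIII"]  # 'I' encodes Phred 40 in Phred+33
for rid, seq, qual in iter_fastq(record):
    assert rid == "read1"
    assert mean_phred(qual) == 40.0
```

A filter like "drop reads with mean quality below 20" is then a one-line comprehension over `iter_fastq`.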
Our client is a leading Pharmaceutical Industry, and we are currently interviewing to fill this and other similar contract positions. If you are interested in this position, please apply online for immediate consideration.
Pyramid Consulting, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, colour, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
By applying to our jobs you agree to receive calls, AI-generated calls, text messages, or emails from Pyramid Consulting, Inc. and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here.
Title
Sr. Principal Operations Research Analyst
Belong. Connect. Grow. with KBR!
KBR’s National Security Solutions team provides high-end engineering and advanced technology solutions to our customers in the intelligence and national security communities. In this position, your work will have a profound impact on the country’s most critical role – protecting our national security.
Why Join Us?
- Innovative Projects: KBR’s work is at the forefront of engineering, logistics, operations, science, program management, mission IT and cybersecurity solutions.
- Collaborative Environment: Be part of a dynamic team that thrives on collaboration and innovation, fostering a supportive and intellectually stimulating workplace.
- Impactful Work: Your contributions will be pivotal in designing and optimizing defense systems that ensure national security and shape the future of space defense.
Key Responsibilities
KBR is seeking a Sr. Principal Operations Research Analyst to join a multi-disciplinary team consisting of operational military and intelligence subject matter experts (SMEs), operations research analysts, modelers, and software engineers who conduct modeling, simulation, and analysis (MS&A) of emerging concepts for the Defense Advanced Research Projects Agency (DARPA) using advanced MS&A techniques and tools. The selected candidate will:
- Lead or support the conduct of studies for senior-level military decision makers, technology developers, and acquisition professionals for use in CONOPS development, force structure design, resource allocation, and targeted technology insertion.
- Develop and execute an experiment design approach, identify key metrics for simulation output, analyze results, generate insights and information, and present findings to senior DoD leaders.
- Develop and/or refine an analytic pipeline; interact with third-party software developers to incorporate this analytic pipeline into the modeling & simulation (M&S) tool.
- Conduct pre- and post-simulation analyses as needed.
- Design, develop, implement, and maintain analytic dashboard prototypes for incorporation into the MS&A software suite.
- Be prepared to instantiate platform and system models, model behaviors, and model hierarchies in the designated M&S environment as required.
- Work independently with little direction or guidance except in the most complex of situations.
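Experiment design of the kind described above often starts from a full-factorial enumeration of factor levels before trimming to a smaller design. A minimal sketch with invented simulation factors:

```python
from itertools import product

def full_factorial(factors):
    """Enumerate every combination of factor levels (a full-factorial design)."""
    names = list(factors)
    return [dict(zip(names, combo)) for combo in product(*factors.values())]

# Invented simulation factors for a mission-level study
design = full_factorial({
    "sensor_range_km": [50, 100],
    "n_interceptors": [2, 4, 8],
})
assert len(design) == 2 * 3
assert design[0] == {"sensor_range_km": 50, "n_interceptors": 2}
```

Each dict in `design` would become one simulation run; metrics collected per run then feed the post-simulation analysis.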
Work Environment
- Location: On-site
- Travel Requirements: Minimal (0-20%)
- Working Hours: Standard
Qualifications
Required:
- Master’s Degree in Operations Research, Applied Mathematics, Data Science, or related STEM discipline with 10 years of relevant experience
- Strong knowledge of simulation analysis and experiment design & analysis techniques
- High level of proficiency in R, MATLAB, Python, JupyterLab and/or Jupyter Notebook
- Basic proficiency in a mission-level combat modeling & simulation framework such as AFSIM or NGTS
Desired
- PhD in Operations Research, Applied Mathematics, Data Science, or related STEM discipline with 15 years of relevant experience
- Deep knowledge of simulation analysis and experiment design & analysis techniques
- High level of proficiency in a mission-level combat modeling & simulation framework such as AFSIM or NGTS
- Certified Analytics Professional (CAP) or equivalent certification
Scheduled Weekly Hours
40 hours
Basic Compensation
The offered rate will be based on the selected candidate’s knowledge, skills, abilities and/or experience and in consideration of internal parity.
Additional Compensation
KBR may offer bonuses, commissions, or other forms of compensation to certain job titles or levels, per internal policy or contractual designation. Additional compensation may be in the form of sign on bonus, relocation benefits, short term incentives, long term incentives, or discretionary payments for exceptional performance.
Ready to Make a Difference?
If you’re excited about making a significant impact in the field of space defense and working on projects that matter, we encourage you to apply and join our team at KBR. Let's shape the future together.
KBR Benefits
KBR offers a selection of competitive lifestyle benefits which could include 401K plan with company match, medical, dental, vision, life insurance, AD&D, flexible spending account, disability, paid time off, or flexible work schedule. We support career advancement through professional training and development.
Belong, Connect and Grow at KBR
At KBR, we are passionate about our people and our Zero Harm culture. These inform all that we do and are at the heart of our commitment to, and ongoing journey toward, being a People First company. That commitment is central to our Team of Teams philosophy and fosters an environment where everyone can Belong, Connect and Grow. We Deliver – Together.
KBR is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, disability, sex, sexual orientation, gender identity or expression, age, national origin, veteran status, genetic information, union status and/or beliefs, or any other characteristic protected by federal, state, or local law.
R2120551
We are looking for a highly motivated AI Engineer to join our IT team. This role is ideal for someone passionate about building real-world AI solutions and eager to work across the full AI technology stack—from model integration and retrieval pipelines to agentic AI workflows, multi-agent orchestration, and application-level features used by business teams. You will also contribute to data engineering efforts that feed AI capabilities, working alongside a modern analytics platform built on Microsoft Fabric.
As an AI Engineer, you will help design, develop, and deploy AI capabilities. You will contribute to production-grade AI features in areas such as Open-to-Buy planning, Sales Forecasting, Intelligent Order Management Systems (OMS), Product Copy Generation, and Image Generation.
This is a unique opportunity to work on meaningful, high-impact AI initiatives while implementing modern AI infrastructure, LLMOps practices, and scalable system design.
This role will work from our Greenwich, CT office and report to the Senior Director of System Integration & Operation on our current hybrid schedule, 3 days in office and 2 days remote.
Key Responsibilities:
AI Application Development
Build and maintain AI-powered features including:
- Open-to-Buy optimization and inventory planning models
- Sales forecasting and demand prediction solutions
- Intelligent OMS features for routing, allocation, and automation
- Marketing AI tools such as product copy generation and AI-assisted image generation
Integrate custom and foundation LLMs into internal applications using API and SDK interfaces, leveraging structured outputs, function/tool calling, and prompt caching to optimize reliability and cost.
RAG, GraphRAG, + Vector DB Engineering
- Develop retrieval pipelines using vector embeddings and similarity search (Azure AI Search, FAISS, Pinecone, or equivalent).
- Implement chunking, embedding, indexing, query routing, and relevance-tuning strategies, including advanced reranking and hybrid search techniques.
- Maintain a high-quality knowledge base to support AI features via Retrieval-Augmented Generation.
- Explore and implement GraphRAG patterns to improve knowledge retrieval over structured enterprise data and entity relationships.
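The retrieval steps above (chunking, embedding, indexing, similarity search) can be sketched end-to-end. This toy example substitutes a bag-of-words counter for a real embedding model and uses an invented document and query; a production pipeline would use Azure AI Search, FAISS, Pinecone, or similar:

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real pipeline calls an embedding model
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def chunk(text, size=8):
    # Fixed-size word chunking; production systems tune size and overlap carefully
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

doc = ("returns are accepted within 30 days of purchase "
       "orders ship from the Greenwich warehouse within two business days")
index = [(c, embed(c)) for c in chunk(doc)]

query = embed("how many days after purchase are returns accepted")
best = max(index, key=lambda pair: cosine(query, pair[1]))[0]
assert best.startswith("returns")
```

Hybrid search and reranking layer additional scoring passes on top of this same retrieve-then-rank skeleton.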
AI Agents & Orchestration
- Design and build AI agents capable of planning, tool use, and multi-step reasoning using frameworks such as LangGraph, PydanticAI, CrewAI, or Google ADK.
- Implement Model Context Protocol (MCP) and Agent-to-Agent (A2A) protocol integrations to connect AI agents with internal tools, APIs, data systems, and other agents in a standardized, interoperable way.
- Build guardrails, evaluation frameworks, and human-in-the-loop checkpoints to ensure reliable and safe agent behavior in production.
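A human-in-the-loop checkpoint of the kind described above reduces to a small control loop. This framework-free sketch uses invented tool names; a real system would build the same guardrail with LangGraph or a similar framework:

```python
def run_agent(plan, tools, approve):
    """Minimal agent executor: run planned tool calls, pausing for human
    approval on steps flagged sensitive (a human-in-the-loop guardrail)."""
    results = []
    for step in plan:
        if step.get("sensitive") and not approve(step):
            results.append(("skipped", step["tool"]))
            continue
        results.append(("ok", tools[step["tool"]](**step["args"])))
    return results

tools = {
    "lookup_sku": lambda sku: {"sku": sku, "stock": 12},
    "cancel_order": lambda order_id: f"cancelled {order_id}",
}
plan = [
    {"tool": "lookup_sku", "args": {"sku": "A1"}},
    {"tool": "cancel_order", "args": {"order_id": "42"}, "sensitive": True},
]
out = run_agent(plan, tools, approve=lambda step: False)  # reviewer rejects
assert out == [("ok", {"sku": "A1", "stock": 12}), ("skipped", "cancel_order")]
```

Read-only tools run freely while mutating tools (the order cancellation here) require a sign-off, which is the usual split when agents touch production systems.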
AI Infrastructure & System Architecture
- Maintain the private-cloud LLM instance landscape, ensuring secure and efficient usage.
- Assist in deploying scalable inference pipelines, batching, and caching layers.
- Collaborate with DevOps and Data Engineering on CI/CD, model deployment workflows, monitoring, and integration with the Microsoft Fabric data platform (including Fabric MCP for agent-to-data connectivity).
Data Engineering, Pipelines & Model Training
- Clean, transform, and prepare datasets for ML/AI pipelines; contribute to data engineering workflows including ELT pipeline design, medallion architecture patterns, and data transformation within the Lakehouse layer.
- Train, validate, and fine-tune models where appropriate (LLMs, forecasting models, classification models, etc.); familiar with parameter-efficient techniques such as LoRA and QLoRA.
- Evaluate model performance and optimize latency, accuracy, and cost using LLM evaluation and observability frameworks (e.g., RAGAS, LangSmith, Langfuse, Helicone, or custom evals); manage prompt versioning and regression testing.
Required Qualifications:
- Bachelor’s degree in Computer Science, Data Science, AI/ML, Engineering, or related field.
- Strong foundations in Python, data structures, and machine learning concepts.
- Comfortable working with LLM APIs, embeddings, vector databases, and RAG patterns; exposure to agentic patterns, tool use, and GraphRAG concepts is a strong plus.
- Familiarity with cloud environments (Azure preferred; AWS or GCP also acceptable).
- Understanding of systems diagrams, architecture patterns, and AI infrastructure components.
- Exposure to SQL/NoSQL databases.
- Exposure to data engineering concepts such as ELT/ETL pipelines, data transformation, and data modeling.
- Awareness of responsible AI principles including bias detection, fairness, and model interpretability.
- Awareness of AI agent frameworks and orchestration concepts (e.g., LangGraph, PydanticAI, Semantic Kernel, CrewAI, or Google ADK).
- Familiarity with prompt engineering best practices including chain-of-thought, few-shot prompting, and structured output design.
Preferred Qualifications:
- Familiarity with Microsoft Fabric (OneLake, Lakehouse, Spark notebooks, semantic models) and Power BI; experience with Fabric MCP integrations is a strong differentiator.
- Experience implementing MCP (Model Context Protocol) servers or A2A (Agent-to-Agent) protocol endpoints, or integrating AI agents with external tools and APIs.
- Exposure to multimodal AI capabilities (vision-language models) for applications such as product image analysis or document understanding.
- Experience building small AI apps, demos, or tools—portfolio/GitHub encouraged.
What you'll Gain:
- Hands-on impact in designing enterprise AI capabilities from the ground up.
- Opportunities to work with cutting-edge LLM technologies in a private, secure environment, alongside a modern Microsoft Fabric data platform.
- A chance to shape AI products used across supply chain, marketing, and e-commerce.
Company Overview:
Established in 2005, Marc Fisher Footwear is a leading full-service, product-driven fashion footwear company with knowledge and expertise in design, sales, sourcing, distribution and marketing – all with dedicated and strategic direction for each brand within the portfolio, which includes GUESS, G by Guess, Nine West, Tommy Hilfiger, Earth, Calvin Klein, Kenneth Cole Men's, Hunter Boots, Rockport, Bandolino, indigo rd., Unisa, and Easy Spirit along with the namesake brands – Marc Fisher and Marc Fisher LTD.
Our diverse portfolio of globally recognized brands – available domestically and internationally via wholesale and retail channels – consistently meets the widest range of consumers’ fashion footwear needs, from classic to contemporary, sport to dress, men’s to women’s. Headquartered in Greenwich, Connecticut, with showrooms in New York City, Marc Fisher Footwear is sold worldwide through department stores, specialty stores and e-commerce channels.
Marc Fisher Footwear is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, sex, sexual orientation, age, status as a protected veteran, among other things, or status as a qualified individual with a disability. EEO Employer/Vet/Disabled.
Early Research & Development Scientist – Tissue Pathology & Digital Imaging
Location: Tucson, AZ 85755, United States (Onsite; local candidates preferred)
Type: Contract
About the Role
We are seeking an Early R&D Scientist with strong hands-on experience in tissue pathology and staining techniques to support innovative work in the diagnostics and assay development space.
This role combines wet lab research, digital pathology, and data-driven experimentation, offering the opportunity to contribute to early product development, process optimization, and cutting-edge imaging technologies.
Key Responsibilities
- Perform tissue-based experiments, including:
- Tissue sectioning
- Immunohistochemistry (IHC)
- Staining methods (primary & special stains)
- Design and execute experiments using sound scientific principles and Design of Experiments (DOE)
- Analyze experimental data, interpret results, and drive process improvements
- Utilize digital pathology and image analysis tools to evaluate tissue samples
- Document all lab activities in compliance with GMP/GLP standards
- Prepare technical reports, protocols, and presentations
- Collaborate with cross-functional teams and communicate findings to stakeholders
- Support early product development and innovation initiatives, including AI-enabled workflows
Required Qualifications
- PhD (entry-level) OR Master’s degree with 3+ years of relevant experience
- Hands-on experience with:
- Tissue sectioning and pathology workflows
- Immunohistochemistry (IHC) and staining techniques
- Strong understanding of experimental design and data analysis
- Experience maintaining accurate lab documentation (ELN preferred)
- Knowledge of GMP/GLP environments
Preferred Skills (Nice to Have)
- Experience with digital pathology / image analysis tools:
- HALO (highly preferred)
- QuPath, ImageJ/FIJI
- Exposure to AI/data tools:
- Python, R, or MATLAB
- Experience with:
- Tissue diagnostics
- Organoid or spheroid models
- Familiarity with:
- Microsoft Project (planning & timelines)
- Azure DevOps (ADO) / Agile workflows
- Experience with LIMS (LabWare) or Electronic Lab Notebooks
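The scripting exposure mentioned above (Python, R, or MATLAB) can be as simple as thresholding pixel intensities to estimate stained area. This is a toy sketch with an invented grayscale patch; real quantification would use HALO, QuPath, or ImageJ/FIJI:

```python
def positive_fraction(image, threshold=128):
    """Fraction of pixels at or above an intensity threshold, a crude proxy
    for the stained area of a grayscale pathology image patch."""
    pixels = [p for row in image for p in row]
    return sum(p >= threshold for p in pixels) / len(pixels)

# Invented 2x4 grayscale patch, intensities 0-255
patch = [[10, 200, 130, 90],
         [255, 40, 128, 60]]
assert positive_fraction(patch) == 0.5  # 4 of 8 pixels at/above threshold
```

Comparing this fraction across stain conditions is the kind of readout that feeds a DOE analysis.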
What We’re Looking For
- Strong hands-on scientist with attention to detail
- Ability to analyze data and translate results into insights
- Effective communicator comfortable working with cross-functional stakeholders
- Self-driven and capable of managing multiple experiments and priorities
Why Join?
- Work on cutting-edge pathology and imaging technologies
- Contribute to innovative R&D and product development
- Gain exposure to AI-driven scientific workflows
- Collaborative and fast-paced research environment
Potential candidates, please send your resume to Karla Monroe at
JOB TITLE: Litigation Paralegal
FLSA STATUS: Non-Exempt
DIVISION: Litigation
REPORTS TO: Attorneys/Firm Administrator/Senior Litigation Paralegals
SUMMARY: Under general supervision and according to established policies and procedures, performs a variety of duties to assist attorneys to whom assigned. Maintains positive contact with clients and observes confidentiality of client matters.
ESSENTIAL DUTIES, RESPONSIBILITIES, KNOWLEDGE, SKILLS AND ABILITIES REQUIRED: These are primarily job duties that incumbents must be able to perform unassisted or with some reasonable assistance or training provided by the Firm.
- Bachelor's degree required with 6-9 years of litigation paralegal experience (Complex or Commercial, preferred).
- Follows the ethical requirements as set forth by the Rules of Professional Conduct of the state.
- Ability to proofread typed material for contextual, grammatical, typographical and/or spelling errors, and the ability to write effectively and efficiently. Exceptional attention to detail is REQUIRED.
- Advanced level cite checking ability (according to The Bluebook) and familiarity with Best Authority is REQUIRED.
- Exceptional organizational and file management skills, including familiarity with document management systems.
- Ability to multi-task and turn projects around on tight deadlines.
- Familiarity with and ability to effect electronic filings using ECF and analogous court systems.
- Ability to supervise all aspects of a project, including: reviewing, organizing, preparing and indexing materials in accordance with established file maintenance procedure; copying and quality checking; and ensuring materials are properly and appropriately distributed to the attorney group. This will include the ability to establish, organize, index, and maintain pleadings files, attorney hearing and trial notebooks and related exhibits, deposition notebooks and related exhibits, expert witness notebooks and related exhibits.
- Ability to conduct research and locate cases, depositions, opinions, reports and information related to the matter by using all available resources, including the internet, firm library resources, and computer data systems. Intermediate to Advanced understanding of Lexis-Nexis, Westlaw, and PACER is REQUIRED.
- Familiarity with calendar/docketing management systems, such as CompuLaw, and court rules relating to computation of time establishing filing deadlines.
- Ability to work independently, read and analyze dense materials and meet assigned deadlines.
- Assist with review and tracking of invoices and related payments for attorney group.
- Advanced knowledge of Word, Excel, Adobe Acrobat Pro, and PowerPoint is REQUIRED.
- Ability to work overtime as needed, with short notice given.
- Interpersonal skills necessary to interact positively and to communicate and follow instructions effectively from a diverse group of clients, vendors, attorneys and staff.
- Intermediate understanding of litigation support databases, such as Relativity, is HELPFUL. Must be able to speak and interact positively with peers and vendors regarding electronic data, conversions and creation of databases.
- Job may involve light lifting and occasional travel.
Job Title: Sr. Automation Engineer
Location: Hillsboro, OR
Job Summary
Panasonic Avionics Corporation is seeking Senior Automation Engineers to lead and enhance advanced automation solutions for embedded and UI-driven systems. The ideal candidates will bring deep expertise in Python-based automation, Robot Framework, and QNX environments, with a strong focus on scalable test architecture, framework migration, and high-volume regression execution. This role requires hands-on technical leadership, cross-layer debugging skills, and collaboration within complex embedded and aviation-grade systems.
Mandatory Technical Skills
(Minimum 5+ years of hands-on experience in each)
- Python automation using Pytest or Robot Framework
- QNX OS (POSIX-compliant systems)
- UX/UI Automation & Testing
Key Responsibilities
- Design, architect, and enhance scalable automation frameworks using Python and Pytest.
- Perform migration of automation assets from Robot Framework to Python/Pytest, ensuring feature parity and long-term maintainability.
- Analyze and interpret large Robot Framework keyword libraries and enable their reuse within Python-based test executions.
- Optimize hybrid execution models involving both Pytest and Robot Framework assets.
- Develop wrapper layers, fixtures, utilities, and reusable automation components.
- Independently debug complex cross-layer automation issues spanning Python, Robot Framework, QNX OS, and device-level tools.
- Integrate automation frameworks with CI/CD pipelines using tools such as Jenkins, GitLab CI, or Azure DevOps.
- Execute and maintain UI and device automation using Appium, Selenium, or equivalent tools.
- Enforce modular test design principles, including page-object and page-keyword patterns, to ensure long-term automation maintainability.
- Mentor junior engineers and uphold automation design, coding standards, and best practices.
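The migration and keyword-reuse responsibilities above hinge on one fact: Robot Framework keyword libraries are ordinary Python classes whose public methods are keywords, so Pytest code can import and drive the same implementation directly. A minimal sketch, with a hypothetical keyword library (all names invented for illustration):

```python
# Hypothetical sketch: reusing a Robot Framework keyword library from Python
# tests. A keyword library is a plain Python class, so the same class can be
# imported by Pytest, preserving one implementation across both frameworks.

class SeatDisplayKeywords:
    """Would be loaded by Robot via 'Library    SeatDisplayKeywords'."""

    def __init__(self):
        self._brightness = 50

    def set_brightness(self, level):
        level = int(level)              # Robot passes arguments as strings
        if not 0 <= level <= 100:
            raise ValueError(f"brightness out of range: {level}")
        self._brightness = level

    def brightness_should_be(self, expected):
        assert self._brightness == int(expected)

# Thin wrapper layer for Pytest (in real code this would be a @pytest.fixture):
def make_display():
    return SeatDisplayKeywords()

display = make_display()
display.set_brightness("75")            # string argument, as Robot would pass it
display.brightness_should_be(75)
```

The same class stays loadable from `.robot` suites during a hybrid-execution phase, which is one way to keep feature parity while suites migrate incrementally.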
Required Qualifications
- 5+ years of hands-on experience with Python automation and Pytest.
- Strong practical experience with Robot Framework, including keywords, resources, variables, and test structuring.
- Proven experience managing and maintaining large keyword repositories (1000+ keywords).
- Experience working with QNX OS, POSIX systems, hypervisor-based virtualization, and cloud environments (AWS).
- Solid understanding of Git version control, branching strategies, and CI/CD workflows.
- Experience with UI and device automation tools such as Appium and Selenium.
- Strong analytical, debugging, and problem-solving skills with the ability to work independently.
- Excellent communication skills and experience working in cross-functional teams.
Preferred Qualifications
- Experience in mobility, embedded systems, aviation, or high-volume regression environments.
- Exposure to automation framework migration, cross-framework interoperability, or keyword reuse models.
- Bachelor’s degree in Computer Science, Electronics, Engineering, or a related field.
Location: Foster City, CA, 94404
Work Schedule: Onsite - 5 Days/Week
Duration: 12 Months
Position Summary
We are seeking a talented and highly motivated individual to join the Research Oncology team in Foster City, CA. The role will provide scientific and technical support for laboratory activities including mammalian cell culture, cell line maintenance, and characterization.
The candidate will collaborate with cross-functional teams in a fast-paced research environment, supporting oncology research projects and contributing to the development of genetically modified cell lines and in vitro cell-based assays.
Key Responsibilities
- Perform cell line expansion, maintenance, cryopreservation, and thawing of multiple mammalian cell lines.
- Prepare cell culture media with complex supplements and ensure proper cell handling procedures.
- Collaborate with team members to support cell culture needs for ongoing research projects.
- Coordinate with cross-functional teams to assess cell banking and cell culture requirements and deliver materials within timelines.
- Conduct and support cell line quality control procedures, including validation for experimental integrity.
- Maintain accurate laboratory notebooks and documentation for all experiments and lab activities.
- Support development of genetically engineered cell lines and in vitro cell-based assays for oncology programs.
Basic Qualifications
- Bachelor's degree in Molecular Biology, Cell Biology, Cancer Biology, or related life science field.
- Minimum 3 years of hands-on experience working with:
  - Cancer cell lines
  - Primary cells
  - Genetically engineered cell lines
- Strong expertise in aseptic techniques and mammalian cell culture.
- Experience handling multiple cell lines simultaneously.
- Proficiency in media preparation with complex supplements.
- Strong organizational skills, attention to detail, and record-keeping abilities.
- Familiarity with cell culture QC practices, including:
  - Mycoplasma testing
  - Cell line authentication
Preferred Qualifications
- 6+ years of hands-on experience in mammalian cell culture within the biopharmaceutical or biotechnology industry.
- Experience using electronic lab notebooks (ELN) for experiment documentation and workflow management.
- Ability to optimize and troubleshoot mammalian cell culture systems.
- Experience writing and maintaining Standard Operating Procedures (SOPs).
- Familiarity with online genetic databases and integration of phenotypic/genetic data into cell bank systems.
- Experience with viral and non-viral transduction or transfection methods, including:
  - Lentivirus
  - Retrovirus
  - Lipid-based systems
- Experience using laboratory instruments such as:
  - Plate readers
  - Cell counters
  - Automated western blot systems
- Ability to run basic cell-based assays and develop cell line banking protocols.
- Strong written and verbal communication skills.
- Ability to work effectively in a fast-paced environment with shifting priorities.
Location: Remote
Duration: 6 months
Role Overview
The Integration Architect defines, designs, and governs enterprise integration architecture standards across AWS, Azure, Microsoft Fabric, and on-prem systems. This consultant creates scalable integration blueprints, reusable patterns, and secure connectivity frameworks that ensure interoperability, reliability, and domain-aligned data exchange. The role partners closely with domain teams, platform engineering, API management teams, and enterprise architecture to accelerate delivery while maintaining architectural integrity.
Key Responsibilities
Integration Standards & Governance
- Define and maintain enterprise standards for API design, event schemas, messaging patterns, and integration contracts.
- Establish integration governance across AWS, Azure, MS Fabric, and on-prem systems.
- Define patterns for ADS (Authorized Data Sources) alignment, data contracts, schema evolution, and anchor key management.
- Enforce adherence to enterprise security principles, including OAuth2/OIDC, JWT, TLS, Zero Trust patterns.
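The OAuth2/OIDC requirement above implies, at minimum, validating the standard JWT claims on every inbound token. A minimal sketch of those claim checks, with illustrative issuer/audience names; signature verification is deliberately omitted and would have to be done with a JOSE library against the IdP's published JWKS before trusting any claim:

```python
# Hypothetical sketch of the claim checks an OAuth2/OIDC integration standard
# might mandate for inbound JWTs. Does NOT verify signatures; a real service
# must verify the token signature against the IdP's JWKS first.
import base64
import json
import time

def decode_segment(segment: str) -> dict:
    """Decode one base64url JWT segment (header or payload) into a dict."""
    padded = segment + "=" * (-len(segment) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(padded))

def check_claims(payload: dict, expected_issuer: str, expected_audience: str) -> bool:
    """Validate the registered iss/aud/exp claims from RFC 7519."""
    audience = payload.get("aud", [])
    if isinstance(audience, str):       # 'aud' may be a string or a list
        audience = [audience]
    return (
        payload.get("iss") == expected_issuer
        and expected_audience in audience
        and payload.get("exp", 0) > time.time()
    )

# Example payload (issuer and audience names are illustrative only):
claims = {"iss": "https://idp.example.com", "aud": "orders-api",
          "exp": time.time() + 3600}
segment = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
accepted = check_claims(decode_segment(segment), "https://idp.example.com", "orders-api")
```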
Blueprints & Reference Architecture
- Build and maintain unified enterprise integration architecture blueprints spanning cloud, Fabric, and on-prem connectivity.
- Create domain-specific and cross-domain integration flow maps, canonical API patterns, and event-driven reference architectures.
- Align AWS, Azure, MS Fabric, and on-prem patterns under Unified Architecture.
Reusable Patterns & Engineering Enablement
- Develop reusable integration patterns for:
  - AWS: API Gateway, EventBridge, SNS/SQS, Lambda, Step Functions, Glue, EMR, Redshift, Lake Formation, Kinesis, AWS Batch, AWS ECR, AWS ECS Fargate.
  - Azure: APIM, Functions, Service Bus, Azure Data Factory (all IR types), Azure Synapse Pipelines, Azure Stream Analytics, Azure Batch, Azure Data Explorer ingestion.
  - MS Fabric: Data Factory pipelines, Lakehouse ingestion interfaces, Fabric Data Pipelines, Notebook-based ETL, Warehouse ingestion.
  - On-prem: MFT, MQ, legacy services.
- Provide templates for API contracts, event schemas, integration error handling, observability hooks, and resiliency patterns.
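One concrete form such an event-schema template can take is a versioned envelope that every producer wraps its payload in, so consumers can handle schema evolution safely. A minimal sketch; the field names and the producing service are invented for illustration:

```python
# Hypothetical sketch of a versioned event envelope, the kind of reusable
# template an integration standard might provide for event schemas.
import datetime
import json
import uuid
from dataclasses import asdict, dataclass, field

@dataclass
class EventEnvelope:
    event_type: str
    schema_version: str             # bump the minor version for additive changes
    payload: dict
    source: str = "orders-service"  # hypothetical producing domain/service
    event_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    occurred_at: str = field(default_factory=lambda: datetime.datetime.now(
        datetime.timezone.utc).isoformat())

    def to_json(self) -> str:
        return json.dumps(asdict(self))

event = EventEnvelope(
    event_type="OrderPlaced",
    schema_version="1.2",
    payload={"order_id": "A-1001", "total": 42.50},
)
document = json.loads(event.to_json())
```

Keeping identity, timing, and versioning in the envelope (rather than the payload) is what lets observability hooks and error handling stay generic across domains.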
Metadata, ADS, & Anchor Key Integration
- Define integration patterns incorporating ADS rules, domain ownership, and anchor key management for interoperability.
- Ensure all integration patterns embed security, observability, lineage awareness, and operational resiliency.
- Collaborate with data governance to ensure consistent entity resolution and cross-domain identifier mapping.
Domain Engagement & Architecture Review
- Guide domain teams in implementing target state integration architectures.
- Lead or participate in architecture reviews for API designs, event models, platform integrations, and connectivity.
- Recommend modernization opportunities to retire legacy integration mechanisms and adopt event-driven, API-first models.
Qualifications
Technical Expertise
- 8-12+ years in integration architecture, API engineering, event-driven design, or hybrid integration.
- Strong hands-on expertise across:
  - AWS: API Gateway, EventBridge, SNS/SQS, Lambda, Step Functions, Glue, EMR, Redshift, Lake Formation, Kinesis, AWS Batch, AWS ECR, AWS ECS Fargate.
  - Azure: APIM, Functions, Service Bus, Azure Data Factory (all IR types), Azure Synapse Pipelines, Azure Stream Analytics, Azure Batch, Azure Data Explorer ingestion.
  - MS Fabric: Data Factory pipelines, Lakehouse ingestion interfaces, Fabric Data Pipelines, Notebook-based ETL, Warehouse ingestion.
  - RDBMS: SQL, Oracle, DB2, RDS, etc.
  - On-prem: MQ, MFT, REST/SOAP services.
- Understanding of ADS, anchor key management, data/domain contracts, lineage aware integration.
- Experience designing event-driven, API-first, batch, and hybrid integration architectures.