Colaboratory Python Jobs in the USA
1,015 positions found — Page 3
Bhagyashree Yewle, Principal Lead Recruiter - YOH SPG
ODI Developer - Hybrid Onsite in Boston MA - USC OR GC ONLY (No Visas)
- Location: Boston, MA
- Hybrid: 3 days on site
- Potential Convert: Yes, USC/GC ONLY no exceptions. WILL NOT SPONSOR
- ETL/ELT
- ODI
- PL/SQL coding
- 7 years’ experience
- Working knowledge of ODI administration (not day-to-day, but able to handle admin tasks when needed)
- Scripting – Python & Unix Scripting
Seeking a highly skilled and experienced Sr. ODI Developer to join our Private Banking Systems team. The ideal candidate will possess expertise in a range of technologies, including ODI (Oracle Data Integrator), Oracle Data Warehouse, Linux, and Python scripting; a deep understanding of the banking domain is a big plus. As a Data Engineer, you will play a pivotal role in designing, developing, and maintaining data solutions.
Key Responsibilities:
- Build ODI mappings/interfaces, packages, procedures, scenarios, topology configuration, ODI Agent and load plans to integrate data from multiple enterprise systems.
- Build PL/SQL queries, procedures, and data-loading processes, ensuring high performance and scalability to meet the evolving data needs of the various applications.
- Design, develop, and maintain ETL/ELT pipelines using Oracle Data Integrator (ODI).
- Collaborate effectively with cross-functional teams, including other data engineers, DBA group, analysts, and business stakeholders, to understand data requirements and deliver solutions.
- Monitor and troubleshoot RMJ jobs, ODI workflows, sessions, agents, and data pipelines on Linux environments.
- Perform root cause analysis for failures related to ODI workflows, RMJ jobs, network connectivity, API integrations, and file transfers.
- Optimize ETL workflows to improve reliability, performance, and scalability.
- Use scripting and automation tools to support data processing and operational workflows (see the scripting sketch after this list).
- Work in Linux/Unix environments, using command-line tools and shell scripts for job automation and troubleshooting.
- Maintain comprehensive documentation of data processes, configurations, and best practices.
- Participate in walk-throughs which review program specifications, source code, and all technical supporting documentation, including screens/reports. Provide feedback in accordance with team standards and guidelines.
- Participate in implementation of changes, enhancements, and newly developed programs.
- Conduct technical research and provide recommendations, develop proofs of concept or prototypes, contributing to technical design of applications.
- Help identify coding patterns and anti-patterns, and enforce the patterns through code reviews.
- Quickly resolve issues encountered by business lines in the production environment, maintaining a helpful, "high touch" approach to working with business users and performing root cause analysis, technology evaluation, and performance tuning.
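For the scripting side of the role, here is a minimal Python sketch of the kind of log-scanning automation an ODI support engineer might run on Linux to triage failed jobs. The log path and message format are assumptions for illustration, not details from this posting.

```python
#!/usr/bin/env python3
"""Minimal sketch: scan an ODI agent log for failed sessions on Linux.

The log path and line format below are hypothetical -- adjust both to
match the agent's actual logging configuration.
"""
import re
from pathlib import Path

LOG_FILE = Path("/opt/odi/agent/log/odiagent.log")  # hypothetical path
FAILURE = re.compile(r"Session\s+(\d+).*\b(ERROR|FAILED)\b")

def failed_sessions(log_file: Path) -> list[str]:
    """Return log lines that report a failed ODI session."""
    hits = []
    with log_file.open(encoding="utf-8", errors="replace") as fh:
        for line in fh:
            if FAILURE.search(line):
                hits.append(line.rstrip())
    return hits

if __name__ == "__main__":
    for line in failed_sessions(LOG_FILE):
        print(line)
```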
Desired Qualifications:
- Degree in Computer Science, Engineering or related technical area
- 7+ years of extensive hands-on experience with ODI, Oracle Data Warehouse, Oracle PL/SQL, Linux, Python scripting, and the ODI admin module (ODI Agent setup, logs configuration, certificate installation).
- Must have experience building PL/SQL queries for Oracle Server (including stored procedures, functions, etc.) and must understand basic principles of data modeling
- Excellent collaborative and communication skills, particularly in high-stress situations
- Experience with Python and Linux scripting, CLE, and networking fundamentals (APIs, IPs/ports, SFTP/FTP connectivity)
- High proficiency in development practices: unit testing, Continuous Integration (CI/CD), refactoring, clean code
- Experience with Bitbucket/Git source control management
- Problem solving skills, able to determine upcoming risks & issues and address them accordingly.
- Ability to interpret and troubleshoot applications using logs.
- Pro-active approach and good communication skills.
- Experience with agile methodologies (Scrum, Kanban) and tools (Jira)
- Private Banking domain experience.
- Working experience in the financial services industry
- Knowledge of financial applications such as FIS AddVantage, CRD, and CRM Pivotal.
- Experience with Apache Airflow for workflow orchestration.
- Knowledge of dbt (Data Build Tool) for modern data transformations.
- Exposure to cloud data platforms or hybrid data architectures.
Key Competencies:
- Strong analytical and problem-solving skills
- Ability to work with large-scale enterprise data environments
- Excellent collaboration and communication skills
- Ability to manage multiple priorities in a fast-paced environment
- Commitment to continuous learning and technology innovation
Estimated Min Rate: $55.00
Estimated Max Rate: $72.00
What’s In It for You?
We welcome you to be a part of one of the largest and most legendary global staffing companies and to meet your career aspirations. Yoh’s network of client companies has been employing professionals like you for over 65 years in the U.S., UK, and Canada. Join Yoh’s extensive talent community to gain access to our vast network of opportunities, including this exclusive one. Benefit eligibility is in accordance with applicable laws and client requirements. Benefits include:
- Medical, Prescription, Dental & Vision Benefits (for employees working 20+ hours per week)
- Health Savings Account (HSA) (for employees working 20+ hours per week)
- Life & Disability Insurance (for employees working 20+ hours per week)
- MetLife Voluntary Benefits
- Employee Assistance Program (EAP)
- 401K Retirement Savings Plan
- Direct Deposit & weekly epayroll
- Referral Bonus Programs
- Certification and training opportunities
Note: Any pay ranges displayed are estimations. Actual pay is determined by an applicant's experience, technical expertise, and other qualifications as listed in the job description. All qualified applicants are welcome to apply.
Yoh, a Day & Zimmermann company, is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
Contact us if you are an individual with a disability and require accommodation in the application process.
For California applicants, qualified applicants with arrest or conviction records will be considered for employment in accordance with the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act. All of the material job duties described in this posting are job duties for which a criminal history may have a direct, adverse, and negative relationship potentially resulting in the withdrawal of a conditional offer of employment.
It is unlawful in Massachusetts to require or administer a lie detector test as a condition of employment or continued employment. An employer who violates this law shall be subject to criminal penalties and civil liability.
By applying and submitting your resume, you authorize Yoh to review and reformat your resume to meet Yoh’s hiring clients’ preferences. To learn more about Yoh’s privacy practices, please see our Candidate Privacy Notice. Remote working/work-at-home options are available for this role.
Radar Systems Engineer - Reliable Robotics
We're building safety-enhancing technology for aviation that will save lives. Automated aviation systems will enable a future where air transportation is safer, more convenient and fundamentally transformative to the way goods - and eventually people - move around the planet. We are a team of mission-driven engineers with experience across aerospace, robotics and self-driving cars working to make this future a reality.
As a Radar Systems Engineer, you will be a part of our Radar Engineering team - a small, cross-functional group of motivated and experienced engineers responsible for designing, building, and testing cutting-edge phased array radar systems from concept to certified product. We enjoy a culture of sharing information and collaboration. Phased array radar systems have historically been reserved for specialized applications, but we're making this technology affordable to enable Detect and Avoid for widespread commercial applications. This role will focus on new advanced operational modes. The passion for revolutionary technology to make aviation safer motivates us to come in every day.
Responsibilities
In your role as a Radar Systems Engineer, you will be responsible for analyzing phased array radar systems and developing radar processing algorithms to enable short- and long-range object tracking and radar image generation. You will be involved in all phases of development from conception to production, designing your algorithms in MATLAB and Python and implementing them in C++ on the target hardware. You will drive the system-level requirements of the phased array radar system and drive trade studies in collaboration with cross-functional teams. You will support the integration of the radar on the aircraft and support data collection and analysis during flight testing. You will own real-time processing of radar algorithms on an FPGA/DSP-based platform.
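As a flavor of the algorithm work, here is a minimal NumPy sketch of the classic range-FFT step in FMCW processing: mix-down to a beat signal, window, FFT, and map bins to range. The chirp parameters and simulated target are illustrative assumptions, not values from this posting.

```python
import numpy as np

# Minimal sketch of the range-FFT step in FMCW radar processing.
c = 3e8            # speed of light (m/s)
B = 150e6          # chirp bandwidth (Hz)
T = 1e-3           # chirp duration (s)
fs = 2e6           # ADC sample rate (Hz)
S = B / T          # chirp slope (Hz/s)

t = np.arange(int(fs * T)) / fs
R_true = 120.0                        # simulated target range (m)
f_beat = 2 * R_true * S / c           # beat frequency for that range
iq = np.exp(2j * np.pi * f_beat * t)  # ideal beat signal, one chirp

# Range profile: window + FFT, then map FFT bins to range.
spectrum = np.abs(np.fft.fft(iq * np.hanning(t.size)))
freqs = np.fft.fftfreq(t.size, 1 / fs)
ranges = freqs * c / (2 * S)
est = ranges[np.argmax(spectrum[: t.size // 2])]
print(f"estimated range: {est:.1f} m")  # ~120 m
```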
Basic Success Criteria
Electrical Engineering fundamentals, typically gained through a Bachelor of Science degree in Mechanical, Electrical, or Aerospace Engineering, or a related discipline
8+ years of professional hands-on experience with Radar algorithms, radar system design, radar signal processing, integration, and test of a radar system
Ability to use C++, MATLAB, and Python (Python preferred)
Ability to troubleshoot, find root cause, and resolve issues
Pulsed and FMCW processing algorithm development
Experience in airborne radar testing and development
Preferred Criteria
Advanced degree in Science or Electrical Engineering
Experience developing system architectures and managing requirements for certification of Avionics
Creative problem solver that can bring multiple disciplines together, with the ability to assess risk and make design and development decisions without all available data
Experience integrating and troubleshooting various electronic sensors and components
Weather radar experience, airborne or ground-based
Real-beam radar imaging and/or SAR processing
This role can be remote, or located at our facility in Mountain View, California.
Must be willing to travel 30% of the time.
The estimated salary range for this position is $180,000 to $260,000/annual salary + cash and stock option awards + benefits. At Reliable Robotics, we strive to provide competitive and rewarding compensation based on experience and expertise, as well as market conditions, location, and pay equity.
In addition to base compensation, Reliable Robotics offers stock options, employee medical, 401k contribution, great co-workers and a casual work environment.
This position requires access to information that is subject to U.S. export controls. An offer of employment will be contingent upon the applicant's capacity to perform in compliance with U.S. export control laws.
All applicants are asked to provide documentation that legally establishes status as a U.S. person or non-U.S. person (and nationalities in the case of a non-U.S. person). Where the applicant is not a U.S. person, meaning not a (i) U.S. citizen or national, (ii) U.S. lawful permanent resident, (iii) refugee under 8 U.S.C. * 1157, or (iv) asylee under 8 U.S.C. * 1158, or not otherwise permitted to access the export-controlled technology without U.S. government authorization, the Company reserves the right not to apply for an export license for such applicants whose access to export-controlled technology or software source code requires authorization and may decline to proceed with the application process and any offer of employment on that basis.
At Reliable Robotics, our goal is to be a diverse and inclusive workforce. As an Equal Opportunity Employer, we do not discriminate on the basis of race, religion, color, creed, ancestry, sex, gender (including pregnancy, childbirth, breastfeeding, or related medical conditions), gender identity, gender expression, sexual orientation, age, non-disqualifying physical or mental disability or medical conditions, national origin, military or veteran status, genetic information, marital status, or any other basis covered by applicable law. All employment and promotion is decided on the basis of qualifications, merit, and business need.
If you require reasonable accommodation in completing an application, interviewing, completing any pre-employment testing, or otherwise participating in the employee selection process, please direct your inquiries to our recruiting team.
Compensation Range: $180K - $260K
Data Analyst (Contract)
Length of Contract: 6 months
Location: Remote (Eastern time zone)
What are the top 3-5 skills, experience or education required for this position:
a. Proficiency in databases (SQL) and coding in R/Python
b. Experience with API development
c. Familiarity with AI techniques and strong curiosity for new technologies
d. Experience managing and curating bioinformatics datasets (BulkRNAseq, Proteomics, scRNAseq, CRISPR)
e. Code management, documentation, and version control (e.g., GitHub)
Job Overview: As a Data Analyst, you'll drive data quality and consistency in our central hub for storing OMICS data, address impactful data loading and curation projects and help improve and automate processes using agentic AI. Working closely with researchers, you'll ensure their data needs are met and help accelerate scientific discovery.
Key Responsibilities:
- Contribute to important data loading and curation projects for the department's OMICS data server
- Address data quality and consistency issues in the CRISPR database.
- Apply agentic AI approaches for data loading and querying OMICS data
- Database Interaction: Use PostgreSQL to build, manage, and query large genomic datasets (see the sketch after this list).
- API Development: Design and implement APIs for improved data accessibility and integration across platforms.
- Automation: Use Python and R to automate and optimize data workflows, prioritizing data quality and integrity.
- ETL Process Management: Develop and execute ETL processes to integrate high-value datasets in line with organizational standards.
- Collaboration: Work with cross-functional teams and research scientists to gather requirements, align to common data model standards, and facilitate effective data management.
- Documentation: Maintain comprehensive documentation and version control for reproducibility and teamwork.
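To make the database-interaction and quality duties concrete, here is a minimal Python sketch of automated data-quality checks against a PostgreSQL store, using psycopg2. The table and column names (crispr_screens, gene_symbol, screen_id) and the connection string are hypothetical stand-ins for the real schema.

```python
"""Minimal sketch: data-quality checks against a PostgreSQL OMICS store."""
import psycopg2

QUALITY_CHECKS = {
    "missing_gene_symbol":
        "SELECT count(*) FROM crispr_screens WHERE gene_symbol IS NULL",
    "duplicate_screen_ids":
        """SELECT count(*) FROM (
               SELECT screen_id FROM crispr_screens
               GROUP BY screen_id HAVING count(*) > 1
           ) d""",
}

def run_checks(dsn: str) -> dict[str, int]:
    """Run each check and return the offending row counts."""
    results = {}
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        for name, sql in QUALITY_CHECKS.items():
            cur.execute(sql)
            results[name] = cur.fetchone()[0]
    return results

if __name__ == "__main__":
    for check, count in run_checks("dbname=omics user=analyst").items():
        print(f"{check}: {count} offending rows")
```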
Required qualifications:
- Master's degree in computer science, bioinformatics, or a related field, with 3+ years of relevant experience.
- Proven experience working with databases (PostgreSQL proficiency).
- Advanced skills in Python and R for automation and data manipulation.
- Experience handling and curating bioinformatics datasets (BulkRNAseq, Proteomics, scRNAseq, CRISPR).
- Code management, documentation, and usage of Github.
- Curiosity and basic knowledge of AI techniques applicable to data loading and querying.
- Excellent communication skills and a collaborative mindset.
- Demonstrated experience with AWS resources.
- Experience in API development.
Selling Partner Trust and Store Integrity (TSI) creates a trustworthy shopping experience across Amazon stores worldwide by protecting customers, brands, selling partners, vendors, and Amazon from fraud, counterfeit, and abuse. The Special Projects and Investigations (SPI) team within TSI protects Amazon customers and stores by applying systems thinking to understand how networks of users interact with multiple services. We target large-scale ecosystems that pose store-level risks and mitigate those ecosystems through internal and external means. Our growth requires highly skilled candidates who move fast, have an entrepreneurial spirit to create new solutions, demonstrate tenacity to get things done, thrive in ambiguity and change, and can break down and solve complex problems.
We catch bad actors and stop online fraud. It’s fun. It’s hard. It matters. We are passionate about protecting our selling partners and customers from bad actors and want a candidate that shares that passion. Amazon is one of the world’s most trusted companies. Help us keep it that way. To achieve this, the ideal candidate should be passionate about use of advanced data analytics and technology approaches to identify patterns and establish connections to uncover process and technology gaps and prevent fraud across Amazon stores worldwide. Your decisions are not only fundamental to helping protect customers and selling partners but will help maintain the health of Amazon’s catalog and product listings ecosystem.
Key job responsibilities
• Complete risk analyses and manipulate data in complex data sets (SQL, Python, R etc.)
• Use high-level judgment to inform our most complex enforcement decisions
• Identify gaps and risks in Amazon's current mechanisms and policies and recommend solutions to product/policy owning teams.
• Use data and/or technical skills to discover new ways to scale deep-dive signals, resulting in the identification of many bad actors and sizing of the issue (a toy sketch follows this list)
• Own the complete life cycle of one or more complex problems - from identification through scaling the solutions
• Break problems into manageable pieces, ruthlessly prioritize, and deliver results in an ambiguous environment
• Conduct large scale deep dives to derive insights about tactics used to conduct abuse on our stores, identifying gaps and risk in Amazon's current mechanisms, systems, and policies
• Write documents for partner teams and executives that identify problems, propose technical solutions, and drive alignment among stakeholders
• Own partnerships with stakeholder teams and guide appropriate trade-offs; clearly communicate goals, roles, and responsibilities.
• Design and deploy agentic AI systems to automate complex workflows, scale pattern detection, and accelerate enforcement decisions across high-volume abuse scenarios
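One common way to "establish connections" across accounts is connected-components clustering over shared signals. The toy Python sketch below shows that generic technique; it is not Amazon's actual tooling, and the account/signal data is fabricated for illustration.

```python
"""Toy sketch: grouping accounts that share signals into clusters."""
import networkx as nx
import pandas as pd

# Each row links an account to a signal it shares (e.g., a payment
# instrument or device fingerprint); values here are made up.
signals = pd.DataFrame({
    "account": ["a1", "a2", "a2", "a3", "a4"],
    "signal":  ["dev_9", "dev_9", "card_7", "card_7", "dev_3"],
})

# Build a bipartite graph of accounts and signals, then take connected
# components: accounts in one component share a chain of signals.
g = nx.Graph()
g.add_edges_from(signals.itertuples(index=False, name=None))

clusters = [
    sorted(n for n in comp if n.startswith("a"))  # keep account nodes only
    for comp in nx.connected_components(g)
]
print(clusters)  # [['a1', 'a2', 'a3'], ['a4']]
```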
A day in the life
Your day might involve diving deep into data to uncover emerging fraud patterns, collaborating with teams across Amazon to implement protective solutions, or developing new detection methods. You'll balance independent analytical work with team collaboration, sharing insights and supporting colleagues in our shared mission.
About the team
Our team is composed of practitioners in fraud and abuse detection, working to understand bad-actor ecosystems using threat intelligence, analytics, and technical skills. We complement specialized industry skills with broad risk experience gathered over years of practice to deliver results - we wear a lot of hats and take ownership of hard-to-solve problem areas whenever possible. We speak 12 languages, write code in 3 (mostly self-taught, on the job), and celebrate learning and taking risks. We encourage experimentation and curiosity while supporting each other to constantly learn and grow.
Our work is to solve hard puzzles and identify what hasn’t already been discovered - typically with data and always with a lot of persistence and curiosity. If you like the sound of that, come join us.
Basic Qualifications:
- Bachelor’s or postgraduate degree in Information Security, Computer Science, Data Science/Analytics, Engineering, Mathematics, Statistics, or a related discipline.
- 3+ years of relevant industry experience in risk or fraud investigations, regulatory compliance, ecommerce, analytics, or security
- Proficient with deriving insights from big data using SQL & experience manipulating/processing data with Python
- Proven ability to deliver complex projects across multiple teams
Preferred Qualifications:
- Experience working in e-commerce organizations
- Experience working within fraud, compliance, law enforcement, or intelligence organizations
- Experience with AWS services like Redshift, Neptune, or SageMaker
- Master's degree in, or practical experience with, data science or machine learning
- Excellent written and verbal communication skills to communicate security and business risk to a broad range of technical and non-technical audiences
- High level of integrity and discretion to handle confidential information
- Exceptional ownership and bias for action: willing to move quickly and decisively
- Proven ability to problem-solve in large/complex/technical systems
Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status.
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please reach out for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
The base salary range for this position is listed below. Your Amazon package will include sign-on payments and restricted stock units (RSUs). Final compensation will be determined based on factors including experience, qualifications, and location. Amazon also offers comprehensive benefits, including health insurance (medical, dental, vision, prescription, Basic Life & AD&D insurance and optional Supplemental Life plans, EAP, Mental Health Support, Medical Advice Line, Flexible Spending Accounts, and Adoption and Surrogacy Reimbursement coverage), 401(k) matching, paid time off, and parental leave. This position is based in Seattle, WA.
OZ – Databricks Architect / Senior Data Engineer
Note: Only applications from U.S. citizens or lawful permanent residents (Green Card Holders) will be considered.
We believe work should be innately rewarding and a team-building venture. Working with our teammates and clients should be an enjoyable journey where we can learn, grow as professionals, and achieve amazing results. Our core values revolve around this philosophy. We are relentlessly committed to helping our clients achieve their business goals, leapfrog the competition, and become leaders in their industry. What drives us forward is the culture of creativity combined with a disciplined approach, passion for learning & innovation, and a ‘can-do’ attitude!
What We're Looking For:
We are seeking a highly experienced Databricks professional with deep expertise in data engineering, distributed computing, and cloud-based data platforms. The ideal candidate is both an architect and a hands-on engineer who can design scalable data solutions while actively contributing to development, optimization, and deployment.
This role requires strong technical leadership, a deep understanding of modern data architectures, and the ability to implement best practices in DataOps, performance optimization, and data governance.
Experience with modern AI/GenAI-enabled data platforms and real-time data processing environments is highly desirable.
Position Overview:
The Databricks Senior Data Engineer will play a critical role in designing, implementing, and optimizing enterprise-scale data platforms using the Databricks Lakehouse architecture. This role combines architecture leadership with hands-on engineering, focusing on building scalable, secure, and high-performance data pipelines and platforms. The ideal candidate will establish coding standards, define data architecture frameworks such as the Medallion Architecture, and guide the end-to-end development lifecycle of modern data solutions.
This individual will collaborate with cross-functional stakeholders, including data engineers, BI developers, analysts, and business leaders, to deliver robust data platforms that enable advanced analytics, reporting, and AI-driven decision-making.
Key Responsibilities:
- Architecture & Design: Architect and design scalable, reliable data platforms and complex ETL/ELT and streaming workflows for the Databricks Lakehouse Platform (Delta Lake, Spark).
- Hands-On Development: Write, test, and optimize code in Python, PySpark, and SQL for data ingestion, transformation, and processing (see the sketch after this list).
- DataOps & Automation: Implement CI/CD, monitoring, and automation (e.g., with Azure DevOps, DBX) for data pipelines.
- Stakeholder Collaboration: Work with BI developers, analysts, and business users to define requirements and deliver data-driven solutions.
- Performance Optimization: Tune delta tables, Spark jobs, and SQL queries for maximum efficiency and scalability.
- GenAI Application Development: Experience developing GenAI applications is a big plus.
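As a taste of the hands-on work, here is a minimal PySpark sketch of one bronze-to-silver step in a Medallion architecture on Delta Lake. The lake paths and column names (order_id, order_ts, amount) are placeholders, not details from this posting.

```python
"""Minimal PySpark sketch: bronze-to-silver step in a Medallion design."""
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze_to_silver").getOrCreate()

# Bronze: raw ingested events, stored as-is in Delta format.
bronze = spark.read.format("delta").load("/lake/bronze/orders")

# Silver: cleaned and conformed -- dedupe, type the timestamp, and drop
# records that fail a basic quality rule.
silver = (
    bronze
    .dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .filter(F.col("amount") > 0)
)

(silver.write
    .format("delta")
    .mode("overwrite")
    .save("/lake/silver/orders"))
```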
Requirements:
- 8+ years of experience in data engineering, with strong hands-on expertise in Databricks and Apache Spark.
- Proven experience designing and implementing scalable ETL/ELT pipelines in cloud environments.
- Strong programming skills in Python and SQL; experience with PySpark required.
- Hands-on experience with Databricks Lakehouse, Delta Lake, and distributed data processing.
- Experience working with cloud platforms such as Microsoft Azure, AWS, or GCP (Azure preferred).
- Experience with CI/CD pipelines, Git, and DevOps practices for data engineering.
- Strong understanding of data architecture, data modeling, and performance optimization.
- Experience working with cross-functional teams to deliver enterprise data solutions.
- Ability to tackle complex data challenges while ensuring data quality and reliable delivery.
Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related field.
- Experience designing enterprise-scale data platforms and modern data architectures.
- Experience with data integration tools such as Azure Data Factory or similar platforms.
- Familiarity with cloud data warehouses such as Databricks, Snowflake, or Azure Fabric.
- Experience supporting analytics, reporting, or AI/ML workloads is highly desirable.
- Databricks, Azure, or cloud certifications are preferred.
- Strong problem-solving, communication, and technical leadership skills.
Technical Proficiency in:
- Databricks, Apache Spark, PySpark, Delta Lake
- Python, SQL, Scala (preferred)
- Cloud platforms: Azure (preferred), AWS, or GCP
- Azure Data Factory, Kafka, and modern data integration tools
- Data warehousing: Databricks, Snowflake, or Azure Fabric
- DevOps tools: Git, Azure DevOps, CI/CD pipelines
- Data architecture, ETL/ELT design, and performance optimization
What You’re Looking For:
Join a fast-growing organization that thrives on innovation and collaboration. You’ll work alongside talented, motivated colleagues in a global environment, helping clients solve their most critical business challenges. At OZ, your contributions matter – you’ll have the chance to be a key player in our growth and success. If you’re driven, bold, and eager to push boundaries, we invite you to join a company where you can truly make a difference.
About Us:
OZ is a 28-year-old global technology consulting, services, and solutions leader specializing in creating business-focused solutions for our clients by leveraging disruptive digital technologies and innovation.
OZ is committed to creating a continuum between work and life by allowing people to work remotely. We offer competitive compensation and a comprehensive benefits package. You’ll enjoy our work style within an incredible culture. We’ll give you the tools you need to succeed so you can grow and develop with us and become part of a team that lives by its core values.
Hi, Rameez here from BeaconFire. I hope you're doing well! We’re currently hiring for an exciting MERN/MEAN Developer role, and I wanted to reach out to see if you or someone in your network might be interested. This is a fantastic opportunity to work on high-impact projects using modern technologies in a collaborative and growth-oriented environment.
About the Company
BeaconFire, based in Central NJ, specializes in Software Development, Web Development, and Business Intelligence. We are looking for candidates with a strong background in Software Engineering or Computer Science for a Python/Node Developer position.
About the Role
The role involves developing websites and writing scalable, secure, maintainable code while collaborating with team members to achieve project goals.
Responsibilities
- Develop websites using HTML, CSS, Node.js, React.js, and Angular2+, among other tools;
- Write scalable, secure, maintainable code that powers our clients’ platforms;
- Create, deploy, and maintain automated system tests;
- Work with Testers to understand opened defects and resolve them in a timely manner;
- Support continuous improvement by investigating alternatives and new technologies and presenting these for architectural review;
- Collaborate effectively with other team members to accomplish shared user story and sprint goals;
- Invest time in continual professional development to stay up to date with new technological developments and programming languages;
- Discover and fix programming bugs;
- Other duties as assigned.
Qualifications
- Proficient understanding of HTML and CSS;
- Experience in programming language JavaScript or similar (e.g. Java, Python, C, C++, C#, etc.) and understanding of the software development life cycle;
- Basic knowledge of code versioning (e.g. Git, SVN);
- A passion for coding pixel perfect web pages;
- Good verbal communication and interpersonal skills.
Preferred Skills
- Bachelor's degree or higher in Computer Science or related fields;
- 0-1 year of practical experience in JavaScript coding;
- Familiarity with at least one JavaScript framework (Angular2+, React.js, Express.js);
- Experience with unit and integration testing of code, with an understanding of JavaScript testing frameworks like Jasmine, Cucumber, Mocha, and Karma;
- Experience providing REST/SOAP APIs for user interface consumption;
- Experience working within an Agile development methodology (Scrum).
BeaconFire is an E-verified company and provides equal employment opportunities (visa sponsorship provided).
Job Description
Role: QA Automation Engineer
Location: Mount Laurel, NJ (Onsite)
We are looking for a highly skilled SDET / QA Automation Engineer with strong experience in Python, JavaScript, and modern automation frameworks to support automation solutions and end-to-end network validation.
Key Skills Required:
Python Automation
JavaScript
SDET / QA Automation
Automation Frameworks (PyTest / Selenium / Playwright / Cypress)
Microservices Testing
API Testing
Networking / Cable Technologies Knowledge
End-to-End System Validation
Responsibilities:
• Develop automation solutions and test scripts for network platforms (a small PyTest sketch follows this list)
• Build and maintain automation frameworks using Python & JavaScript
• Validate end-to-end network components and behavior
• Develop automation microservices for testing infrastructure
• Collaborate with cross-functional teams and clients to ensure quality delivery
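For a sense of the API-testing work, here is a minimal PyTest-plus-requests sketch of an automated check against a network device management service. The base URL, endpoint, and response fields are hypothetical assumptions, not details from this posting.

```python
"""Minimal PyTest sketch: API-level check against a network service."""
import pytest
import requests

BASE_URL = "http://qa-lab.example.com/api/v1"  # hypothetical test endpoint

@pytest.fixture
def session():
    # Shared HTTP session so all tests send consistent headers.
    with requests.Session() as s:
        s.headers.update({"Accept": "application/json"})
        yield s

def test_device_status_endpoint(session):
    """The device status endpoint should answer 200 with a health field."""
    resp = session.get(f"{BASE_URL}/devices/modem-01/status", timeout=10)
    assert resp.status_code == 200
    body = resp.json()
    assert body.get("health") in {"online", "degraded", "offline"}
```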
About the Role
We are seeking a Backend Engineer to help build and maintain the backend services and APIs that power our proprietary AI SaaS CRM and LMS platforms.
You will work directly with our CTO, collaborate with the engineering team, and partner closely with our Product Manager to design, implement, and maintain scalable backend systems.
Our backend services are built primarily with:
- NestJS (TypeScript)
- Python
- Deployed across multiple AWS environments
This is a hands-on backend engineering role focused on API development, cloud deployment, distributed systems, and production-grade reliability. The role carries meaningful ownership - not just ticket execution.
What You’ll Do
- Work directly with the CTO on backend design and implementation decisions
- Partner closely with a Product Manager on sprint planning, backlog grooming, translating product requirements into technical solutions, and prioritizing customer-impacting improvements
- Design, build, and maintain backend API services using NestJS (TypeScript)
- Build and support backend services in Python
- Develop and maintain production-grade RESTful APIs
- Contribute to multi-environment deployments across AWS
- Use Terraform to manage our IaC (infrastructure as code)
- Work with CI/CD workflows and structured deployment procedures
- Follow and contribute to engineering documentation including development guidelines, environment configuration standards, security practices, and versioning and changelog management
- Implement and support asynchronous and event-driven systems (see the sketch after this list)
- Write clean, maintainable, well-tested code
- Participate in code reviews and maintain high engineering standards
- Debug and resolve production issues across distributed cloud environments
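As a minimal sketch of the asynchronous, event-driven pattern this role supports: a producer emits domain events onto a queue and a consumer processes them without blocking the event loop. In production the queue would typically be a broker (e.g., SQS or Redis streams); asyncio.Queue keeps the example self-contained, and the event names are made up.

```python
"""Minimal sketch of an asynchronous, event-driven worker in Python."""
import asyncio

async def producer(queue: asyncio.Queue) -> None:
    """Emit a few fake domain events, then signal completion."""
    for event in ({"type": "lead.created", "id": i} for i in range(3)):
        await queue.put(event)
    await queue.put(None)  # sentinel: no more events

async def consumer(queue: asyncio.Queue) -> None:
    """Process events one at a time without blocking the event loop."""
    while (event := await queue.get()) is not None:
        # Stand-in for real work: persist, call an API, fan out, etc.
        print(f"handled {event['type']} #{event['id']}")
    print("consumer drained")

async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue()
    await asyncio.gather(producer(queue), consumer(queue))

asyncio.run(main())
```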
What We’re Looking For (Required)
- 5+ years of backend engineering experience
- Strong proficiency in TypeScript and experience with NestJS
- Strong proficiency in Python
- Experience designing and implementing RESTful APIs
- Experience deploying and maintaining applications in AWS
- Familiarity with multi-environment deployments (dev, staging, UAT, production)
- Experience working with CI/CD pipelines
- Experience with relational databases (PostgreSQL)
- Familiarity with Docker or containerized workflows
- Experience working in GitHub-based workflows in a collaborative environment (pull requests, code reviews, branching strategies, and issue tracking)
- Comfortable working in an agile environment with JIRA and Monday
- Strong communication and problem-solving skills
- Experience building SaaS or multi-tenant platforms
Nice to Have / Strong Plus
- Familiarity with C# & C++
- Experience with Dentrix, OpenDental, or other dental practice management system (PMS) integrations
- Experience building greenfield SaaS or B2B software
- Experience with building on a Healthcare platform
- Familiarity with AI-enabled products or LLM integrations
- Experience with Redis or caching strategies
- Experience integrating third-party APIs
Why This Role Is Different
- Direct collaboration with the CTO on backend system design
- Close partnership with Product Management
- Opportunity to help shape a modern, AI SaaS platform for the healthcare industry
About Us
We're continuing to build a transformative healthcare accreditation platform that is revolutionizing how our clients and new hospitals manage compliance, quality improvement, and regulatory processes. Our platform combines cutting-edge technology with deep healthcare domain expertise to solve real problems for healthcare organizations nationwide.
The Opportunity
The goal is to have interns convert into full-time employees; therefore, you will be given full-time responsibilities from day one. In addition, you will be working in a high-velocity growth startup and will be required to move fast. You'll work directly with our engineering team on a production healthcare platform, gaining hands-on experience with enterprise-grade systems while making real contributions that impact our product and customers.
Compensation Structure: The base position is unpaid; however, qualified candidates may receive upfront equity compensation based on their experience level and demonstrated capabilities. We evaluate each applicant individually and offer equity packages commensurate with their potential contribution.
What You'll Do
During the internship, you may choose which of the following areas to focus on:
Application Testing & Quality Assurance
- Design and execute comprehensive test plans for our healthcare portal
- Perform manual testing across web applications, APIs, and integrations
- Identify and document bugs, usability issues, and edge cases
- Test healthcare compliance features (HIPAA, document security, audit trails)
Test Automation Development
- Build automated test suites using modern testing frameworks
- Develop API testing scripts for healthcare data integrations
- Create performance testing scenarios for document upload/processing
- Implement continuous testing pipelines with CI/CD integration
AI/ML Quality Support
- Collaborate with our AI team on document processing accuracy testing
- Help validate machine learning models for healthcare document extraction
- Design test datasets for training and validation of AI systems
- Analyze and report on AI/ML model performance and data quality
Data Engineering Quality Assurance
- Develop data quality monitoring and validation processes
- Create automated checks for data integrity across MongoDB systems (see the sketch after this list)
- Build dashboards and alerts for data quality metrics
- Support ETL pipeline testing and validation
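To illustrate the data-integrity work, here is a minimal Python sketch of automated checks over a MongoDB collection using pymongo. The database, collection, and field names (accreditation, documents, facility_id, uploaded_at, file_key) are hypothetical stand-ins for the platform's real schema.

```python
"""Minimal sketch: automated integrity checks on a MongoDB collection."""
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed local instance
docs = client["accreditation"]["documents"]

# Each check is a query matching documents that violate a quality rule.
checks = {
    "missing_facility_id": {"facility_id": {"$exists": False}},
    "missing_upload_time": {"uploaded_at": None},
    "empty_file_reference": {"file_key": ""},
}

for name, query in checks.items():
    bad = docs.count_documents(query)
    status = "OK" if bad == 0 else f"{bad} offending documents"
    print(f"{name}: {status}")
```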
Process Improvement & Strategy
- Analyze current QA processes and identify optimization opportunities
- Research and recommend new testing tools and methodologies
- Participate in technical decision-making and sprint planning
- Document QA best practices and create team knowledge base
What We're Looking For
Required Qualifications:
- Currently pursuing or recently completed degree in Computer Science, Engineering, or related field
- Strong understanding of software testing principles and methodologies
- Experience with at least one programming language (Python, JavaScript, Java, etc.)
- Basic knowledge of databases (SQL/NoSQL) and API testing
- Excellent problem-solving skills and attention to detail
- Strong communication skills and ability to work in a collaborative environment
Preferred Qualifications:
- Experience with test automation frameworks (Selenium, Pytest, Jest, etc.)
- Knowledge of healthcare IT, compliance requirements, or regulated industries
- Familiarity with cloud platforms (AWS) and DevOps practices
- Experience with data analysis, ETL processes, or machine learning
- Previous internship or project experience in QA/testing roles
Technical Skills We'd Love to See:
- Testing Tools: Selenium, Postman, Jest, Pytest, Cypress
- Programming: Python, JavaScript, SQL
- Databases: MongoDB, SQL databases
- Cloud/DevOps: AWS, Docker, CI/CD pipelines
- Data Tools: Pandas, data validation frameworks
- Version Control: Git, GitHub
What You'll Gain
Professional Development:
- Real Impact: Your work directly affects a production healthcare platform used by hospitals
- Mentorship: Work closely with senior engineers and receive structured feedback
- Healthcare Domain Knowledge: Learn about healthcare compliance, accreditation, and regulatory requirements
- Enterprise Technology: Gain experience with production-grade systems, security, and scalability
Technical Skills:
- Advanced testing methodologies and automation frameworks
- Healthcare data processing and compliance requirements
- AI/ML model testing and validation techniques
- Data engineering and quality assurance practices
- Modern DevOps and CI/CD practices
Career Opportunities:
- Immediate Value: Potential upfront equity compensation based on qualifications
- Strong potential for full-time conversion based on performance
- Network with healthcare technology professionals
- Portfolio of real-world healthcare technology projects
- Experience that's highly valued in the growing healthtech sector
Our Tech Stack
- Frontend: React, Modern CSS
- Backend: Node.js, TypeScript, Python, RESTful APIs
- Database: MongoDB, with future SQL integrations
- Cloud: AWS (EC2, S3, Lambda, RDS)
- AI/ML: Document processing, natural language processing
- Security: HIPAA compliance, encryption, audit logging
- DevOps: Docker, GitHub Actions, automated testing
Compensation & Equity
- Base Position: Unpaid educational internship
- Equity/Stock Compensation: Available upfront based on applicant qualifications and experience level
Our Hiring Process
We believe in a transparent and thorough selection process that respects your time while ensuring mutual fit:
- Initial Screening Call: We'll discuss your background, experience, and career goals, while providing an overview of the role and our team culture.
- Technical Interview: We'll have an in-depth discussion about your experience and explore related technical concepts. You should be prepared to walk through every aspect of quality assurance as it pertains to your resume.
Ready to apply? We look forward to hearing from you!
MedLaunch is an equal opportunity employer committed to diversity and inclusion.
We’re hiring a Data Insights Analyst to join a growing analytics team focused on turning large, complex datasets into clear, actionable insights that drive business decisions. This is a hands-on role for someone who enjoys digging into data, working with Python and SQL, and partnering with leaders to understand what’s really happening in the business. You’ll work across multiple functions and contribute directly to high-impact initiatives around forecasting, performance analysis, and strategic decision-making.
Keys to an Interview: Data Insights Analyst | CPG Manufacturing
- 2-5 years' Data Science and/or Business Analyst experience
- Master's Degree preferred
- Strong working experience with Python for data analysis (and exposure to machine learning is a major plus)
- Advanced SQL skills with the ability to pull and manipulate data from large data warehouses
- Ability to interpret existing dashboards and datasets and identify meaningful insights
- Clear communication skills and comfort explaining technical findings to non-technical stakeholders
- Comfortable working on-site, with flexibility
Key Responsibilities: Data Insights Analyst | CPG Manufacturing
- Analyze large, complex datasets to identify trends, opportunities, and risks across the business
- Leverage Python, SQL, Excel, and Power BI to deliver actionable insights and recommendations (a small pandas sketch follows this list)
- Build and enhance analytical models to support forecasting, budgeting, and strategic planning
- Develop, maintain, and improve dashboards and reporting used by leadership
- Clean, transform, and validate data to ensure accuracy and consistency
- Partner cross-functionally to understand business questions and translate them into data-driven solutions
- Present findings clearly and concisely to senior stakeholders
- Support automation and process improvements to increase analytical efficiency
- Contribute to high-visibility initiatives that influence growth and long-term strategy
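As a small example of the Python analysis this role involves, here is a minimal pandas sketch that aggregates sales to monthly totals and flags product lines with a sharp month-over-month drop. The CSV layout (date, product_line, units) is a hypothetical stand-in for data pulled from the warehouse via SQL.

```python
"""Minimal pandas sketch: month-over-month trend flagging."""
import pandas as pd

sales = pd.read_csv("sales.csv", parse_dates=["date"])  # hypothetical file

# Aggregate to monthly unit totals per product line.
monthly = (
    sales
    .assign(month=sales["date"].dt.to_period("M"))
    .groupby(["product_line", "month"])["units"]
    .sum()
    .reset_index()
)

# Flag product lines whose latest month dropped >10% vs the prior month.
monthly["pct_change"] = monthly.groupby("product_line")["units"].pct_change()
latest = monthly.groupby("product_line").tail(1)
flagged = latest[latest["pct_change"] < -0.10]
print(flagged[["product_line", "month", "pct_change"]])
```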