Vigenère Cipher Decoder Python Jobs: Hiring Now in the USA

1,019 positions found — Page 4

Technology Architect
✦ New
Salary not disclosed

W2 only (consultants must work on a W2 basis)

Job Title: Technology Architect | Analytics – Packages | Python – Big Data

Location: Hartford, CT 06156 (Onsite/Hybrid – Candidates willing to relocate will be considered)

Duration: 6 Months Contract (Extension possible)

Visa: Visa-independent candidates only (U.S. citizens)

Job Description:

We are looking for an experienced Technology Architect / Lead with strong hands-on expertise in GCP, Python, and ETL implementations. The ideal candidate will play a key role in designing, developing, and delivering scalable data solutions while collaborating with cross-functional teams.

Must Have Skills:

  • Strong experience with Google Cloud Platform (GCP)
  • Proficiency in Python
  • Hands-on experience in ETL project implementation

Nice to Have Skills:

  • End-to-end ETL implementation experience
  • Experience with SDLC lifecycle and Agile methodologies
  • Strong communication and stakeholder management skills

Key Responsibilities:

  • Design and develop scalable solutions using GCP and Python
  • Translate high-level architecture into low-level implementation designs
  • Collaborate with architects, business analysts, and stakeholders
  • Analyze business processes and recommend data-driven solutions
  • Maintain detailed documentation for applications and integrations
  • Provide support for production issues and platform upgrades
  • Drive end-to-end ETL implementations
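The must-have combination above (Python plus hands-on ETL implementation) boils down, at its core, to an extract-transform-load loop. A minimal, purely illustrative Python sketch (the field names and CSV payload are invented, and in a GCP setting the load step would typically write to BigQuery rather than an in-memory list):

```python
import csv
import io

def extract(raw_csv: str) -> list[dict]:
    """Extract: parse raw CSV rows into dicts."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows: list[dict]) -> list[dict]:
    """Transform: normalize types and drop incomplete records."""
    out = []
    for row in rows:
        if not row.get("member_id"):
            continue  # skip records missing the key field
        out.append({
            "member_id": row["member_id"].strip(),
            "claim_amount": round(float(row["claim_amount"]), 2),
        })
    return out

def load(rows: list[dict], target: list) -> int:
    """Load: append to a target (a stand-in for a warehouse insert)."""
    target.extend(rows)
    return len(rows)

raw = "member_id,claim_amount\nA1,12.50\n,9.99\nB2,100.456\n"
warehouse: list[dict] = []
loaded = load(transform(extract(raw)), warehouse)
```

Real pipelines add retries, schema validation, and idempotent loads, but the overall shape of the code stays the same.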

Experience Required:

  • 8+ years of IT experience with strong delivery background
Territory Manager, Chronic Pain Therapies- Austin
Salary not disclosed

Our company’s Neuromodulation division focuses on implantable, non-opioid therapies for:

  • Chronic pain (Spinal Cord Stimulation – SCS)
  • Movement disorders (Deep Brain Stimulation – DBS)

This is a highly clinical, procedure-driven space where representatives are deeply embedded with physicians and frequently present in the OR.

What This Role Actually Is

This is not an entry-level clinical specialist role.

This is a senior territory leadership role with:

  • Full territory ownership
  • Strategic responsibility
  • Revenue accountability
  • Mentorship of other Territory Managers
  • Capital equipment selling
  • Referral network development

It operates very much like a business owner model within our company.

Key Responsibilities (Decoded)

1⃣ Territory Strategy & Growth

You are responsible for:

  • Sales volume
  • Market penetration
  • Profitability
  • Growing referrals of eligible pain patients

This includes:

  • Expanding existing accounts
  • Opening new implanting physicians
  • Building referral pipelines from pain clinics

2⃣ Account Integration & KOL Development

You will:

  • Build trust with implanting physicians
  • Work closely with Clinical Specialists
  • Develop relationships with Key Opinion Leaders
  • Maintain a strong and consistent presence in accounts

This is a highly relationship-driven and credibility-based role.

3⃣ Capital Equipment & Implant Coordination

You will also:

  • Evaluate capital equipment opportunities
  • Coordinate implant schedules
  • Manage consigned inventory
  • Be accountable for fiscal performance

This adds operational and financial complexity to the role.

4⃣ Leadership Component

The position includes:

  • Training and mentoring new Territory Managers
  • Operating with a high degree of independence
  • Exercising authority in making sales commitments

This signals:

  • A senior-level expectation
  • Informal leadership responsibility
  • A potential succession-planning opportunity

Required Experience

  • 8+ years of medical device sales experience
  • 4+ years specifically in Neuromodulation

That neuromodulation experience is critical. This is not a role for someone new to the space.

This is a high-level territory seat.

What Makes This Role Challenging

  • Highly matrixed environment
  • Close collaboration with Clinical Specialists
  • Up to 50% patient interaction
  • Unpredictable procedure schedules
  • Travel-intensive
  • Tight deadlines

Because it is procedure-based, cases may be added with little notice.

Compensation Implication

While compensation is not listed, roles of this level typically include:

  • Strong base salary
  • Significant variable compensation
  • High six-figure earning potential
  • Car allowance
  • Comprehensive benefits package

Given the required experience, this is positioned as a high-income territory.

Ideal Candidate Profile

This role is best suited for someone who:

  • Has deep neuromodulation experience
  • Maintains strong relationships in pain or movement disorder markets
  • Wants full territory ownership
  • Can mentor junior team members
  • Is comfortable in OR settings
  • Can manage both operational and financial components of a territory
Radar Systems Engineer (Algorithms)
Salary not disclosed
Mountain View, CA 2 days ago

We're building safety-enhancing technology for aviation that will save lives. Automated aviation systems will enable a future where air transportation is safer, more convenient and fundamentally transformative to the way goods - and eventually people - move around the planet. We are a team of mission-driven engineers with experience across aerospace, robotics and self-driving cars working to make this future a reality.

As a Radar Systems Engineer, you will be part of our Radar Engineering team, a small, cross-functional group of motivated and experienced engineers responsible for designing, building, and testing cutting-edge phased array radar systems from concept to certified product. We enjoy a culture of information sharing and collaboration. Phased array radar systems have historically been reserved for specialized applications, but we're making this technology affordable to enable Detect and Avoid for widespread commercial applications. This role will focus on new advanced operational modes. A passion for revolutionary technology that makes aviation safer motivates us to come in every day.

Responsibilities

In your role as a Radar Systems Engineer, you will be responsible for analyzing phased array radar systems and developing radar processing algorithms to enable short- and long-range object tracking and radar image generation. You will be involved in all phases of development from conception to production, designing your algorithms in MATLAB and Python and implementing them in C++ on the target hardware. You will drive the system-level requirements of the phased array radar system and lead trade studies in collaboration with cross-functional teams. You will support integration of the radar on the aircraft, along with data collection and analysis during flight testing. You will own real-time processing of radar algorithms on an FPGA/DSP-based platform.
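As one concrete example of the processing involved: in an FMCW radar, a target's range follows directly from the measured beat frequency between the transmitted and received chirps. A hedged Python sketch of that relationship (the parameter values below are invented for illustration, not taken from this posting):

```python
C = 3.0e8  # speed of light, m/s

def fmcw_range(f_beat_hz: float, bandwidth_hz: float, chirp_s: float) -> float:
    """Convert a measured FMCW beat frequency to target range in meters:
    R = c * f_beat / (2 * S), where S is the chirp slope (bandwidth / duration)."""
    slope = bandwidth_hz / chirp_s  # Hz per second
    return C * f_beat_hz / (2.0 * slope)

# A 1 GHz chirp swept over 100 µs, with a 2 MHz beat tone observed:
rng = fmcw_range(2.0e6, 1.0e9, 100e-6)  # ≈ 30 m
```

In a production system this arithmetic sits at the end of an FFT-based beat-frequency estimator; the formula itself is the standard FMCW range relation.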

Basic Success Criteria

  • Electrical Engineering fundamentals, typically gained through a Bachelor of Science degree in Mechanical, Electrical, or Aerospace Engineering, or a related discipline

  • 8+ years of professional hands-on experience with Radar algorithms, radar system design, radar signal processing, integration, and test of a radar system

  • Ability to use C++, MATLAB, and Python (Python preferred)

  • Ability to troubleshoot, find root cause, and resolve issues

  • Pulsed and FMCW processing algorithm development

  • Experience in airborne radar testing and development

Preferred Criteria

  • Advanced degree in Science or Electrical Engineering

  • Experience developing system architectures and managing requirements for certification of Avionics

  • Creative problem solver who can bring multiple disciplines together, with the ability to assess risk and make design and development decisions without complete data

  • Experience integrating and troubleshooting various electronic sensors and components

  • Weather radar experience, airborne or ground-based

  • Real beam radar imaging and/or SAR processing

This role can be remote, or located at our facility in Mountain View, California.

Must be willing to travel 30% of the time.

The estimated salary range for this position is $180,000 to $260,000/annual salary + cash and stock option awards + benefits. At Reliable Robotics, we strive to provide competitive and rewarding compensation based on experience and expertise, as well as market conditions, location, and pay equity.

In addition to base compensation, Reliable Robotics offers stock options, employee medical, 401k contribution, great co-workers and a casual work environment.

This position requires access to information that is subject to U.S. export controls. An offer of employment will be contingent upon the applicant's capacity to perform in compliance with U.S. export control laws.

All applicants are asked to provide documentation that legally establishes status as a U.S. person or non-U.S. person (and nationalities in the case of a non-U.S. person). Where the applicant is not a U.S. person, meaning not a (i) U.S. citizen or national, (ii) U.S. lawful permanent resident, (iii) refugee under 8 U.S.C. § 1157, or (iv) asylee under 8 U.S.C. § 1158, or not otherwise permitted to access the export-controlled technology without U.S. government authorization, the Company reserves the right not to apply for an export license for such applicants whose access to export-controlled technology or software source code requires authorization and may decline to proceed with the application process and any offer of employment on that basis.

At Reliable Robotics, our goal is to be a diverse and inclusive workforce. As an Equal Opportunity Employer, we do not discriminate on the basis of race, religion, color, creed, ancestry, sex, gender (including pregnancy, childbirth, breastfeeding, or related medical conditions), gender identity, gender expression, sexual orientation, age, non-disqualifying physical or mental disability or medical conditions, national origin, military or veteran status, genetic information, marital status, or any other basis covered by applicable law. All employment and promotion is decided on the basis of qualifications, merit, and business need.

If you require reasonable accommodation in completing an application, interviewing, completing any pre-employment testing, or otherwise participating in the employee selection process, please direct your inquiries to

Data Analyst
Salary not disclosed
Raleigh, NC 2 days ago
Job Title: Data Analyst

Length of Contract: 6 months

Location: Remote (Eastern time zone)

What are the top 3-5 skills, experience or education required for this position:

a. Proficiency in databases (SQL) and coding in R/Python

b. Experience with API development

c. Familiarity with AI techniques and strong curiosity for new technologies

d. Experience managing and curating bioinformatics datasets (BulkRNAseq, Proteomics, scRNAseq, CRISPR)

e. Code management, documentation, and version control (e.g., GitHub)

Job Overview: As a Data Analyst, you'll drive data quality and consistency in our central hub for storing OMICS data, address impactful data loading and curation projects and help improve and automate processes using agentic AI. Working closely with researchers, you'll ensure their data needs are met and help accelerate scientific discovery.

Key Responsibilities:

- Contribute to important data loading and curation projects for the department's Omics data server

- Address data quality and consistency issues in the CRISPR database.

- Apply agentic AI approaches for data loading and querying OMICS data

- Database Interaction: Use PostgreSQL to build, manage, and query large genomic datasets.

- API Development: Design and implement APIs for improved data accessibility and integration across platforms.

- Automation: Use Python and R to automate and optimize data workflows, prioritizing data quality and integrity.

- ETL Process Management: Develop and execute ETL processes to integrate high-value datasets in line with organizational standards.

- Collaboration: Work with cross-functional teams and research scientists to gather requirements, align to common data model standards, and facilitate effective data management.

- Documentation: Maintain comprehensive documentation and version control for reproducibility and teamwork.
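Much of the curation work described above amounts to automated data-quality checks run before a dataset is loaded. A small illustrative Python sketch (the sample-sheet schema and field names are hypothetical, not from this posting):

```python
def validate_samples(records: list[dict]) -> dict:
    """Flag common curation issues in a (hypothetical) scRNAseq sample sheet:
    duplicate sample IDs and records with missing or empty required fields."""
    required = {"sample_id", "assay", "organism"}
    seen, duplicates, incomplete = set(), [], []
    for rec in records:
        sid = rec.get("sample_id")
        if sid in seen:
            duplicates.append(sid)
        seen.add(sid)
        # a field is a problem if it's absent or present but empty
        if required - rec.keys() or not all(rec.get(f) for f in required):
            incomplete.append(sid)
    return {"duplicates": duplicates, "incomplete": incomplete}

samples = [
    {"sample_id": "S1", "assay": "scRNAseq", "organism": "human"},
    {"sample_id": "S1", "assay": "scRNAseq", "organism": "human"},
    {"sample_id": "S2", "assay": "", "organism": "mouse"},
]
report = validate_samples(samples)
```

A report like this would typically gate the ETL step: only clean records proceed to the PostgreSQL load, and the rest go back to the submitting researcher.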

Required qualifications:

- Master's degree in computer science, bioinformatics, or a related field, with 3+ years of relevant experience.

- Proven experience working with databases (PostgreSQL proficiency).

- Advanced skills in Python and R for automation and data manipulation.

- Experience handling and curating bioinformatics datasets (BulkRNAseq, Proteomics, scRNAseq, CRISPR).

- Code management, documentation, and version control using GitHub.

- Curiosity and basic knowledge of AI techniques applicable to data loading and querying.

- Excellent communication skills and a collaborative mindset.

- Demonstrated experience with AWS resources.

- Experience in API development
Industry Specialist - Risk, Risk
✦ New
🏢 Amazon
Salary not disclosed
Seattle, WA 1 day ago
The Special Projects & Investigations team is looking for an experienced, motivated industry specialist with a background in risk, digital fraud, or compliance who also has advanced data analysis skills (SQL, Python, Machine Learning, Data Science). This role will manage critical, high-impact projects and scale their findings through technology and analytics to interpret risks across Amazon’s entire business segment, or apply other industry experience to develop feasible, systematic solutions to endemic problems.
Selling Partner Trust and Store Integrity (TSI) creates a trustworthy shopping experience across Amazon stores worldwide by protecting customers, brands, selling partners, vendors, and Amazon from fraud, counterfeit, and abuse. The Special Projects and Investigations (SPI) team within TSI protects Amazon customers and stores by applying systems thinking to understand how networks of users interact with multiple services. We target large-scale ecosystems that pose store-level risks and mitigate those ecosystems through internal and external means. Our growth requires highly skilled candidates who move fast, have an entrepreneurial spirit to create new solutions, demonstrate tenacity to get things done, thrive in ambiguity and change, and can break down and solve complex problems.

We catch bad actors and stop online fraud. It’s fun. It’s hard. It matters. We are passionate about protecting our selling partners and customers from bad actors and want a candidate that shares that passion. Amazon is one of the world’s most trusted companies. Help us keep it that way. To achieve this, the ideal candidate should be passionate about use of advanced data analytics and technology approaches to identify patterns and establish connections to uncover process and technology gaps and prevent fraud across Amazon stores worldwide. Your decisions are not only fundamental to helping protect customers and selling partners but will help maintain the health of Amazon’s catalog and product listings ecosystem.

Key job responsibilities
• Complete risk analyses and manipulate data in complex data sets (SQL, Python, R etc.)
• Use high-level judgment to inform our most complex enforcement decisions
• Identify gaps and risks in Amazon's current mechanisms and policies and recommend solutions to product/policy owning teams.
• Use data and/or technical skills to discover new ways to scale deep dive signals resulting in the identification of many bad actors and sizing the issue
• Owning the complete life cycle of one or more complex problems - from identification through scaling the solutions
• Break problems into manageable pieces, ruthlessly prioritizing, and delivering results in an ambiguous environment
• Conduct large scale deep dives to derive insights about tactics used to conduct abuse on our stores, identifying gaps and risk in Amazon's current mechanisms, systems, and policies
• Write documents for partner teams and executives that identify problems, propose technical solutions, and drive alignment among stakeholders
• Own partnerships with stakeholder teams and guide appropriate trade-offs, clearly communicate goals, roles and responsibilities.
• Design and deploy agentic AI systems to automate complex workflows, scale pattern detection, and accelerate enforcement decisions across high-volume abuse scenarios
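One common way to "scale a deep dive signal," as described above, is to turn a manually discovered pattern into an automated detector. A toy Python sketch using a simple z-score rule (the account names, counts, and threshold are all invented for illustration and are not Amazon's actual method):

```python
from statistics import mean, pstdev

def flag_outliers(activity: dict[str, int], z_cutoff: float = 2.0) -> list[str]:
    """Flag accounts whose event counts sit far above the population mean,
    a toy stand-in for scaling a manually discovered abuse signal."""
    counts = list(activity.values())
    mu, sigma = mean(counts), pstdev(counts)
    if sigma == 0:
        return []  # no variation, nothing stands out
    return [acct for acct, n in activity.items() if (n - mu) / sigma > z_cutoff]

# Nine ordinary accounts and one with anomalously high activity:
activity = {f"acct_{i}": 10 for i in range(9)}
activity["acct_x"] = 200
suspects = flag_outliers(activity)
```

A production detector would replace the z-score with richer features and models, but the workflow (finding → feature → automated sweep) is the same.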

A day in the life
Your day might involve diving deep into data to uncover emerging fraud patterns, collaborating with teams across Amazon to implement protective solutions, or developing new detection methods. You'll balance independent analytical work with team collaboration, sharing insights and supporting colleagues in our shared mission.

About the team
Our team is composed of practitioners of fraud and abuse, working to understand bad actor ecosystems using threat intelligence analytics and technical skills. We complement specialized industry skills with broad risk experience gathered through years of practice to deliver results - we wear a lot of hats and take ownership of hard-to-solve problem areas whenever possible. We speak 12 languages, write code in 3 (mostly self-taught, on the job), and celebrate learning and taking risks. We encourage experimentation and curiosity while supporting each other to constantly learn and grow.

Our work is to solve hard puzzles and identify what hasn’t already been discovered - typically with data and always with a lot of persistence and curiosity. If you like the sound of that, come join us.

Basic qualifications:
- Bachelor’s or postgraduate degree in Information Security, Computer Science, Data Science/Analytics, Engineering, Mathematics, Statistics, or a related discipline
- 3+ years of relevant industry experience in risk or fraud investigations, regulatory compliance, ecommerce, analytics, or security
- Proficient with deriving insights from big data using SQL and experienced in manipulating/processing data with Python
- Proven ability to deliver complex projects across multiple teams

Preferred qualifications:
- Experience working in e-commerce organizations
- Experience working within fraud, compliance, law enforcement, or intelligence organizations
- Experience with AWS services like Redshift, Neptune, or SageMaker
- Master’s degree in, or practical experience with, data science or machine learning
- Excellent written and verbal communication skills to communicate security and business risk to a broad range of technical and non-technical audiences
- High level of integrity and discretion to handle confidential information
- Exceptional ownership and bias for action: willing to move quickly and decisively
- Proven ability to problem-solve in large/complex/technical systems

Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

The base salary range for this position is listed below. Your Amazon package will include sign-on payments and restricted stock units (RSUs). Final compensation will be determined based on factors including experience, qualifications, and location. Amazon also offers comprehensive benefits including health insurance (medical, dental, vision, prescription, Basic Life & AD&D insurance and an option for Supplemental Life plans, EAP, Mental Health Support, Medical Advice Line, Flexible Spending Accounts, Adoption and Surrogacy Reimbursement coverage), 401(k) matching, paid time off, and parental leave. Location: Seattle, WA. Base salary is stated in USD annually.
Databricks Architect/ Senior Data Engineer
🏢 OZ
Salary not disclosed
Boca Raton, FL 2 days ago

OZ – Databricks Architect/ Senior Data Engineer


Note: Only applications from U.S. citizens or lawful permanent residents (Green Card Holders) will be considered.


We believe work should be innately rewarding and a team-building venture. Working with our teammates and clients should be an enjoyable journey where we can learn, grow as professionals, and achieve amazing results. Our core values revolve around this philosophy. We are relentlessly committed to helping our clients achieve their business goals, leapfrog the competition, and become leaders in their industry. What drives us forward is the culture of creativity combined with a disciplined approach, passion for learning & innovation, and a ‘can-do’ attitude!


What We're Looking For:

We are seeking a highly experienced Databricks professional with deep expertise in data engineering, distributed computing, and cloud-based data platforms. The ideal candidate is both an architect and a hands-on engineer who can design scalable data solutions while actively contributing to development, optimization, and deployment.


This role requires strong technical leadership, a deep understanding of modern data architectures, and the ability to implement best practices in DataOps, performance optimization, and data governance.


Experience with modern AI/GenAI-enabled data platforms and real-time data processing environments is highly desirable.


Position Overview:

The Databricks Senior Data Engineer will play a critical role in designing, implementing, and optimizing enterprise-scale data platforms using the Databricks Lakehouse architecture. This role combines architecture leadership with hands-on engineering, focusing on building scalable, secure, and high-performance data pipelines and platforms. The ideal candidate will establish coding standards, define data architecture frameworks such as the Medallion Architecture, and guide the end-to-end development lifecycle of modern data solutions.


This individual will collaborate with cross-functional stakeholders, including data engineers, BI developers, analysts, and business leaders, to deliver robust data platforms that enable advanced analytics, reporting, and AI-driven decision-making.


Key Responsibilities:

  • Architecture & Design: Architect and design scalable, reliable data platforms and complex ETL/ELT and streaming workflows for the Databricks Lakehouse Platform (Delta Lake, Spark).
  • Hands-On Development: Write, test, and optimize code in Python, PySpark, and SQL for data ingestion, transformation, and processing.
  • DataOps & Automation: Implement CI/CD, monitoring, and automation (e.g., with Azure DevOps, DBX) for data pipelines.
  • Stakeholder Collaboration: Work with BI developers, analysts, and business users to define requirements and deliver data-driven solutions.
  • Performance Optimization: Tune delta tables, Spark jobs, and SQL queries for maximum efficiency and scalability.
  • GenAI Applications Development: It is a big plus to have experience in GenAI application development
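The Medallion Architecture referenced in this posting stages data through bronze (raw), silver (validated), and gold (aggregated) layers. A toy pure-Python sketch of that flow (in a real Databricks pipeline each layer would be a Delta table transformed with PySpark; all names and values here are invented):

```python
# Bronze layer: raw ingested events, kept exactly as received.
bronze = [
    {"order_id": "1", "amount": "19.99", "region": "east"},
    {"order_id": "2", "amount": "bad",   "region": "west"},
    {"order_id": "3", "amount": "5.01",  "region": "east"},
]

def to_silver(rows):
    """Silver layer: validated, typed records; malformed rows are dropped."""
    silver = []
    for r in rows:
        try:
            silver.append({**r, "amount": float(r["amount"])})
        except ValueError:
            pass  # a real pipeline would quarantine these for review
    return silver

def to_gold(rows):
    """Gold layer: a business-level aggregate, here revenue per region."""
    totals = {}
    for r in rows:
        totals[r["region"]] = totals.get(r["region"], 0.0) + r["amount"]
    return totals

gold = to_gold(to_silver(bronze))
```

In Databricks proper, `to_silver` and `to_gold` would be Spark transformations writing Delta tables, but the layered contract (raw in, validated middle, aggregated out) is the same.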


Requirements:

  • 8+ years of experience in data engineering, with strong hands-on expertise in Databricks and Apache Spark.
  • Proven experience designing and implementing scalable ETL/ELT pipelines in cloud environments.
  • Strong programming skills in Python and SQL; experience with PySpark required.
  • Hands-on experience with Databricks Lakehouse, Delta Lake, and distributed data processing.
  • Experience working with cloud platforms such as Microsoft Azure, AWS, or GCP (Azure preferred).
  • Experience with CI/CD pipelines, Git, and DevOps practices for data engineering.
  • Strong understanding of data architecture, data modeling, and performance optimization.
  • Experience working with cross-functional teams to deliver enterprise data solutions.
  • Tackles complex data challenges, ensuring data quality and reliable delivery.


Qualifications:

  • Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related field.
  • Experience designing enterprise-scale data platforms and modern data architectures.
  • Experience with data integration tools such as Azure Data Factory or similar platforms.
  • Familiarity with cloud data warehouses such as Databricks, Snowflake, or Azure Fabric.
  • Experience supporting analytics, reporting, or AI/ML workloads is highly desirable.
  • Databricks, Azure, or cloud certifications are preferred.
  • Strong problem-solving, communication, and technical leadership skills.


Technical Proficiency in:

  • Databricks, Apache Spark, PySpark, Delta Lake
  • Python, SQL, Scala (preferred)
  • Cloud platforms: Azure (preferred), AWS, or GCP
  • Azure Data Factory, Kafka, and modern data integration tools
  • Data warehousing: Databricks, Snowflake, or Azure Fabric
  • DevOps tools: Git, Azure DevOps, CI/CD pipelines
  • Data architecture, ETL/ELT design, and performance optimization


What You’re Looking For:

Join a fast-growing organization that thrives on innovation and collaboration. You’ll work alongside talented, motivated colleagues in a global environment, helping clients solve their most critical business challenges. At OZ, your contributions matter – you’ll have the chance to be a key player in our growth and success. If you’re driven, bold, and eager to push boundaries, we invite you to join a company where you can truly make a difference.


About Us:

OZ is a 28-year-old global technology consulting, services, and solutions leader specializing in creating business-focused solutions for our clients by leveraging disruptive digital technologies and innovation.


OZ is committed to creating a continuum between work and life by allowing people to work remotely. We offer competitive compensation and a comprehensive benefits package. You’ll enjoy our work style within an incredible culture. We’ll give you the tools you need to succeed so you can grow and develop with us and become part of a team that lives by its core values.

MERN/MEAN Stack Developer
Salary not disclosed
Philadelphia, PA 2 days ago

Hi, Rameez here from BeaconFire. I hope you're doing well! We’re currently hiring for an exciting MERN/MEAN Developer role, and I wanted to reach out to see if you or someone in your network might be interested. This is a fantastic opportunity to work on high-impact projects using modern technologies in a collaborative, growth-oriented environment.


About the Company

BeaconFire is based in Central NJ and specializes in Software Development, Web Development, and Business Intelligence. We are looking for candidates with a strong background in Software Engineering or Computer Science for a Python/Node Developer position.


About the Role

The role involves developing websites and writing scalable, secure, maintainable code while collaborating with team members to achieve project goals.


Responsibilities

  • Develop websites using HTML, CSS, Node.js, React.js, and Angular2+, among other tools;
  • Write scalable, secure, maintainable code that powers our clients’ platforms;
  • Create, deploy, and maintain automated system tests;
  • Work with Testers to understand opened defects and resolve them in a timely manner;
  • Support continuous improvement by investigating alternative technologies and presenting these for architectural review;
  • Collaborate effectively with other team members to accomplish shared user story and sprint goals;
  • Invest time in continuous professional development to stay up to date with new technological developments and programming languages;
  • Discover and fix programming bugs;
  • Other duties as assigned.


Qualifications

  • Proficient understanding of HTML and CSS;
  • Experience in programming language JavaScript or similar (e.g. Java, Python, C, C++, C#, etc.) and understanding of the software development life cycle;
  • Basic knowledge of code versioning (e.g. Git, SVN);
  • A passion for coding pixel perfect web pages;
  • Good verbal communication and interpersonal skills.


Required Skills

  • Proficient understanding of HTML and CSS;
  • Experience in programming language JavaScript or similar (e.g. Java, Python, C, C++, C#, etc.) and understanding of the software development life cycle;
  • A passion for coding pixel perfect web pages;
  • Good verbal communication and interpersonal skills.


Preferred Skills

  • Bachelor's degree or higher in Computer Science or related fields;
  • 0-1 year of practical experience in JavaScript coding;
  • Familiarity with at least one JavaScript framework (Angular2+, React.js, Express.js);
  • Experience with unit and integration testing of code, with an understanding of JavaScript testing frameworks like Jasmine, Cucumber, Mocha, and Karma;
  • Experience providing REST/SOAP APIs for user interface consumption;
  • Experience working within an Agile (Scrum) development methodology.


BeaconFire is an E-verified company and provides equal employment opportunities (visa sponsorship provided).



Quality Assurance Automation Engineer
✦ New
Salary not disclosed
Mount Laurel, NJ 1 day ago

Job Description

Role: QA Automation Engineer

Location: Mount Laurel, NJ (Onsite)


We are looking for a highly skilled SDET / QA Automation Engineer with strong experience in Python, JavaScript, and modern automation frameworks to support automation solutions and end-to-end network validation.


Key Skills Required:

Python Automation

JavaScript

SDET / QA Automation

Automation Frameworks (PyTest / Selenium / Playwright / Cypress)

Microservices Testing

API Testing

Networking / Cable Technologies Knowledge

End-to-End System Validation


Responsibilities:

• Develop automation solutions and test scripts for network platforms

• Build and maintain automation frameworks using Python & JavaScript

• Validate end-to-end network components and behavior

• Develop automation microservices for testing infrastructure

• Collaborate with cross-functional teams and clients to ensure quality delivery
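The automation-framework work described above typically starts with small, deterministic PyTest cases around the components being validated. A minimal sketch (the status parser under test is hypothetical, invented here for illustration):

```python
# A minimal PyTest-style sketch: the function under test stands in for a
# network-device status parser (the payload shape is invented).
def parse_status(payload: dict) -> str:
    """Classify a device status payload as 'online', 'degraded', or 'offline'."""
    if not payload.get("reachable"):
        return "offline"
    if payload.get("packet_loss_pct", 0) > 1.0:
        return "degraded"
    return "online"

def test_online():
    assert parse_status({"reachable": True, "packet_loss_pct": 0.2}) == "online"

def test_degraded():
    assert parse_status({"reachable": True, "packet_loss_pct": 5.0}) == "degraded"

def test_offline():
    assert parse_status({"reachable": False}) == "offline"
```

Run with `pytest <file>.py`; PyTest discovers the `test_*` functions automatically. End-to-end suites layer API calls and network fixtures on top of the same pattern.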

Back End Engineer
✦ New
Salary not disclosed
Clearwater, FL 1 day ago

About the Role


We are seeking a Backend Engineer to help build and maintain the backend services and APIs that power our proprietary AI SaaS CRM and LMS platforms.

You will work directly with our CTO, collaborate with the engineering team, and partner closely with our Product Manager to design, implement, and maintain scalable backend systems.


Our backend services are built primarily with:

  • NestJS (TypeScript)
  • Python
  • Deployed across multiple AWS environments


This is a hands-on backend engineering role focused on API development, cloud deployment, distributed systems, and production-grade reliability. The role has meaningful ownership - not just ticket execution.


What You’ll Do

  • Work directly with the CTO on backend design and implementation decisions
  • Partner closely with a Product Manager on sprint planning, backlog grooming, translating product requirements into technical solutions, and prioritizing customer-impacting improvements
  • Design, build, and maintain backend API services using NestJS (TypeScript)
  • Build and support backend services in Python
  • Develop and maintain production-grade RESTful APIs
  • Contribute to multi-environment deployments across AWS
  • Use Terraform to manage our infrastructure as code (IaC)
  • Work with CI/CD workflows and structured deployment procedures
  • Follow and contribute to engineering documentation including development guidelines, environment configuration standards, security practices, and versioning and changelog management
  • Implement and support asynchronous and event-driven systems
  • Write clean, maintainable, well-tested code
  • Participate in code reviews and maintain high engineering standards
  • Debug and resolve production issues across distributed cloud environments


What We’re Looking For (Required)

  • 5+ years of backend engineering experience
  • Strong proficiency in TypeScript and experience with NestJS
  • Strong proficiency in Python
  • Experience designing and implementing RESTful APIs
  • Experience deploying and maintaining applications in AWS
  • Familiarity with multi-environment deployments (dev, staging, UAT, production)
  • Experience working with CI/CD pipelines
  • Experience with relational databases (PostgreSQL)
  • Familiarity with Docker or containerized workflows
  • Experience working in GitHub-based workflows in a collaborative environment (pull requests, code reviews, branching strategies, and issue tracking)
  • Comfortable working in an agile environment with Jira and Monday.com
  • Strong communication and problem-solving skills
  • Experience building SaaS or multi-tenant platforms


Nice to Have / Strong Plus

  • Familiarity with C# & C++
  • Experience with Dentrix, OpenDental, or other dental practice management systems (PMS)
  • Experience building greenfield SaaS or B2B software
  • Experience building on a healthcare platform
  • Familiarity with AI-enabled products or LLM integrations
  • Experience with Redis or caching strategies
  • Experience integrating third-party APIs


Why This Role Is Different

  • Direct collaboration with the CTO on backend system design
  • Close partnership with Product Management
  • Opportunity to help shape a modern AI SaaS platform for the healthcare industry
Not Specified
Data Engineer Level 2 - 26-00273
✦ New
Salary not disclosed
Cincinnati, OH 4 hours ago

Job Description

LeadStack Inc. is an award-winning, certified minority-owned (MBE) staffing services provider and one of the nation's fastest-growing providers of contingent workforce solutions. As a recognized industry leader in contingent workforce solutions and a certified Great Place to Work, we're proud to partner with some of the most admired Fortune 500 brands in the world.


Data Engineer Level 2

Location: Blue Ash, OH 45241

Duration: 6 months (Contractor)

Pay Rate: $55–$75/hr (W2)

Interview Process: In-person interviews required at Blue Ash, OH location.


Overview

Join our team to build modern data solutions in Azure! We're seeking a skilled Data Engineer with hands-on expertise in Databricks, Spark, Python, and cloud DataOps. You'll design scalable data pipelines, automate infrastructure with Terraform/GitHub Actions, and treat data as an enterprise asset—collaborating on CI/CD, governance, and optimization for reliable, secure analytics.

Key Responsibilities

  • Analyze, design, and develop Azure-based data products, pipelines, and architecture using Databricks, Spark, PySpark, Python, and SQL.
  • Optimize Spark/PySpark pipelines for performance (e.g., data skew, partitioning, caching, shuffles).
  • Build and maintain Delta Lake tables/models for analytical/operational use cases, including Delta Live Tables (DLT) or Databricks SQL.
  • Provision cloud/Databricks resources via Terraform (IaC) and manage GitHub-based CI/CD workflows with GitHub Actions.
  • Implement Git workflows for notebooks/jobs; troubleshoot clusters, jobs, and pipelines for reliability.
  • Collaborate on data governance (e.g., Purview, Unity Catalog), lineage, cataloging, and enterprise standards.
  • Deploy Azure fixes/upgrades; mentor on best practices; create diagrams/specs; support stakeholders and data strategy.

Requirements

  • 5+ years of experience as a Data Engineer.
  • Strong hands-on with Azure Databricks, Spark/PySpark, Python, SQL, and databases.
  • Experience with Delta Live Tables (DLT), Databricks SQL, Azure Functions, messaging/orchestration tools.
  • Proficiency in Terraform (IaC), GitHub/GitHub Actions (CI/CD, version control).
  • Azure cloud data services integration; monitoring/optimizing Databricks clusters/workflows.
  • Knowledge of distributed computing (partitions, joins, shuffles); data governance tools (Purview, Unity Catalog).
  • SDLC familiarity; ability to manage priorities independently.


To learn more about current opportunities at LeadStack, please visit our website. Should you have any questions, feel free to call me at (513) 318-4502 or send me an email.

Not Specified
jobs by JobLookup