NoneType Object Is Not Subscriptable Python Error: Jobs Hiring Now in the USA

33,420 positions found

Senior Automation Engineer - Python/QNX
✦ New
Salary not disclosed
Hillsboro, OR 1 day ago

Job Title: Senior Automation Engineer

Location: Hillsboro, Oregon

Job Type: Full-Time


Job Description:

We are seeking a highly experienced Senior Automation Engineer to join our advanced software and embedded systems team. The ideal candidate will have deep expertise in Python automation (Pytest), Robot Framework, and QNX environments, with strong skills in UX/UI automation and testing. This role involves enhancing and migrating automation frameworks, debugging complex integrations, and working closely with cross-functional teams to deliver high-quality test automation solutions for embedded systems and entertainment platforms.

Key Responsibilities

Automation Framework Development

  • Architect, develop, and maintain automation frameworks primarily using Python and Pytest.
  • Lead migration of existing Robot Framework tests to Python/Pytest equivalents.
  • Build reusable fixtures, utilities, wrapper layers, and automation components to support large test suites.
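The reusable-fixture pattern described in the bullets above can be sketched in Pytest roughly as follows. Everything here is illustrative: `DeviceClient`, its host name, and its methods are hypothetical stand-ins, not an API from the posting.

```python
import pytest


class DeviceClient:
    """Hypothetical wrapper layer around a device connection."""

    def __init__(self, host):
        self.host = host
        self.connected = False

    def connect(self):
        # A real client would open a socket/serial link to the target here.
        self.connected = True
        return self

    def send(self, command):
        # Echo a canned response for illustration; real code talks to hardware.
        if not self.connected:
            raise RuntimeError("not connected")
        return f"{self.host}:{command}:OK"

    def close(self):
        self.connected = False


@pytest.fixture
def device():
    """Reusable fixture: one connected client per test, torn down afterwards."""
    client = DeviceClient("qnx-target").connect()
    yield client
    client.close()


def test_send_returns_ok(device):
    assert device.send("reboot").endswith("OK")
```

Factoring the wrapper out of the fixture keeps the same `DeviceClient` usable from both Pytest suites and migrated Robot Framework keywords.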

Test Execution & Optimization

  • Analyze and interpret Robot Framework keyword libraries; enable efficient reuse within Python-based executions.
  • Optimize hybrid execution flows involving both Pytest and Robot Framework assets.
  • Execute and maintain UI and device automation tests using tools such as Appium, Selenium, or equivalent frameworks.

Cross-Layer Debugging & Integration

  • Independently debug cross-layer automation issues involving Python, Robot Framework, device tools, and operating systems.
  • Integrate automation frameworks with CI/CD pipelines and tools (e.g., Jenkins, GitLab, Azure DevOps).

Collaboration & Mentorship

  • Mentor and guide junior automation engineers, establishing good coding practices, test design patterns, and quality standards.
  • Work collaboratively with software engineers, product developers, and QA teams to enhance automation coverage and reliability.

System & Environment Interaction

  • Work with QNX OS, virtualization systems (Hypervisor), and cloud environments (AWS).
  • Engage with hardware interfacing (USB, Ethernet, multimedia interfaces) and hardware simulation/fault-injection where applicable (nice-to-have).

Required Skills & Experience

  • 7–10 years of experience in automation engineering, with 5+ years of hands-on Python automation using Pytest.
  • Practical experience with Robot Framework including keywords, variables, resources, and large keyword repositories.
  • Strong skills in UI automation using tools such as Appium/Selenium.
  • Solid understanding of modular test design and maintainable patterns (page-object, keyword patterns).
  • Experience with the QNX (POSIX) operating system, virtualization (e.g., Hypervisor), and cloud-based environments (preferably AWS).
  • Good understanding of Git, branching strategies, and CI/CD workflows.
  • Proven ability to debug complex, multi-layered test automation environments.

Preferred Qualifications

  • Exposure to embedded systems, aviation, or high-volume regression environments.
  • Experience in framework migration, cross-framework interoperability, or keyword reuse models.
  • Background in hardware interactions and media/UX systems (multimedia I/O, touch interactions).
  • Familiarity with fault injection tools and hardware simulation techniques.
  • Bachelor’s degree in Computer Science, Electronics, or related technical field.

Nice-to-Have Skills

  • Hardware interfacing (USB, Ethernet), multimedia interfaces (touch, audio/video).
  • Fault-injection and hardware simulation experience.
  • Knowledge of peripheral communication protocols (e.g., GMSL, IP).
Not Specified
Machine Learning Engineer | Python | Pytorch | Distributed Training | Optimisation | GPU | Hybrid, San Jose, CA
✦ New
🏢 Enigma
Salary not disclosed

Machine Learning Engineer | Python | Pytorch | Distributed Training | Optimisation | GPU | Hybrid, San Jose, CA


Title: Machine Learning Engineer

Location: San Jose, CA

Responsibilities:

  • Productize and optimize models from Research into reliable, performant, and cost-efficient services with clear SLOs (latency, availability, cost).
  • Scale training across nodes/GPUs (DDP/FSDP/ZeRO, pipeline/tensor parallelism) and own throughput/time-to-train using profiling and optimization.
  • Implement model-efficiency techniques (quantization, distillation, pruning, KV-cache, Flash Attention) for training and inference without materially degrading quality.
  • Build and maintain model-serving systems (vLLM/Triton/TGI/ONNX/TensorRT/AITemplate) with batching, streaming, caching, and memory management.
  • Integrate with vector/feature stores and data pipelines (FAISS/Milvus/Pinecone/pgvector; Parquet/Delta) as needed for production.
  • Define and track performance and cost KPIs; run continuous improvement loops and capacity planning.
  • Partner with ML Ops on CI/CD, telemetry/observability, model registries; partner with Scientists on reproducible handoffs and evaluations.


Educational Qualifications:

  • Bachelor's in Computer Science, Electrical/Computer Engineering, or a related field required; Master's preferred (or equivalent industry experience).
  • Strong systems/ML engineering with exposure to distributed training and inference optimization.


Industry Experience:

  • 3–5 years in ML/AI engineering roles owning training and/or serving in production at scale.
  • Demonstrated success delivering high-throughput, low-latency ML services with reliability and cost improvements.
  • Experience collaborating across Research, Platform/Infra, Data, and Product functions.


Technical Skills:

  • Familiarity with deep learning frameworks: PyTorch (primary), TensorFlow.
  • Exposure to large model training techniques (DDP, FSDP, ZeRO, pipeline/tensor parallelism); distributed training experience a plus.
  • Optimization: experience profiling and optimizing code execution and model inference, including quantization (PTQ/QAT/AWQ/GPTQ), pruning, distillation, KV-cache optimization, and Flash Attention.
  • Scalable serving: autoscaling, load balancing, streaming, batching, caching; collaboration with platform engineers.
  • Data & storage: SQL/NoSQL, vector stores (FAISS/Milvus/Pinecone/pgvector), Parquet/Delta, object stores.
  • Ability to write performant, maintainable code.
  • Understanding of the full ML lifecycle: data collection, model training, deployment, inference, optimization, and evaluation.
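Post-training quantization (PTQ), one of the techniques listed above, boils down to mapping float weights onto a small integer grid and accepting a bounded rounding error. A minimal symmetric int8 round-trip, framework-free and purely illustrative:

```python
def quantize_int8(weights):
    """Symmetric post-training quantization: one scale maps floats to int8."""
    # `or 1.0` guards against a division by zero when all weights are 0.
    scale = max(abs(w) for w in weights) / 127.0 or 1.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale


def dequantize(q, scale):
    return [v * scale for v in q]


weights = [0.5, -1.27, 0.01, 1.0]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Rounding error is bounded by half a quantization step (scale / 2) per weight.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

Production schemes (AWQ, GPTQ) add per-channel scales and activation-aware calibration, but the scale/round/clamp core is the same.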


Remote working/work at home options are available for this role.
internship
Lead Python Backend Developer
✦ New
Salary not disclosed
Dallas, TX 1 day ago

We are seeking a Senior Lead Developer to lead the development and deployment of our backend services. In this role, you will be the bridge between our PostgreSQL database and React frontend, responsible not only for writing high-performance Python code but also for architecting the CI/CD pipelines that bring our applications to life. You will ensure our integration layers are scalable, secure, and automatically deployed.

Key Responsibilities

• API & Backend Development: Design and maintain production-grade RESTful APIs using Python (FastAPI, Flask) with a focus on asynchronous processing.

• Database Engineering: Architect relational schemas and write optimized SQL in PostgreSQL, ensuring data integrity and query performance.

• React Integration: Partner with frontend teams to define API contracts, handle state-consistent data fetching, and implement secure authentication (JWT/OAuth2).

• CI/CD & Deployment: Build and manage automated deployment pipelines (e.g., Azure DevOps or Jenkins) to move code from local environments to staging and production.

• Containerization & Cloud: Package applications using Docker and manage deployments on cloud platforms or container orchestrators (Kubernetes/ECS).

• System Reliability: Implement automated testing (PyTest), logging, and monitoring to ensure high availability of integration services.


Technical Requirements

• Experience: 10+ years of professional backend development with a heavy emphasis on Python and API architecture.

• PostgreSQL Expert: Advanced SQL knowledge, including indexing strategies, migrations (Alembic/Flyway), and performance profiling.

• DevOps Tooling: Hands-on experience with Docker and building CI/CD pipelines for Python applications.

• Frontend Literacy: Solid understanding of React (Hooks, Context API) and how it consumes complex JSON structures.

• Infrastructure as Code (Bonus): Familiarity with Terraform or AWS CloudFormation is a significant plus.


The "Lead" Expectation

At the 10-year mark, we expect more than just "feature delivery." We are looking for a candidate who:


• Automates Everything: If a task is done twice, they write a script or a CI job for it.

• Designs for Failure: Implements proper error handling, retries, and health checks in the API layer.

• Collaborates Across the Stack: Can jump into a React component or a Postgres execution plan to find the root cause of a bottleneck.
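The "Designs for Failure" expectation above is often met with a retry decorator around flaky calls. A minimal sketch with exponential backoff; the function names, delays, and simulated outage are illustrative only:

```python
import time
from functools import wraps


def retry(attempts=3, base_delay=0.1, exceptions=(Exception,)):
    """Retry a flaky call with exponential backoff before giving up."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except exceptions:
                    if attempt == attempts - 1:
                        raise  # out of attempts: surface the real error
                    time.sleep(base_delay * (2 ** attempt))
        return wrapper
    return decorator


calls = {"n": 0}


@retry(attempts=3, base_delay=0.01)
def flaky_health_check():
    """Fails twice, then reports healthy, simulating a transient outage."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("service warming up")
    return {"status": "ok"}
```

In a real API layer the retried call would be an outbound HTTP or database request, and the health check would be exposed as its own endpoint for load balancers to poll.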

Not Specified
Lead Python API Engineer
✦ New
🏢 Anblicks
Salary not disclosed
Richardson, TX 1 day ago

Must be local to TX


Role Overview

  • The hiring manager is ideally looking for someone with 13+ years of experience, strong architecture depth, and the ability to clearly explain designs.
  • Must have experience using AI in day-to-day development.
  • Must have experience as an API Developer to lead the development and deployment of our backend services. In this role, you will be the bridge between our PostgreSQL database and React frontend, responsible not only for writing high-performance Python code but also for architecting the CI/CD pipelines that bring our applications to life. You will ensure our integration layers are scalable, secure, and automatically deployed.


Job Summary

We are seeking a Principal-level Full Stack Lead Developer with 13+ years of experience to drive high-priority engineering workstreams. This role is for a technical heavyweight who can lead new projects in parallel with existing leadership while maintaining exceptional architecture depth. You will be responsible for the full lifecycle of high-performance FastAPI and React applications, ensuring they are resilient, observable, and scalable. We expect a leader who views AI development tools as a force multiplier for velocity and can clearly articulate complex design decisions to stakeholders.


Key Responsibilities

  • Project Sovereignty: Independently lead and deliver new, complex workstreams from inception to launch, acting as a technical peer to existing leadership (e.g., Sai).
  • System Architecture: Design and defend distributed microservices and event-driven architectures. You must be able to clearly whiteboard and communicate design patterns to both technical and non-technical audiences.
  • Hands-on Execution: Maintain high-velocity output of clean, production-grade code using FastAPI (Python) and React (TypeScript).
  • Platform Reliability: Architect and implement global Error Handling frameworks, centralized Logging (e.g., OpenTelemetry, ELK), and API Management strategies including Rate Limiting and versioning.
  • Event-Driven Messaging: Oversee the implementation of asynchronous service communication using ActiveMQ or AWS EventBridge.
  • AI-Augmented SDLC: Deeply integrate AI coding tools (e.g., Claude Code, Cursor, GitHub Copilot) into daily workflows to accelerate prototyping, refactoring, and automated testing.
  • Engineering Mentorship: Foster a culture of excellence through rigorous code reviews and by unblocking senior engineers on complex technical hurdles.
  • Product Collaboration: Work closely with Product Managers to turn high-level roadmaps into technical reality, providing accurate estimates and identifying technical risks early.
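Rate limiting, mentioned under API Management above, is commonly implemented as a token bucket. A minimal single-process sketch; a production API gateway would keep the bucket state in Redis or similar so it is shared across instances:

```python
import time


class TokenBucket:
    """Minimal token bucket: `rate` tokens/second, bursts up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self, now=None):
        """Return True if a request may proceed, consuming one token."""
        now = time.monotonic() if now is None else now
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False


bucket = TokenBucket(rate=5, capacity=2)  # e.g., 5 req/s with bursts of 2
```

The optional `now` parameter makes the refill logic deterministic to test; callers in a request handler simply check `bucket.allow()` and return HTTP 429 on False.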


Required Skills & Qualifications

  • Experience: 13+ years of professional software development with a proven track record of leading large-scale products.
  • Tech Stack Mastery: Expert-level FastAPI (Async Python) and modern React (Hooks, TypeScript, Performance Profiling).
  • Advanced Governance: Hands-on experience with API Gateway patterns, request throttling, and securing distributed systems (OAuth2/JWT).
  • Observability & Messaging: Deep knowledge of structured logging, distributed tracing, and message brokers (ActiveMQ or EventBridge).
  • AI Tooling: Advanced proficiency in using AI tools for Fast Development to reduce manual overhead and multiply team output.
  • Database & Infrastructure: Expert-level PostgreSQL (tuning/indexing), Redis (for caching/rate-limiting), and container orchestration (Kubernetes/Docker).
  • Communication: Exceptional ability to translate technical "scars" and architectural risks into clear business impact.
Not Specified
Python Developer
Salary not disclosed
New York 3 days ago
Python Developer in New York, NY

Compensation: $150,000–250,000/year

Within this role, the developer will be responsible for building, developing, and maintaining a reliable, scalable, and integrated platform for trading strategy monitoring, reporting, and operations.

Qualifications

Required:

  • A Bachelor's Degree; Graduate Degree preferred
  • At least 3 years of experience with Python
  • Linux/Unix environment experience
  • Experience with PostgreSQL or other relational databases
  • Able to quickly understand and discuss requirements from Portfolio Managers

Preferred:

  • Previous experience in low-latency trading environments
  • Familiarity with quantitative finance, electronic trading concepts, and financial data
  • Knowledge of equities, futures, FX, or other financial instruments
  • Experience with containerization and orchestration technologies
  • Experience building and deploying systems that utilize services provided by AWS, GCP, or Azure
  • Experience with other programming languages, such as C/C++, Java, Scala, Go, or C#
  • Apache/Confluent Kafka experience
  • Experience automating SDLC pipelines
Not Specified
Senior Pega Developer (Pega/CDH/Python)
✦ New
🏢 Optomi
Salary not disclosed
Stamford, CT 1 day ago

Senior Pega Developer (Pega/Python/CDH)


Optomi, in partnership with an enterprise Telecom client, is seeking a Senior Pega Developer to sit in their Stamford, CT office! There is a hybrid structure of 4 days on site in the office, with flexibility for working from home once weekly. This position requires hands-on experience developing with Pega systems, ideally Customer Decision Hub (CDH) with Python for scripting.


What the Right Candidate Will Enjoy:

  • 2025 Awards include "Forbes Accessibility 100", "Fortune America's Most Innovative Companies", "Forbes America's Best Employers for Tech Workers", etc.
  • Directly develop applications impacting 25M+ customers across 41 states!
  • A hybrid office structure that allows for working from home!


Experience of the Right Candidate:

  • Proven track record with 5-6 years of experience working with Customer Decision Hub (CDH), demonstrating deep understanding and ability to leverage CDH for personalized customer interactions and decisioning.
  • Certifications: Relevant Pega certifications are required (e.g., Certified Pega Business Architect, Certified Pega System Architect).
  • Python: Strong proficiency in Python for scripting and automation tasks, with experience in integrating Python solutions within Pega applications.
  • SQL: Solid experience with SQL for database management and querying, including the ability to write complex queries and optimize database performance.
  • Apache Airflow (Optional): Experience with Apache Airflow for orchestrating complex workflows is a plus but not mandatory.


Responsibilities of the Right Candidate:

  • Develop and implement solutions using Pega CDH to enhance customer engagement strategies.
  • Collaborate with cross-functional teams to design and optimize workflows and decisioning processes.
  • Utilize Python and SQL to support data-driven decision-making and application enhancements.
  • Optionally, leverage Apache Airflow for efficient workflow automation and scheduling.
  • Strong problem-solving abilities and attention to detail.
  • Excellent communication skills for effective collaboration with team members and stakeholders.
  • Ability to thrive in a fast-paced, dynamic environment and adapt to evolving project requirements.
Not Specified
Python Data Analyst (Healthcare Finance, HIPAA)
Salary not disclosed
Nashville, TN 6 days ago

About Us 

Smart Reimbursement Inc. (SRI) is a healthcare finance and analytics firm that provides innovative solutions to reimbursement challenges faced by hospitals. We combine deep policy expertise with advanced data tools to help hospitals. 


Our mission is to improve the healthcare industry by leveraging technology to automate and streamline financial reporting processes, enabling hospitals to focus on providing the highest quality of care to patients. Although SRI has been in business since 2011, our culture is more like an early-stage startup, and we prioritize experimentation, innovation, and collaboration. 


The Role 

We are hiring a Python Data Analyst to deepen ownership of our internal analytics models and improve the reliability and scalability of our delivery engine. This role will focus on preparing data, running internal Python models, troubleshooting issues, and improving performance and automation across workflows. 

You will work closely with our delivery leadership and technical subject matter experts to learn existing workflows quickly, then contribute improvements over time. 


Responsibilities 

  • Prepare and shape large healthcare datasets (claims, remits, transactions, reimbursement-related files) for internal Python models. 
  • Operate and support internal Python models reliably, including troubleshooting and root-cause debugging. 
  • Work with very large datasets (100M+ rows) and implement pragmatic approaches when standard tools are insufficient. 
  • Build and maintain efficient data pipelines for recurring and ad-hoc analytics projects. 
  • Automate data transformation and reporting workflows with clean, reusable code. 
  • Support the preparation of audit-ready workpapers and other client-ready documentation. 
  • Participate in internal and client meetings as needed to clarify goals and communicate findings. 
  • Improve documentation, internal tools, and project templates. 
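The 100M+-row constraint above usually means streaming the file in chunks rather than loading it whole. A stdlib-only sketch of that pattern; the column names (`provider_id`, `paid_amount`) and the tiny in-memory sample are hypothetical, and in practice pandas' `read_csv(..., chunksize=...)`, DuckDB, or Polars would do the heavy lifting:

```python
import csv
import io
from itertools import islice


def read_in_chunks(reader, chunk_size):
    """Yield lists of rows so a 100M-row file never sits in memory at once."""
    while True:
        chunk = list(islice(reader, chunk_size))
        if not chunk:
            return
        yield chunk


def total_paid_by_provider(lines, chunk_size=2):
    """Stream a claims extract and aggregate paid amounts per provider."""
    totals = {}
    reader = csv.DictReader(lines)
    for chunk in read_in_chunks(reader, chunk_size):
        for row in chunk:
            key = row["provider_id"]
            totals[key] = totals.get(key, 0.0) + float(row["paid_amount"])
    return totals


# Tiny in-memory stand-in for a multi-gigabyte claims file.
sample = io.StringIO("provider_id,paid_amount\nA,100.0\nB,50.0\nA,25.5\n")
totals = total_paid_by_provider(sample)
```

Because only the running totals are held in memory, peak usage stays flat regardless of input size, which is the property that matters at the 100M-row scale.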


What Success Looks Like 

  • You can run internal model workflows end-to-end with minimal oversight. 
  • You can diagnose and resolve issues independently and improve reliability for the broader team. 
  • You deliver meaningful improvements to performance, automation, and repeatability. 
  • You are a strong team player who supports delivery execution and reduces friction. 


Required Qualifications 

  • Advanced Python skills (ability to run, own, and improve an analytics codebase, not just write one-off scripts). 
  • Strong experience working with large datasets and performance constraints. 
  • Advanced MS Excel skills (complex formulas, pivot tables, data validation, and ability to work with larger datasets). 
  • Comfort in Jupyter notebooks and reproducible workflows (pandas required). 
  • Proven debugging ability and strong analytical judgment. 
  • Experience working with sensitive or regulated data; HIPAA/PHI experience strongly preferred. 
  • Clear communication and a collaborative working style. 


Nice to Have 

  • Experience with healthcare data or hospital finance/reimbursement workflows 
  • Familiarity with EPIC healthcare data 
  • Experience optimizing data workflows (e.g., parquet/arrow, duckdb, polars, dask, spark, databases) 
  • SQL proficiency 


Location, Logistics, and Process 

  • Nashville Hybrid or Fully Remote. 
  • Background check required. 
  • This role involves HIPAA-protected data and requires strict data security practices. 
  • Interview process includes a Python technical assessment. 


Compensation & Benefits 

This role offers a base salary of $130,000–$150,000, depending on experience, plus a performance-based bonus. Benefits include health, dental, and vision insurance, and a 401(k) with employer match. 

Not Specified
Onsite AI Engineer - Python/LLM/RAG
Salary not disclosed

Onsite AI Engineer - Construction Industry Focus

New Haven, CT - Onsite 5 days per week


  • Initial Assignment: Fully onsite 5 days per week at a construction site in Ft. Myers (FL) or New Haven (CT) for 1 year
  • Post-Assignment: Relocation to one of the corporate offices for hybrid employment: Boston, MA (preferred), New York City (NY), New Haven (CT), Herndon (VA), West Palm Beach (FL), or Estero (FL)


Role Summary

You will be the on-site catalyst who turns AI ideas into working reality. Partnering with each project’s AI Champion (Project Manager or Superintendent), you’ll uncover pain points, redesign workflows, and deploy AI agents that cut down reporting, accelerate RFIs, and simplify lookahead planning, progress updates, materials tracking, and more. When needed, you will develop user stories and coordinate development with the central AI Studio. You’ll help advance the vision of the “Construction Site of the Future,” showing how agentic AI will transform project operations.


Responsibilities

  • Workflow discovery and redesign: Lead Lean/Six Sigma workshops; map value streams; log high-impact AI agent opportunities that improve field efficiency.
  • AI agent development: Build and deploy multiple production-ready AI agents using Copilot Studio, Power Apps/Automate, ChatGPT Enterprise, or code-first frameworks. Integrate agents into Teams/SharePoint on the front end and Databricks Lakehouse or other enterprise data sources on the back end.
  • RAG pipelines and LLMOps: Design and operate retrieval-augmented generation (RAG) pipelines with Databricks Delta Tables, Unity Catalog, and Vector Search (or Spark/Hadoop equivalents). Monitor cost, latency, adoption, and model drift.
  • Cross-cloud orchestration: Blend OpenAI, Azure OpenAI, and AWS Bedrock services through secure custom connectors to maximize flexibility and adoption.
  • Data integration: Partner with Data Engineering to deliver ETL/ELT pipelines, API integrations, and event-driven connectors that feed RAG pipelines and AI agents.
  • Change management and adoption: Train field teams, gather feedback, iterate quickly, and embed agents into SOPs. Track usage and ROI with adoption metrics and behavior-change KPIs.
  • Stakeholder communication: Translate technical results into business value for leadership and clients. Contribute use cases and playbooks for the “Construction Site of the Future.”
  • Compliance and hand-offs: Ensure all AI solutions meet the company’s data governance and security standards. Draft clear user stories and specs for escalation to central AI/Data Engineering teams when necessary.
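A RAG pipeline like the one described above retrieves the most relevant stored context and prepends it to the prompt before the LLM call. A toy end-to-end sketch: bag-of-words cosine similarity stands in for the real embedding model and vector search, and the sample construction documents are invented for illustration:

```python
import math
from collections import Counter


def embed(text):
    """Toy bag-of-words 'embedding'; real pipelines use a learned model."""
    return Counter(text.lower().split())


def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
        math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def retrieve(query, documents, k=2):
    """Rank stored documents by similarity to the query; keep the top k."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]


def build_prompt(query, documents):
    """Augment the prompt with retrieved context before calling the LLM."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"


docs = [
    "RFI responses are due within five business days",
    "Concrete pour scheduled for Monday morning",
    "Submit RFI forms through the project portal",
]
```

In the stack the posting names, `embed` would be a Databricks-served model, the `sorted` call would be Vector Search over Delta Tables, and `build_prompt`'s output would go to an Azure OpenAI or Bedrock endpoint.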


Qualifications

  • 4+ years in AI engineering, data science, or ML-focused software engineering.
  • Proven experience building multiple AI agents in production environments.
  • 2+ years of hands-on experience with LLMs, RAG pipelines, and LLMOps practices.
  • Must have a strong traditional software engineering background in Python.


Bonus Points

  • Experience in construction, manufacturing, or other process-heavy industries.
  • Advanced degree in a technical field.
Not Specified
Full Stack Python/Django Engineer
🏢 Open Systems Technologies
Salary not disclosed
Charlotte 3 days ago
A financial firm is looking for a Full Stack Python/Django Engineer to join their team in Charlotte, NC.

Compensation: $150-200K

Responsibilities:

Design and build modular, scalable services that power the product control platform's core functions: PnL calculation, adjustment workflows, segment mapping, book and reverse logic, and audit trails.

Develop clean, maintainable and testable backend code in Python (Django) and front-end components using React or similar frameworks.

Collaborate with Product owners, the Client, and quants to translate complex finance and control workflows into intuitive and robust platform features.

Lead the development of high-performance APIs, data validation layers, and UI modules with a focus on resiliency, data lineage, and traceability.

Integrate the platform with upstream and downstream systems including subledgers, regulatory reporting engines, and data lakes.

Participate in architectural design, peer code reviews, CI/CD processes, and performance tuning.

Contribute to a microservices-first architecture and evolving the deliverable into a fully cloud-native, modular platform.

Help define platform standards, mentor junior engineers, work and manage offshore consultants, and contribute to building a strong engineering culture.

Qualifications:

8+ years of experience in full stack software development with a focus on Python (Django) and React.

Experience building enterprise applications with complex workflow logic, approvals, adjustments and audit requirements.

Understanding of financial products and product control function is strongly preferred.

Experience working with relational databases and ORM tools; solid SQL skills.

Familiarity with CI/CD, Docker, and cloud-native development practices.

Strong communication skills and ability to work directly with business users and cross-functional teams.

Databricks, Spark experience.

Exposure to Financial reporting platforms.

Experience working with Agile development environments.

Prior experience in highly regulated industry or working with internal control frameworks.
permanent
Python Engineer
🏢 Open Systems Technologies
Salary not disclosed
Charlotte 3 days ago
A financial firm is looking for a Python Engineer to join their team in Charlotte, NC.

Compensation: $150-200k

Responsibilities:

Develop highly scalable applications using Python frameworks.

Create and deploy applications in an Azure environment with various interconnected Azure components.

Understand and enhance front-end applications using React JS, HTML5 and CSS3.

Identify and fix bottlenecks that may arise from inefficient code.

Ensure that programs are written to the highest standards (e.g., Unit Tests) and technical specifications.

Documentation of the key aspects of the project.

Qualifications:

5+ years of development experience in Python is mandatory, with optional experience in Databricks and Azure cloud computing.

Knowledge of database systems (e.g., SQL, NoSQL) and distributed computing frameworks.

Prior experience in building VaR systems is desirable.

Excellent communication and people skills, with the ability to collaborate effectively with stakeholders at all levels.

Solid organizational skills, ability to multi-task across different projects.

Experience with Agile methodologies.

Skilled at independently researching topics using all means available to find relevant information.

Excellent verbal and written communication skills.

Self-starter with ability to multi-task and to maintain momentum.

Exposure to Power BI tools is highly desirable.

Knowledge of user authentication and authorization between multiple systems, servers and environments.
Not Specified
jobs by JobLookup