Job Role: Lead Software Engineer - AI Application Platform
Location: Charlotte, NC (Onsite)
Eligibility: USC/GC/H4-EAD only
Pay Rate: $90/hr. on W2
The Opportunity
We are seeking a Lead Software Engineer to guide the architectural development and execution of AppGen, a sophisticated AI-powered application generation platform. This role suits a proven technical leader with deep, hands-on expertise across the full software stack who finds enabling a team to build better software deeply satisfying.
You will shape critical systems, mentor senior and junior developers through complex technical decisions, conduct rigorous code reviews across multiple technology domains, and directly influence the platform's trajectory through strategic engineering leadership.
This is for someone who:
• Engages thoughtfully when a junior developer asks targeted architectural questions—because you see an opportunity to shape how someone thinks about systems
• Takes time to explain subtle type-safety issues in code review, understanding that feedback is a teaching moment
• Can present architecture clearly to executives and confidently explain both what we're building and why it matters
• Finds more energy in the code your team ships than in the code you write individually
• Has proven depth across the full stack and a track record of developing engineers into stronger contributors
If that describes you, we'd like to talk.
Core Responsibilities
1. Technical Architecture & Systems Thinking (40%)
• Shape architectural decisions across the full stack: How should the component layer handle dynamically generated forms? What's the right approach to validate complex cross-field dependencies in the FormBuilder? What separation of concerns makes sense between the Generator Lambda and the Parent Backend?
• Guide architecture discussions: Help senior developers think through design trade-offs. Should we use NgRx or Angular signals for this feature? When does a new Lambda function become worthwhile given cold-start costs?
• Identify and address system-wide bottlenecks: Work across layers to improve performance. Explore Lambda cold-start optimization, RDS query efficiency, and DynamoDB access patterns.
• Establish patterns and guide consistency: Define coding conventions that work across Python, TypeScript, and Terraform. Help new team members understand the reasoning behind architectural choices.
What this looks like in practice: You're able to justify architectural decisions with technical reasoning. When someone questions an approach, you can explain the trade-offs you considered. You can write code in multiple languages to validate an approach if needed.
2. Code Review & Technical Guidance (30%)
• Full-stack PR reviews: Review Python FastAPI endpoints and Angular components with equal depth, understanding how they interact.
• Deep technical review: Catch issues thoughtful code review can surface:
o RxJS Observable lifecycle and potential memory leaks in Angular
o Query efficiency and data loading patterns in SQLAlchemy
o Terraform module organization and state management implications
o Type safety and TypeScript coverage gaps
o AWS security and IAM configurations
• Educational feedback: Your code reviews help the team learn. When you identify an issue, reviewees understand not just what changed, but how to think about similar problems in the future.
• Define quality expectations: Work with the team to establish what "production-ready" means for this platform and support consistent application of those standards.
What this requires: Experience reviewing code across teams and multiple languages. You know how to write feedback that resonates—clear, constructive, and focused on helping people improve.
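To make the review bar concrete, here is a hedged, in-memory sketch of the classic N+1 query problem this kind of SQLAlchemy review should flag. The CountingDB class is an illustrative stand-in that counts queries, not real ORM code:

```python
class CountingDB:
    """Stand-in for a database session that counts queries issued."""
    def __init__(self, orders_by_user):
        self.orders_by_user = orders_by_user
        self.query_count = 0

    def orders_for(self, user_id):
        # One query per call -- what a lazy relationship load does.
        self.query_count += 1
        return self.orders_by_user.get(user_id, [])

    def orders_for_many(self, user_ids):
        # One batched query -- what selectinload-style eager loading achieves.
        self.query_count += 1
        return {u: self.orders_by_user.get(u, []) for u in user_ids}

db = CountingDB({1: ["a"], 2: ["b", "c"], 3: []})
users = [1, 2, 3]

# N+1 pattern: one query per user.
lazy = {u: db.orders_for(u) for u in users}
lazy_queries = db.query_count        # 3 queries

# Batched pattern: a single query for all users.
db.query_count = 0
eager = db.orders_for_many(users)
eager_queries = db.query_count       # 1 query
```

Both paths return the same data; the review comment is about the query count, which is what the reviewee should learn to check.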
3. Mentorship & Team Development (20%)
• Expand specialist capabilities: Help backend specialists learn to contribute to the forms-engine. Support frontend experts in understanding FastAPI patterns.
• Accelerate junior developers: Pair on complex problems. Explain the reasoning behind patterns like DataState. Connect architectural choices to implementation details and performance implications.
• Identify and address gaps: Recognize when someone is struggling with a technology and provide targeted support—training, pair programming, or guidance through architectural decisions.
• Create growth opportunities: Stretch the team into new areas. A backend engineer working on their first Terraform contribution. A frontend specialist implementing an AWS Lambda authorizer.
What this requires: Genuine investment in people's growth. You've walked developers through major transitions (generalist to specialist, specialist to full-stack, or into new technology areas). You understand that team strength grows when individuals expand their capabilities.
4. Stakeholder Communication & Technical Leadership (10%)
• Explain to diverse audiences: Translate architectural choices and trade-offs for product managers, executives, and business stakeholders. Connect "optimizing DynamoDB queries" to "improving form submission latency by 30%."
• Shape technical direction: Contribute the engineering perspective on feasibility, risk, and what unlocks future capabilities.
• Support release confidence: You understand the code changes, comprehend the risks, and know what to monitor. You can stand behind releases.
Required Qualifications
Technical Skills
Frontend (Production Experience)
• 5+ years of Angular (including handling version migrations, optimizing change detection, and guiding teams through reactive patterns)
• Strong TypeScript skills with generics, discriminated unions, and strict mode
• RxJS depth: You understand hot vs. cold observables, unsubscription patterns, and can identify potential memory issues in reviews
• NgRx state management: You've designed stores at scale, optimized selectors, and evaluated architectural implications
• CSS Grid & Responsive Design: You can assess component hierarchy and layout decisions
• Material Design: You've worked within it and know when and how to extend it
Backend (Production Experience)
• 5+ years of Python (async/await, type hints, data modeling)
• FastAPI production experience: session management, dependency injection, middleware
• SQL and ORMs (SQLAlchemy): You write efficient queries and review them critically
• AWS services: Understanding of Lambda behavior, IAM least-privilege patterns, VPC networking
• REST API design: Versioning, error handling, idempotency
• Testing frameworks: pytest, testing st
Job Title: Sr. Automation Engineer
Location: Hillsboro, OR
Duration: Long Term
Job Summary
Panasonic Avionics Corporation is seeking Senior Automation Engineers to lead and enhance advanced automation solutions for embedded and UI-driven systems. The ideal candidates will bring deep expertise in Python-based automation, Robot Framework, and QNX environments, with a strong focus on scalable test architecture, framework migration, and high-volume regression execution. This role requires hands-on technical leadership, cross-layer debugging skills, and collaboration within complex embedded and aviation-grade systems.
Mandatory Technical Skills
(Minimum 5+ years of hands-on experience in each)
- Python automation using Pytest or Robot Framework
- QNX OS (POSIX-compliant systems)
- UX/UI Automation & Testing
Key Responsibilities
- Design, architect, and enhance scalable automation frameworks using Python and Pytest.
- Perform migration of automation assets from Robot Framework to Python/Pytest, ensuring feature parity and long-term maintainability.
- Analyze and interpret large Robot Framework keyword libraries and enable reuse within Python-based executions.
- Optimize hybrid execution models involving both Pytest and Robot Framework assets.
- Develop wrapper layers, fixtures, utilities, and reusable automation components.
- Independently debug complex cross-layer automation issues spanning Python, Robot Framework, QNX OS, and device-level tools.
- Integrate automation frameworks with CI/CD pipelines using tools such as Jenkins, GitLab CI, or Azure DevOps.
- Execute and maintain UI and device automation using Appium, Selenium, or equivalent tools.
- Enforce modular test design principles, including page-object and page-keyword patterns, to ensure long-term automation maintainability.
- Mentor junior engineers and uphold automation design, coding standards, and best practices.
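To illustrate the keyword-reuse model referenced above, here is a hedged sketch of a registry layer that exposes Robot-style keyword names to plain Python callers, so legacy flows can be driven from Pytest during a migration. All names are illustrative:

```python
# Registry mapping Robot Framework-style keyword names to Python callables.
KEYWORDS = {}

def keyword(name):
    """Decorator: register a function under a Robot-style keyword name."""
    def register(fn):
        KEYWORDS[name] = fn
        return fn
    return register

@keyword("Open Connection")
def open_connection(host):
    return {"host": host, "open": True}

@keyword("Close Connection")
def close_connection(conn):
    conn["open"] = False
    return conn

def run_keyword(name, *args):
    """Dispatch by keyword name, as a Robot runner would."""
    return KEYWORDS[name](*args)

conn = run_keyword("Open Connection", "10.0.0.1")
closed = run_keyword("Close Connection", conn)
```

The same functions can back Robot keyword libraries and be called directly from Pytest fixtures, which is what makes hybrid execution tractable.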
Required Qualifications
- 5+ years of hands-on experience with Python automation and Pytest.
- Strong practical experience with Robot Framework, including keywords, resources, variables, and test structuring.
- Proven experience managing and maintaining large keyword repositories (1000+ keywords).
- Experience working with QNX OS, POSIX systems, Hypervisor-based virtualization, and Cloud environments (AWS).
- Solid understanding of Git version control, branching strategies, and CI/CD workflows.
- Experience with UI and device automation tools such as Appium and Selenium.
- Strong analytical, debugging, and problem-solving skills with the ability to work independently.
- Excellent communication skills and experience working in cross-functional teams.
Preferred Qualifications
- Experience in mobility, embedded systems, aviation, or high-volume regression environments.
- Exposure to automation framework migration, cross-framework interoperability, or keyword reuse models.
- Bachelor’s degree in Computer Science, Electronics, Engineering, or a related field.
Sr. Data Engineer (PySpark & Python + AI Tools Exp.) - (Only W2 or 1099)
Charlotte, NC (Hybrid)
12+ Months Contract
Job Description:
We are currently seeking a Senior Data Engineer with hands-on coding experience and a strong background in Python, PySpark, and Object-oriented programming.
The ideal candidate will be responsible for designing, developing, and implementing new features to our existing framework using PySpark and Python.
This position requires a deep understanding of data transformation and the ability to create standalone scripts based on given business logic. Exposure to AI tools and experience building AI applications will also be an advantage.
Key Responsibilities:
- Design, develop, and optimize large-scale data pipelines using PySpark and Python.
- Implement and adhere to best practices in object-oriented programming to build reusable, maintainable code.
- Write advanced SQL queries for data extraction, transformation, and loading (ETL).
- Collaborate closely with data scientists, analysts, and stakeholders to gather requirements and translate them into technical solutions.
- Troubleshoot data-related issues and resolve them in a timely and accurate manner.
- Leverage AWS cloud services (e.g., S3, EMR, Lambda, Glue) to build and manage cloud-native data workflows (preferred).
- Participate in code reviews, data quality checks, and performance tuning of data jobs.
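As a sketch of the reusable, composable transformation style described above, shown on plain lists of dicts so it runs anywhere; the same shape maps onto chained PySpark DataFrame.transform calls. Field names are illustrative:

```python
from functools import reduce

def drop_nulls(rows):
    """Filter out records missing the amount field."""
    return [r for r in rows if r.get("amount") is not None]

def add_tax(rate):
    """Parameterized step: returns a transformation closed over the rate."""
    def step(rows):
        return [{**r, "total": round(r["amount"] * (1 + rate), 2)} for r in rows]
    return step

def pipeline(rows, *steps):
    """Apply each transformation in order, like chained .transform calls."""
    return reduce(lambda acc, step: step(acc), steps, rows)

rows = [{"amount": 100.0}, {"amount": None}, {"amount": 40.0}]
out = pipeline(rows, drop_nulls, add_tax(0.05))
```

Keeping each step a small pure function is what makes the framework testable and its pieces reusable across pipelines.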
Required Skills & Qualifications:
- 6+ years of relevant experience in a data engineering or backend development role.
- Strong hands-on experience with PySpark and Python, especially in designing and implementing scalable data transformations.
- Solid understanding of Object-Oriented Programming (OOP) principles and design patterns.
- Proficient in SQL, with the ability to write complex queries and optimize performance.
- Strong problem-solving skills and the ability to troubleshoot complex data issues independently.
- Excellent communication and collaboration skills.
- Hands-on experience with AI Tools.
Preferred Qualifications (Nice to Have):
- Experience working with AWS cloud ecosystem (S3, Glue, EMR, Redshift, Lambda, etc.).
- Exposure to data warehousing concepts, distributed computing, and performance tuning.
- Familiarity with version control systems (e.g., Git), CI/CD pipelines, and Agile methodologies.
- Exposure to AI tools and hands-on experience building AI applications.
Must be local to TX
Role Overview
- The hiring manager is ideally looking for someone with 13+ years of experience, strong architecture depth, and the ability to clearly explain designs.
- Must have experience using AI in day-to-day development.
- Must have experience as an API Developer to lead the development and deployment of our backend services. In this role, you will be the bridge between our PostgreSQL database and React frontend, responsible not only for writing high-performance Python code but also for architecting the CI/CD pipelines that bring our applications to life. You will ensure our integration layers are scalable, secure, and automatically deployed.
Job Summary
We are seeking a Principal-level Full Stack Lead Developer with 13+ years of experience to drive high-priority engineering workstreams. This role is for a technical heavyweight who can lead new projects in parallel with existing leadership while maintaining exceptional architecture depth. You will be responsible for the full lifecycle of high-performance FastAPI and React applications, ensuring they are resilient, observable, and scalable. We expect a leader who views AI development tools as a force multiplier for velocity and can clearly articulate complex design decisions to stakeholders.
Key Responsibilities
- Project Sovereignty: Independently lead and deliver new, complex workstreams from inception to launch, acting as a technical peer to existing leadership (e.g., Sai).
- System Architecture: Design and defend distributed microservices and event-driven architectures. You must be able to clearly whiteboard and communicate design patterns to both technical and non-technical audiences.
- Hands-on Execution: Maintain high-velocity output of clean, production-grade code using FastAPI (Python) and React (TypeScript).
- Platform Reliability: Architect and implement global Error Handling frameworks, centralized Logging (e.g., OpenTelemetry, ELK), and API Management strategies including Rate Limiting and versioning.
- Event-Driven Messaging: Oversee the implementation of asynchronous service communication using ActiveMQ or AWS EventBridge.
- AI-Augmented SDLC: Deeply integrate AI coding tools (e.g., CloudCode, Cursor, GitHub Copilot) into daily workflows to accelerate prototyping, refactoring, and automated testing.
- Engineering Mentorship: Foster a culture of excellence through rigorous code reviews and by unblocking senior engineers on complex technical hurdles.
- Product Collaboration: Work closely with Product Managers to turn high-level roadmaps into technical reality, providing accurate estimates and identifying technical risks early.
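To ground the rate-limiting responsibility above, here is a minimal in-memory fixed-window limiter; a production version would keep the counters in Redis via INCR/EXPIRE so state is shared across instances. Class and key names are illustrative, and the demo pins the clock for determinism:

```python
import time

class FixedWindowLimiter:
    """In-memory stand-in for a Redis INCR/EXPIRE rate limiter."""
    def __init__(self, limit, window_seconds, clock=time.monotonic):
        self.limit = limit
        self.window = window_seconds
        self.clock = clock
        self.counts = {}   # (key, window_index) -> hits in that window

    def allow(self, key):
        window_index = int(self.clock() // self.window)
        bucket = (key, window_index)
        self.counts[bucket] = self.counts.get(bucket, 0) + 1
        return self.counts[bucket] <= self.limit

# Fixed clock so the example is deterministic.
limiter = FixedWindowLimiter(limit=2, window_seconds=60, clock=lambda: 0.0)
results = [limiter.allow("client-a") for _ in range(3)]   # [True, True, False]
```

A gateway-level limiter would add headers (e.g., Retry-After) and per-tenant limits, but the windowed counter is the core of the pattern.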
Required Skills & Qualifications
- Experience: 13+ years of professional software development with a proven track record of leading large-scale products.
- Tech Stack Mastery: Expert-level FastAPI (Async Python) and modern React (Hooks, TypeScript, Performance Profiling).
- Advanced Governance: Hands-on experience with API Gateway patterns, request throttling, and securing distributed systems (OAuth2/JWT).
- Observability & Messaging: Deep knowledge of structured logging, distributed tracing, and message brokers (ActiveMQ or EventBridge).
- AI Tooling: Advanced proficiency in using AI tools for fast development to reduce manual overhead and multiply team output.
- Database & Infrastructure: Expert-level PostgreSQL (tuning/indexing), Redis (for caching/rate-limiting), and container orchestration (Kubernetes/Docker).
- Communication: Exceptional ability to translate technical "scars" and architectural risks into clear business impact.
Title: Full Stack Developer with AI
Duration: 12 Months+
Location: Spring, TX
Type: Onsite
We are seeking a Full Stack Developer who will contribute to building scalable backend services, including platform and utility modules. You will also play an active role in implementing GenAI use cases using modern agentic frameworks.
You will collaborate with product owners, trading fusion developers, data engineers, and other full stack developers across regions.
Responsibilities:
- Platform Engineering & Support
- Develop, enhance, and support components of the Global Trading App platform
- Implement monitoring, alerting, and telemetry capabilities using modern observability tools
- Improve platform reliability, scalability, and performance through proactive engineering
- Author infrastructure-as-code using Terraform for cloud resources
Application & Service Development
- Build secure and scalable backend APIs (primarily in Python / FastAPI)
- Create responsive and efficient React-based UI components
- Develop reusable utility modules for fusion teams to accelerate delivery
GenAI & Agentic Solutions
- Implement GenAI-powered features using LLMs, vector databases, and multi-agent frameworks
- Develop “agentic” workflows for automation, troubleshooting, and developer productivity
- Build model integration and evaluation workflows
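As a toy illustration of the retrieval step behind such RAG features, with hand-made vectors standing in for real model embeddings (store contents and names are illustrative):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Tiny vector store: chunk label -> embedding stand-in.
store = {
    "reset password": [0.9, 0.1, 0.0],
    "trading hours":  [0.0, 0.8, 0.6],
    "api limits":     [0.1, 0.2, 0.9],
}

def top_k(query_vec, k=1):
    """Rank stored chunks by similarity to the query embedding."""
    ranked = sorted(store, key=lambda doc: cosine(query_vec, store[doc]),
                    reverse=True)
    return ranked[:k]

best = top_k([0.85, 0.15, 0.05])   # closest to "reset password"
```

A real implementation swaps the dict for a vector database and the hand-made vectors for model embeddings; the ranking step is unchanged.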
Collaboration & Standards
- Contribute to engineering best practices and documentation
- Work closely with global trading fusion teams to ensure alignment and technical excellence
Qualifications:
- Python (advanced): APIs, data processing, async programming
- React: modern component-based UI development
- FastAPI: building high performance backend services
- DBT: data engineering and transformation
- GitHub/CI/CD: strong version control and build pipeline experience
Preferred Skills:
- Terraform, Azure, AWS: infrastructure provisioning and automation
- Databricks, Snowflake
- GenAI / Multi-Agent
- Experience implementing solutions using LLMs, embeddings, prompt engineering
- Familiarity with agentic coding frameworks (e.g., LangChain, AutoGen, OpenAI agents, etc.)
- Understanding of RAG, model orchestration, and AI application patterns
Soft Skills:
- Strong problem-solving skills and ownership mindset
- Ability to work in global, cross-functional teams
- Clear communication and documentation abilities
- Comfort operating in fast-paced, high-availability environments
- Adaptability and willingness to learn new technologies and methodologies
Senior Pega Developer ( Pega / Python / CDH )
Optomi, in partnership with an enterprise Telecom client, is seeking a Senior Pega Developer to sit in their Stamford, CT office! There is a hybrid structure of 4 days on site in the office, with flexibility for working from home once weekly. This position requires hands-on experience developing with Pega systems, ideally Customer Decision Hub (CDH) with Python for scripting.
What the Right Candidate Will Enjoy:
- 2025 Awards include "Forbes Accessibility 100", "Fortune America's Most Innovative Companies", "Forbes America's Best Employers for Tech Workers", etc.
- Directly develop applications impacting 25M+ customers across 41 states!
- A hybrid office structure that allows for working from home!
Experience of the Right Candidate:
- Proven track record with 5-6 years of experience working with Customer Decision Hub (CDH), demonstrating deep understanding and ability to leverage CDH for personalized customer interactions and decisioning.
- Certifications: Relevant Pega certifications are required (e.g., Certified Pega Business Architect, Certified Pega System Architect).
- Python: Strong proficiency in Python for scripting and automation tasks, with experience in integrating Python solutions within Pega applications.
- SQL: Solid experience with SQL for database management and querying, including the ability to write complex queries and optimize database performance.
- Apache Airflow (Optional): Experience with Apache Airflow for orchestrating complex workflows is a plus but not mandatory.
Responsibilities of the Right Candidate:
- Develop and implement solutions using Pega CDH to enhance customer engagement strategies.
- Collaborate with cross-functional teams to design and optimize workflows and decisioning processes.
- Utilize Python and SQL to support data-driven decision-making and application enhancements.
- Optionally, leverage Apache Airflow for efficient workflow automation and scheduling.
- Strong problem-solving abilities and attention to detail.
- Excellent communication skills for effective collaboration with team members and stakeholders.
- Ability to thrive in a fast-paced, dynamic environment and adapt to evolving project requirements.
Job Title: Senior Automation Engineer
Location: Hillsboro, Oregon
Job Type: Full-Time
Job Description:
We are seeking a highly experienced Senior Automation Engineer to join our advanced software and embedded systems team. The ideal candidate will have deep expertise in Python automation (Pytest), Robot Framework, and QNX environments, with strong skills in UX/UI automation and testing. This role involves enhancing and migrating automation frameworks, debugging complex integrations, and working closely with cross-functional teams to deliver high-quality test automation solutions for embedded systems and entertainment platforms.
Key Responsibilities
Automation Framework Development
- Architect, develop, and maintain automation frameworks primarily using Python and Pytest.
- Lead migration of existing Robot Framework tests to Python/Pytest equivalents.
- Build reusable fixtures, utilities, wrapper layers, and automation components to support large test suites.
Test Execution & Optimization
- Analyze and interpret Robot Framework keyword libraries; enable efficient reuse within Python-based executions.
- Optimize hybrid execution flows involving both Pytest and Robot Framework assets.
- Execute and maintain UI and device automation tests using tools such as Appium, Selenium, or equivalent frameworks.
Cross-Layer Debugging & Integration
- Independently debug cross-layer automation issues involving Python, Robot Framework, device tools, and operating systems.
- Integrate automation frameworks with CI/CD pipelines and tools (e.g., Jenkins, GitLab, Azure DevOps).
Collaboration & Mentorship
- Mentor and guide junior automation engineers, establishing good coding practices, test design patterns, and quality standards.
- Work collaboratively with software engineers, product developers, and QA teams to enhance automation coverage and reliability.
System & Environment Interaction
- Work with QNX OS, virtualization systems (Hypervisor), and cloud environments (AWS).
- Engage with hardware interfacing (USB, Ethernet, multimedia interfaces) and hardware simulation/fault-injection where applicable (nice-to-have).
Required Skills & Experience
- 7–10 years of experience in automation engineering, with 5+ years of hands-on Python automation using Pytest.
- Practical experience with Robot Framework including keywords, variables, resources, and large keyword repositories.
- Strong skills in UI automation using tools such as Appium/Selenium.
- Solid understanding of modular test design and maintainable patterns (page-object, keyword patterns).
- Experience with QNX (Posix) operating system, virtualization (e.g., Hypervisor), and cloud-based environments (preferably AWS).
- Good understanding of Git, branching strategies, and CI/CD workflows.
- Proven ability to debug complex, multi-layered test automation environments.
Preferred Qualifications
- Exposure to embedded systems, aviation, or high-volume regression environments.
- Experience in framework migration, cross-framework interoperability, or keyword reuse models.
- Background in hardware interactions and media/UX systems (multimedia I/O, touch interactions).
- Familiarity with fault injection tools and hardware simulation techniques.
- Bachelor’s degree in Computer Science, Electronics, or related technical field.
Nice-to-Have Skills
- Hardware interfacing (USB, Ethernet), multimedia interfaces (touch, audio/video).
- Fault-injection and hardware simulation experience.
- Knowledge of peripheral communication protocols (e.g., GMSL, IP).
About Us
Smart Reimbursement Inc. (SRI) is a healthcare finance and analytics firm that provides innovative solutions to reimbursement challenges faced by hospitals. We combine deep policy expertise with advanced data tools to help hospitals address these challenges.
Our mission is to improve the healthcare industry by leveraging technology to automate and streamline financial reporting processes, enabling hospitals to focus on providing the highest quality of care to patients. Although SRI has been in business since 2011, our culture is more like an early-stage startup, and we prioritize experimentation, innovation, and collaboration.
The Role
We are hiring a Python Data Analyst to deepen ownership of our internal analytics models and improve the reliability and scalability of our delivery engine. This role will focus on preparing data, running internal Python models, troubleshooting issues, and improving performance and automation across workflows.
You will work closely with our delivery leadership and technical subject matter experts to learn existing workflows quickly, then contribute improvements over time.
Responsibilities
- Prepare and shape large healthcare datasets (claims, remits, transactions, reimbursement-related files) for internal Python models.
- Operate and support internal Python models reliably, including troubleshooting and root-cause debugging.
- Work with very large datasets (100M+ rows) and implement pragmatic approaches when standard tools are insufficient.
- Build and maintain efficient data pipelines for recurring and ad-hoc analytics projects.
- Automate data transformation and reporting workflows with clean, reusable code.
- Support the preparation of audit-ready workpapers and other client-ready documentation.
- Participate in internal and client meetings as needed to clarify goals and communicate findings.
- Improve documentation, internal tools, and project templates.
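To sketch the pragmatic large-dataset approach mentioned above, here is a chunked pandas aggregation that never holds the full file in memory; the tiny inline CSV stands in for a claims extract, and all column names are illustrative:

```python
import io
import pandas as pd

# Stand-in for a multi-gigabyte claims file.
csv_data = io.StringIO(
    "claim_id,payer,amount\n"
    "1,A,100\n2,B,250\n3,A,50\n4,B,25\n5,A,10\n"
)

# Aggregate per chunk, then combine, so memory stays bounded by chunksize.
totals = {}
for chunk in pd.read_csv(csv_data, chunksize=2):   # stream 2 rows at a time
    for payer, amt in chunk.groupby("payer")["amount"].sum().items():
        totals[payer] = totals.get(payer, 0) + amt
```

On a real 100M+ row file the chunksize would be in the hundreds of thousands, and formats like Parquet (or engines like DuckDB or Polars) may replace CSV entirely, but the chunk-then-combine shape is the same.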
What Success Looks Like
- You can run internal model workflows end-to-end with minimal oversight.
- You can diagnose and resolve issues independently and improve reliability for the broader team.
- You deliver meaningful improvements to performance, automation, and repeatability.
- You are a strong team player who supports delivery execution and reduces friction.
Required Qualifications
- Advanced Python skills (ability to run, own, and improve an analytics codebase, not just write one-off scripts).
- Strong experience working with large datasets and performance constraints.
- Advanced MS Excel skills (complex formulas, pivot tables, data validation, and ability to work with larger datasets).
- Comfort in Jupyter notebooks and reproducible workflows (pandas required).
- Proven debugging ability and strong analytical judgment.
- Experience working with sensitive or regulated data; HIPAA/PHI experience strongly preferred.
- Clear communication and a collaborative working style.
Nice to Have
- Experience with healthcare data or hospital finance/reimbursement workflows
- Familiarity with EPIC healthcare data
- Experience optimizing data workflows (e.g., parquet/arrow, duckdb, polars, dask, spark, databases)
- SQL proficiency
Location, Logistics, and Process
- Nashville Hybrid or Fully Remote.
- Background check required.
- This role involves HIPAA-protected data and requires strict data security practices.
- Interview process includes a Python technical assessment.
Compensation & Benefits
This role offers a base salary of $130,000–$150,000, depending on experience, plus a performance-based bonus. Benefits include health, dental, and vision insurance, and a 401(k) with employer match.
AI Data & Python Tools Engineer
We're seeking an AI Data and Python Tools Engineer to develop and deploy intelligent tools that leverage big data infrastructure and modern AI architecture. This role combines strong software engineering fundamentals with the ability to build production-ready AI applications at speed, including integration with Model Context Protocol (MCP) systems.
Responsibilities:
- Develop and deploy AI-powered full-stack applications using Python, React, and modern machine learning frameworks
- Design and streamline data pipelines, train and validate ML models, and implement robust evaluation methods
- Collaborate with cross-functional teams to solve complex problems and integrate scalable, cloud-based AI solutions
- Rapidly prototype, test, and iterate on AI tools with a strong focus on performance, flexibility, and scalability
- Maintain clear technical documentation, perform code reviews, and support the full software development lifecycle
Software Engineering & AI/ML Data & Tools Development
- 3+ years of Python development with a background in backend services and data processing
- Exposure to AI/ML algorithms
- Familiarity with ML frameworks (TensorFlow, PyTorch, scikit-learn)
- Understanding of LLMs, vector databases, and retrieval systems
- Experience with Model Context Protocol (MCP) integration and server development
Big Data & Cloud Infrastructure
- Knowledge of building and deploying cloud based applications
- Hands-on experience with cloud data platforms (AWS/GCP/Azure)
- Proficiency with big data technologies (Spark, Kafka, or similar streaming platforms)
- Experience with data warehouses (Snowflake, BigQuery, Redshift) and data lakes
- Knowledge of containerization (Docker/Kubernetes) and infrastructure as code
Preferred Experience
- Experience building web applications with modern frameworks (React, Vue, or Angular)
- API development and integration experience
- Basic UX/UI design sensibilities for internal tooling
- Experience with real-time data processing and analytics
- Background in building developer tools or internal platforms
- Familiarity with AI/ML operations (MLOps) practices (Experience using airflow)
- Experience building MCP servers and integrating with AI assistants
- Knowledge of structured data exchange protocols and API design for AI systems.
Type: Full Time
Location: Austin, TX or Cupertino, CA (Monday- Friday onsite)
*Relocation assistance can be offered based on individual needs and circumstances*
We are seeking a Senior Lead Developer to lead the development and deployment of our backend services. In this role, you will be the bridge between our PostgreSQL database and React frontend, responsible not only for writing high-performance Python code but also for architecting the CI/CD pipelines that bring our applications to life. You will ensure our integration layers are scalable, secure, and automatically deployed.
Key Responsibilities
• API & Backend Development: Design and maintain production-grade RESTful APIs using Python (FastAPI, Flask) with a focus on asynchronous processing.
• Database Engineering: Architect relational schemas and write optimized SQL in PostgreSQL, ensuring data integrity and query performance.
• React Integration: Partner with frontend teams to define API contracts, handle state-consistent data fetching, and implement secure authentication (JWT/OAuth2).
• CI/CD & Deployment: Build and manage automated deployment pipelines (e.g., Azure DevOps or Jenkins) to move code from local environments to staging and production.
• Containerization & Cloud: Package applications using Docker and manage deployments on cloud platforms or container orchestrators (Kubernetes/ECS).
• System Reliability: Implement automated testing (PyTest), logging, and monitoring to ensure high availability of integration services.
Technical Requirements
• Experience: 10+ years of professional backend development with a heavy emphasis on Python and API architecture.
• PostgreSQL Expert: Advanced SQL knowledge, including indexing strategies, migrations (Alembic/Flyway), and performance profiling.
• DevOps Tooling: Hands-on experience with Docker and building CI/CD pipelines for Python applications.
• Frontend Literacy: Solid understanding of React (Hooks, Context API) and how it consumes complex JSON structures.
• Infrastructure as Code (Bonus): Familiarity with Terraform or AWS CloudFormation is a significant plus.
The "Lead" Expectation
At the 10-year mark, we expect more than just "feature delivery." We are looking for a candidate who:
• Automates Everything: If a task is done twice, they write a script or a CI job for it.
• Designs for Failure: Implements proper error handling, retries, and health checks in the API layer.
• Collaborates Across the Stack: Can jump into a React component or a Postgres execution plan to find the root cause of a bottleneck.
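The "Designs for Failure" expectation, sketched as retry with exponential backoff around a simulated flaky call; the failing service is simulated and sleep is injectable so the demo runs instantly (all names are illustrative):

```python
import time

def retry(fn, attempts=3, base_delay=0.1, sleep=time.sleep):
    """Call fn, retrying on exception with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise                       # out of attempts: surface it
            sleep(base_delay * (2 ** attempt))   # 0.1s, 0.2s, 0.4s, ...

calls = {"n": 0}

def flaky():
    """Simulated dependency that fails twice, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

result = retry(flaky, attempts=4, sleep=lambda s: None)
```

In the API layer the same wrapper would guard outbound calls and pair with health checks and circuit breaking; the key design choice is that retries are bounded and back off rather than hammering a struggling dependency.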