Earn an average salary of $75K/year as a professional truck driver.
Our program covers all costs, including trucking school tuition and all expenses related to getting your CDL.
How It Works: Apply Online: Complete the 10-minute application.
If you're eligible, you can begin the online course the same day.
Online CDL Permit Course: Self-paced 25-hour course that can be finished in as little as one week.
Pass the Background Check Review: We make sure that based on your driving record and criminal history, the CDL industry is a good fit for you.
Take Your CDL Permit Exam: We prepare you to pass the exam and reimburse all associated fees.
Pass Enrollment Interview: Speak with someone from our team about eligibility, career aspirations and fit.
Truck Driving School: We place you at a partner trucking school near you and cover all tuition costs.
Job Placement: Upon earning your CDL, we help you secure employment.
We have a 95% placement rate.
Minimum Qualifications:
- Must live in Manhattan
- Must have experienced arrest, probation, parole, incarceration, or a diversion program
- Must not be on the sex offender registry
- Maximum of one DUI (none within the last seven years)
- Active driver's license required
- No homicide, manslaughter, or assault with a vehicle
- No involvement in human or sex trafficking
- No pending cases
About Emerge Career: We provide free CDL training for justice-involved individuals to help them start careers in trucking.
Our graduates earn an average of $75K/year.
We offer mentorship, tuition-free trucking school, and job placement with second-chance employers.
Featured in CBS, the Boston Globe, and NBC.
Read about our work in CBS's coverage from a few months ago.
Job Types: Full-time, Part-time
Benefits: Referral program
People with a criminal record are encouraged to apply
Work Location: On the road
Job Title: Senior Automation Engineer
Location: Hillsboro, Oregon
Job Type: Full-Time
Job Description:
We are seeking a highly experienced Senior Automation Engineer to join our advanced software and embedded systems team. The ideal candidate will have deep expertise in Python automation (Pytest), Robot Framework, and QNX environments, with strong skills in UX/UI automation and testing. This role involves enhancing and migrating automation frameworks, debugging complex integrations, and working closely with cross-functional teams to deliver high-quality test automation solutions for embedded systems and entertainment platforms.
Key Responsibilities
Automation Framework Development
- Architect, develop, and maintain automation frameworks primarily using Python and Pytest.
- Lead migration of existing Robot Framework tests to Python/Pytest equivalents.
- Build reusable fixtures, utilities, wrapper layers, and automation components to support large test suites.
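The fixture-and-wrapper pattern described above can be sketched as follows. This is a minimal illustration, not the company's actual framework: the `DeviceConnection` class and target name are hypothetical stand-ins for a real device wrapper layer.

```python
import pytest

class DeviceConnection:
    """Hypothetical wrapper layer around a target device (e.g., a QNX board)."""
    def __init__(self, host):
        self.host = host
        self.open = True

    def send(self, cmd):
        # A real wrapper would talk to the device; here we echo for illustration.
        return f"ok:{cmd}"

    def close(self):
        self.open = False

@pytest.fixture
def device():
    """Reusable fixture: set up a connection, yield it to the test, tear down."""
    conn = DeviceConnection("qnx-target-01")
    yield conn
    conn.close()

def test_reboot_command(device):
    assert device.send("reboot") == "ok:reboot"
```

The fixture centralizes setup/teardown so large suites can share one connection recipe, which is the kind of reusable component the migration from Robot Framework keywords would target.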
Test Execution & Optimization
- Analyze and interpret Robot Framework keyword libraries; enable efficient reuse within Python-based executions.
- Optimize hybrid execution flows involving both Pytest and Robot Framework assets.
- Execute and maintain UI and device automation tests using tools such as Appium, Selenium, or equivalent frameworks.
Cross-Layer Debugging & Integration
- Independently debug cross-layer automation issues involving Python, Robot Framework, device tools, and operating systems.
- Integrate automation frameworks with CI/CD pipelines and tools (e.g., Jenkins, GitLab, Azure DevOps).
Collaboration & Mentorship
- Mentor and guide junior automation engineers, establishing good coding practices, test design patterns, and quality standards.
- Work collaboratively with software engineers, product developers, and QA teams to enhance automation coverage and reliability.
System & Environment Interaction
- Work with QNX OS, virtualization systems (Hypervisor), and cloud environments (AWS).
- Engage with hardware interfacing (USB, Ethernet, multimedia interfaces) and hardware simulation/fault-injection where applicable (nice-to-have).
Required Skills & Experience
- 7–10 years of experience in automation engineering, including 5+ years of hands-on Python automation using Pytest.
- Practical experience with Robot Framework including keywords, variables, resources, and large keyword repositories.
- Strong skills in UI automation using tools such as Appium/Selenium.
- Solid understanding of modular test design and maintainable patterns (page-object, keyword patterns).
- Experience with QNX (Posix) operating system, virtualization (e.g., Hypervisor), and cloud-based environments (preferably AWS).
- Good understanding of Git, branching strategies, and CI/CD workflows.
- Proven ability to debug complex, multi-layered test automation environments.
Preferred Qualifications
- Exposure to embedded systems, aviation, or high-volume regression environments.
- Experience in framework migration, cross-framework interoperability, or keyword reuse models.
- Background in hardware interactions and media/UX systems (multimedia I/O, touch interactions).
- Familiarity with fault injection tools and hardware simulation techniques.
- Bachelor’s degree in Computer Science, Electronics, or related technical field.
Nice-to-Have Skills
- Hardware interfacing (USB, Ethernet), multimedia interfaces (touch, audio/video).
- Fault-injection and hardware simulation experience.
- Knowledge of peripheral communication protocols (e.g., GMSL, IP).
Sr. Data Engineer (PySpark & Python + AI Tools Exp.) - (Only W2 or 1099)
Charlotte, NC (Hybrid)
12+ Months Contract
Job Description:
We are currently seeking a Senior Data Engineer with hands-on coding experience and a strong background in Python, PySpark, and Object-oriented programming.
The ideal candidate will be responsible for designing, developing, and implementing new features to our existing framework using PySpark and Python.
This position requires a deep understanding of data transformation and the ability to create standalone scripts from given business logic. Exposure to AI tools and experience building AI applications will be an advantage.
Key Responsibilities:
- Design, develop, and optimize large-scale data pipelines using PySpark and Python.
- Implement and adhere to best practices in object-oriented programming to build reusable, maintainable code.
- Write advanced SQL queries for data extraction, transformation, and loading (ETL).
- Collaborate closely with data scientists, analysts, and stakeholders to gather requirements and translate them into technical solutions.
- Troubleshoot data-related issues and resolve them in a timely and accurate manner.
- Leverage AWS cloud services (e.g., S3, EMR, Lambda, Glue) to build and manage cloud-native data workflows (preferred).
- Participate in code reviews, data quality checks, and performance tuning of data jobs.
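The reusable, object-oriented transformation pattern the responsibilities describe might be sketched like this. The class and column names are illustrative only, and plain Python lists stand in for what would be PySpark DataFrames in the real framework:

```python
from abc import ABC, abstractmethod

class Transformation(ABC):
    """One reusable pipeline step (OOP pattern from the posting)."""
    @abstractmethod
    def apply(self, rows):
        ...

class FilterNulls(Transformation):
    """Drop rows where the given key is missing or None."""
    def __init__(self, key):
        self.key = key

    def apply(self, rows):
        return [r for r in rows if r.get(self.key) is not None]

class AddDerivedColumn(Transformation):
    """Add a new column computed from each row."""
    def __init__(self, name, fn):
        self.name = name
        self.fn = fn

    def apply(self, rows):
        return [{**r, self.name: self.fn(r)} for r in rows]

class Pipeline:
    """Run a list of transformations in order."""
    def __init__(self, steps):
        self.steps = steps

    def run(self, rows):
        for step in self.steps:
            rows = step.apply(rows)
        return rows
```

Composing small, single-purpose transformation classes like this is what makes such a framework extensible: a new business rule becomes one new `Transformation` subclass rather than a change to existing pipeline code.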
Required Skills & Qualifications:
- 6+ years of relevant experience in a data engineering or backend development role.
- Strong hands-on experience with PySpark and Python, especially in designing and implementing scalable data transformations.
- Solid understanding of Object-Oriented Programming (OOP) principles and design patterns.
- Proficient in SQL, with the ability to write complex queries and optimize performance.
- Strong problem-solving skills and the ability to troubleshoot complex data issues independently.
- Excellent communication and collaboration skills.
- Hands-on experience with AI Tools.
Preferred Qualifications (Nice to Have):
- Experience working with AWS cloud ecosystem (S3, Glue, EMR, Redshift, Lambda, etc.).
- Exposure to data warehousing concepts, distributed computing, and performance tuning.
- Familiarity with version control systems (e.g., Git), CI/CD pipelines, and Agile methodologies.
- Exposure to AI tools and hands-on experience building AI applications.
Qualifications
Required
- A Bachelor's Degree; Graduate Degree preferred
- At least 3 years of experience with Python
- Linux/Unix environment experience
- Experience with PostgreSQL or other relational databases
- Able to quickly understand and discuss requirements from Portfolio Managers
Preferred
- Previous experience in low-latency trading environments
- Familiarity with quantitative finance, electronic trading concepts, and financial data
- Knowledge of equities, futures, FX, or other financial instruments
- Experience with containerization and orchestration technologies
- Experience building and deploying systems that utilize services provided by AWS, GCP, or Azure
- Experience with other programming languages, such as C/C++, Java, Scala, Go, or C#
- Apache or Confluent Kafka experience
- Experience automating SDLC pipelines
Job Title: Embedded Validation Engineer
Location: Charlotte, NC
Job Type: Full-Time
Role Overview
We are seeking a technically strong Embedded Validation Engineer to serve as the Controls and Quality Assurance (QA) point of contact for lab validation and sustaining programs. The role focuses on requirements-based validation, disciplined test execution, defect reporting, and traceability across multiple product programs.
The ideal candidate will work closely with Systems Engineering, Product Development, and QA teams to validate embedded control systems, execute lab testing, and improve test automation and validation processes.
Key Responsibilities
Requirements-Based Validation
- Collaborate with Systems Engineering teams to derive validation strategies and test plans from system requirements.
- Develop and maintain requirement-to-test case traceability.
- Ensure validation activities align with product specifications and engineering requirements.
Lab Test Execution
- Serve as the Controls Validation Point of Contact (POC) for lab validation activities.
- Execute validation tests on prototype hardware and embedded control systems.
- Document test procedures and record pass/fail outcomes with technical accuracy.
Documentation & Traceability
- Maintain organized test documentation including test plans, execution logs, and validation reports.
- Ensure traceability between requirements, test cases, and defect reports.
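The requirement-to-test traceability described above can be kept in a simple structure and queried for gaps. This is a toy sketch with made-up requirement and test-case IDs, not any specific tool's format:

```python
# Hypothetical traceability matrix: requirement ID -> [(test case, outcome), ...]
traceability = {
    "REQ-101": [("TC-101-1", "pass"), ("TC-101-2", "fail")],
    "REQ-102": [("TC-102-1", "pass")],
}

def uncovered(requirements, matrix):
    """Requirements with no linked test case (traceability gaps)."""
    return [r for r in requirements if not matrix.get(r)]

def failing(matrix):
    """Requirements with at least one failing test, for defect reporting."""
    return sorted({req for req, cases in matrix.items()
                   if any(result == "fail" for _, result in cases)})
```

Even this minimal form supports the two questions validation reviews usually ask: which requirements are untested, and which have open failures.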
Defect Reporting & Tracking
- Identify, document, and report defects with clear technical descriptions and reproducible steps.
- Collaborate with development teams to analyze root causes and track defect resolution.
Reporting & Quality Reviews
- Prepare concise 2–3 slide technical summaries of test results and validation findings.
- Present validation updates during PRQRB/SQA or departmental review meetings.
Test Bench & HIL Development
- Design and build test bench setups and Hardware-in-the-Loop (HIL) simulators for validation.
- Support legacy platforms and existing validation environments.
Automation Development
- Contribute to Python-based test automation and validation frameworks.
- Identify opportunities to improve test efficiency through automation.
Product Support & Continuous Improvement
- Support new product development, sustaining engineering, and validation process improvements.
- Drive enhancements in test infrastructure, lab workflows, and validation methodologies.
Required Qualifications
- Bachelor’s degree in Controls Engineering, Software Engineering, Electrical Engineering, or related field.
- 5+ years of experience in embedded systems validation, SQA, or controls testing.
- Strong understanding of Software Quality Assurance (SQA) fundamentals, including test execution and documentation.
- Experience validating embedded control systems and equipment controls.
- Hands-on experience with lab-based validation and prototype testing.
- Knowledge of controls inputs/outputs, sensors, and system interfaces.
- Experience with bench wiring, test setup, and instrumentation.
- Strong analytical skills and familiarity with engineering basics such as heat exchangers and unit conversions.
- Experience with Python scripting and test automation.
Title: Python with AI (Only W2)
Location: Atlanta, GA (Onsite/Local Candidates Only)
Must have a minimum of 10-12 years of overall experience.
Must Haves:
- Strong Python programming (OOP, typing, async programming).
- Experience with API development (FastAPI or Flask).
- Pytest framework knowledge for unit testing.
- GitHub branching familiarity.
- Framework development experience with PyTorch, TensorFlow, Hugging Face Transformers, LangChain, and the OpenAI API.
- Basic cloud knowledge, including serverless compute, HTTP traffic, and CI/CD concepts.
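The async-programming and typing requirements above can be illustrated with a small stdlib-only sketch; the function names and the scoring logic are invented for illustration, standing in for concurrent calls to a model-serving API:

```python
import asyncio
from typing import List

async def fetch_score(item_id: int) -> float:
    """Stand-in for an async call to a hypothetical model endpoint."""
    await asyncio.sleep(0)  # yield control, as a real network await would
    return item_id * 0.5

async def fetch_all(ids: List[int]) -> List[float]:
    """Fan out requests concurrently and gather results in order."""
    return await asyncio.gather(*(fetch_score(i) for i in ids))

if __name__ == "__main__":
    print(asyncio.run(fetch_all([1, 2, 3])))  # [0.5, 1.0, 1.5]
```

`asyncio.gather` preserves input order, which is why the results line up with the requested IDs; the same pattern underlies async endpoints in frameworks like FastAPI.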
Nice to Have:
- Deep Learning: Neural networks, CNNs, RNNs, transformers, attention mechanisms.
- Azure: resource groups, container apps, VNet, logging, Cosmos DB.
- SQL (query writing) experience with SQL Server, Azure SQL, Oracle, or PostgreSQL.
- API security with AuthN/AuthZ and an understanding of OpenID or OAuth2 flows.
If I missed your call, please drop me an email.
Thank you,
Harish
Talent Acquisition
Astir IT Solutions, Inc - An E-Verified Company
Email:
Direct : 7326946000*788
50 Cragwood Rd. Suite # 219, South Plainfield, NJ 07080
Compiler Engineer
Location: Mountain View, CA (Hybrid)
Working Model: Full-time, hybrid
Salary: Up to $400k + equity
The Opportunity
This is a rare chance to join a deep-tech company building a vertically integrated compute platform, spanning custom silicon, systems software, and compilers. The goal is to enable next-generation AI and machine-learning workloads at unprecedented scale, and the compiler team sits right at the centre of this mission.
Compilers here are not a support function—they are a core design driver, with direct influence over hardware architecture, performance strategy, and long-term platform direction. The work is highly technical, performance-critical, and genuinely impactful.
Compiler Engineer Responsibilities
- Design and implement mid-end and back-end compiler components targeting proprietary hardware architectures.
- Work closely with hardware, architecture, and systems teams to influence design decisions from a compiler author's perspective.
- The role is focused on performance, correctness, and long-term maintainability, contributing to a production-grade compiler stack used to run large-scale ML workloads.
Compiler Engineer Background
- Strong software engineering fundamentals with a performance-first mindset.
- Hands-on experience with core compiler algorithms such as register allocation, instruction selection and scheduling, and loop optimisations.
- Comfortable operating in a hybrid environment and collaborating across hardware and software boundaries.
Compensation & Benefits
Highly competitive base salary up to $400k, plus equity. Comprehensive benefits package including generous PTO, parental leave, learning budget, and flexible working at the start and end of the week.
If this position sounds of interest, please reach out to Harry Hansford @ IC Resources.
Machine Learning Engineer | Python | Pytorch | Distributed Training | Optimisation | GPU | Hybrid, San Jose, CA
Title: Machine Learning Engineer
Location: San Jose, CA
Responsibilities:
- Productize and optimize models from Research into reliable, performant, and cost-efficient services with clear SLOs (latency, availability, cost).
- Scale training across nodes/GPUs (DDP/FSDP/ZeRO, pipeline/tensor parallelism) and own throughput/time-to-train using profiling and optimization.
- Implement model-efficiency techniques (quantization, distillation, pruning, KV-cache, Flash Attention) for training and inference without materially degrading quality.
- Build and maintain model-serving systems (vLLM/Triton/TGI/ONNX/TensorRT/AITemplate) with batching, streaming, caching, and memory management.
- Integrate with vector/feature stores and data pipelines (FAISS/Milvus/Pinecone/pgvector; Parquet/Delta) as needed for production.
- Define and track performance and cost KPIs; run continuous improvement loops and capacity planning.
- Partner with ML Ops on CI/CD, telemetry/observability, model registries; partner with Scientists on reproducible handoffs and evaluations.
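One of the model-efficiency techniques listed above, post-training quantization, can be sketched in plain Python. This is a symmetric int8 scheme on a list of floats for illustration; real PTQ operates on framework tensors with per-channel scales:

```python
def quantize_int8(values):
    """Symmetric post-training quantization of floats to int8 range [-127, 127]."""
    scale = max(abs(v) for v in values) / 127.0
    if scale == 0.0:
        scale = 1.0  # avoid division by zero on all-zero inputs
    q = [int(round(v / scale)) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats; error is bounded by scale / 2 per element."""
    return [x * scale for x in q]
```

Storing int8 values plus one scale per tensor is what yields the memory and bandwidth savings, at the cost of a bounded rounding error per element.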
Educational Qualifications:
- Bachelor's in Computer Science, Electrical/Computer Engineering, or a related field required; Master's preferred (or equivalent industry experience).
- Strong systems/ML engineering with exposure to distributed training and inference optimization.
Industry Experience:
- 3–5 years in ML/AI engineering roles owning training and/or serving in production at scale.
- Demonstrated success delivering high-throughput, low-latency ML services with reliability and cost improvements.
- Experience collaborating across Research, Platform/Infra, Data, and Product functions.
Technical Skills:
- Familiarity with deep learning frameworks: PyTorch (primary), TensorFlow.
- Exposure to large model training techniques (DDP, FSDP, ZeRO, pipeline/tensor parallelism); distributed training experience a plus
- Optimization: experience profiling and optimizing code execution and model inference, including quantization (PTQ/QAT/AWQ/GPTQ), pruning, distillation, KV-cache optimization, and Flash Attention.
- Scalable serving: autoscaling, load balancing, streaming, batching, caching; collaboration with platform engineers.
- Data & storage: SQL/NoSQL, vector stores (FAISS/Milvus/Pinecone/pgvector), Parquet/Delta, object stores.
- Write performant, maintainable code
- Understanding of the full ML lifecycle: data collection, model training, deployment, inference, optimization, and evaluation.
Remote working/work at home options are available for this role.
Onsite AI Engineer - Construction Industry Focus
New Haven, CT - Onsite 5 days per week
- Initial Assignment: Fully onsite 5 days per week at a construction site in Ft. Myers (FL) or New Haven (CT) for 1 year
- Post-Assignment: Relocation to one of the corporate offices for hybrid employment: Boston, MA (preferred), New York City (NY), New Haven (CT), Herndon (VA), West Palm Beach (FL), or Estero (FL)
Role Summary
You will be the on-site catalyst who turns AI ideas into working reality. Partnering with each project’s AI Champion (Project Manager or Superintendent), you’ll uncover pain points, redesign workflows, and deploy AI agents that cut down on reporting, accelerate RFIs, and simplify lookahead planning, progress updates, materials tracking, and more. When needed, you will develop user stories and coordinate development with the central AI Studio. You’ll help advance the vision of the “Construction Site of the Future,” showing how agentic AI will transform project operations.
Responsibilities
- Workflow discovery and redesign: Lead Lean/Six Sigma workshops; map value streams; log high-impact AI agent opportunities that improve field efficiency.
- AI agent development: Build and deploy multiple production-ready AI agents using Copilot Studio, Power Apps/Automate, ChatGPT Enterprise, or code-first frameworks. Integrate agents into Teams/SharePoint on the front end and Databricks Lakehouse or other enterprise data sources on the back end.
- RAG pipelines and LLMOps: Design and operate retrieval-augmented generation (RAG) pipelines with Databricks Delta Tables, Unity Catalog, and Vector Search (or Spark/Hadoop equivalents). Monitor cost, latency, adoption, and model drift.
- Cross-cloud orchestration: Blend OpenAI, Azure OpenAI, and AWS Bedrock services through secure custom connectors to maximize flexibility and adoption.
- Data integration: Partner with Data Engineering to deliver ETL/ELT pipelines, API integrations, and event-driven connectors that feed RAG pipelines and AI agents.
- Change management and adoption: Train field teams, gather feedback, iterate quickly, and embed agents into SOPs. Track usage and ROI with adoption metrics and behavior-change KPIs.
- Stakeholder communication: Translate technical results into business value for leadership and clients. Contribute use cases and playbooks for the “Construction Site of the Future.”
- Compliance and hand-offs: Ensure all AI solutions meet the company’s data governance and security standards. Draft clear user stories and specs for escalation to central AI/Data Engineering teams when necessary.
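The retrieval step at the heart of the RAG pipelines described above can be sketched without any vector-store dependency. The character-frequency "embedding" below is a deliberately toy stand-in for a real embedding model, and the sample documents are invented; only the rank-by-cosine-similarity structure carries over to production systems:

```python
import math

def embed(text):
    """Toy embedding: 26-dim character-frequency vector (real RAG uses a model)."""
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]
```

In a production pipeline the embeddings would come from a model, the documents would live in a vector index (e.g., Databricks Vector Search), and the retrieved chunks would be passed to an LLM as context, but the retrieval logic is exactly this ranking step.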
Qualifications
- 4+ years in AI engineering, data science, or ML-focused software engineering.
- Proven experience building multiple AI agents in production environments.
- 2+ years of hands-on experience with LLMs, RAG pipelines, and LLMOps practices.
- Must have strong traditional software engineering background in Python
Bonus Points
- Experience in construction, manufacturing, or other process-heavy industries.
- Advanced degree in a technical field.
Senior Pega Developer ( Pega / Python / CDH )
Optomi, in partnership with an enterprise Telecom client, is seeking a Senior Pega Developer to sit in their Stamford, CT office! The role follows a hybrid structure of four days on site, with flexibility to work from home once weekly. This position requires hands-on experience developing with Pega systems, ideally Customer Decision Hub (CDH), with Python for scripting.
What the Right Candidate Will Enjoy:
- 2025 Awards include "Forbes Accessibility 100", "Fortune America's Most Innovative Companies", "Forbes America's Best Employers for Tech Workers", etc.
- Directly develop applications impacting more than 25 million customers across 41 states!
- A hybrid office structure that allows for working from home!
Experience of the Right Candidate:
- Proven track record with 5-6 years of experience working with Customer Decision Hub (CDH), demonstrating deep understanding and ability to leverage CDH for personalized customer interactions and decisioning.
- Certifications: Relevant Pega certifications are required (e.g., Certified Pega Business Architect, Certified Pega System Architect).
- Python: Strong proficiency in Python for scripting and automation tasks, with experience in integrating Python solutions within Pega applications.
- SQL: Solid experience with SQL for database management and querying, including the ability to write complex queries and optimize database performance.
- Apache Airflow (Optional): Experience with Apache Airflow for orchestrating complex workflows is a plus but not mandatory.
Responsibilities of the Right Candidate:
- Develop and implement solutions using Pega CDH to enhance customer engagement strategies.
- Collaborate with cross-functional teams to design and optimize workflows and decisioning processes.
- Utilize Python and SQL to support data-driven decision-making and application enhancements.
- Optionally, leverage Apache Airflow for efficient workflow automation and scheduling.
- Strong problem-solving abilities and attention to detail.
- Excellent communication skills for effective collaboration with team members and stakeholders.
- Ability to thrive in a fast-paced, dynamic environment and adapt to evolving project requirements.