Onsite AI Engineer - Construction Industry Focus
New Haven, CT - Onsite 5 days per week
- Initial Assignment: Fully onsite 5 days per week at a construction site in Ft. Myers (FL) or New Haven (CT) for 1 year
- Post-Assignment: Relocation to one of the corporate offices for hybrid employment: Boston, MA (preferred), New York City (NY), New Haven (CT), Herndon (VA), West Palm Beach (FL), or Estero (FL)
Role Summary
You will be the on-site catalyst who turns AI ideas into working reality. Partnering with each project’s AI Champion (Project Manager or Superintendent), you’ll uncover pain points, redesign workflows, and deploy AI agents that streamline reporting, accelerate RFIs, and simplify lookahead planning, progress updates, materials tracking, and more. When needed, you will develop user stories and coordinate development with the central AI Studio. You’ll help advance the vision of the “Construction Site of the Future,” showing how agentic AI will transform project operations.
Responsibilities
- Workflow discovery and redesign: Lead Lean/Six Sigma workshops; map value streams; log high-impact AI agent opportunities that improve field efficiency.
- AI agent development: Build and deploy multiple production-ready AI agents using Copilot Studio, Power Apps/Automate, ChatGPT Enterprise, or code-first frameworks. Integrate agents into Teams/SharePoint on the front end and Databricks Lakehouse or other enterprise data sources on the back end.
- RAG pipelines and LLMOps: Design and operate retrieval-augmented generation (RAG) pipelines with Databricks Delta Tables, Unity Catalog, and Vector Search (or Spark/Hadoop equivalents). Monitor cost, latency, adoption, and model drift.
- Cross-cloud orchestration: Blend OpenAI, Azure OpenAI, and AWS Bedrock services through secure custom connectors to maximize flexibility and adoption.
- Data integration: Partner with Data Engineering to deliver ETL/ELT pipelines, API integrations, and event-driven connectors that feed RAG pipelines and AI agents.
- Change management and adoption: Train field teams, gather feedback, iterate quickly, and embed agents into SOPs. Track usage and ROI with adoption metrics and behavior-change KPIs.
- Stakeholder communication: Translate technical results into business value for leadership and clients. Contribute use cases and playbooks for the “Construction Site of the Future.”
- Compliance and hand-offs: Ensure all AI solutions meet the company’s data governance and security standards. Draft clear user stories and specs for escalation to central AI/Data Engineering teams when necessary.
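As a rough illustration of the RAG pattern named in the responsibilities above (not the company's actual Databricks Vector Search stack), the sketch below stands in a toy bag-of-words similarity for the embedding model and vector index; the document texts and function names are invented for the example:

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Toy "embedding": bag-of-words term counts. A real pipeline would call
    # an embedding model and store vectors in a vector index.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top-k.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Augment the user's question with retrieved context before calling an LLM.
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "RFI responses are due within five business days.",
    "Concrete pour schedules are updated every Monday.",
    "Safety briefings happen before each shift.",
]
prompt = build_prompt("When are RFI responses due?", docs)
```

The retrieve-then-augment shape is the same whether the index is an in-memory list or a managed vector store; only the `embed` and `retrieve` internals change.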
Qualifications
- 4+ years in AI engineering, data science, or ML-focused software engineering.
- Proven experience building multiple AI agents in production environments.
- 2+ years of hands-on experience with LLMs, RAG pipelines, and LLMOps practices.
- Must have a strong traditional software engineering background in Python.
Bonus Points
- Experience in construction, manufacturing, or other process-heavy industries.
- Advanced degree in a technical field.
Senior Pega Developer (Pega / Python / CDH)
Optomi, in partnership with an enterprise Telecom client, is seeking a Senior Pega Developer to sit in their Stamford, CT office! The role follows a hybrid structure of 4 days on site, with flexibility to work from home once weekly. This position requires hands-on experience developing with Pega systems, ideally Customer Decision Hub (CDH), with Python for scripting.
What the Right Candidate Will Enjoy:
- 2025 Awards include "Forbes Accessibility 100", "Fortune America's Most Innovative Companies", "Forbes America's Best Employers for Tech Workers", etc.
- Directly develop applications impacting 25M+ customers across 41 states!
- A hybrid office structure that allows for working from home!
Experience of the Right Candidate:
- Proven track record with 5-6 years of experience working with Customer Decision Hub (CDH), demonstrating deep understanding and ability to leverage CDH for personalized customer interactions and decisioning.
- Certifications: Relevant Pega certifications are required (e.g., Certified Pega Business Architect, Certified Pega System Architect).
- Python: Strong proficiency in Python for scripting and automation tasks, with experience in integrating Python solutions within Pega applications.
- SQL: Solid experience with SQL for database management and querying, including the ability to write complex queries and optimize database performance.
- Apache Airflow (Optional): Experience with Apache Airflow for orchestrating complex workflows is a plus but not mandatory.
Responsibilities of the Right Candidate:
- Develop and implement solutions using Pega CDH to enhance customer engagement strategies.
- Collaborate with cross-functional teams to design and optimize workflows and decisioning processes.
- Utilize Python and SQL to support data-driven decision-making and application enhancements.
- Optionally, leverage Apache Airflow for efficient workflow automation and scheduling.
- Strong problem-solving abilities and attention to detail.
- Excellent communication skills for effective collaboration with team members and stakeholders.
- Ability to thrive in a fast-paced, dynamic environment and adapt to evolving project requirements.
About Us
Smart Reimbursement Inc. (SRI) is a healthcare finance and analytics firm that provides innovative solutions to reimbursement challenges faced by hospitals. We combine deep policy expertise with advanced data tools to help hospitals meet those challenges.
Our mission is to improve the healthcare industry by leveraging technology to automate and streamline financial reporting processes, enabling hospitals to focus on providing the highest quality of care to patients. Although SRI has been in business since 2011, our culture is more like an early-stage startup, and we prioritize experimentation, innovation, and collaboration.
The Role
We are hiring a Python Data Analyst to deepen ownership of our internal analytics models and improve the reliability and scalability of our delivery engine. This role will focus on preparing data, running internal Python models, troubleshooting issues, and improving performance and automation across workflows.
You will work closely with our delivery leadership and technical subject matter experts to learn existing workflows quickly, then contribute improvements over time.
Responsibilities
- Prepare and shape large healthcare datasets (claims, remits, transactions, reimbursement-related files) for internal Python models.
- Operate and support internal Python models reliably, including troubleshooting and root-cause debugging.
- Work with very large datasets (100M+ rows) and implement pragmatic approaches when standard tools are insufficient.
- Build and maintain efficient data pipelines for recurring and ad-hoc analytics projects.
- Automate data transformation and reporting workflows with clean, reusable code.
- Support the preparation of audit-ready workpapers and other client-ready documentation.
- Participate in internal and client meetings as needed to clarify goals and communicate findings.
- Improve documentation, internal tools, and project templates.
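The "pragmatic approaches" bullet above often comes down to streaming a file in bounded chunks instead of loading it whole. This is a minimal pandas sketch, not SRI's internal code; the `payer` and `paid_amount` column names are hypothetical:

```python
import io
import pandas as pd

def aggregate_in_chunks(csv_source, chunksize: int = 100_000) -> pd.Series:
    # Stream the file in fixed-size chunks so memory stays bounded even for
    # files with hundreds of millions of rows, then combine the per-chunk
    # partial sums at the end.
    partials = []
    for chunk in pd.read_csv(csv_source, chunksize=chunksize):
        partials.append(chunk.groupby("payer")["paid_amount"].sum())
    return pd.concat(partials).groupby(level=0).sum()

# Small in-memory example standing in for a multi-gigabyte claims file.
csv_data = io.StringIO(
    "claim_id,payer,paid_amount\n"
    "1,medicare,100.0\n"
    "2,commercial,250.0\n"
    "3,medicare,50.0\n"
)
totals = aggregate_in_chunks(csv_data, chunksize=2)
```

The same split/partial-aggregate/combine shape carries over to duckdb or polars when pandas alone is too slow.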
What Success Looks Like
- You can run internal model workflows end-to-end with minimal oversight.
- You can diagnose and resolve issues independently and improve reliability for the broader team.
- You deliver meaningful improvements to performance, automation, and repeatability.
- You are a strong team player who supports delivery execution and reduces friction.
Required Qualifications
- Advanced Python skills (ability to run, own, and improve an analytics codebase, not just write one-off scripts).
- Strong experience working with large datasets and performance constraints.
- Advanced MS Excel skills (complex formulas, pivot tables, data validation, and ability to work with larger datasets).
- Comfort in Jupyter notebooks and reproducible workflows (pandas required).
- Proven debugging ability and strong analytical judgment.
- Experience working with sensitive or regulated data; HIPAA/PHI experience strongly preferred.
- Clear communication and a collaborative working style.
Nice to Have
- Experience with healthcare data or hospital finance/reimbursement workflows
- Familiarity with EPIC healthcare data
- Experience optimizing data workflows (e.g., parquet/arrow, duckdb, polars, dask, spark, databases)
- SQL proficiency
Location, Logistics, and Process
- Nashville Hybrid or Fully Remote.
- Background check required.
- This role involves HIPAA-protected data and requires strict data security practices.
- Interview process includes a Python technical assessment.
Compensation & Benefits
This role offers a base salary of $130,000–$150,000, depending on experience, plus a performance-based bonus. Benefits include health, dental, and vision insurance, and a 401(k) with employer match.
Compensation: $150-200K
Responsibilities: Design and build modular, scalable services that power the product control platform's core functions: PnL calculation, adjustment workflows, segment mapping, book and reverse logic, and audit trails.
Develop clean, maintainable, and testable backend code in Python (Django) and front-end components using React or similar frameworks.
Collaborate with product owners, clients, and quants to translate complex finance and control workflows into intuitive and robust platform features.
Lead the development of high-performance APIs, data validation layers, and UI modules with a focus on resiliency, data lineage, and traceability.
Integrate the platform with upstream and downstream systems including subledgers, regulatory reporting engines, and data lakes.
Participate in architectural design, peer code reviews, CI/CD processes, and performance tuning.
Contribute to a microservices-first architecture and evolving the deliverable into a fully cloud-native, modular platform.
Help define platform standards, mentor junior engineers, work and manage offshore consultants, and contribute to building a strong engineering culture.
Qualifications: 8+ years of experience in full stack software development with a focus on Python (Django) and React.
Experience building enterprise applications with complex workflow logic, approvals, adjustments and audit requirements.
Understanding of financial products and product control function is strongly preferred.
Experience working with relational databases and ORM tools; solid SQL skills.
Familiarity with CI/CD, Docker, and cloud-native development practices.
Strong communication skills and ability to work directly with business users and cross-functional teams.
Databricks, Spark experience.
Exposure to Financial reporting platforms.
Experience working with Agile development environments.
Prior experience in highly regulated industry or working with internal control frameworks.
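The adjustment-workflow and audit-trail requirements above are often modeled as append-only history alongside the mutable record. This is a plain-Python sketch with illustrative field names, not the platform's actual schema; a Django model would carry the same columns:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AdjustmentAudit:
    # One immutable audit row per change: which book, old and new values,
    # who made the adjustment, and when.
    book: str
    old_pnl: float
    new_pnl: float
    adjusted_by: str
    adjusted_at: datetime

@dataclass
class PnlRecord:
    book: str
    pnl: float
    audit_trail: list = field(default_factory=list)

    def adjust(self, new_pnl: float, user: str) -> None:
        # Never overwrite silently: every adjustment appends an audit entry
        # first, so the full history stays traceable.
        self.audit_trail.append(
            AdjustmentAudit(self.book, self.pnl, new_pnl, user,
                            datetime.now(timezone.utc))
        )
        self.pnl = new_pnl

rec = PnlRecord(book="rates_desk", pnl=1_000.0)
rec.adjust(950.0, user="controller_a")
```

Keeping the audit write and the value update in one method is the simplest way to guarantee the trail can never drift from the record.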
City: Santa Monica, CA /Glendale, CA /Seattle, WA
Onsite/ Hybrid/ Remote: Hybrid (4 days onsite per week, no flexibility)
Duration: 10 Months
Rate Range: Up to $100/hr on W2
Work Authorization: GC, USC, All valid EADs except OPT, CPT, H1B
Must Have:
- Python
- Test automation framework development
- API testing
- UI testing
- Integration testing
- End-to-end acceptance testing
- CI/CD pipeline integration
- Jenkins or Spinnaker
- Gherkin / BDD / TDD
- SQL
- Database testing
- Backend testing
- Selenium
- ETL / data accuracy validation
- Big data exposure such as Spark or Hadoop
Responsibilities:
- Partner with software engineers to understand the advertising platform and define effective test strategies.
- Build and maintain automated test frameworks and test suites across UI, API, integration, and end-to-end layers.
- Participate in design discussions to improve platform testability and strengthen defect detection and prevention.
- Support issue triage, root cause analysis, and cross-team defect resolution.
- Create and execute manual test cases where automation is not practical.
- Convert manual test coverage into automated coverage wherever feasible.
- Validate backend workflows, database logic, and data accuracy across systems.
- Contribute within distributed Scrum teams operating in 2-week sprint cycles.
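The ETL and data-accuracy validation called out above reduces, at its core, to reconciling a source extract against a target. A minimal sketch follows; the row shape and `spend` field are invented for illustration:

```python
def validate_row_parity(source_rows, target_rows, key="id"):
    # Compare source and target extracts by key: report keys missing from
    # the target and rows whose values disagree -- the heart of an ETL
    # data-accuracy check.
    src = {r[key]: r for r in source_rows}
    tgt = {r[key]: r for r in target_rows}
    missing = sorted(src.keys() - tgt.keys())
    mismatched = sorted(
        k for k in src.keys() & tgt.keys() if src[k] != tgt[k]
    )
    return {"missing_in_target": missing, "mismatched": mismatched}

source = [{"id": 1, "spend": 10.0}, {"id": 2, "spend": 20.0}, {"id": 3, "spend": 5.0}]
target = [{"id": 1, "spend": 10.0}, {"id": 2, "spend": 21.0}]
report = validate_row_parity(source, target)
```

In practice the same check runs as SQL against both systems; the dictionary version above just makes the missing/mismatched split explicit.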
Qualifications:
- 4+ years of hands-on software test development experience across functional and non-functional testing.
- Strong Python proficiency with solid backend testing experience.
- Experience designing, building, and enhancing test automation frameworks.
- Strong experience in API, UI, integration, database, and end-to-end testing.
- Proficiency with CI/CD tools such as Jenkins, Spinnaker, or similar.
- Experience with Gherkin and BDD/TDD practices.
- Strong SQL skills, including query writing and optimization for database validation.
- Experience validating ETL pipelines, data quality, and data accuracy.
- Familiarity with distributed Agile/Scrum delivery environments.
- Bachelor's or Master's degree in Computer Science or related field, or equivalent experience.
Nice to Have:
- Big data testing experience with Spark or Hadoop
- Server-side application testing experience
- Advertising technology domain experience
Hello,
My name is Praveen Kumar from Pronix Inc!
Job Title: GCP Cloud Security Developer / Python Development with GCP - W2 Only
Location: Temple Terrace, FL | Irving, TX (Hybrid Position)
Job Type: Long term Contract
Interview Mode: Webcam Interview
Job Description:
- Bachelor's Degree.
- 5+ years of relevant work experience.
- 5+ years combined experience with GCP and Azure (minimum 4 years in GCP infrastructure, IAM, AI).
- 8+ years of Python development experience.
- 5+ years developing cloud automation solutions.
- Strong knowledge of PowerShell, Django, HTML, jQuery, Bash scripting.
- Experience with GCP SDK and CLI.
- Terraform proficiency.
- Linux and Windows system administration.
- Cloud networking and security expertise.
- Experience with JIRA, Jenkins, Ansible, Git, Confluence.
- Basic DB and SQL knowledge.
- Strong experience with GCP Cloud Functions and Azure Functions.
- Fluency in using AI-assisted coding tools (Gemini, Copilot) to accelerate automation.
Interested candidates can share their resume to email :
Ph: Six Zero Nine Three Seven Eight One One Four Two
Compensation: $150-200k
Responsibilities: Develop highly scalable applications using Python frameworks.
Create and deploy applications in the Azure environment with various interconnected Azure components.
Understand and enhance front-end applications using React JS, HTML5 and CSS3.
Identify and fix bottlenecks that may arise from inefficient code.
Ensure that programs are written to the highest standards (e.g., Unit Tests) and technical specifications.
Documentation of the key aspects of the project.
Qualifications: 5+ years of development experience in Python is mandatory, with optional experience in Databricks and Azure cloud computing.
Knowledge of database systems (e.g., SQL, NoSQL) and distributed computing frameworks.
Prior experience in building VaR systems is desirable.
Excellent communication and people skills, with the ability to collaborate effectively with stakeholders at all levels.
Solid organizational skills, ability to multi-task across different projects.
Experience with Agile methodologies.
Skilled at independently researching topics using all means available to find relevant information.
Excellent verbal and written communication skills.
Self-starter with ability to multi-task and to maintain momentum.
Exposure to Power BI tools is highly desirable.
Knowledge of user authentication and authorization between multiple systems, servers and environments.
Senior Software Engineer (Backend | Java/Python | AWS)
We're looking for a Senior Software Engineer to join a high-impact Metadata Engineering team focused on powering the backend systems behind large-scale streaming platforms.
This role is all about building the systems that organize, structure, and distribute content data — helping ensure a seamless viewing experience for millions of users.
What You'll Do
- Design and build scalable backend systems and services
- Own features end-to-end (design → development → deployment)
- Improve existing systems and workflows for performance and scalability
- Collaborate with cross-functional teams (Product, TPMs, Engineering)
- Mentor junior engineers and contribute to technical direction
What You Need
- 5+ years of backend software engineering experience
- Strong experience with Java and Python
- Hands-on experience with AWS (Lambda, SQS/SNS, Kinesis, DynamoDB, etc.)
- Experience with SQL and/or NoSQL databases
- Familiarity with event-driven architectures (Kafka, Kinesis, etc.)
- Strong understanding of data structures, algorithms, and system design
- Ability to work independently and solve complex problems
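The event-driven requirement above (SQS/Kinesis-style consumption with bounded retries and a dead-letter path) can be sketched with the standard library alone; the handler logic and event shape are hypothetical:

```python
import queue

def consume(events: queue.Queue, handler, dead_letters: list, max_retries: int = 3):
    # Drain the queue, retrying each event a bounded number of times and
    # routing persistent failures to a dead-letter list -- the same shape
    # as an SQS consumer backed by a dead-letter queue.
    processed = []
    while not events.empty():
        event = events.get()
        for attempt in range(max_retries):
            try:
                processed.append(handler(event))
                break
            except ValueError:
                if attempt == max_retries - 1:
                    dead_letters.append(event)
    return processed

def handler(event):
    # Stand-in for real metadata processing: reject malformed events.
    if event.get("title") is None:
        raise ValueError("missing title")
    return event["title"].upper()

events = queue.Queue()
for e in [{"title": "show_a"}, {"title": None}, {"title": "show_b"}]:
    events.put(e)
dlq = []
result = consume(events, handler, dlq)
```

The key design choice is that a poison message never blocks the stream: it fails its retry budget, lands in the dead-letter path, and processing continues.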
Nice to Have
- Experience with Graph databases (Neo4j)
- Familiarity with Terraform or GraphQL
Requirements
- Bachelor's degree in a STEM field
- Strong backend engineering background (non-negotiable)
Why This Role?
- Work on high-scale systems powering major streaming platforms
- Be part of a team shaping how content data is structured and delivered
- Opportunity to make a real impact on user experience at scale
- Python & Risk (Charlotte, NC).
Compensation: $150-225k
Responsibilities: Demonstrate a command of breaking market trends, competitive positioning, client priorities and business opportunities through consistent delivery of cost controlling and business enabling solutions.
Apply extensive business knowledge to inform critical decisions on strategic initiatives, future projects, long-term planning, etc.
Optimize technology outcomes for the company while maintaining strong operational and cyber risk controls.
Engage strong communication skills in public speaking engagements, client / internal presentations, press appearances and internal town halls to convey clear and consistent messaging on company's strategic technology direction and performance.
Foster confidence and buy-in among various stakeholder groups.
Leverage leadership skills to motivate employees throughout the technology organization.
Articulate a clear vision to encourage buy-in among stakeholder groups.
Continuously seek out feedback and refine approach accordingly.
Draw on external network of competitive counterparts and other market participants to benchmark the company among peers and identify new strategic opportunities.
Qualifications: 15+ years of professional experience at large banks with a focus on Risk Technology.
Experience leading team(s) and owning multiple deliveries at the same time.
Strong expertise in Python is mandatory, with optional experience in Azure cloud computing.
Familiarity with risk metrics such as VaR, sVar, stress testing, Counterparty Credit Risk, Credit quality indicators, credit sentiment analysis and sensitivity analysis.
Experience working with market data sources, financial instruments, and trading systems.
Exposure to frontend JavaScript technologies is required.
Experience in Databricks and AI/ML technologies is highly desirable.
Knowledge of database systems (e.g., SQL, NoSQL) and distributed computing frameworks.
Excellent communication and people skills, with the ability to collaborate effectively with stakeholders at all levels.
Ability to work independently and manage multiple priorities in a demanding environment.
Advanced sense of accountability and follow-through with an ability to effectively prioritize multiple tasks, projects, and goals.
Ability to understand complex and highly technical concepts, and ability to easily explain/translate them to peers.
Knowledge of project management frameworks including Waterfall and Agile and tools such as JIRA and MS Project.
Ability to prioritize work by setting and meeting realistic deadlines, forecasting, and communicating changes resulting from risks and issues, while ensuring an elevated level of fiscal control and accountability for project budget and resources.
Strong relationship management, collaboration and influencing skills.
Ability to successfully engage in multiple initiatives simultaneously while interacting professionally with executives, managers, and subject matter experts.
Knowledge of financial operations and planning, controls management, MIS, data management and reporting processes related to commercial investment banks.
BA/BS degree required.
Excellent verbal and written communication skills.
Bhagyashree Yewle, Principal Lead Recruiter - YOH SPG
Lead Platform Engineer with Python Programming & AWS Cloud - HYBRID ONSITE
Location Flexibility: This role is based 4 days per week in either Boston MA or Needham MA, with occasional travel between offices as needed.
Candidates requiring visa sponsorship are welcome to apply!
FROM THE HIRING MANAGER - For our Platform Engineering team, we’re looking for people who have experience building technology to be used by other development teams (not business users).
THE POSITION
We are currently seeking qualified candidates for a Lead Software Engineer position on our Platform Engineering team, which is responsible for designing and building tools and workflows for our internal software engineering teams. These systems will allow them to build and deploy applications effortlessly, so they can focus on building business functionality for their users. Your work will directly support enterprise-wide initiatives, helping teams across the organization streamline operations, improve reliability, and accelerate delivery. This role is ideal for someone who enjoys solving complex technical problems and collaborating with other engineers to create high-impact internal platforms. The ideal candidate should have experience enabling IT organizations to work more efficiently, standardize best practices, and reduce friction across the development lifecycle. This includes creating reusable components, automation frameworks, and platform capabilities that empower our engineering teams.
KEYS TO THE POSITION
- 10+ years of experience in software engineering
- Proficient in Python with experience building tools using widely adopted libraries such as Pandas, NumPy, Requests, BeautifulSoup, FastAPI, and SQLAlchemy
- Skilled in packaging, testing, and deploying Python applications using tools like pytest, setuptools, and Docker
- Hands-on experience designing, deploying, and managing cloud-native applications using AWS services (e.g., EC2, Lambda, S3, RDS, CloudFormation), with a strong grasp of scalable and secure architecture principles.
- Experience designing and operating DevOps platforms including CI/CD pipelines, infrastructure as code (e.g., Terraform, Jenkins), and container orchestration using ECS or EKS
- Experience designing and operating monitoring, logging, and performance optimization tools (e.g., OpenSearch, OpenTelemetry, CloudWatch, X-Ray)
- Excellent written and verbal communication
- Attention to detail, self-discipline, and passion to drive and innovate
- Must be comfortable with test-driven development, continuous integration, and agile development methodologies using tools like Git, Artifactory, and Jenkins
- Experience working with offshore development teams is a plus
- Bachelor’s degree in computer science, engineering, math, or related field, or equivalent experience is preferred
Estimated Min Rate: $140,000.00
Estimated Max Rate: $165,000.00
What’s In It for You?
We welcome you to be a part of one of the largest and most legendary global staffing companies and meet your career aspirations. Yoh’s network of client companies has been employing professionals like you for over 65 years in the U.S., UK, and Canada. Join Yoh’s extensive talent community to gain access to our vast network of opportunities, including this exclusive one. Benefit eligibility is in accordance with applicable laws and client requirements. Benefits include:
- Medical, Prescription, Dental & Vision Benefits (for employees working 20+ hours per week)
- Health Savings Account (HSA) (for employees working 20+ hours per week)
- Life & Disability Insurance (for employees working 20+ hours per week)
- MetLife Voluntary Benefits
- Employee Assistance Program (EAP)
- 401K Retirement Savings Plan
- Direct Deposit & weekly epayroll
- Referral Bonus Programs
- Certification and training opportunities
Note: Any pay ranges displayed are estimations. Actual pay is determined by an applicant's experience, technical expertise, and other qualifications as listed in the job description. All qualified applicants are welcome to apply.
Yoh, a Day & Zimmermann company, is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
Visit to contact us if you are an individual with a disability and require accommodation in the application process.
For California applicants, qualified applicants with arrest or conviction records will be considered for employment in accordance with the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act. All of the material job duties described in this posting are job duties for which a criminal history may have a direct, adverse, and negative relationship potentially resulting in the withdrawal of a conditional offer of employment.
It is unlawful in Massachusetts to require or administer a lie detector test as a condition of employment or continued employment. An employer who violates this law shall be subject to criminal penalties and civil liability.
By applying and submitting your resume, you authorize Yoh to review and reformat your resume to meet Yoh’s hiring clients’ preferences. To learn more about Yoh’s privacy practices, please see our Candidate Privacy Notice.
Remote working/work-at-home options are available for this role.