Colab Python Jobs in USA

1,215 positions found — Page 4

Contract ODI Developer - Hybrid Onsite in Boston MA - USC OR GC ONLY
✦ New
Salary not disclosed
Boston, MA, Hybrid 5 hours ago
Please send current resumes directly to
Bhagyashree Yewle, Principal Lead Recruiter - YOH SPG
ODI Developer - Hybrid Onsite in Boston MA - USC OR GC ONLY (No Visas)
  • Location: Boston, MA
  • Hybrid: 3 days on site
  • Potential Convert: Yes, USC/GC ONLY no exceptions. WILL NOT SPONSOR
Top 5 Must-Haves:
  • ETL/ELT
  • ODI
  • PL/SQL coding
  • 7 years’ experience
  • Knowledge of the admin side of ODI (not day-to-day administration, but able to handle it when needed)
  • Scripting – Python & Unix Scripting
Role Overview:
Seeking a highly skilled and experienced Sr. ODI Developer to join our Private Banking Systems team. The ideal candidate will possess expertise in a range of technologies, including ODI (Oracle Data Integrator), Oracle Data Warehouse, Linux, and Python scripting; a deep understanding of the banking domain is a big plus. As a Data Engineer, you will play a pivotal role in designing, developing, and maintaining data solutions.

Key Responsibilities:
  • Build ODI mappings/interfaces, packages, procedures, scenarios, topology configuration, ODI Agent and load plans to integrate data from multiple enterprise systems.
  • Build PL/SQL queries, procedures, and data loading processes, ensuring high performance and scalability to meet the evolving data needs of the various applications.
  • Design, develop, and maintain ETL/ELT pipelines using Oracle Data Integrator (ODI).
  • Collaborate effectively with cross-functional teams, including other data engineers, DBA group, analysts, and business stakeholders, to understand data requirements and deliver solutions.
  • Monitor and troubleshoot RMJ jobs, ODI workflows, sessions, agents, and data pipelines on Linux environments.
  • Perform root cause analysis for failures related to ODI workflows, RMJ jobs, network connectivity, API integrations, and file transfers.
  • Optimize ETL workflows to improve reliability, performance, and scalability.
  • Use scripting and automation tools to support data processing and operational workflows.
  • Work in Linux/Unix environments, using command-line tools and shell scripts for job automation and troubleshooting.
  • Maintain comprehensive documentation of data processes, configurations, and best practices.
  • Participate in walk-throughs which review program specifications, source code, and all technical supporting documentation, including screens/reports. Provide feedback in accordance with team standards and guidelines.
  • Participate in implementation of changes, enhancements, and newly developed programs.
  • Conduct technical research and provide recommendations, develop proofs of concept or prototypes, contributing to technical design of applications.
  • Help identify coding patterns and anti-patterns and enforce the patterns through code reviews.
  • Quickly resolve issues encountered by business lines in the production environment, maintaining a helpful, "high touch" approach to working with business users and performing root cause analysis, technology evaluation, and performance tuning.

Desired Qualifications:

  • Degree in Computer Science, Engineering or related technical area
  • 7+ years of extensive hands-on experience in ODI, Oracle Data Warehouse, Oracle PL/SQL, Linux, Python scripting, and the ODI admin module (ODI Agent setup, logs configuration, certificate installation).
  • Must have experience building PL/SQL queries for Oracle Server (incl. stored procedures, functions…) and must understand basic principles of data modeling
  • Excellent collaborative and communication skills, particularly in high-stress situations
  • Experience with Python and Linux scripting, CLE, and networking fundamentals (APIs, IP/ports, SFTP/FTP connectivity)
  • High proficiency in development practices: unit testing, Continuous Integration (CI/CD), refactoring, clean code
  • Experience with Bitbucket/GIT source control management
  • Problem solving skills, able to determine upcoming risks & issues and address them accordingly.
  • Ability to interpret and troubleshoot applications using logs.
  • Pro-active approach and good communication skills.
  • Experience with agile methodologies (Scrum, Kanban) and tools (Jira)
Nice to Have:
  • Private Banking domain experience.
  • Working experience in a financial service industry
  • Financial application knowledge like FIS AddVantage, CRD, CRM Pivotal.
  • Experience with Apache Airflow for workflow orchestration.
  • Knowledge of dbt (Data Build Tool) for modern data transformations.
  • Exposure to cloud data platforms or hybrid data architectures.

Key Competencies:

  • Strong analytical and problem-solving skills
  • Ability to work with large-scale enterprise data environments
  • Excellent collaboration and communication skills
  • Ability to manage multiple priorities in a fast-paced environment
  • Commitment to continuous learning and technology innovation

Estimated Min Rate: $55.00

Estimated Max Rate: $72.00

What’s In It for You?
We welcome you to be a part of one of the largest and most legendary global staffing companies and meet your career aspirations. Yoh's network of client companies has been employing professionals like you for over 65 years in the U.S., UK, and Canada. Join Yoh's extensive talent community for access to Yoh's vast network of opportunities, including this exclusive one. Benefit eligibility is in accordance with applicable laws and client requirements. Benefits include:

  • Medical, Prescription, Dental & Vision Benefits (for employees working 20+ hours per week)
  • Health Savings Account (HSA) (for employees working 20+ hours per week)
  • Life & Disability Insurance (for employees working 20+ hours per week)
  • MetLife Voluntary Benefits
  • Employee Assistance Program (EAP)
  • 401K Retirement Savings Plan
  • Direct Deposit & weekly epayroll
  • Referral Bonus Programs
  • Certification and training opportunities

Note: Any pay ranges displayed are estimations. Actual pay is determined by an applicant's experience, technical expertise, and other qualifications as listed in the job description. All qualified applicants are welcome to apply.

Yoh, a Day & Zimmermann company, is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.

Visit to contact us if you are an individual with a disability and require accommodation in the application process.

For California applicants, qualified applicants with arrest or conviction records will be considered for employment in accordance with the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act. All of the material job duties described in this posting are job duties for which a criminal history may have a direct, adverse, and negative relationship potentially resulting in the withdrawal of a conditional offer of employment.

It is unlawful in Massachusetts to require or administer a lie detector test as a condition of employment or continued employment. An employer who violates this law shall be subject to criminal penalties and civil liability.

By applying and submitting your resume, you authorize Yoh to review and reformat your resume to meet Yoh's hiring clients' preferences. To learn more about Yoh's privacy practices, please see our Candidate Privacy Notice. Working/work-at-home options are available for this role.

contract
SDET
✦ New
Salary not disclosed
Santa Monica, CA 5 hours ago

Notes:

  • STRONG PYTHON proficiency is a MUST, with a robust knowledge base in backend development and database testing.
  • Proficient in SQL, including crafting and optimizing queries for database testing. Familiarity with big data technologies (e.g., Spark, Hadoop) and ETL/data accuracy validation.
  • Proven expertise in automated testing across APIs, UIs, integrations, and data validation, as well as end-to-end acceptance testing.


Description/Comment:

Job Overview:

The Technology team in Santa Monica is seeking a Sr. Software Development Engineer in Test to join our Engineering Services team. This role involves building test automation and processes to ensure the quality of our advertising systems. Properly functioning systems reliably deliver relevant ads to our viewers, driving higher revenue and helping customers discover relevant brands and products. Defects in our ad systems can negatively impact revenue and viewer experience due to irrelevance and repetition. This is a unique opportunity to impact the QE and automation process and culture, as well as the products we release.


Basic Qualifications

Key Responsibilities:

  • Work closely with Software Engineers to understand the complex advertising ecosystem at Technology.
  • Develop automated test frameworks and suites for UI, API, and Integration levels using Python or other OO languages.
  • Participate in design discussions to evolve the platform, enabling richer testing scenarios and simplifying defect detection and prevention.
  • Assist with triage, diagnosis, and resolution of issues discovered across teams.
  • Contribute to end-to-end acceptance tests.
  • Develop and execute manual test cases when automated testing is not feasible.
  • Drive the conversion of manual tests to automated tests whenever possible.
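Automated API testing as described above usually means separating the client call from the transport so suites can run without live services. A stdlib-only sketch; the endpoint, `get_ad_decision`, and the response fields are hypothetical, invented for illustration:

```python
import json
from urllib.request import urlopen, Request

def get_ad_decision(base_url, slot_id, opener=urlopen):
    """Call a (hypothetical) ad-decision endpoint and return the parsed body.
    The `opener` is injectable so tests can stub the network layer."""
    req = Request(f"{base_url}/ads/{slot_id}", headers={"Accept": "application/json"})
    with opener(req) as resp:
        return json.loads(resp.read())

class FakeResponse:
    """Stand-in for the HTTP response object, enough for json + context manager."""
    def __init__(self, payload):
        self._body = json.dumps(payload).encode()
    def read(self):
        return self._body
    def __enter__(self):
        return self
    def __exit__(self, *exc):
        return False

def test_ad_decision_has_creative():
    # pytest-style check against a stubbed response; no live service needed.
    fake = lambda req: FakeResponse({"slot": "s1", "creative_id": "c42"})
    body = get_ad_decision("https://ads.example.test", "s1", opener=fake)
    assert body["creative_id"] == "c42"
```

The same shape scales to integration suites: swap the fake opener for the real one in environment-gated runs.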


Basic Qualifications:

  • Minimum of 4 years of hands-on software test development experience, including both functional and non-functional test development.
  • Passion for driving best practices in the testing space.
  • Proficiency with Python or other OO languages.
  • Knowledge of software engineering practices and agile approaches.
  • Strong desire to establish and improve product quality.
  • Experience building or improving test automation frameworks.
  • Proficiency in CI/CD integration and pipeline development using Jenkins, Spinnaker, or similar tools.
  • Experience with Gherkin (BDD/TDD).
  • Willingness to take on challenges while being part of a team.


Preferred Qualifications:

  • Strong SQL knowledge and experience with database testing.
  • Experience with server-side and database projects.
  • Selenium experience is preferred; strong Python skills are a must.


Required Education:

  • B.S. in Computer Science or equivalent degree/work experience.


About US Tech Solutions:

US Tech Solutions is a global staff augmentation firm providing a wide range of talent on-demand and total workforce solutions. To know more about US Tech Solutions, please visit our website. US Tech Solutions is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.


Recruiter Details:

Name: Emmadi Srikanth

Email:

Not Specified
Back End Developer
✦ New
Salary not disclosed
Glendale, CA 5 hours ago

Job Title: Backend Engineer (Node.js / Python)

Location: Onsite – Glendale

Salary: $190,000 Base + Bonus

Overview

We are seeking a Backend Engineer with strong Node.js and Python experience to join a fast-growing engineering team building scalable, high-performance systems. This role is ideal for an engineer who enjoys designing robust backend services, working with modern cloud architectures, and collaborating closely with product and data teams.

The ideal candidate has experience building production-grade APIs and distributed systems and is curious about emerging AI technologies. Experience with AI or machine learning is a plus, but we are equally excited about engineers who are interested in learning and working with Large Language Models (LLMs).


Responsibilities

  • Design, build, and maintain scalable backend services and APIs using Node.js and Python
  • Architect and develop high-performance, reliable distributed systems
  • Build and maintain RESTful and event-driven services used by internal and customer-facing applications
  • Collaborate with frontend engineers, product managers, and data teams to deliver new features
  • Optimize system performance, reliability, and scalability
  • Write clean, well-tested, and maintainable code
  • Participate in architecture discussions, code reviews, and technical planning
  • Contribute to integrating AI/LLM-driven capabilities into backend systems where appropriate
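The "RESTful services" responsibilities above boil down to request routing plus JSON serialization. A minimal stdlib-only Python sketch (the posting covers both Node.js and Python); the `/health` endpoint and its payload are illustrative placeholders, not an API from the role:

```python
import json
from wsgiref.simple_server import make_server

def app(environ, start_response):
    """Minimal WSGI JSON service: GET /health reports service status."""
    if environ.get("PATH_INFO") == "/health" and environ.get("REQUEST_METHOD") == "GET":
        body = json.dumps({"status": "ok"}).encode()
        start_response("200 OK", [("Content-Type", "application/json")])
        return [body]
    # Anything else is an unknown route.
    start_response("404 Not Found", [("Content-Type", "application/json")])
    return [json.dumps({"error": "not found"}).encode()]

# To serve locally: make_server("", 8000, app).serve_forever()
```

In production one would reach for a framework (Express in Node.js, FastAPI/Flask in Python), but the contract is the same: a callable that maps a request environment to a status, headers, and body.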


Required Qualifications

  • 5+ years of professional backend software engineering experience
  • Strong development experience with Node.js and/or Python
  • Experience designing and building REST APIs and microservices
  • Strong understanding of database design, data modeling, and performance optimization
  • Experience with cloud environments (AWS, GCP, or Azure)
  • Familiarity with containerization technologies such as Docker
  • Strong problem-solving skills and ability to work in a collaborative team environment
Not Specified
Data Engineer - GCP
✦ New
Salary not disclosed
Phoenix, AZ 5 hours ago

Job Summary


We are seeking a skilled Data Engineer with 5+ years of hands-on experience designing, building, and maintaining scalable data pipelines and data platforms. The ideal candidate has strong experience working with DAG-based orchestration, cloud technologies (preferably Google Cloud Platform), SQL-driven data processing, Apache Spark, and Python-based API development using FastAPI. You will play a key role in enabling reliable data ingestion, transformation, and quality assurance across enterprise systems.


Key Responsibilities


  • Design, develop, and maintain DAG-based data pipelines (Airflow or similar orchestration tools).
  • Build and optimize SQL-based data transformations for analytics and reporting.
  • Develop and manage batch and streaming data pipelines using Apache Spark.
  • Implement Python-based REST APIs using FastAPI for data services and integrations.
  • Perform data quality checks, validation, reconciliation, and anomaly detection.
  • Work with cloud platforms (preferably Google Cloud Platform) for storage, compute, and orchestration.
  • Architect and implement cloud-native data platforms on GCP, leveraging services such as BigQuery, BigTable, Dataflow, Dataproc, Pub/Sub, and Cloud Storage.
  • Monitor pipeline performance, troubleshoot failures, and optimize processing efficiency.
  • Collaborate with analytics, application, and business teams to understand data requirements.
  • Ensure best practices around security, scalability, and maintainability.
  • Ensure data quality, reliability, security, governance, and compliance with enterprise standards
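The "DAG-based pipeline" in the bullets above is, at its core, a set of tasks run in dependency order. This is not Airflow itself, just the ordering idea an Airflow DAG encodes, sketched with Python's stdlib `graphlib`; the task names are illustrative:

```python
from graphlib import TopologicalSorter

# Each task lists the tasks it depends on, mirroring how an
# Airflow DAG wires operators together with upstream/downstream edges.
pipeline = {
    "extract": set(),
    "transform": {"extract"},
    "quality_check": {"transform"},
    "load": {"quality_check"},
}

def run_pipeline(tasks, runner):
    """Execute tasks in dependency order; `runner` is called once per task."""
    for task in TopologicalSorter(tasks).static_order():
        runner(task)
```

Airflow adds scheduling, retries, and observability on top, but debugging an orchestration problem usually starts from exactly this picture: which task depends on which, and in what order they may run.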


Required Skills & Experience


  • 5+ years of experience as a Data Engineer
  • Strong experience with DAG orchestration (e.g., Apache Airflow).
  • Solid understanding of cloud technologies, preferably Google Cloud Platform (GCP).
  • Advanced proficiency in SQL for data processing and transformations.
  • Hands-on experience running and tuning Apache Spark jobs.
  • Experience developing APIs using Python and FastAPI.
  • Strong understanding of data quality frameworks, checks, and validation techniques.
  • Proficiency in Python, Java, Scala, or PySpark, with strong SQL expertise.
  • Hands-on experience with GCP data services, including BigQuery, BigTable, Dataproc, Dataflow, and cloud-native ETL patterns.
  • Experience with software delivery methodologies such as Agile, Scrum, and CI/CD practices.
  • Strong analytical and problem-solving skills.
  • Ability to work independently and in cross-functional teams.
  • Good communication and documentation skills.
Not Specified
Quantitative Developer
✦ New
Salary not disclosed
Jersey City, NJ 5 hours ago

Job Description: We are seeking a Sr. Python Developer with strong Python skills, analytical thinking, and financial/risk experience to help with system design and implement the core modeling, scenario generation, and analytics components of this enterprise platform.

This role blends quantitative development and software engineering to build scalable tools used by Treasury, Market Risk, and senior decision-makers.

Key Responsibilities

Quantitative Modeling & Scenario Analytics

• Develop and implement Python models for balance sheet projections, interest rate risk (IRR), liquidity analytics, and scenario-driven stress testing.

• Support both regulatory scenarios (e.g., CCAR, SCB, liquidity stress) and ad hoc “what-if” analyses for Treasury and risk stakeholders.

• Build tools for scenario transformations, sensitivity calculations, curve construction, and quantitative stress analytics.
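Curve construction, mentioned in the bullet above, often starts from something as simple as interpolating quoted points. A pure-Python sketch of a linearly interpolated zero-rate curve with flat extrapolation; the tenors and rates are invented inputs, and a production curve model would be far richer (bootstrapping, day counts, splines):

```python
from bisect import bisect_left

def build_curve(tenors, rates):
    """Return a function that linearly interpolates rates between quoted
    tenors (in years), with flat extrapolation beyond the ends."""
    def rate_at(t):
        if t <= tenors[0]:
            return rates[0]
        if t >= tenors[-1]:
            return rates[-1]
        i = bisect_left(tenors, t)
        # Linear weight between the bracketing quoted tenors.
        w = (t - tenors[i - 1]) / (tenors[i] - tenors[i - 1])
        return rates[i - 1] + w * (rates[i] - rates[i - 1])
    return rate_at
```

In practice this layer is where pandas/NumPy vectorization pays off, since scenario engines evaluate curves at thousands of points per shock.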

Platform & Data Engineering

• Design and maintain high performance Python modules that serve as the computational core of the scenario analysis framework.

• Proficient with pandas, NumPy, and other quant libraries.

• Work with large datasets using SQL to integrate financial, balance sheet, and market inputs.

• Collaborate on the development of REST APIs that interface with scenario engines, model layers, and user applications.

Front-End & Workflow Integration

• Partner with UI developers to support React-based dashboards that present scenario results, visualizations, and analytics to business users.

Not Specified
Sr. Full Stack Engineer
Salary not disclosed
Fairfax, VA 3 days ago


Sr. Full Stack Engineer

Job ID

2025-2140

# of Openings

1

Overview

Currently seeking multiple Full Stack Developers in support of U.S. Citizenship and Immigration Services (USCIS) Engineering Support for Identity Services (ESIS). This individual will support Agile application development technologies and capabilities in the areas of software development, systems engineering, integration, and test of software applications and infrastructure. Will be skilled with front-end, back-end, and database development. Design and implement full stack cloud solutions to include IaaS, PaaS, and SaaS. Design and deploy computing infrastructure, physical or virtual machines, and other resources such as virtual-machine disk image libraries, block and file-based storage, firewalls, load balancers, IP addresses, and virtual local area networks. Implement cloud-based platform services and cloud-based software as a service for AWS. Perform DevOps functions.

Key Skills:

  • 10+ years of experience with full stack engineering with proficiency in database development/integration as well as server and client application development/integration
  • Software developing experience using Python and Java Spring framework
  • Experience with other software technologies such as Web Services (SOAP/REST), React/Angular, VS Code, SQL, Gradle, and/or Git
  • AWS experience required with experience deploying enterprise applications in AWS
  • Experience with CI/CD environment tools such as Docker, Jenkins, Ansible, Kubernetes


Responsibilities

  • Software development with Python, Java, React, and various scripting languages
  • Design data models and web APIs and create software tasks from system requirements
  • Perform requirements analysis, design, development, unit, and integration testing of software, troubleshooting and debugging of the system
  • Immediate responsibilities will include enhancing and maintaining the existing system as well as design, development, and documentation of new features
  • Create Git releases and pull requests and perform code reviews
  • Query logs utilizing Splunk and monitor dashboards utilizing New Relic
  • Usage of Atlassian Tools for day to day tasks within the Scrum process
  • Implement web services, data persistence access features and external interfaces
  • Partner closely with front-end and database engineers to ensure features are developed holistically
  • Follow Agile software development methodology and team architecture standards.
  • Will need to be able to read Architecture Diagrams
  • Perform test service to improve code coverage, mocking services, test driven development and unit testing
  • Will modify Helm Charts, Jenkinsfiles, and Dockerfiles


Qualifications

  • MUST BE US CITIZEN
  • Bachelor's degree required
  • Must be able to obtain and maintain a Public Trust security clearance
  • 10+ years of experience in Software Engineering
  • Must have experience in Python and Java Spring Framework (Boot, Batch, Data, Security)
  • Must have experience with other software technologies such as Web Services (SOAP/REST), React/Angular, VS Code, SQL, Gradle, and/or Git
  • Experience with design, development, enhancement, troubleshooting and debugging of web applications
  • Must have experience in an AWS cloud environment and with CI/CD tools (e.g., Docker, Jenkins, Kubernetes) for deployment processes, monitoring production environments, and modifying Docker/Jenkins files and Helm charts
  • Experience with scripting languages (Python, Bash, Powershell, Perl) is not required but nice to have
  • Understanding of the concept of branching and ability to use tools such as Git, VS Code, and/or Rancher
  • Experience with creating Git releases, creating pull requests, and reviewing code
  • Experience monitoring dashboards utilizing New Relic
  • Experience with Splunk to query logs
  • Experience with Junit testing preferred
  • Experience creating release instructions utilizing JIRA
  • Experience developing and integrating complex software systems through the full SDLC
  • Experience with Agile Scrum
  • Must have strong written and verbal communication skills


Target Pay Range

The below listed pay range for this position is not a guarantee of compensation or salary. The final offered salary will be influenced by a host of factors including, but not limited to, geographic location, Federal Government contract labor categories and contract wage rates, relevant prior work experience, specific skills and competencies, education, and certifications. Our employees value the flexibility at Pyramid Systems that allows them to balance quality work and their personal lives. We offer competitive compensation and benefits, including our Employee Stock Ownership Program, FlexPTO, and learning and development opportunities.

Pyramid Min

USD $125,731.00/Yr.

Pyramid Max

USD $188,597.00/Yr.

Why Pyramid?

Pyramid Systems, Inc. is an award-winning technology leader driving digital transformation across federal agencies. We empower forward-thinking innovations, accelerate production-ready software, and deliver secure solutions so federal agencies can meet their mission goals. Voted a Top Workplace both regionally (Washington, DC) and nationally (USA) for the past two years (2023 and 2024) based on feedback from our employees, we are headquartered in Fairfax, VA, and have a growing national footprint. We value and promote our Flexible Workplace approach because of the positive impact it has on work-life integration. We remain committed to ensuring every employee's voice is heard, performance and results are recognized and rewarded, development and advancement are a focus, and diversity, equity, and inclusion is a company priority. We offer competitive compensation and benefits (including a recently launched Employee Stock Ownership Plan - ESOP), a robust performance-based rewards program, and we know how to have fun! Our people and culture have endured and delivered for our clients for nearly three decades.

EEO Statement

Pyramid Systems, Inc. is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, or protected veteran status and will not be discriminated against on the basis of disability.

permanent
Data Scientist
✦ New
Salary not disclosed
Newark, NJ 1 day ago
Job Title: Data Scientist

Duration: 12 Months (Temp to Hire)

Location: Newark, NJ 07102


Job Description:

Are you interested in building capabilities that enable the organization with innovation, speed, agility, scalability and efficiency? When you join our organization at Prudential, you'll unlock an exciting and impactful career - all while growing your skills and advancing your profession at one of the world's leading financial services institutions.

As a Data Scientist on/in the US Businesses PruAdvisors Data Science Team you will partner with Machine Learning Engineers, Data Engineers, Business Leaders and other professionals to build GenAI and ML models to improve advisor experience, perform lead scoring, and increase sales revenue. You will implement AI and machine learning models that will deliver stability, scalability and integration with other advisor products and services. You will implement capabilities to solve sophisticated business problems, deploy innovative products, services and experiences to delight our customers! In addition to deep technical expertise and experience, you will bring excellent problem solving, communication and teamwork skills, along with agile ways of working, strong business insight, an inclusive leadership attitude and a continuous learning focus to all that you do.

Responsibilities:


  • Provide deep technical leadership to a portfolio of high impact data science initiatives involving sales and advisor experience. Identify the optimal sets of data, models, training, and testing techniques required for successful product delivery. Remove complex technical impediments
  • Leverage your experience and skills to identify new opportunities where data science and AI can improve experiences, gain efficiencies, and generate sales.
  • Manage team members in AI/ML and model development, testing, training, and tuning. Apply hands-on experience to ensuring best-in-class model development. Mentor team members in technical skill development and product ownership.
  • Communicate clearly and concisely, in writing and verbally, all facets of model design and development. Continuously look for insights in models developed and generate new ideas for model improvement.
  • Manage external vendors in the execution of parts of the data science development process as needed.
  • Leverage continuous integration and continuous deployment best practices, including test automation and monitoring, to ensure successful deployment of ML models and application code on Prudential's AI/ML platform.
  • Bring a deep understanding of relevant and emerging technologies, give technical direction to team members and embed learning and innovation in the day-to-day.
  • Work on significant and unique issues where analysis of situations or data requires an evaluation of intangible variables and may impact future concepts, products or technologies.
  • Familiarity with Python, SQL, AWS, and JIRA.
  • Familiarity with LLMs, deployment of LLMs, RAG, LangChain, LangGraph, and Agentic AI concepts.

The Skills and expertise you bring:


  • Degree in Applied Statistics, Computer Science, or Engineering, or experience in related fields with a focus on machine learning, AI, and LLMs.
  • Junior category industry experience with responsibility for developing and delivering advanced quantitative, AI/ML, analytical and statistical solutions.
  • Ability to lead a small team with minimal guidance and effectively leverage diverse ideas, experiences, thoughts and perspectives to the benefit of the organization to deliver AI products.
  • Ability to influence business stakeholders and to drive adoption of AI/ML solutions.
  • Experience with agile development methodologies, Test-Driven Development (TDD), and product management.
  • Knowledge of business concepts, tools and processes that are needed for making sound decisions in the context of the company's business
  • Demonstrated ability to mentor and operationally manage a data science team based on project requirements, resourcing requirements, and planning dependencies as appropriate; anticipates risks and bottlenecks and proactively takes action
  • Excellent problem solving, communication and collaboration skills, and stakeholder management
  • Significant experience and/or deep expertise with several of the following:
  • Machine Learning and AI: Understanding of machine learning theory, including the mathematics underlying machine learning algorithms. Expertise in the application of machine learning theory to building, training, testing, interpreting and monitoring machine learning models. Expertise in traditional machine learning models (unsupervised, XGBoost, etc.) and Large Language Models (OpenAI, Claude).
  • Model Deployment: Understanding of model development life cycle, CI/CD/CT pipelines (using tools like Jenkins, CloudBees, Harness, etc.), A/B testing, and pipeline frameworks such as AWS SageMaker, and newer AWS/Azure Agentic AI infrastructure products.
  • Data Acquisition and Transformation: Acquiring data from disparate data sources using APIs and SQL. Transform data using SQL and Python. Visualizing data using a diverse tool set including but not limited to Python.
  • Database Management Systems: Knowledge of how databases are structured and function in order to use them efficiently. May include multiple data environments, cloud/AWS, primary and foreign key relationships, table design, database schemas, etc.
  • Data Analysis and Insights: Analyzing structured and unstructured data using data visualization, manipulation, and statistical methods to identify patterns, anomalies, relationships, and trends.
  • Programming Languages: Python and SQL
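The data-acquisition bullet above (pull via SQL, transform in Python) is easy to sketch with the stdlib. A toy lead-scoring pass over an in-memory SQLite table; the `leads` schema and the scoring rule are invented for illustration, not Prudential's model:

```python
import sqlite3

def score_leads(rows):
    """Load lead records into SQL, then transform in Python:
    a toy score from visit count and assets, capped at 1.0."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE leads (name TEXT, visits INTEGER, assets REAL)")
    con.executemany("INSERT INTO leads VALUES (?, ?, ?)", rows)
    scored = []
    for name, visits, assets in con.execute(
        "SELECT name, visits, assets FROM leads ORDER BY visits DESC"
    ):
        scored.append((name, min(1.0, 0.1 * visits + assets / 1e6)))
    con.close()
    return scored
```

A real pipeline would swap SQLite for the warehouse connection and the hand-rolled rule for a trained model, but the SQL-then-Python split is the same shape.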
Not Specified
Engineering Technician, Intermediate|6087
✦ New
🏢 Spectraforce Technologies
Salary not disclosed
San Diego, CA 1 day ago
Job Title: Engineering Technician II - Camera Imaging Lab

Location: San Diego, CA (Onsite)

Duration: 6+ Months


Job Overview:

Client is seeking an Engineering Technician II to support the Camera Image Quality (IQ) engineering team in San Diego. This role will focus on capturing photo and video data, analyzing image quality, and maintaining imaging databases used for camera development and evaluation.

The ideal candidate will assist engineers in camera testing, data analysis, and lab operations, while utilizing tools such as Python, Android Debug Bridge (ADB), and image/video analysis software.

Key Responsibilities:


  • Photo & Video Capture and Imaging Data Management
  • Capture high-quality photos and videos using camera devices (primarily smartphone cameras) in both lab environments and real-world scenarios.
  • Execute image and video analysis tools to evaluate camera performance and generate quantitative and qualitative results.
  • Manage and maintain imaging databases across multiple Image Quality (IQ) domains, including: Texture and Noise, Color Accuracy, HDR, Exposure, Zoom, Bokeh, Video Quality, and Image Stabilization
  • Configure and operate Android devices using Android Debug Bridge (ADB) for camera testing and data capture.
  • Organize captured data through structured folder systems and naming conventions to support multi-device testing workflows.
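The ADB-driven capture workflow above is typically scripted. A minimal Python sketch that builds the `adb` command lines for a specific device serial; `KEYCODE_CAMERA` and the `/sdcard/DCIM` path are common Android defaults, not guaranteed for every device, and the per-serial folder layout is one possible convention:

```python
from pathlib import Path

def capture_photo_cmds(serial, remote_dir="/sdcard/DCIM", local_dir="captures"):
    """Build adb command lines for triggering the shutter on a device and
    pulling its capture directory into a per-serial local folder."""
    shoot = ["adb", "-s", serial, "shell", "input", "keyevent", "KEYCODE_CAMERA"]
    pull = ["adb", "-s", serial, "pull", remote_dir, str(Path(local_dir) / serial)]
    return shoot, pull

# With a device attached, each command list could be run via
# subprocess.run(cmd, check=True); here we only construct them.
```

Keeping pulls in per-serial folders is one way to satisfy the structured naming convention the posting asks for when testing multiple devices side by side.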


Camera Lab Maintenance


  • Maintain and operate Camera Image Quality evaluation labs located at San Diego campus.
  • Ensure the lab environment is properly configured for camera testing and data collection.
  • Support camera engineering teams by maintaining testing equipment, scenes, and workflows.


Image Quality Evaluation Support (Optional)


  • Develop and enhance Python-based tools for image quality analysis and evaluation.
  • Assist in developing evaluation protocols for camera IQ metrics, including texture, color, HDR, exposure, zoom, and video stabilization.
  • Support development of test scenarios for both lab setups and real-world capture conditions.
  • Work with tools such as FFmpeg for video analysis and processing.
  • Assist in building and organizing image/video datasets for machine learning training, including data annotation and labeling.
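Much of the FFmpeg work described above amounts to composing command lines; a minimal sketch, with filenames and the helper names invented for the example:

```python
def ffmpeg_extract_frames_cmd(video: str, out_pattern: str, fps: int = 1) -> list[str]:
    """ffmpeg command to dump frames from a clip for image-quality inspection
    (built here but not executed; -vf fps=N samples N frames per second)."""
    return ["ffmpeg", "-i", video, "-vf", f"fps={fps}", out_pattern]

def ffprobe_duration_cmd(video: str) -> list[str]:
    """ffprobe command to read a clip's duration as a bare number
    (built here but not executed)."""
    return ["ffprobe", "-v", "error", "-show_entries", "format=duration",
            "-of", "default=noprint_wrappers=1:nokey=1", video]
```

Wrapping the commands in functions like these keeps capture scripts consistent across test scenarios and makes the sampling rate a single, auditable parameter.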


Required Qualifications:


  • Bachelor's degree in Engineering (Electrical Engineering, Computer Engineering, Computer Science, or related field).
  • Strong interest in camera imaging, photography, or image quality evaluation.
  • Basic knowledge of Python programming.
  • Ability to manage large datasets and organize technical data efficiently.


Preferred Qualifications:


  • Experience with image or video processing tools.
  • Familiarity with Android Debug Bridge (ADB).
  • Knowledge of MATLAB, Python scripting, or FFmpeg.
  • Exposure to computer vision, image quality analysis, or camera testing.
  • Understanding of image datasets used in machine learning workflows.
Software Engineer
Salary not disclosed
NORTH CASTLE, NY 6 days ago
Software Engineer, IBM Corporation, Armonk, New York and various unanticipated client sites throughout the US (up to 40% telecommuting permitted):

  • Develop and enhance ETL (extract, transform, load) processes on Informatica PowerCenter from various sources to an Oracle data warehouse.
  • Lead a team of Data Engineers on assigned data warehouse ETL projects.
  • Design and implement automated processes for regulatory reporting and calculations.
  • Maintain documentation, runbooks, and incident records in compliance with audit requirements.
  • Support applications, data pipelines, and infrastructure for regulatory reports.
  • Plan and conduct Informatica ETL unit and development tests; monitor business ETL processing and troubleshoot identified issues.
  • Create Unix and Python scripts for data ingestion, validation, and process auditing.
  • Monitor existing data flows developed on the Apache Flink framework and work on enhancements.
  • Build data pipelines to extract data from various sources, perform transformations, and load it into target systems.
  • Develop and schedule data pipelines using Airflow DAGs.
  • Migrate business processes from Informatica to big data platforms and technologies while performing testing and quality assurance.
  • Develop PySpark applications to process and analyze large datasets efficiently, including implementing complex data transformations, aggregations, and statistical operations.
  • Maintain code and versioning in GitHub.
  • Write Oracle SQL and Hive queries to validate the data behind multiple financial reports.
  • Support data warehouse month-end loads and monitoring to ensure successful completion.

Utilize: Oracle SQL/PL/SQL, Unix shell scripting, Java, data analytics and integration, Informatica PowerCenter (extract, transform, load (ETL) tool), PySpark (Python API for Apache Spark), Apache Hive.

Required: Bachelor's degree or equivalent in Engineering or a related field and five (5) years of experience as a Managing Consultant, Engineer, or a related role. The five (5) years of experience must include utilizing Oracle SQL/PL/SQL, Unix shell scripting, Java, data analytics and integration, Informatica PowerCenter, PySpark, and Apache Hive.

$189,592 to $216,700 per year. Please send resumes. Applicants must reference SN159 in the subject line.
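The ingestion-validation and process-auditing scripts the role calls for typically boil down to reconciliation checks like these; the function names and check logic are illustrative, not from the posting:

```python
import hashlib

def validate_load(source_rows: int, target_rows: int, rejects: int = 0) -> bool:
    """Audit check for an ETL load: every source row must either land in the
    target table or be accounted for in the reject file."""
    return source_rows == target_rows + rejects

def file_checksum(data: bytes) -> str:
    """SHA-256 checksum for ingestion auditing, used to confirm a landed
    file is byte-identical to its source extract."""
    return hashlib.sha256(data).hexdigest()
```

Checks like these are cheap to run after each load and give month-end monitoring a concrete pass/fail signal instead of a manual row-count comparison.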

Software Engineer / Senior Software Engineer – Platform Software Engineer, Hopkinton, MA
✦ New
🏢 Dell
Salary not disclosed
Software Engineer/Senior Software Engineer – Platform Software Engineer (Hopkinton, MA)

Join us to do the best work of your career and make a profound social impact as a Software Engineer/Senior Software Engineer on our Software Engineering Team in Hopkinton, MA.

What you’ll achieve
We build enterprise-grade, massively scalable cluster-based storage systems running across Linux and BSD. Our portfolio includes a multi-petabyte S3 object store and a scale-out NAS platform. We’re a modern, scrum-based engineering org that ships with high velocity and quality, using the best tools, hardware, and practices.
As a Software Engineer, you will contribute to our platform stack, the foundation upon which these products are built.  Help us decide where your strengths best fit as you onboard. If you can explain how and where you’ll add outsized value in a distributed storage architecture, we want to talk.

You will:
  • Own problems end-to-end across design, implementation, testing, deployment, and supportability within a cluster storage system.
  • Build and harden distributed services: durability, consistency, replication, data paths, metadata, control planes, scheduling, placement, and lifecycle management.
  • Optimize performance across compute, memory, IO, networking (including RDMA), and storage media (NVMe/SSD/HDD); drive latency and throughput improvements with data-driven profiling.
  • Advance reliability through observability, telemetry, failure injection, chaos testing, and automated remediation; raise the bar on serviceability and supportability.
  • Contribute to security and compliance with secure-by-default engineering.

Take the first step towards your dream career
Every Dell Technologies team member brings something unique to the table. Here’s what we are looking for with this role:

Where You Might Contribute
You are expected to work across multiple skills in the following areas:
  • Primarily C, with Python and C++ components
  • Sophisticated networking, including RDMA (RoCE)
  • Scale-Out NAS Platform (BSD & Linux): BSD platform work, networking stack, file systems, NFS/SMB, POSIX semantics; device firmware/drivers, kernel development, NVMe/NVMe-oF
  • System Engineering: programming with Python and Linux shell, plus an understanding of data structures and algorithms; read/write performance (IOPS, latency, bandwidth), I/O datapath, NFS, SMB, S3, ACLs, networking layers (switching, routing, VLANs)
  • Performance & Observability, Security, Serviceability & Supportability

Essential Requirements
  • Strength in systems programming and distributed systems fundamentals (concurrency, networking, storage, consistency, fault tolerance).
  • Proficiency in at least one of C/C++, Java, or Python; willingness to learn across the stack.
  • Experience with Linux or BSD development and debugging (e.g., perf, strace/dtrace/eBPF, tcpdump).
  • Ability to write clean, testable code; familiarity with unit/integration/system testing and CI/CD.
  • Clear communication, collaboration, and a bias for action.

Desirable Requirements
  • Kernel subsystems, device drivers, firmware; RDMA/verbs; SPDK/DPDK; JVM tuning and GC; async/reactive patterns; lock-free/concurrent data structures
  • Filesystem internals; NFS/SMB semantics; S3 object store internals; erasure coding
  • Observability stacks, performance profiling at scale, chaos/failure injection
  • Security, crypto, FIPS/CC, secure boot, TPM, HSM integrations
  • Private or public cloud (Microsoft Azure, Google GCP, and Amazon AWS)

Compensation
Dell is committed to fair and equitable compensation practices.
The salary range for the Software Engineer position is $130K to $155K.
The salary range for the Senior Software Engineer position is $158K to $185K.
(Note: compensation may vary depending on location.)

Benefits and Perks of working at Dell Technologies
Your life. Your health. Supported by your benefits. You can explore the overall benefits experience that awaits you as a Dell Technologies team member — right now at

Who we are
We believe that each of us has the power to make an impact. That’s why we put our team members at the center of everything we do. If you’re looking for an opportunity to grow your career with some of the best minds and most advanced tech in the industry, we’re looking for you.

Dell Technologies is a unique family of businesses that helps individuals and organizations transform how they work, live and play. Join us to build a future that works for everyone because Progress Takes All of Us.

Dell Technologies is committed to the principle of equal employment opportunity for all employees and to providing employees with a work environment free of discrimination and harassment. Read the full Equal Employment Opportunity Policy here.

Job ID: R285860