GitHub Remote Jobs Repository Jobs in USA
11 positions found
Job Title: Software Engineer
Duration: 12 months (Right to Hire)
Location: 100% Remote
Responsibilities:
- Design and build internal tools and automation, including API linting frameworks, OpenAPI specification validators, code-generation utilities, and workflow automation, to improve consistency, quality, and efficiency across the API lifecycle.
- Ensure developer experience is at the center of all software created, building intuitive, reliable, and friction-reducing tools that empower API producers and consumers and simplify their workflows.
- Collaborate, coordinate, and align with technical stakeholders such as architecture, platform engineering, security, and API governance teams to ensure tooling meets enterprise needs and integrates seamlessly with broader technical ecosystems.
- Apply industry best practices to deliver secure, scalable, and maintainable solutions that align with the client's engineering, security, and compliance standards.
- Drive development activities from design through delivery, ensuring tools and services are released on time and effectively support both API producers and consumers.
- Champion code quality, implementing comprehensive unit testing, functional testing, and automated validation to ensure highly reliable solutions and fast feedback loops.
- Demonstrate engineering excellence, consistently applying high-quality engineering practices, including clean code principles, strong testing strategies (unit, integration, functional), CI/CD pipeline integration, versioning discipline, and reliable automated deployment strategies, to ensure tooling is robust, maintainable, and production-ready.
- Ensure all software created adheres to strong security principles, including secure coding practices, automated security scanning, vulnerability mitigation, and alignment with enterprise security standards, ensuring tooling is safe by design, safe by default, and safe in production.
- Support the tech lead in evaluating and shaping technical decisions, contributing insights and execution capabilities related to tooling, automation, and developer-experience improvements.
Tools & Technologies:
- Programming & Scripting: Java | Python | JavaScript | TypeScript, Bash / Shell Scripting
- API Design & Management: RESTful APIs, OpenAPI / Swagger (Specification, Validation), API Linting Frameworks, API Governance & Standards Enforcement, API Versioning Strategies
- Automation & Tooling: Code Generation Utilities, Workflow Automation Tools, Internal Developer Tooling, CLI Tools
- Testing & Quality Engineering: Unit Testing | Integration Testing | Functional Testing, Automated Validation Frameworks, Test Automation Tools, Code Quality & Static Analysis Tools
- CI/CD & DevOps: CI/CD Pipelines (GitHub Actions, GitLab CI, Jenkins), Automated Build & Deployment Pipelines, Artifact Repositories, Infrastructure Automation
- Cloud & Platforms: Cloud Platforms (AWS / Azure / GCP), Containerization (Docker), Kubernetes (optional / platform-dependent)
- Security & Compliance: Secure Coding Practices, Automated Security Scanning (SAST / DAST), Vulnerability Management Tools, Dependency Scanning, Compliance & Enterprise Security Standards
- Developer Experience (DX): Developer Tooling & Enablement Platforms, Documentation Automation, API Consumer & Producer Enablement Tools
- Collaboration & Version Control: Git | GitHub | GitLab, Agile / Scrum Methodologies, Issue & Work Tracking Tools (Jira, similar)
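The API linting and OpenAPI validation work described above can be pictured as small rule checks run over a parsed spec. The sketch below is illustrative only: the rules, messages, and `lint_openapi` helper are assumptions, not part of any named linting framework.

```python
# Minimal OpenAPI lint sketch: a few structural rules applied to a spec
# loaded as a plain dict. Rule names and messages are illustrative only.

def lint_openapi(spec: dict) -> list[str]:
    """Return a list of human-readable findings for an OpenAPI 3.x spec."""
    findings = []
    if not str(spec.get("openapi", "")).startswith("3."):
        findings.append("spec must declare an OpenAPI 3.x version")
    info = spec.get("info", {})
    if not info.get("title"):
        findings.append("info.title is required")
    if not info.get("version"):
        findings.append("info.version is required")
    for path, ops in spec.get("paths", {}).items():
        if not path.startswith("/"):
            findings.append(f"path {path!r} must start with '/'")
        for method, op in ops.items():
            if "responses" not in op:
                findings.append(f"{method.upper()} {path} has no responses")
    return findings

spec = {
    "openapi": "3.0.3",
    "info": {"title": "Example API"},      # missing info.version
    "paths": {"/pets": {"get": {}}},       # GET /pets has no responses
}
print(lint_openapi(spec))
```

A real validator would work from the full OpenAPI schema; the point here is only that such tooling reduces to composable, testable rules that can run in CI.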
Compensation: $150-200k
Responsibilities:
- Finding and improving operational efficiencies to best suit Cloud resource delivery, access management, and security implementations.
- Support of production workloads within a multi-Cloud environment, including, but not limited to, monitoring, patching, backup, and restoration of Cloud resources.
- Delivery of Cloud infrastructure for partner solutions.
- Creation and support of automation and infrastructure-as-code solutions for resource creation and policy, leveraging PowerShell/Azure DevOps, Ansible, and other approved corporate automation and orchestration platforms.
- Sound familiarity with orchestration and automation practices at scale, with a heavy emphasis on designing solutions for other teams to consume.
- Prior experience working with version control tools such as GitHub / Azure DevOps, including repository management, pipeline as code, and SDLC workflows.
- Basic understanding/debugging experience when it comes to application infrastructure, databases, networking, and DNS.
- Design, creation, and maintenance of complex Infrastructure as Code and Pipelines as Code solutions in a highly reusable capacity.
- Work with internal teams and vendors on the integration of a diverse set of systems into the central ITSM/ITOM platform, ServiceNow.
- Participate in the automated implementation of monitoring systems for event monitoring, alerting, and metrics.
- Identify client opportunities and help facilitate a mechanism to change, evolve, improve, and simplify the infrastructure and supporting processes/procedures.
- Directly interface with support teams across all disciplines to facilitate a closer relationship for collaborative implementations and knowledge sharing.
- Ensure handover of new/updated systems/documentation to the team providing 24x7x365 support.
Qualifications:
- Proficiency in Cloud services related to one or more Cloud providers, including IaaS, PaaS, and SaaS.
- Strong automation skillset with the ability to identify and create automation workflows.
- Strong PowerShell or other scripting experience, especially with the goal of automation.
- Experience with infrastructure as code development and iteration in Terraform, ARM, Bicep, or CloudFormation.
- Hands-on operational experience and knowledge with Azure or AWS.
- Experience with monitoring and log aggregation tools such as Azure Monitor, Log Analytics, CloudWatch, CloudTrail, Splunk, ELK, etc.
- Basic knowledge of foundational IT tooling across a wealth of domains to facilitate understanding of automation creation.
- Fundamental understanding of public vs. private networking in the Cloud.
- Strong knowledge of Git merging and branching strategies.
- Ability to execute proofs of concept and deploy complex solutions.
- Understanding of typical SDLC processes and workflows as they pertain to infrastructure.
- Basic understanding of the Atlassian Suite (Jira/Confluence) is a plus.
- Prior experience working with orchestration tools (Rundeck/Cutover preferred) and infrastructure-as-code tools (HashiCorp Terraform).
- Excellent verbal and written communication skills and the ability to articulate requirements, concepts, and ideas to business and technology partners.
- Strong technical ability for diagnosis, triage, troubleshooting, and problem analysis, with the ability to communicate results to business stakeholders and IT support teams to resolve issues quickly and effectively.
- Ability to influence people outside the immediate span of control, negotiate and resolve conflicts, and work with business users, IT partners, and vendors.
- High-level customer service mindset and commitment to deliver quality results to internal stakeholders in a demanding environment.
- A strong sense of urgency and accountability with exceptional time management skills.
- Comfortable effectively communicating with business end users, technical IT teams, network partners, and vendors.
- Comfortable in a fast-paced environment with changing priorities and schedules.
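The infrastructure-as-code iteration described in these qualifications centers on comparing declared (desired) state against actual deployed state. The tool-agnostic toy below sketches that idea; the resource names, fields, and `diff_state` helper are assumptions for illustration, not Terraform's or ARM's real plan logic.

```python
# Toy drift detector: compare a desired resource definition (as IaC would
# declare it) against the actual state, reporting creates/updates/deletes.
# Resource shapes here are illustrative, not any provider's real schema.

def diff_state(desired: dict, actual: dict) -> dict:
    """Return {'create': [...], 'update': [...], 'delete': [...]} by resource name."""
    plan = {"create": [], "update": [], "delete": []}
    for name, spec in desired.items():
        if name not in actual:
            plan["create"].append(name)
        elif actual[name] != spec:
            plan["update"].append(name)
    for name in actual:
        if name not in desired:
            plan["delete"].append(name)
    return plan

desired = {
    "web-vm":  {"size": "Standard_B2s", "region": "eastus"},
    "data-db": {"tier": "GP_Gen5_2",  "region": "eastus"},
}
actual = {
    "web-vm":  {"size": "Standard_B1s", "region": "eastus"},  # drifted size
    "old-vm":  {"size": "Standard_A1",  "region": "eastus"},  # no longer declared
}
print(diff_state(desired, actual))
```

Real tools add dependency ordering and provider API calls on top of exactly this diff, which is why "pipeline as code" reviews usually center on the generated plan rather than the raw templates.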
At Northrop Grumman, our employees have incredible opportunities to work on revolutionary systems in air and space that impact people's lives around the world today, and for generations to come. Our work preserves freedom and democracy, and advances human discovery and our understanding of the universe. We look for people who have bold new ideas, courage and a pioneering spirit to join forces to invent the future and have a lot of fun along the way. Our culture thrives on intellectual curiosity, cognitive diversity and bringing your whole self to work, and we have an insatiable drive to do what others think is impossible. Our employees are not only part of history, they're making history.
Northrop Grumman has an opening for a Principal or Senior Principal DevOps Engineer to join our team of qualified, diverse individuals. This position can be located in Roy, UT, Bellevue, NE, or Huntsville, AL.
As a DevOps Engineer, you will:
Develop scripts, workflows, and playbooks for storage provisioning, automation, and backup integration.
Design, build, maintain, and own SDP-compliant CI/CD pipelines, ensuring they incorporate security checks, automated patching, and governance requirements.
Develop containers with Podman and orchestrate deployments on Kubernetes (or similar platforms).
Create and maintain IaC using Ansible (or comparable tools) to provision and configure cloud resources, containers, and networking components.
Write automation scripts and playbooks in Python, Go, Bash, or other languages for building images, running SAST scans, and automating repetitive tasks.
Develop and evolve build, deployment, and release processes, including versioning, artifact storage, and promotion across environments.
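The "promotion across environments" responsibility above usually reduces to a gate: an artifact version moves one stage at a time, and only when its checks are green. A hedged sketch; the stage names, check names, and `can_promote` function are assumptions for illustration, not an SDP-mandated design.

```python
# Sketch of an environment promotion gate: an artifact version may only be
# promoted one stage at a time, and only if its required checks passed.
# Stage names and check names are assumptions for illustration.

STAGES = ["dev", "staging", "prod"]

def can_promote(current: str, target: str, checks: dict[str, bool]) -> bool:
    """Allow promotion only to the next stage, with all gate checks green."""
    if current not in STAGES or target not in STAGES:
        raise ValueError("unknown stage")
    one_step = STAGES.index(target) == STAGES.index(current) + 1
    return one_step and all(checks.values())

checks = {"unit_tests": True, "sast_scan": True, "image_signed": True}
print(can_promote("dev", "staging", checks))   # True
print(can_promote("dev", "prod", checks))      # False: skips staging
```

In practice the checks would be CI job results (builds, SAST scans, signatures) queried from the pipeline, and the gate would be enforced by the artifact repository or deployment tooling rather than application code.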
Basic Qualifications
Must have an active U.S. Government DoD Secret security clearance at time of application, current and within scope, with an ability to obtain and maintain Special Access Program (SAP) approval as determined by the company to meet its business need
Hold or have the ability to obtain Security + CE (or other DoD 8570/8140 certification)
Level 3 (T3): 5 years with a Bachelor of Science from an accredited university; 3 years with a Master's; 1 year with a PhD; or 4 additional years of relevant experience in lieu of a degree.
Level 4 (T4): 8 years with a Bachelor of Science from an accredited university; 6 years with a Master's; 4 years with a PhD; or 4 additional years of relevant experience in lieu of a degree.
Experience with developing CI/CD workflows and utilizing tools such as Nexus, Maven, Jira, GitLab, and Release Management
Hands-on experience with Infrastructure as Code tools (e.g., Puppet, Chef, Ansible.)
Programming and scripting experience in a UNIX environment (GoLang, C++, Perl, Python, Bash, Ruby, Shell, Scripts)
Preferred Qualifications
Experience with Kubernetes, Docker, and/or other cloud orchestration tools and technologies
Experience with Podman, Buildah, Skopeo and/or other container tools and technologies
Experience with CI/CD best practices, automated builds and tests, quality gates, software quality, and CI tools, i.e., Jenkins
Experience with configuration management tools, i.e., Git, GitHub, GitLab, Bitbucket, or others
Familiarity with branching strategies, gated commits, source-controlled management, etc.
Familiarity with the principles of DevSecOps
Familiarity with the Atlassian Tool Suite (Jira, Confluence)
Familiarity with using a Nexus Repository
Familiarity with security coding standard best practices, static and dynamic scanning tools, i.e., SonarQube, Fortify, Coverity, PCLint, Anchore, Nexus Lifecycle, etc.
Familiarity with Agile Development
Position title:
Project Scientist
Salary range:
The UC academic salary scales set the minimum pay, determined by rank and step at appointment. See the current salary scale for this position. A reasonable estimate for this position is $181,700 - $229,700.
Percent time:
100%
Anticipated start:
Winter/Spring 2026
Position duration:
Initial appointment is for one year with the possibility of renewal based on performance and funding availability.
Application Window
Open date: February 25, 2026
Next review date: Wednesday, Mar 11, 2026 at 11:59pm (Pacific Time)
Apply by this date to ensure full consideration by the committee.
Final date: Friday, Mar 27, 2026 at 11:59pm (Pacific Time)
Applications will continue to be accepted until this date, but those received after the review date will only be considered if the position has not yet been filled.
Position description
The Advanced BioImaging Center (ABC) in the Department of Molecular and Cell Biology at the University of California, Berkeley seeks applications for two Project Scientists at the Assistant, Associate, or full rank. The selected candidates will be appointed at the rank commensurate with prior experience. The position will report to Professor Gokul Upadhyayula, with Professor Eric Betzig serving as an additional academic mentor. The project scientist will make significant and creative contributions in the area of machine learning & data analytics.
The Advanced BioImaging Center (ABC) at UC Berkeley aspires to be a world-leading multidisciplinary imaging center that drives important biological discoveries through critical new advances in all aspects of imaging technology and that drives the dissemination of that technology through a multi-pronged education strategy to scientists around the world. ABC was intentionally designed to maximize scientific productivity and impact by adopting groundbreaking imaging technologies such as the next-generation adaptive optical multifunctional microscope, incorporating the high-level technical expertise of instrumentation scientists, applied mathematicians, and computational scientists, and building worldwide collaborations aimed at tackling the challenges posed by terabyte- and petabyte-scale imaging data processing, visualization, and dissemination. Members of the ABC have access to leading-edge imaging and computing hardware, as well as exposure to collaborators from a range of diverse disciplines, including the fields of Artificial Intelligence, Data Science, Mathematics, and more.
The Assistant/Associate/Full Project Scientists will be an integral part of a visionary scientific team driving cutting-edge biological discoveries through immediate applications of critical advances in imaging technologies. These positions will work with a dedicated team to develop data analytics software for terabyte- to petabyte-scale imaging projects. The incumbents will develop and refine machine learning applications, manage projects, and provide regular progress reports to PIs and collaborators.
Successful candidates will be an integral part of the expert team working together with computational scientists and biologists in experimental design to tackle complex biological questions in a quantitative manner. The work will primarily be conducted at the facility in Barker Hall. Occasional travel may be required.
Key Responsibilities
*Make significant and creative contributions to the development of new imaging and data processing tools for datasets generated on multicellular tissues, organoids, and transparent embryos.
*Design, build, and maintain new software packages for efficient data processing.
*Advise on applications of these tools for biological imaging; collaborate with postdocs and graduate students on specific projects to test, learn, and implement for general and specific use cases.
*General organization and management of software documentation.
*Bring cross-disciplinary expertise to solve problems at the intersection of life science, computer vision, and state-of-the-art AI methods.
*Work with petabyte-scale light sheet datasets that are typically 4D or 5D (x, y, z, t, chemistry). Identify and implement scalable solutions to scientific questions on large-scale data sets, especially using performant algorithms.
*Develop machine learning approaches and computer vision tools to help pre-process datasets and annotations to generate ground-truth benchmarks.
*Contribute to dissemination via open-source code repositories, demonstrations, publications, and presentations.
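Datasets at the terabyte-to-petabyte scale described in these responsibilities cannot be loaded whole; processing is organized block-by-block over chunks of the N-D volume. The sketch below shows only the chunk-enumeration arithmetic; the `chunk_slices` helper and the toy shape are assumptions for illustration, far smaller than real light sheet data.

```python
# Sketch: enumerate chunk slices for a large N-D volume so it can be
# processed block-by-block rather than loaded whole. The shape and chunk
# sizes below are illustrative toys, not real dataset dimensions.
from itertools import product

def chunk_slices(shape, chunks):
    """Yield tuples of slice objects tiling `shape` with `chunks`-sized blocks."""
    ranges = [range(0, s, c) for s, c in zip(shape, chunks)]
    for starts in product(*ranges):
        yield tuple(slice(st, min(st + c, s))
                    for st, c, s in zip(starts, chunks, shape))

# A toy 4D (x, y, z, t) volume tiled into 2x2x2x2 blocks; edge blocks shrink.
tiles = list(chunk_slices((4, 4, 3, 2), (2, 2, 2, 2)))
print(len(tiles))  # 8 blocks: 2 * 2 * 2 * 1 along the four axes
```

Chunked array formats (Zarr, HDF5, and similar) build on this same indexing so that each block can be read, processed, and written independently, which is what makes cluster-scale parallelism over such datasets possible.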
These positions will be eligible for full benefits.
Lab:
Contract: ar-contract-2022/
Qualifications
Basic qualifications (required at time of application)
*PhD (or equivalent international degree)
Additional qualifications (required at time of start)
*Minimum of four years of postdoctoral research experience
*For consideration for the Associate Project Scientist rank: a minimum of 8 years of post-PhD research experience
*For consideration for the full Project Scientist rank: a minimum of 14 years of post-PhD research experience
Preferred qualifications
*PhD or equivalent international degree in Computational/Data Science, Computer Science, Bioinformatics, or a related field
*Demonstrated record of productivity and publications and/or scholarly contributions
*Strong biological background and understanding of molecular biology
*Demonstrate understanding of optical microscopy, including light sheet microscopy, adaptive optics, and modern scientific cameras
*Demonstrated ability to work in a research team, manage active collaborations with other academic groups
*Demonstrated experience handling and processing large scale imaging datasets (>100TB to petabyte scale and beyond)
*Expertise in programming in C++, LabVIEW, MATLAB, Python
*Expertise in databases, data infrastructure, data governance
*Expertise in high performance computing using SLURM or LSF
*Experience with PyTorch, JAX, or Tensorflow
*Experience with NVIDIA CUDA and related OpenMP programming
*Experience with cloud services (AWS, GCP, Azure, etc)
*Experience with state-of-the-art AI/ML architectures (vision transformers, diffusion models, etc.)
*Experience mentoring undergraduate/graduate students, and/or technicians.
*Experience with professional speaking engagements
*Ability to effectively communicate, participate in efficient and open collaboration, and engage with a diverse group of researchers
*The ideal candidate will be innovative and able to synergize various ideas and approaches, while exercising sound judgment to evaluate and take acceptable risks
Application Requirements
Document requirements
Curriculum Vitae - Your most recently updated C.V.
Cover Letter
Statement of Research - Provide a summary of your major research accomplishments in approximately 250 words. Additionally, please include a brief statement highlighting your experience that is directly relevant to the key responsibilities of this position
Project Portfolio - Summary portfolio of data and/or AI projects executed, as demonstrated by publications or github contributions
Reference requirements
- 3 required (contact information only)
Apply link: JPF05256
Help contact:
About UC Berkeley
UC Berkeley is committed to diversity, equity, inclusion, and belonging in our public mission of research, teaching, and service, consistent with UC Regents Policy 4400 and University of California Academic Personnel policy (APM 210 1-d). These values are embedded in our Principles of Community, which reflect our passion for critical inquiry, debate, discovery and innovation, and our deep commitment to contributing to a better world. Every member of the UC Berkeley community has a role in sustaining a safe, caring and humane environment in which these values can thrive.
The University of California, Berkeley is an Equal Opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, age, or protected veteran status.
For more information, please refer to the University of California's Affirmative Action and Nondiscrimination in Employment Policy and the University of California's Anti-Discrimination Policy.
In searches when letters of reference are required all letters will be treated as confidential per University of California policy and California state law. Please refer potential referees, including when letters are provided via a third party (i.e., dossier service or career center), to the UC Berkeley statement of confidentiality prior to submitting their letter.
As a University employee, you will be required to comply with all applicable University policies and/or collective bargaining agreements, as may be amended from time to time. Federal, state, or local government directives may impose additional requirements.
Unless stated otherwise, unambiguously, in the position description, this position does not include sponsorship of a new consular H-1B visa petition that would require payment of the $100,000 supplemental fee.
As a condition of employment, the finalist will be required to disclose if they are subject to any final administrative or judicial decisions within the last seven years determining that they committed any misconduct.
- "Misconduct" means any violation of the policies or laws governing conduct at the applicant's previous place of employment, including, but not limited to, violations of policies or laws prohibiting sexual harassment, sexual assault, or other forms of harassment or discrimination, as defined by the employer.
- UC Sexual Violence and Sexual Harassment Policy
- UC Anti-Discrimination Policy
- APM - 035: Affirmative Action and Nondiscrimination in Employment
Job location
Berkeley, CA
As a Data Science/Data Engineer Intern, you will work on cutting-edge analytical and data engineering projects that drive measurable business impact across pricing, underwriting, marketing, and claims.
This internship is ideal for a technically curious, motivated problem-solver who wants hands-on data science experience.
RESPONSIBILITIES
- Support the design, construction, and optimization of robust data pipelines to enable machine learning and analytical modeling.
- Contribute to the design and implementation of data and ML workflows using orchestration tools such as Dagster, Airflow, or similar frameworks.
- Help implement data quality checks, validation routines, and monitoring for automated data workflows.
- Assist in organizing and managing internal GitHub repositories to standardize ML project structures and best practices.
- Collaborate with data scientists and engineers to automate the ingestion, transformation, and delivery of data for model development.
- Contribute to initiatives migrating analytical processes into cloud-based data lake architectures and modern platforms such as AWS or Snowflake.
- Develop reusable and well-tested code to support analytical pipelines and internal tools using Python and SQL.
- Conduct data mining, cleansing, and preparation tasks to build high-quality analytical datasets.
- Participate in model development, including data profiling, model training, validation, and interpretation.
- Build and evaluate predictive models that enhance profitability through improved segmentation and estimation of insurance risk.
- Assist in studies evaluating new business models for customer segmentation, retention, and lifetime value.
- Collaborate with business leaders to translate insights into operational improvements and cost efficiencies.
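The data quality checks and validation routines mentioned in these responsibilities often start as simple row-level rules applied before data reaches model training. A minimal sketch; the field names (`policy_id`, `driver_age`, etc.) and the `validate_rows` helper are hypothetical, chosen only to evoke an insurance dataset.

```python
# Minimal data-quality sketch: row-level validation rules of the kind a
# pipeline might run before model training. Field names are hypothetical.

def validate_rows(rows, required, ranges):
    """Return (clean_rows, errors); ranges maps field -> (lo, hi) inclusive."""
    clean, errors = [], []
    for i, row in enumerate(rows):
        problems = [f"missing {f}" for f in required if row.get(f) in (None, "")]
        for field, (lo, hi) in ranges.items():
            v = row.get(field)
            if v is not None and not (lo <= v <= hi):
                problems.append(f"{field}={v} outside [{lo}, {hi}]")
        if problems:
            errors.append((i, problems))
        else:
            clean.append(row)
    return clean, errors

rows = [
    {"policy_id": "A1", "annual_premium": 1200.0, "driver_age": 34},
    {"policy_id": "",   "annual_premium": 900.0,  "driver_age": 41},
    {"policy_id": "A3", "annual_premium": 800.0,  "driver_age": 15},
]
clean, errors = validate_rows(
    rows,
    required=["policy_id"],
    ranges={"driver_age": (16, 99)},
)
print(len(clean), [i for i, _ in errors])
```

In an orchestrated pipeline (Dagster, Airflow, or similar), checks like these would run as a dedicated step whose failures are surfaced for monitoring rather than silently dropped.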
QUALIFICATIONS
- Currently pursuing or recently completed a Master's in Data Science, Computer Science, Statistics, Economics, or related field.
- Proficiency in Python (Pandas, NumPy, Scikit-learn, XGBoost, or PyTorch) and SQL.
- Understanding of data engineering concepts, ETL/ELT workflows, and machine learning deployment.
- Exposure to workflow orchestration tools (e.g., Airflow, Dagster, Prefect) and Git/GitHub for collaborative development.
- Familiarity with Docker, CI/CD pipelines, and infrastructure-as-code tools such as Terraform preferred.
- Knowledge of AWS cloud services such as S3, Lambda, EC2, or SageMaker a plus.
- Experience with common modeling techniques (e.g., GLM, tree-based models, Bayesian statistics, NLP, deep learning) through coursework or projects.
- Strong analytical, communication, and problem-solving skills.
- A self-starter mindset, with attention to detail and enthusiasm for learning new technologies.
SALARY RANGE
The pay for this position is $35 per hour.
ABOUT THE COMPANY
The Plymouth Rock Company and its affiliated group of companies write and manage over $2 billion in personal and commercial auto and homeowner's insurance throughout the Northeast and mid-Atlantic, where we have built an unparalleled reputation for service. We continuously invest in technology, our employees thrive in our empowering environment, and our customers are among the most loyal in the industry. The Plymouth Rock group of companies employs more than 1,900 people and is headquartered in Boston, Massachusetts. Plymouth Rock Assurance Corporation holds an A.M. Best rating of "A-/Excellent."
Job Description
Develop test cases and test scenarios in Zephyr and automated testing tools that provide thorough coverage of implemented features and meet acceptance criteria from business stakeholders.
Enhance/execute existing automation suite using Selenium-Java, OpenText UFT and Service Automation.
Closely work with the team on implementing service-based automation for REST and SOAP calls.
Execute test automation suites in CI/CD pipelines.
Integrate Service Automation, Wire Mock, and UI Automation with CI/CD pipelines on Azure platforms.
Develop performance testing scripts in WebLOAD/JMeter, perform performance testing, and analyze the reports.
Execute smoke tests and regression tests in QA, UAT and production environments to verify code deployments and other system changes.
Identify, replicate, report, and track issues to closure in Jira.
Experience with building Test Strategies and Test Plans.
Position reports to Medline headquarters at Three Lakes Drive, Northfield, IL 60093.
Telecommuting is permitted 100% of the time.
No additional national or international travel is anticipated.
Job Requirements
PRIMARY REQUIREMENTS: Bachelor's degree in Computer Science or a related field, or its foreign equivalent, and 5 years of experience as a Consultant or in a related role.
In addition, experience with the following skills is required: (1) Developing, designing, and maintaining automated test scripts using open-source frameworks including Selenium for functional, regression, and integration testing of web applications; and experience applying knowledge of scripting, including VBScript, for developing test scripts.
(2) Utilizing at least one of the following programming languages: Java, Python, VBScript, or C++.
(3) Working across the full spectrum of testing, including integration testing, performance testing, and regression testing, to validate the data required by multiple lines of business.
(4) Designing, scripting, and executing performance tests using tools including JMeter to identify bottlenecks, measure scalability, and ensure application responsiveness under load.
(5) Leveraging knowledge in branching strategies to effectively manage test code repositories using Bitbucket and GitHub.
(6) Integrating API test scripts into Continuous Integration/Continuous Deployment (CI/CD) pipelines.
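The performance testing called for above (JMeter, WebLOAD) reports latency percentiles such as p95; the underlying math is small enough to sketch. The `percentile` helper below uses the nearest-rank method on hypothetical sample latencies; real tools apply the same idea to millions of samples and several percentile definitions.

```python
# Nearest-rank percentile over response-time samples, the statistic that
# load-testing reports (p50/p95/p99) are built on. Samples are invented.
import math

def percentile(latencies_ms, p):
    """Nearest-rank percentile: smallest value with at least p% of samples <= it."""
    if not latencies_ms or not 0 < p <= 100:
        raise ValueError("need samples and 0 < p <= 100")
    ordered = sorted(latencies_ms)
    rank = math.ceil(p / 100 * len(ordered))   # 1-based rank into sorted data
    return ordered[rank - 1]

samples = [120, 85, 95, 430, 101, 97, 88, 260, 90, 110]
print(percentile(samples, 50), percentile(samples, 95))
```

Comparing p50 against p95 or p99 is how a tester spots the long-tail bottlenecks this posting asks candidates to identify: a healthy median with a bad tail points at contention or GC pauses rather than uniform slowness.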
JOB SITE: Three Lakes Drive, Northfield, IL 60093
WORK HOURS: Full Time (8am to 5pm, Monday to Friday)
PAY RANGE: $121,389.00 to $138,000.00 per year
Our benefit package includes health insurance, life and disability, 401(k) contributions, paid time off, etc., for employees working 30 or more hours per week on average.
For a more comprehensive list of our benefits please click here.
For roles where employees work less than 30 hours per week, benefits include 401(k) contributions as well as access to the Employee Assistance Program, Employee Resource Groups and the Employee Service Corp.
Weβre dedicated to creating a Medline where everyone feels they belong and can grow their career.
We strive to do this by seeking diversity in all forms, acting inclusively, and ensuring that people have tools and resources to perform at their best.
Explore our Belonging page here.
Medline Industries, LP is an equal opportunity employer.
Medline evaluates qualified individuals without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, age, disability, neurodivergence, protected veteran status, marital or family status, caregiver responsibilities, genetic information, or any other characteristic protected by applicable federal, state, or local laws.
Cloud Engineer/Architect
TS/SCI
Onsite: National Space Intelligence Center at Wright-Patterson Air Force Base. The address is 4180 Watson Way, Wright Patterson Air Force Base OH, 45433.
Contract Info: Fully Funded, 4 years left on contract
150k-170k/year
We're seeking an experienced Cloud Architect to join a dynamic team.
The primary role of the Cloud Architect is to help develop robust technical solutions and detailed execution plans that align with the center's prioritized IT and data requirements. Daily responsibilities involve carefully evaluating existing data repositories and applications across customers to determine how to consolidate them and migrate to more efficient, modern technologies. The role also requires the architect to work closely with members of the cloud team to refine data, IT, and cloud adoption strategies, ensuring that every piece of technology and every process contributes to overall mission effectiveness.
In addition, the role also encompasses cybersecurity and security control requirements to safeguard IT infrastructure, lead the deployment of cloud architectures and applications, and continuously assess new technologies that could meet mission objectives. Moreover, the role entails creating comprehensive process documentation, bullet papers, slide presentations, and other relevant materials to support initiatives and maintain uninterrupted mission continuity.
Required
- 7 years of Cloud Engineering/Architect experience
- Bachelor's degree (IT-related)
- Technical certification (One or more of CASP/SecurityX, Sec+, CISSP)
- Strong AWS Cloud skills (VPC, IAM, EC2, S3, ECR)
- Containerization/Microservices
- Kubernetes deployments/tools (Pods, Kubectl, Kustomize, Helm)
- Security hardening (Sonarqube, Client Fortify, STIG)
- Strong understanding of DoD environments, processes and common technical infrastructure
- Strong customer communication skills
- Strong understanding of Agile Scrum/Kanban
Preferred
- CI/CD Pipelines (e.g., GitLab, GitHub, Bitbucket)
- Visualization dashboards (Prometheus, Kibana, Kuma)
- Microsoft Azure Cloud
- Atlassian Suite (Confluence, Jira)
- Bash, Shell scripting
- Remote Connections
- Self-motivated & fast learner
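The Kubernetes tooling listed in the requirements (Kustomize, Helm) customizes deployments by layering environment-specific overrides onto a base manifest. The toy below sketches that layering as a recursive dict merge; the `apply_overlay` helper and the manifest fields are simplifications for illustration, not Kustomize's actual patch semantics.

```python
# Toy "overlay" merge in the spirit of Kustomize patches: an environment
# overlay recursively overrides fields of a base manifest dict. Real
# Kustomize/Helm have far richer patch and templating semantics.

def apply_overlay(base: dict, overlay: dict) -> dict:
    """Return base with overlay's fields recursively merged on top."""
    merged = dict(base)
    for key, value in overlay.items():
        if isinstance(value, dict) and isinstance(base.get(key), dict):
            merged[key] = apply_overlay(base[key], value)
        else:
            merged[key] = value
    return merged

base = {"spec": {"replicas": 1, "template": {"spec": {"containers": [
    {"name": "app", "image": "registry.local/app:dev"}]}}}}
overlay = {"spec": {"replicas": 3}}          # prod scales out, rest unchanged
print(apply_overlay(base, overlay)["spec"]["replicas"])
```

The value of this pattern in a DoD-style environment is auditability: the base manifest is reviewed once, and each environment's diff against it stays small and inspectable.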
Interview Process: 1 - 2 step interview.
Onboarding timeline: Start date is 2 weeks after accepted offer
Marathon TS is committed to the development of a creative, diverse and inclusive work environment. In order to provide equal employment and advancement opportunities to all individuals, employment decisions at Marathon TS will be based on merit, qualifications, and abilities. Marathon TS does not discriminate against any person because of race, color, creed, religion, sex, national origin, disability, age or any other characteristic protected by law (referred to as "protected status").
Remarks:
Clearance required: TS/SCI
The customer strongly prefers a STEM-related degree. The individual will work on-site at the National Space Intelligence Center at Wright-Patterson Air Force Base (address above).
The process will be 1-2 interviews, with a start date no later than 2 weeks after an accepted offer. The contract is fully funded, with at least 4 more years remaining plus a possible extension.
Awarded by the Air Force Research Laboratory's (AFRL's) Information Directorate (RI), the new award has an estimated value of $406m.
As part of the InSITE contract, the company will be responsible for modernizing and advancing the service's capabilities to gather, share, and analyze intelligence information by leveraging a wide range of artificial intelligence (AI)-based solutions.
Kelly Government Solutions has an opening for a Biomedical AI Imaging Scientist to support the Integrated Research Facility at the National Institute of Allergy and Infectious Diseases (NIAID), National Institutes of Health (NIH) in Frederick, MD. The role is expected to be primarily on-site, with flexibility for remote work if/when authorized.
This is a long-term contract position which offers:
- Competitive compensation and comprehensive benefit package
- Optional health, vision, and dental plans
- Paid leave, paid federal holidays, and 401K plan.
- Access to NIH's unparalleled resources and niche scientific initiatives
KEY TASKS
(1) Support imaging scientists with the acquisition of imaging data and conduct research, as directed by NIAID, involving imaging and artificial intelligence in support of the Integrated Research Facility (IRF) in Frederick, MD.
(2) Consult with scientific staff and NIAID leadership to ensure data meets scientific objectives; coordinate overall study logistics with other core laboratory services.
(3) Analyze and interpret imaging data from various modalities to support research studies on select agent viruses.
(4) Develop, train, and validate machine learning and deep learning models for image segmentation, feature extraction, and quantification tasks.
(5) Collaborate with the imaging team to design and implement new algorithms or modify existing ones to improve image analysis accuracy and efficiency.
(6) Generate reports and visualizations to communicate findings and trends in imaging data to researchers and stakeholders.
(7) Stay updated with advancements in AI and image analysis techniques relevant to biomedical research and apply this knowledge to enhance IRF capabilities.
(8) Work closely with the other functional area leads at the IRF to integrate imaging analysis results with other data for a comprehensive understanding of viral pathogenesis.
(9) Ensure compliance with all safety protocols and procedures while working in a BSL-4 environment.
(10) Provide guidance on experimental design and implementation, strive to quickly resolve problems, and publish results in peer-reviewed journals.
(11) Communicate progress or problems with approved programs and projects to leadership.
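Segmentation and quantification tasks like those in (4) are commonly scored with overlap metrics between a predicted mask and ground truth. A minimal sketch of the Dice coefficient in NumPy (an illustrative choice of metric, not one the listing specifies):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Overlap between a predicted binary mask and a ground-truth mask.

    Dice = 2|A ∩ B| / (|A| + |B|), from 0 (no overlap) to 1 (perfect match).
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

# Toy 4x4 masks: the prediction covers 2 of the 3 true foreground pixels
pred = np.zeros((4, 4)); pred[1, 1] = pred[1, 2] = 1
truth = np.zeros((4, 4)); truth[1, 1] = truth[1, 2] = truth[2, 2] = 1
score = dice_coefficient(pred, truth)  # 2*2 / (2+3) = 0.8
```

In practice the same formula is applied per organ or lesion class and averaged, and a soft (differentiable) variant is often used directly as a training loss.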
KEY REQUIREMENTS
(1) Ph.D. in Computer Science, Biomedical Engineering, Bioinformatics, or a related field with a focus on artificial intelligence, machine learning, or image processing. Candidates possessing a master's degree and relevant experience may also be considered.
(2) Experience analyzing multimodality imaging scans, such as parametric image analysis of MRI data.
(3) Strong theoretical foundations in machine learning, deep learning, and image analysis techniques with experience working with biomedical imaging data.
(4) Proficiency in programming languages such as Python, R, or MATLAB along with experience using popular deep learning frameworks like TensorFlow, PyTorch, or Keras.
(5) Familiarity with biomedical imaging data and experience working with image processing libraries such as OpenCV or ITK-SNAP.
(6) Experience with radiomic feature extraction and its application to machine learning, feature selection methods such as mRMR, and working within a high-performance computing environment and with GitHub repositories.
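The mRMR-style feature selection named in (6) is greedy: pick the feature most relevant to the label, then repeatedly add the feature that maximizes relevance minus its average redundancy with the features already chosen. A simplified sketch below uses absolute Pearson correlation as a stand-in for the mutual information that real mRMR uses:

```python
import numpy as np

def mrmr_select(X: np.ndarray, y: np.ndarray, k: int) -> list[int]:
    """Greedy minimum-redundancy maximum-relevance feature selection.

    Uses |Pearson correlation| as a simple proxy for mutual information:
      relevance  = |corr(feature, label)|
      redundancy = mean |corr(feature, already-selected features)|
    """
    n_features = X.shape[1]
    relevance = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_features)])
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best_j, best_score = None, -np.inf
        for j in range(n_features):
            if j in selected:
                continue
            redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
    return selected

# Toy data: feature 0 drives the label, feature 1 duplicates it, feature 2 adds signal
x0 = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
x2 = np.array([1, -1, -1, 1, 1, -1, -1, 1], dtype=float)
y = x0 + 2 * x2
X = np.column_stack([x0, x0, x2])
picked = mrmr_select(X, y, 2)  # selects 0, then skips the redundant duplicate (1)
```

This is a sketch of the selection logic only; a radiomics pipeline would first compute the candidate features (shape, intensity, texture) from segmented regions before ranking them.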
Resources filling this position must have at least 5 recent years' experience working with Angular, C# .NET, JavaScript, SSRS, and SQL Server, and working in an environment utilizing hybrid agile/waterfall project management methodologies.
Position Duties:
- Design, develop, and maintain applications using C#.NET and Angular
- Write user acceptance test plans, creating required test data and assisting users with running tests
- Participate in requirements gathering sessions to document scoping, definition, analysis, business design, and technical design phases
- Coordinate application development and scheduling interfaces with cross-functional teams
- Assist with debugging complex coding issues
- Author technical standards, choose technology, and create technical solutions
- Develop and maintain SSRS reports
- Participate in artifact reviews with peers, system specialists, Enterprise Security, and other entities to ensure IT solutions and applications adhere to agency policies, standards, and guidelines
- Coordinate with security resources to ensure systems are properly designed according to agency security requirements and standards
- Participate in Solutions Design Team (SDT) meetings and assist in the creation of Enterprise Architecture Solution Assessments (EASA), Infrastructure Service Requests (ISR), hosting documents, and firewall rules, as needed
- Develop database objects, including stored procedures, functions, triggers, and packages using SQL and PL/SQL
- Troubleshoot issues using SQL and PL/SQL scripts
- Ensure proper change management is followed and documented for all changes to system designs and production changes
- Develop training content and facilitate training
- Actively participate in the development and implementation of the assigned client agency's strategic direction/plan
- Serve as a technical resource to the Project Manager and liaison to the PMO to assist with resolving project issues
Position Qualifications:
- 10+ years of experience developing complex systems using C#/.NET and Java (Eclipse IDE)
- 10+ years of advanced experience in SQL and PL/SQL development
- 8+ years of programming experience using JavaScript, SSRS, and Microsoft SQL Server
- 7+ years of experience working with Git code repository software and 5+ years of experience working with Git for version control and source code management
- 5+ years of hands-on experience developing web applications using Angular and modern JavaScript frameworks
- 5+ years of recent experience writing, compiling, modifying, and debugging complex SQL Server database configuration items, including stored procedures, functions, triggers, views, tables, and linked servers
- 5+ years of experience using Azure DevOps (ADO) for backlog management, sprint planning, task tracking, and Agile progress reporting
- 5+ years of experience developing and executing unit and regression tests to ensure application reliability and stability
- 2+ years of experience with React.js and modern JavaScript (ES6+)
- Strong experience developing secure web applications, implementing industry best practices to prevent vulnerabilities such as cross-site scripting (XSS) and SQL injection, including secure logging practices
- Exposure to DevOps practices and cloud platforms, including AWS and Microsoft Azure
- Hands-on experience integrating software components into a fully functional software system
- Hands-on experience using GitHub Copilot to accelerate daily coding tasks, including code generation, refactoring, and documentation; proven ability to integrate GitHub Copilot into development workflows to enhance productivity, code quality, and team collaboration
- A minimum of a Bachelor's Degree in Information Technology or other relevant field
Note: This is a W2 contract role; C2C, 1099, and 3rd-party candidates WILL NOT be considered.
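The SQL-injection prevention called out in the qualifications above comes down to one habit: never interpolate user input into a SQL string; always bind it as a parameter. A minimal illustration with Python's stdlib sqlite3 (the listing's stack is SQL Server/.NET, where the same idea applies via parameterized ADO.NET commands):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice'), ('bob')")

user_input = "alice' OR '1'='1"  # hostile input attempting injection

# UNSAFE: string interpolation would let the input rewrite the query:
#   conn.execute(f"SELECT * FROM users WHERE name = '{user_input}'")

# SAFE: a bound parameter is treated as data, never as SQL
rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(len(rows))  # 0: the hostile string matches no user
```

The same discipline extends to the secure-logging requirement: log the parameter values separately from the query text rather than the interpolated string.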
Remote working/work at home options are available for this role.
This individual shall apply new and emerging technologies to the software development process.
Required Skills & Qualifications: B.S. in Computer Science, Engineering, Mathematics, or equivalent experience.
Advanced full-stack software development experience, building enterprise web and middle-tier applications using Angular and core Java with Spring/Spring Boot.
Angular 16+.
Leadership to guide, encourage, and motivate your fellow engineers.
Experience working in an Agile Scrum development environment.
Experience in REST API development via a gateway.
Experience with Docker, Kubernetes, Terraform, and AWS cloud deployment/application management.
Experience with unit testing and test automation libraries/strategies (Cypress/Karate/Cucumber).
Experience building and deploying applications using continuous integration pipelines and automated deployment tools such as Jenkins.
Experience using source control and pull requests for collaborative development in code repository tools such as GitHub.
Strong communications and problem-solving skills.
Preferred Skills: Experience with PDF generation and an understanding of PDF reporting; Docker; Kubernetes; Terraform; Jenkins.
Duties and Responsibilities: Developing and deploying software in a fast-paced environment.
Collaborating with colleagues on technical implementation and process improvement.
Working closely with technology and business partners to design new features.
Passion for learning the latest technologies and frameworks.
Building positive relationships within and across teams.
Mentor and be mentored by your team members and partners.
We are seeking an experienced Technical Writer III to support complex documentation initiatives within a technology-driven engineering environment. This role will collaborate closely with engineering, product, and cross-functional teams to create and maintain high-quality technical documentation.
Key Responsibilities
- Develop, edit, and maintain technical documentation including:
- User manuals
- Installation guides
- Release notes
- Online help content
- FAQs and process documentation
- Collaborate with engineers, SMEs, and product teams to gather and translate technical information into clear, user-friendly content
- Manage documentation repositories and version control processes
- Ensure documentation accuracy, clarity, and compliance with company standards
- Maintain templates and ensure consistency across documentation
- Support documentation updates aligned with product changes and engineering revisions
- Manage multiple documentation projects and meet deadlines
Required Qualifications
- Bachelor's degree in Technical Writing, English, Communications, Computer Science, or related field
- 3-5+ years of technical writing experience
- Experience developing complex technical documentation
- Strong written and verbal communication skills
- Ability to explain technical concepts to non-technical audiences
- Strong organizational and time-management skills
Preferred Skills
- Experience with documentation tools such as:
- Windchill
- Adobe FrameMaker
- MadCap Flare
- RoboHelp
- Markdown editors
- Familiarity with CMS platforms and version control tools (Git/GitHub)
- Experience working in Agile or software development environments
- Basic knowledge of APIs or programming concepts
- Experience with AI-based documentation tools (a plus)