Senior Interactive Experience Developer / Creative Coder
Location: Portland, Oregon | Hybrid (3 days in office)
Tandem Talent is partnering with an innovative, globally active creative technology company to recruit a Senior Interactive Experience Developer / Creative Coder. This role is ideal for a developer who enjoys combining strong programming skills with creativity to build immersive digital experiences that exist beyond traditional screens.
You will collaborate with designers, UX strategists, and fellow developers to create interactive environments used in corporate spaces, museums, universities, sports venues, and cultural institutions worldwide. The work focuses on developing experiences that blend digital and physical environments through interactive displays, sensing technologies, and responsive systems.
This is an opportunity to work with advanced technology while helping bring ambitious creative concepts to life in real-world environments.
The Role
As a Senior Interactive Experience Developer, you will play a key role in designing and building innovative front-end and interactive systems. You will work closely with multidisciplinary teams to develop engaging experiences, prototype new ideas, and help shape technical best practices.
Key responsibilities include:
- Leading front-end development for client projects and internal innovation initiatives
- Experimenting with emerging technologies and frameworks to create new digital experiences
- Defining and maintaining coding standards and development best practices
- Mentoring junior developers and supporting collaborative problem-solving
- Conducting project reviews to ensure technical performance and creative quality
- Producing documentation that supports both technical and non-technical stakeholders
- Working within development tools including Atlassian, GitHub, MS Teams, Visual Studio, and Figma
- Supporting installations and client projects, including occasional travel for site visits (approximately 2–3 per year)
What We’re Looking For
The ideal candidate combines strong programming capability with an interest in creative technology and immersive environments.
Required experience:
- Strong programming foundation with experience in creative coding and visual development
- Experience with digital creation platforms such as TouchDesigner, Notch, Pixera, Unreal, or Unity
- Programming experience with languages and APIs including Qt/QML, JavaScript (Three.js, WebGL, Canvas), Python, or Unreal Blueprint/C++
- Strong browser-based development experience, particularly building creative in-browser experiences
- Portfolio demonstrating engaging digital work beyond standard web applications
- Graphics programming experience using OpenGL/GLSL, Vulkan, or DirectX and understanding of the graphics pipeline
- Experience using Git/GitHub for collaborative development
- Experience designing touch interfaces or other natural user interaction systems
- Ability to rapidly prototype concepts and develop them into production-ready code
- Understanding of UX principles and how technical decisions influence user experience
- Strong communication and collaboration skills across technical and non-technical teams
- Curiosity, creativity, and enthusiasm for exploring new technologies
Desirable experience:
- Experience working with interactive hardware, sensors, or immersive technologies.
The Opportunity
This position offers the chance to work on highly creative and technically challenging projects that reach audiences globally. Developers in this team build experiences that appear on interactive display walls, projection-mapped environments, and sensor-driven installations that respond to people and environments in real time.
You will be working within a collaborative, multidisciplinary team where ideas are encouraged and technical experimentation is part of the culture.
Location
Hybrid role based in Portland, Oregon, with three days per week in the office.
Please note that visa sponsorship is not available for this position.
If you are interested in combining technical expertise with creative problem solving to build immersive digital experiences, Tandem Talent would be pleased to hear from you.
Job Title: SDET / QA Automation Engineer
Location: Mount Laurel, NJ
Duration: Long Term
Job Description:
Job Summary:
We are seeking a highly skilled and experienced SDET / QA Automation Engineer with 8 to 10 years of expertise in Python, JavaScript, and modern automation frameworks. This position involves developing automation solutions, microservices, and test scripts while validating end‑to‑end network components and their behavior. The candidate should have strong domain knowledge in networking and cable technologies, with the ability to collaborate effectively with clients and cross‑functional teams.
Key Responsibilities:
- Develop microservices using Python, NodeJS, and Golang as part of automation and service validations.
- Develop standalone Python/NodeJS scripts to simulate network traffic and validate performance across different endpoints.
- Create Proof of Concepts (POCs) based on client needs and actively participate in client demos and technical discussions.
- Lead the creation of test strategies and manage test environments with both physical and virtual device setups.
- Create comprehensive test scenarios and automated test scripts using MochaJS, ensuring robust coverage of functional, integration, and regression test cases.
- Design reusable test components, validate API and microservice behavior, and integrate MochaJS test suites into the existing automation framework to enhance reliability and execution efficiency.
- Collaborate with cross‑functional teams to refine requirements, improve test coverage, and ensure smooth integration with CI/CD pipelines.
- Gather requirements and perform detailed analysis for new automation scenarios and test case development.
- Support manual and automation testing across applications, devices, and servers as required.
- Ensure code quality using tools like SonarQube and adhere to strict QA standards.
- Provide technical guidance, troubleshooting support, and mentorship to team members on tasks and issues raised by the client.
- Maintain version control and branching strategies using GitHub, ensuring high code integrity and traceability.
- Monitor automation execution, analyze failures, and drive root‑cause investigations to improve overall product quality.
- Document technical workflows, automation processes, and test scenarios to ensure long-term maintainability and knowledge sharing.
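The standalone scripts described above (simulating traffic against endpoints and validating performance) could be sketched roughly as follows. This is a minimal illustration, not the client's framework: the sample counts, latency budget, and injectable `fetch` hook are all assumptions made for the example.

```python
import time
import statistics
from urllib.request import urlopen

def measure_endpoint(url, samples=5, timeout=5, fetch=None):
    """Collect latency samples for one endpoint.
    `fetch` is injectable so the function can be exercised without a live network."""
    fetch = fetch or (lambda u: urlopen(u, timeout=timeout).read())
    latencies = []
    for _ in range(samples):
        start = time.perf_counter()
        fetch(url)
        latencies.append(time.perf_counter() - start)
    return {
        "url": url,
        "p50_ms": statistics.median(latencies) * 1000,
        "max_ms": max(latencies) * 1000,
    }

def validate(results, p50_budget_ms):
    """Flag endpoints whose median latency exceeds the budget."""
    return [r["url"] for r in results if r["p50_ms"] > p50_budget_ms]
```

In practice a harness like this would feed its results into the MochaJS suites or CI reporting mentioned above; the budget value would come from the test strategy rather than being hard-coded.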
Required Skills & Experience:
- 8-10 years of experience in QA/SDET automation roles.
- Strong programming knowledge of Python and JavaScript.
- Good hands-on experience with Golang and NodeJS.
- Hands-on experience with MochaJS for scripting and automated testing.
- Excellent knowledge of web technologies such as REST, SOAP, XML, and JSON.
- Proficiency in API testing using Bruno/Postman.
- Familiarity with GitHub for version control and Jira for project tracking.
- Excellent domain knowledge of the networking and cable domain.
- Should be familiar with IMS architecture and SIP protocols.
- Good problem-solving and debugging skills.
- Should have good communication and client interaction skills.
Are you an experienced Full Stack Developer with a desire to excel? If so, then Talent Software Services may have the job for you! Our client is seeking an experienced Full Stack Developer to work at their company in Minneapolis, MN.
Position Summary: The team builds and maintains scalable microservices and batch-processing platforms that ingest, enrich, store, and serve user-generated content for clients' eCommerce and enterprise systems. Our culture is highly collaborative, prioritizing agility, code simplicity, operational excellence, and consistently high-quality software delivery.
Primary Responsibilities/Accountabilities:
- Delivers complex, well-tested, and reliable product features with minimal oversight.
- Excels at breaking down large problems and demonstrates depth across software development lifecycle phases, including concept, design, testing, and deployment.
- Develops solutions and optimizations that improve performance across the full application stack.
- Comfortable independently triaging complex issues across multiple environments in a fast-paced, dynamic setting.
- Actively engages in pair programming, daily standups, sprint retrospectives, backlog grooming, and user story mapping.
Qualifications:
- 5+ years of experience building highly scalable, high-performing applications using Java, Spring Boot, and Gradle with strong object-oriented design skills.
- Experience with Test Driven Development (TDD), including writing unit and integration tests using JUnit, Mockito, and/or the Spock Framework.
- Experience with streaming and messaging platforms such as Kafka, RabbitMQ, or Google Pub/Sub.
- Strong experience with CI/CD pipelines using tools such as Jenkins or GitHub Actions.
- Experience designing distributed application architectures that leverage NoSQL data stores such as Apache Cassandra for high throughput at scale.
- Experience with search and indexing systems such as Apache Solr for large-scale data access and query performance.
Preferred:
- Strong communicator and collaborator who works effectively across cross-functional teams, proactively brings ideas to the table, and takes initiative rather than waiting to be directed.
- Experience with front-end technologies, including JavaScript, ReactJS and NodeJS.
- Experience with container platforms such as Docker.
- Experience designing, testing, and deploying scalable solutions on Google Cloud Platform utilizing services such as BigQuery, Cloud Functions, Cloud Run, and Dataflow.
- Experience with off-heap caching solutions such as Memcached.
- Experience leveraging AI-assisted development tools such as GitHub Copilot to accelerate development workflows.
- Ability to triage and manage complex production issues.
Project description
This technology engineer is responsible for ensuring the reliability, supportability, and continuous improvement of key infrastructure monitoring and management platforms, with primary ownership of tools such as SolarWinds and Azure Sentinel. The role requires a developer mindset and includes hands-on operations and systems administration support for Linux and Windows systems. The engineer partners closely with internal teams across operations, monitoring, and security to strengthen platform health, improve signal quality, and enable effective incident response workflows. They will support a hybrid environment with a strong emphasis on Microsoft Azure monitoring and logging, contribute to platform lifecycle activities (patching, upgrades, onboarding, documentation), and continuously learn and apply modern capabilities, including analytics and emerging AI features, across event management, observability, and SIEM tooling to reduce operational friction and accelerate time to value.
Responsibilities
Platform Ownership

Network & Monitoring Tools (must have)
- Familiarity with tools such as SolarWinds (including NetPath); as platform owner, ensure platform stability, upgrades, patching, and day-to-day support.
- Knowledge of network-centric monitoring capabilities, including SNMP polling, traps, and device visibility; ensure new sites and devices are properly onboarded.
- Partner with platform and cloud teams to ensure migrated workloads meet monitoring standards.

Systems Administration (must have)
- Provide sysadmin support for Linux and Windows servers, including:
  - Agent deployment and upgrades (SolarWinds, Datadog, Dynatrace)
  - OS-level troubleshooting and configuration
  - Monitoring and logging enablement
- Support hybrid environments spanning on-prem and Azure infrastructure.
- A developer mindset, with experience in dev workflows, GitHub, PowerShell, etc.

Observability & Event Management Support (should have)
- Experience with tools such as Datadog and Dynatrace; collaborate with platform owners to support integrations, data quality, and alerting hygiene.
- Assist with event management workflows, ensuring alerts are actionable and routed correctly.
- Participate in efforts to reduce alert noise and repeat incidents.

SIEM & Security Visibility (nice to have)
- Develop a working understanding of SIEM concepts and platforms such as Azure Sentinel and Cribl.
- Support log ingestion, troubleshooting, and collaboration with security and incident response teams.
- Ensure infrastructure and network telemetry supports security detection requirements.

Cloud Monitoring & Azure Integration (should have)
- Experience with the Azure cloud platform; have either directly supported or be familiar with Azure-based monitoring and logging, including:
  - Azure Monitor and Log Analytics integrations
  - Observability for Azure-hosted workloads

Automation, AI & Continuous Improvement (nice to have)
- Explore and apply AI-assisted features within monitoring, event management, and SIEM tools to:
  - Improve signal quality and reduce alert fatigue
  - Support faster incident triage
- Contribute to documentation, runbooks, and operational improvements focused on small, incremental wins.

Knowledge Transfer & Operational Resilience
- Participate in knowledge transfer activities related to platform transitions and retirements; maintain documentation.
- Support on-call or escalation rotations as needed.
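The alert-noise reduction work mentioned above often starts with simple deduplication: suppress alerts that repeat the same signature within a short window. A toy sketch follows; the field names (`host`, `check`, `ts`) and the five-minute window are assumptions for illustration, not the actual alert schema of SolarWinds, Datadog, or Dynatrace.

```python
def suppress_repeats(alerts, window_s=300):
    """Drop alerts that repeat the same (host, check) signature within
    `window_s` seconds of the last emitted copy -- a basic form of
    alert-noise reduction. Each alert is a dict with 'host', 'check',
    and an epoch timestamp 'ts'."""
    last_emitted = {}
    kept = []
    for alert in sorted(alerts, key=lambda a: a["ts"]):
        key = (alert["host"], alert["check"])
        if key not in last_emitted or alert["ts"] - last_emitted[key] >= window_s:
            kept.append(alert)
            last_emitted[key] = alert["ts"]
    return kept
```

Real event-management platforms layer routing, maintenance windows, and correlation on top of this, but the windowed-suppression idea is the same.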
Skills
Must have
- Minimum 4-5 years of experience in infrastructure operations, monitoring, observability, or platform operations roles supporting enterprise environments.
- Hands-on experience with systems administration for Linux and Windows servers, including troubleshooting, configuration, and deployment of monitoring or management agents (e.g., SolarWinds, Datadog, Dynatrace).
- Foundational networking knowledge, including concepts such as SNMP, network monitoring, LAN/WAN fundamentals, firewalls, and telemetry collection, sufficient to support network-centric monitoring platforms like SolarWinds.
- Not a must, but nice to have: experience with platforms like StruxureWare.
- Experience with observability or monitoring platforms such as SolarWinds, Datadog, Dynatrace, or similar tools, with an understanding of alerting, dashboards, and signal quality.
- Exposure to cloud environments, preferably Microsoft Azure, including familiarity with monitoring and logging concepts (e.g., cloud-based telemetry, logs, metrics, and integrations).
- Basic understanding of incident and event management practices, including alert triage, escalation, and collaboration with incident response or operations teams.
- Demonstrated willingness and ability to learn new technologies quickly, with examples of picking up new platforms, tools, or domains outside of prior core expertise.
- Familiarity with Agile or SAFe ways of working, including collaboration in sprint-based delivery models and cross-functional team engagement, is a plus.
- Strong communication and collaboration skills, with the ability to work effectively with platform owners, operations teams, security teams, and external stakeholders.
- Experience working in a modern dev workflow using GitHub (branches, pull requests, code reviews, and CI/CD) to manage and deploy scripts/automation used for platform operations.
- Working proficiency in scripting languages such as PowerShell, Python, or Bash.
- Knowledge of Azure, Azure Active Directory (AD), and hybrid cloud environments is a plus.
- Exposure to SIEM concepts or platforms such as Azure Sentinel, Cribl, or similar is a plus.
- Experience with change management practices in an enterprise IT environment is beneficial.
Job Title: Jenkins Administrator Engineer
Location: Phoenix, AZ
Duration: 12 Months
Experience: 6-9 years
Job Requirement:
• Hands-on experience in software development with Java, Python, NodeJS, or Go
• Hands-on experience using and administering Jenkins and Artifactory is a must
• Work experience with multiple DevOps and collaboration tools, including Jenkins, GitHub, GitHub Actions, SonarQube, Slack, Confluence, and Jira
• Hands-on experience with Linux server administration
• Good communication skills and an aptitude for learning new technology
Nice-to-Have Skills Description:
Agile Methodologies
Location: Alpharetta, GA (3 days a week onsite)
Duration: 6 months
Job Description:
We are seeking a skilled Site Reliability Engineer to join our team and help build, maintain, and scale our cloud-native infrastructure. You will work closely with development and operations teams to ensure our systems are reliable, scalable, and efficient. The ideal candidate is passionate about automation, observability, and infrastructure-as-code, and thrives in a collaborative, fast-paced environment.
Key Responsibilities
Design, implement, and manage cloud infrastructure on Azure using Terraform and Terragrunt.
Maintain and optimize Kubernetes clusters on Azure Kubernetes Service (AKS).
Build and manage CI/CD pipelines using GitHub Actions/Workflows and ArgoCD for GitOps deployments.
Enhance system reliability by implementing monitoring, alerting, and observability solutions with Grafana.
Automate operational tasks to reduce toil and improve team efficiency.
Participate in on-call rotations, incident response, and post-mortem analysis.
Collaborate with development teams to improve application performance, scalability, and resilience.
Implement and advocate for SRE best practices, including SLIs, SLOs, and error budgets.
Continuously improve system performance, cost efficiency, and security.
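The SLI/SLO/error-budget practice mentioned above reduces to simple arithmetic once an SLI is defined. A minimal sketch for an availability SLO follows; the three-nines target and request counts are made-up numbers, not this team's actual objectives.

```python
def error_budget(slo_target, total_requests, failed_requests):
    """Compute error-budget consumption for an availability SLO.
    slo_target: e.g. 0.999 for 'three nines' availability."""
    # The budget is the number of failures the SLO permits over the window.
    allowed_failures = (1 - slo_target) * total_requests
    consumed = failed_requests / allowed_failures if allowed_failures else float("inf")
    return {
        "sli": 1 - failed_requests / total_requests,   # measured availability
        "budget_consumed": consumed,                   # 1.0 means exhausted
        "budget_remaining": max(0.0, 1 - consumed),
    }
```

Numbers like `budget_consumed` are what typically drive alerting thresholds in Grafana dashboards and decisions about pausing feature releases when the budget burns too fast.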
Required Skills & Qualifications
3+ years of experience in an SRE, DevOps, or cloud infrastructure role.
Strong experience with Azure cloud services and infrastructure.
Hands-on experience with Java, Terraform, and Terragrunt for infrastructure-as-code.
Proficiency with Kubernetes (preferably AKS) and container orchestration.
Experience with CI/CD tools, especially GitHub Workflows/Actions and ArgoCD.
Solid understanding of observability tools like Grafana (Prometheus, Loki, Tempo experience is a plus).
Education Requirements: Bachelor's degree required (Master's preferred)
Job Title: Software Engineer
Duration: 12 months (Right to Hire)
Location: 100% Remote
Responsibilities:
- Design and build internal tools and automation (including API linting frameworks, OpenAPI specification validators, code-generation utilities, and workflow automation) to improve consistency, quality, and efficiency across the API lifecycle.
- Ensure developer experience is at the center of all software created, building intuitive, reliable, and friction-reducing tools that empower API producers and consumers and simplify their workflows.
- Collaborate, coordinate, and align with technical stakeholders such as architecture, platform engineering, security, and API governance teams to ensure tooling meets enterprise needs and integrates seamlessly with broader technical ecosystems.
- Apply industry best practices to deliver secure, scalable, and maintainable solutions that align with clients engineering, security, and compliance standards.
- Drive development activities from design through delivery, ensuring tools and services are released on time and effectively support both API producers and consumers.
- Champion code quality, implementing comprehensive unit testing, functional testing, and automated validation to ensure highly reliable solutions and fast feedback loops.
- Demonstrate engineering excellence, consistently applying high-quality engineering practices (including clean code principles; strong testing strategies across unit, integration, and functional levels; CI/CD pipeline integration; versioning discipline; and reliable automated deployment strategies) to ensure tooling is robust, maintainable, and production-ready.
- Ensure all software created adheres to strong security principles, including secure coding practices, automated security scanning, vulnerability mitigation, and alignment with enterprise security standards, ensuring tooling is safe by design, safe by default, and safe in production.
- Support the tech lead in evaluating and shaping technical decisions, contributing insights and execution capabilities related to tooling, automation, and developer-experience improvements.
Tools & Technologies:
- Programming & Scripting: Java | Python | JavaScript | TypeScript, Bash / Shell Scripting
- API Design & Management: RESTful APIs, OpenAPI / Swagger (Specification, Validation), API Linting Frameworks, API Governance & Standards Enforcement, API Versioning Strategies
- Automation & Tooling: Code Generation Utilities, Workflow Automation Tools, Internal Developer Tooling, CLI Tools
- Testing & Quality Engineering: Unit Testing | Integration Testing | Functional Testing, Automated Validation Frameworks, Test Automation Tools, Code Quality & Static Analysis Tools
- CI/CD & DevOps: CI/CD Pipelines (GitHub Actions, GitLab CI, Jenkins), Automated Build & Deployment Pipelines, Artifact Repositories, Infrastructure Automation
- Cloud & Platforms: Cloud Platforms (AWS / Azure / GCP), Containerization (Docker), Kubernetes (optional / platform-dependent)
- Security & Compliance: Secure Coding Practices, Automated Security Scanning (SAST / DAST), Vulnerability Management Tools, Dependency Scanning, Compliance & Enterprise Security Standards
- Developer Experience (DX): Developer Tooling & Enablement Platforms, Documentation Automation, API Consumer & Producer Enablement Tools
- Collaboration & Version Control: Git | GitHub | GitLab, Agile / Scrum Methodologies, Issue & Work Tracking Tools (Jira, similar)
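An API linting framework of the kind described above is, at its core, a set of rules applied to a parsed OpenAPI document. The sketch below shows the shape of such a tool under illustrative assumptions: the two rules checked here are examples, not this enterprise's actual governance standards, and a production linter (e.g., one built on an off-the-shelf tool) would load specs from YAML/JSON and support configurable rule sets.

```python
def lint_openapi(spec):
    """Tiny API-linting sketch: apply a few example governance rules to a
    parsed OpenAPI document (a dict, e.g. loaded from JSON) and return a
    list of human-readable findings."""
    findings = []
    # Rule 1: the API must describe itself.
    if not spec.get("info", {}).get("description"):
        findings.append("info.description is missing")
    # Rule 2: every operation needs a summary and a documented success response.
    for path, ops in spec.get("paths", {}).items():
        for method, op in ops.items():
            if "summary" not in op:
                findings.append(f"{method.upper()} {path}: missing summary")
            responses = op.get("responses", {})
            if "200" not in responses and "201" not in responses:
                findings.append(f"{method.upper()} {path}: no success response documented")
    return findings
```

Wired into a CI/CD pipeline, a linter like this fails the build when findings are non-empty, which is how spec quality gets enforced across the API lifecycle.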
Duration: 6+ months (CTH)
Location: hybrid (Newark, NJ)
Summary
As a Senior Software Engineer on the Retirement Strategies Technology team, you will partner with product owners, tech leads, designers, engineers, and delivery professionals to deliver quality platforms and products with speed. You will code, test, and debug new and existing applications as you implement capabilities to solve sophisticated business problems and deploy innovative products, services, and experiences that delight our customers. In addition to advanced technical expertise and experience, you will bring excellent problem-solving, communication, and teamwork skills, along with agile ways of working, strong business insight, an inclusive leadership attitude, and a continuous learning focus to all that you do.
Here is What You Can Expect on a Typical Day
Build applications ensuring that the code follows the latest coding practices and industry standards, using modern design patterns and architectural principles; remove technical impediments
Develop high-quality, well-documented, and efficient code adhering to all applicable Prudential standards
Collaborate with product owners to understand needs and define feature stories, with tech leads to define technical design, and with other team members to understand the system end-to-end and deliver robust solutions that bring about business impact
Write unit and integration tests and functional automation, researching problems discovered by quality assurance or product support and developing solutions to address them
Bring a strong understanding of relevant and emerging technologies, provide input, coach team members, and embed learning and innovation in the day-to-day
Work on complex problems in which analysis of situations or data requires an evaluation of intangible variables
Use programming languages and frameworks including, but not limited to, Java, JavaScript, Spring Boot, and Node.js
The Skills & Expertise You Bring:
Bachelor of Computer Science or Engineering or experience in related fields
Ability to coach others with minimal guidance and effectively leverage diverse ideas, experiences, thoughts and perspectives to the benefit of the organization
Experience with agile development methodologies and Test-Driven Development (TDD)
Knowledge of business concepts tools and processes that are needed for making sound decisions in the context of the company's business
Ability to learn new skills and knowledge on an on-going basis through self-initiative and tackling challenges
Excellent problem solving, communication and collaboration skills
Advanced experience and/or expertise with several of the following:
Programming Languages: Java, JavaScript; working in distributed systems, object-oriented programming, design patterns and design methodology; Java services using Spring, microservices, multi-threading, concurrency, and parallel processing
Frameworks: Spring Boot, Node.js
Data Stores: NoSQL or relational data structures
Data Streaming: SQS, SNS
Application Programming Interfaces (API): consumption and development; implementing service-oriented architecture (SOA) patterns; web service technologies such as APIs, REST, JSON, SQL
API Management & Integration: Kong, Apigee
Unit, interface, and end-user testing concepts and tooling (functional and non-functional)
Automated testing
Accessibility awareness
Software security skills, including secure coding and web application security; solid grasp of security concepts (authentication, authorization, encryption, digital signatures, JWT), SSL, web service proxies, firewalls, SAML 2.0, OpenID Connect, and OAuth 2.0
DevOps Tools & Practices: branching techniques and usage of GitHub; DevOps
Software Development Life Cycle (SDLC): monitoring and logging techniques
AWS Core Services across compute, storage, DB, and IAM
Preferred Qualifications:
Strong experience with Domain Driven Development (DDD)
AWS cloud native solution development
Architecture Patterns
Design and critical Thinking
Financial/Insurance industry experience is a must, not a plus
People Leadership Experience is a plus.
Experience with agentic frameworks and AI-driven development tools (e.g., Claude Code, GitHub Copilot) is a major plus
Length of Contract: 6 months
Location: Remote (Eastern time zone)
What are the top 3-5 skills, experience or education required for this position:
a. Proficiency in databases (SQL) and coding in R/Python
b. Experience with API development
c. Familiarity with AI techniques and strong curiosity for new technologies
d. Experience managing and curating bioinformatics datasets (BulkRNAseq, Proteomics, scRNAseq, CRISPR)
e. Code management, documentation, and version control (e.g., GitHub)
Job Overview: As a Data Analyst, you'll drive data quality and consistency in our central hub for storing OMICS data, address impactful data loading and curation projects and help improve and automate processes using agentic AI. Working closely with researchers, you'll ensure their data needs are met and help accelerate scientific discovery.
Key Responsibilities:
- Contribute to important data loading and curation projects for the department's Omics data server.
- Address data quality and consistency issues in the CRISPR database.
- Apply agentic AI approaches for data loading and querying OMICS data
- Database Interaction: Use PostgreSQL to build, manage, and query large genomic datasets.
- API Development: Design and implement APIs for improved data accessibility and integration across platforms.
- Automation: Use Python and R to automate and optimize data workflows, prioritizing data quality and integrity.
- ETL Process Management: Develop and execute ETL processes to integrate high-value datasets in line with organizational standards.
- Collaboration: Work with cross-functional teams and research scientists to gather requirements, align to common data model standards, and facilitate effective data management.
- Documentation: Maintain comprehensive documentation and version control for reproducibility and teamwork.
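The data-quality and curation work described above usually begins with validation rules applied before loading: required fields present, controlled vocabularies respected, stray whitespace normalized. The sketch below is a toy version under assumed field names (`sample_id`, `assay`, `organism`); a real pipeline would enforce the organization's common data model and load clean rows into PostgreSQL.

```python
REQUIRED_FIELDS = {"sample_id", "assay", "organism"}
KNOWN_ASSAYS = {"BulkRNAseq", "Proteomics", "scRNAseq", "CRISPR"}

def curate_samples(records):
    """Split incoming sample metadata into clean rows and rejects-with-reasons,
    normalizing whitespace along the way -- a minimal version of the checks a
    curation pipeline might run before loading into the Omics database."""
    clean, rejects = [], []
    for rec in records:
        # Normalize stray whitespace in string values.
        rec = {k: v.strip() if isinstance(v, str) else v for k, v in rec.items()}
        missing = REQUIRED_FIELDS - {k for k, v in rec.items() if v}
        if missing:
            rejects.append((rec, f"missing fields: {sorted(missing)}"))
        elif rec["assay"] not in KNOWN_ASSAYS:
            rejects.append((rec, f"unknown assay: {rec['assay']}"))
        else:
            clean.append(rec)
    return clean, rejects
```

Keeping the reject reasons alongside the rows makes the curation step auditable, which helps when reporting data-quality issues back to the researchers who submitted the data.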
Required qualifications:
- Master's degree in computer science, bioinformatics, or a related field, with 3+ years of relevant experience.
- Proven experience working with databases (PostgreSQL proficiency).
- Advanced skills in Python and R for automation and data manipulation.
- Experience handling and curating bioinformatics datasets (BulkRNAseq, Proteomics, scRNAseq, CRISPR).
- Code management, documentation, and usage of GitHub.
- Curiosity and basic knowledge of AI techniques applicable to data loading and querying.
- Excellent communication skills and a collaborative mindset.
- Demonstrated experience with AWS resources.
- Experience in API development.