Incident IO API Jobs in USA

2,257 positions found — Page 10

Data Analyst
Salary not disclosed
Raleigh, NC 2 days ago
Job Title: Data Analyst

Length of Contract: 6 months

Location: Remote (Eastern time zone)

What are the top 3–5 skills, experience, or education requirements for this position:

a. Proficiency in databases (SQL) and coding in R/Python

b. Experience with API development

c. Familiarity with AI techniques and strong curiosity for new technologies

d. Experience managing and curating bioinformatics datasets (BulkRNAseq, Proteomics, scRNAseq, CRISPR)

e. Code management, documentation, and version control (e.g., GitHub)

Job Overview: As a Data Analyst, you'll drive data quality and consistency in our central hub for storing OMICS data, tackle impactful data loading and curation projects, and help improve and automate processes using agentic AI. Working closely with researchers, you'll ensure their data needs are met and help accelerate scientific discovery.

Key Responsibilities:

- Contribute to important data loading and curation projects for the department's Omics data server

- Address data quality and consistency issues in the CRISPR database.

- Apply agentic AI approaches for data loading and querying OMICS data

- Database Interaction: Use PostgreSQL to build, manage, and query large genomic datasets.

- API Development: Design and implement APIs for improved data accessibility and integration across platforms.

- Automation: Use Python and R to automate and optimize data workflows, prioritizing data quality and integrity.

- ETL Process Management: Develop and execute ETL processes to integrate high-value datasets in line with organizational standards.

- Collaboration: Work with cross-functional teams and research scientists to gather requirements, align to common data model standards, and facilitate effective data management.

- Documentation: Maintain comprehensive documentation and version control for reproducibility and teamwork.
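
The database-interaction and data-quality duties above largely amount to running curation checks over stored tables. A minimal sketch, assuming a hypothetical `samples` schema and using `sqlite3` in place of PostgreSQL so it stays self-contained:

```python
import sqlite3

def find_missing_metadata(conn):
    """Return sample IDs whose required curation fields are empty.

    A minimal data-quality check of the kind used when curating
    omics datasets; the schema here is purely illustrative.
    """
    rows = conn.execute(
        "SELECT sample_id FROM samples "
        "WHERE tissue IS NULL OR assay_type IS NULL"
    ).fetchall()
    return [r[0] for r in rows]

# Build an in-memory stand-in for the omics database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE samples (sample_id TEXT, tissue TEXT, assay_type TEXT)")
conn.executemany(
    "INSERT INTO samples VALUES (?, ?, ?)",
    [("S1", "liver", "BulkRNAseq"),
     ("S2", None, "Proteomics"),   # missing tissue -> flagged
     ("S3", "brain", None)],       # missing assay type -> flagged
)

print(find_missing_metadata(conn))
```

In practice the same check would run against PostgreSQL (e.g. via a driver such as psycopg) and feed an automated curation report rather than a print statement.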

Required qualifications:

- Master's degree in computer science, bioinformatics, or a related field, with 3+ years of relevant experience.

- Proven experience working with databases (PostgreSQL proficiency).

- Advanced skills in Python and R for automation and data manipulation.

- Experience handling and curating bioinformatics datasets (BulkRNAseq, Proteomics, scRNAseq, CRISPR).

- Code management, documentation, and use of GitHub.

- Curiosity and basic knowledge of AI techniques applicable to data loading and querying.

- Excellent communication skills and a collaborative mindset.

- Demonstrated experience with AWS resources.

- Experience in API development
MuleSoft Developer
🏢 ClifyX
Salary not disclosed
Hartford County, CT 2 days ago

MuleSoft Developer

Location: Remote OR Stamford CT

Hire Type: Full Time

Job Description

Must Have Technical/Functional Skills

  • Analyze and understand business and technical requirements and translate them into MuleSoft‑based integration solutions.
  • Prepare Low Level Design (LLD) documents for APIs and integrations following enterprise standards.
  • Design and develop REST and SOAP APIs using MuleSoft Anypoint Platform and Anypoint Studio.
  • Implement data transformations using DataWeave and handle error/exception scenarios effectively.
  • Perform unit testing, system integration testing, and defect fixing for developed MuleSoft components.
  • Support CI/CD pipelines and deployments across environments (Dev, QA, UAT, Production).
  • Participate in production releases, post‑deployment validation, and stabilization support.
  • Maintain technical documentation, runbooks, and API specifications with proper version control.

Roles & Responsibilities

· Strong hands‑on experience with MuleSoft Anypoint Platform (Mule 4.x).

· Experience developing REST/JSON and SOAP/XML integrations.

· Proficiency in DataWeave, API Manager, and Anypoint Exchange.

· Working knowledge of CI/CD tools (Git, GitLab, Jenkins, etc.).

· Experience in integration patterns, error handling, and security concepts (OAuth 2.0, tokens).

· Good understanding of SDLC and Agile methodologies.

Generic Managerial Skills, if any

· Creative thinking.

· Building and managing relationships.

· Emotional agility.

· Technology Business Requirements Definition, Analysis and Mapping.

· Adaptability.

· Learning Agility.

FTE - Java Backend Developer
Salary not disclosed
Roanoke, TX 1 day ago

• 6+ years of demonstrable experience in full stack programming building scalable applications

• 6+ years' experience with Java Spring-based frameworks and libraries, preferably Spring Boot

• 6+ years' hands-on experience in cloud-based technologies, Microsoft Azure (optional)

• 6+ years' hands-on experience working with messaging technologies

• 6+ years' experience building and consuming REST API-based integrations and microservices architecture

• 6+ years' experience and understanding of core Java, SOAP web services, SAML, REST APIs, Spring, Spring MVC, and Spring Boot, including Spring modules such as IoC and MVC (REST)

• 6+ years' experience and understanding of REST concepts and REST APIs using Spring Boot with Tomcat and Docker

• Good relational DB knowledge involving SQL and PL/SQL

• Understanding and experience working with CI/CD processes and tools such as Jenkins

• Experience with application testing frameworks like JUnit

• Strong analytical, technical, and problem-solving skills to understand complex customer needs and transactions


Required Skills/Knowledge

• Java 1.8/J2EE

• Azure/Cloud

• SOAP / REST Web Services (API)

• Spring Boot, Hibernate

• SQL, PL/SQL
Job Type: Permanent
Quality Engineer (17422)
Salary not disclosed
Irving, TX 1 day ago

Baer is looking for a Quality Engineer for a 6+ month project located in Irving, TX.


Title: Quality Engineer

Location: Hybrid – Irving, TX (3 days per week onsite)

Duration: 6 months

Rate: All-Inclusive

Alignment: W2 (C2C Not Permitted)


Overview


We are seeking a Quality Engineer to support a large-scale healthcare platform focused on ERP integrations and platform stability. In this role, you will validate complex data flows, APIs, and event-driven systems that support critical business processes. You will work closely with Engineering and Product teams in an Agile environment to ensure reliable, high-quality platform performance.


Description


  • Test ERP integrations, APIs, and complex data flows across systems.
  • Design and execute test strategies for platforms integrating with SAP, Workday, Oracle, Infor, or similar ERP systems.
  • Build and maintain automated tests using Playwright, Postman/Newman, REST Assured, Cypress, or similar tools.
  • Perform API, functional, regression, and performance testing.
  • Use SQL to validate data transformations and backend pipelines.
  • Test event-driven systems such as Azure Event Hub, Service Bus, or Kafka.
  • Create and maintain test plans, cases, and defect documentation.
  • Collaborate with Engineering and Product teams in Agile ceremonies and quality planning.
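
Much of the API and data-flow testing described above reduces to asserting that a response body matches an expected shape. A minimal sketch in plain Python of the kind of check Postman/Newman or REST Assured automates; the order payload and its fields are invented for illustration:

```python
import json

# Hypothetical contract for an order-service response.
REQUIRED_FIELDS = {"order_id": str, "status": str, "total": (int, float)}

def validate_order_payload(raw: str) -> list[str]:
    """Return a list of problems found in an API response body.

    Mirrors the assertions an automated API test would make against
    a service under test.
    """
    try:
        body = json.loads(raw)
    except json.JSONDecodeError:
        return ["response is not valid JSON"]
    problems = []
    for field, ftype in REQUIRED_FIELDS.items():
        if field not in body:
            problems.append(f"missing field: {field}")
        elif not isinstance(body[field], ftype):
            problems.append(f"wrong type for {field}")
    return problems

good = '{"order_id": "A-100", "status": "shipped", "total": 19.99}'
bad = '{"order_id": 100, "status": "shipped"}'
print(validate_order_payload(good))  # []
print(validate_order_payload(bad))
```

The same pattern scales up into regression suites: one schema check per endpoint, run in CI against every environment.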


Requirements


  • Experience testing ERP-integrated and data-intensive systems.
  • Hands-on experience with test automation frameworks and API testing tools.
  • Strong SQL skills for backend data validation.
  • Experience testing distributed or event-driven architectures.
  • Solid understanding of Agile/Scrum methodologies.
  • Strong analytical, troubleshooting, and communication skills.


Preferred


  • Experience in healthcare or regulated industries.
  • Experience improving test automation frameworks or strategies.
  • Familiarity with AI-assisted testing or workflow automation tools.


Company Overview:


Baer provides best-in-class engagement experiences for our consultants. Our job requirements are carefully vetted and are typically associated with pivotal programs offering tremendous opportunities to expand your skills leveraging the latest solutions.


Baer is an equal opportunity employer including disability/veteran.



Senior Full Stack AI and Data Engineer
Salary not disclosed
Minneapolis, MN 6 hours ago

**Candidate must be willing to go into office 3 days a week**


Senior Full-Stack AI & Data Engineer – Contract


RBA is an established leader and trusted partner for enterprise and mid-size organizations seeking to transform their business through technology solutions. As a Digital and Technology consultancy, we combine strategic insight with technical expertise to deliver impactful, scalable solutions that align with business goals. We take pride in working with some of the most recognized companies in our market—while fostering a culture that blends challenging career opportunities with a collaborative, fun work environment.


We are seeking a Senior Full-Stack AI & Data Engineer to join our growing Data & AI practice, supporting a high-impact client. In this role, you will lead the design and development of end-to-end AI-powered applications that drive personalization, predictive analytics, and next-generation digital experiences.


You’ll partner with business stakeholders, product teams, and engineers to build production-grade AI solutions—from data pipelines and model development to APIs and user-facing applications. The ideal candidate brings deep expertise across the full stack, modern data platforms, and generative AI technologies, with a passion for solving complex business challenges through innovative solutions.


Responsibilities

  • Design and develop end-to-end AI-powered applications, including backend APIs and user-facing interfaces, to enable scalable and intuitive AI solutions.
  • Build and maintain robust APIs using technologies such as Node.js, NestJS, or FastAPI, and develop modern web applications using React or similar frameworks.
  • Develop, fine-tune, and deploy machine learning models using frameworks such as PyTorch and Scikit-learn.
  • Implement advanced generative AI solutions, including Retrieval-Augmented Generation (RAG) pipelines and multi-modal AI applications.
  • Design and build agentic AI systems using frameworks such as LangChain, enabling multi-step reasoning, tool use, and automation.
  • Architect and optimize end-to-end data pipelines (ETL/ELT) using Python, SQL, and orchestration tools such as Airflow.
  • Manage and integrate data workflows within Snowflake, leveraging technologies such as Snowpark or Cortex.
  • Implement monitoring and observability for AI systems, including tracking model performance, drift, latency, and reliability.
  • Design and deploy cloud-native solutions using Docker, Kubernetes, and CI/CD pipelines across AWS, Azure, or GCP.
  • Collaborate with business stakeholders to translate data into actionable insights and intelligent applications.
  • Contribute to DevOps best practices, including infrastructure-as-code (Terraform) and automated testing.
  • Mentor junior engineers and promote best practices in AI ethics, data governance, and code quality.
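
The pipeline responsibilities above follow the usual extract-transform-load shape. A self-contained sketch with invented sales records and `sqlite3` as the target; in practice the orchestration (Airflow) and warehouse (Snowflake) layers named above would wrap these steps:

```python
import sqlite3

def extract():
    """Pretend source system; in production this would be an API or file drop."""
    return [("2024-01-01", "widget", "19.99"), ("2024-01-02", "gadget", "n/a")]

def transform(rows):
    """Cast and clean records, dropping ones that fail validation."""
    out = []
    for day, item, price in rows:
        try:
            out.append((day, item, float(price)))
        except ValueError:
            pass  # a real pipeline would route these to a dead-letter table
    return out

def load(conn, rows):
    conn.execute("CREATE TABLE IF NOT EXISTS sales (day TEXT, item TEXT, price REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", rows)

conn = sqlite3.connect(":memory:")
load(conn, transform(extract()))
print(conn.execute("SELECT COUNT(*) FROM sales").fetchone()[0])  # 1
```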


Requirements

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
  • 5+ years of experience across full-stack development, including backend (Node.js/Python) and frontend frameworks (React or similar).
  • Strong experience designing and building data pipelines and modern data platforms, including expertise in SQL and data modeling.
  • Proven experience deploying AI/ML solutions in production environments, including MLOps and model lifecycle management.
  • Hands-on experience with generative AI technologies, including LLMs, prompt engineering, and RAG architectures.
  • Experience with cloud platforms such as AWS, Azure, or GCP.
  • Strong understanding of DevOps practices, including CI/CD, containerization, and infrastructure-as-code (Terraform).
  • Excellent communication skills and ability to work effectively in client-facing environments.


Preferred Qualifications

  • Experience with Snowflake, including Snowpark, Cortex, or similar data platform capabilities.
  • Experience building agent-based AI systems or working with frameworks such as LangChain.
  • Familiarity with vector databases and semantic search architectures.
  • Experience developing mobile applications using React Native or Flutter.
  • Knowledge of mobile architecture, UI/UX principles, and API integration patterns.
  • Experience deploying applications to Apple App Store or Google Play Store.
  • Familiarity with security and authentication protocols, including OAuth2, biometric authentication, and secure data handling.
  • Cloud or data platform certifications (AWS, Azure, GCP, Snowflake, or similar).


Leadership & Culture

  • Demonstrate leadership through mentorship, technical guidance, and promoting engineering best practices.
  • Balance innovation with pragmatism—able to work across cutting-edge AI solutions and foundational data engineering tasks.
  • Thrive in a collaborative, fast-paced consulting environment with a strong focus on client impact and delivery excellence.
Job Type: Permanent
AI Engineer
Salary not disclosed
Greenwich, CT 6 hours ago

We are looking for a highly motivated AI Engineer to join our IT team. This role is ideal for someone passionate about building real-world AI solutions and eager to work across the full AI technology stack—from model integration and retrieval pipelines to agentic AI workflows, multi-agent orchestration, and application-level features used by business teams. You will also contribute to data engineering efforts that feed AI capabilities, working alongside a modern analytics platform built on Microsoft Fabric.


As an AI Engineer, you will help design, develop, and deploy AI capabilities. You will contribute to production-grade AI features in areas such as Open-to-Buy planning, Sales Forecasting, Intelligent Order Management Systems (OMS), Product Copy Generation, and Image Generation.

This is a unique opportunity to work on meaningful, high-impact AI initiatives while implementing modern AI infrastructure, LLMOps practices, and scalable system design.


This role will work from our Greenwich, CT office and report to the Senior Director of System Integration & Operation on our current hybrid schedule, 3 days in office and 2 days remote.


Key Responsibilities:


AI Application Development

Build and maintain AI-powered features including:

  • Open-to-Buy optimization and inventory planning models
  • Sales forecasting and demand prediction solutions
  • Intelligent OMS features for routing, allocation, and automation
  • Marketing AI tools such as product copy generation and AI-assisted image generation

Integrate custom and foundation LLMs into internal applications using API and SDK interfaces, leveraging structured outputs, function/tool calling, and prompt caching to optimize reliability and cost.


RAG, GraphRAG, + Vector DB Engineering

  • Develop retrieval pipelines using vector embeddings and similarity search (Azure AI Search, FAISS, Pinecone, or equivalent).
  • Implement chunking, embedding, indexing, query routing, and relevance-tuning strategies, including advanced reranking and hybrid search techniques.
  • Maintain a high-quality knowledge base to support AI features via Retrieval-Augmented Generation.
  • Explore and implement GraphRAG patterns to improve knowledge retrieval over structured enterprise data and entity relationships.
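
The chunking, embedding, and similarity-search steps above can be sketched with a toy bag-of-words "embedding"; a production pipeline would use a learned embedding model and a vector index (Azure AI Search, FAISS, Pinecone) instead, and the documents here are invented:

```python
import math
from collections import Counter

def chunk(text: str, size: int = 40) -> list[str]:
    """Split text into overlapping word-window chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size // 2)]

def embed(text: str) -> Counter:
    """Toy embedding: a bag-of-words vector, standing in for a real model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "Return policy: items may be returned within 30 days of purchase.",
    "Shipping: standard delivery takes five to seven business days.",
]
# Index every chunk of every document with its vector.
index = [(c, embed(c)) for d in docs for c in chunk(d)]

query = embed("how many days to return items")
best = max(index, key=lambda pair: cosine(query, pair[1]))[0]
print(best)
```

Relevance tuning, reranking, and hybrid search then refine this retrieval step before the retrieved chunks are handed to the LLM as grounding context.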


AI Agents & Orchestration

  • Design and build AI agents capable of planning, tool use, and multi-step reasoning using frameworks such as LangGraph, PydanticAI, CrewAI, or Google ADK.
  • Implement Model Context Protocol (MCP) and Agent-to-Agent (A2A) protocol integrations to connect AI agents with internal tools, APIs, data systems, and other agents in a standardized, interoperable way.
  • Build guardrails, evaluation frameworks, and human-in-the-loop checkpoints to ensure reliable and safe agent behavior in production.


AI Infrastructure & System Architecture

  • Maintain private cloud LLM instance landscape, ensuring secure and efficient usage.
  • Assist in deploying scalable inference pipelines, batching, and caching layers.
  • Collaborate with DevOps and Data Engineering on CI/CD, model deployment workflows, monitoring, and integration with the Microsoft Fabric data platform (including Fabric MCP for agent-to-data connectivity).


Data Engineering, Pipelines & Model Training

  • Clean, transform, and prepare datasets for ML/AI pipelines; contribute to data engineering workflows including ELT pipeline design, medallion architecture patterns, and data transformation within the Lakehouse layer.
  • Train, validate, and fine-tune models where appropriate (LLMs, forecasting models, classification models, etc.); familiar with parameter-efficient techniques such as LoRA and QLoRA.
  • Evaluate model performance and optimize latency, accuracy, and cost using LLM evaluation and observability frameworks (e.g., RAGAS, LangSmith, Langfuse, Helicone, or custom evals); manage prompt versioning and regression testing.
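
The evaluation and prompt-regression work above can be illustrated with a golden-dataset check: score each model run against expected answers and compare runs before promoting a new prompt version. A deliberately simple exact-match sketch with invented questions and answers; frameworks like RAGAS or LangSmith compute richer metrics (faithfulness, relevance) on the same principle:

```python
def exact_match_score(predictions: dict, golden: dict) -> float:
    """Fraction of golden-dataset items the model answered exactly."""
    hits = sum(
        1 for k, expected in golden.items()
        if predictions.get(k, "").strip().lower() == expected.lower()
    )
    return hits / len(golden)

golden = {"q1": "paris", "q2": "42", "q3": "blue"}
run_a = {"q1": "Paris", "q2": "42", "q3": "red"}   # e.g. prompt v1
run_b = {"q1": "Paris", "q2": "42", "q3": "Blue"}  # e.g. prompt v2

print(exact_match_score(run_a, golden))  # 2/3
print(exact_match_score(run_b, golden))  # 1.0 -> v2 is a safe promotion
```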


Required Qualifications:

  • Bachelor’s degree in Computer Science, Data Science, AI/ML, Engineering, or related field.
  • Strong foundations in Python, data structures, and machine learning concepts.
  • Comfortable working with LLM APIs, embeddings, vector databases, and RAG patterns; exposure to agentic patterns, tool use, and GraphRAG concepts is a strong plus.
  • Familiarity with cloud environments (Azure preferred; AWS or GCP also acceptable).
  • Understanding of systems diagrams, architecture patterns, and AI infrastructure components.
  • Exposure to SQL/NoSQL databases.
  • Exposure to data engineering concepts such as ELT/ETL pipelines, data transformation, and data modeling.
  • Awareness of responsible AI principles including bias detection, fairness, and model interpretability.
  • Awareness of AI agent frameworks and orchestration concepts (e.g., LangGraph, PydanticAI, Semantic Kernel, CrewAI, or Google ADK).
  • Familiarity with prompt engineering best practices including chain-of-thought, few-shot prompting, and structured output design.


Preferred Qualifications:

  • Familiarity with Microsoft Fabric (OneLake, Lakehouse, Spark notebooks, semantic models) and Power BI; experience with Fabric MCP integrations is a strong differentiator.
  • Experience implementing MCP (Model Context Protocol) servers or A2A (Agent-to-Agent) protocol endpoints, or integrating AI agents with external tools and APIs.
  • Exposure to multimodal AI capabilities (vision-language models) for applications such as product image analysis or document understanding.
  • Experience building small AI apps, demos, or tools—portfolio/GitHub encouraged.


What you'll Gain:

  • Hands-on impact in designing enterprise AI capabilities from the ground up.
  • Opportunities to work with cutting-edge LLM technologies in a private, secure environment, alongside a modern Microsoft Fabric data platform.
  • A chance to shape AI products used across supply chain, marketing, and e-commerce.


Company Overview:

Established in 2005, Marc Fisher Footwear is a leading full-service, product-driven fashion footwear company with knowledge and expertise in design, sales, sourcing, distribution, and marketing – all with dedicated and strategic direction for each brand within the portfolio, which includes GUESS, G by Guess, Nine West, Tommy Hilfiger, Earth, Calvin Klein, Kenneth Cole Men's, Hunter Boots, Rockport, Bandolino, indigo rd., Unisa, and Easy Spirit, along with the namesake brands – Marc Fisher and Marc Fisher LTD.


Our diverse portfolio of globally recognized brands – available domestically and internationally via wholesale and retail channels – consistently meets the widest range of consumers’ fashion footwear needs, from classic to contemporary, sport to dress, men’s to women’s. Headquartered in Greenwich, Connecticut, with showrooms in New York City, Marc Fisher Footwear is sold worldwide through department stores, specialty stores and e-commerce channels.


Marc Fisher Footwear is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, sex, sexual orientation, age, status as a protected veteran, among other things, or status as a qualified individual with a disability. EEO Employer/Vet/Disabled.

Salesforce Engineer
🏢 HCLTech
Salary not disclosed
Seattle, WA 6 hours ago

HCLTech is looking for a highly talented and self-motivated Senior Engineer — Salesforce Service Cloud (Integrations Focus) to join us in advancing the technological world through innovation and creativity.


Job Title: Senior Engineer — Salesforce Service Cloud (Integrations Focus)

Job ID: 86069

Position Type: Full-time/Contract

Location: Onsite (Seattle/Vancouver)



Role Summary

Design, build, and optimize integrations for SFSC at scale. Own API design, middleware patterns, error handling, and performance. Work closely with architects and platform engineers to deliver resilient, secure data flows.


Key Responsibilities

  • Implement integrations using REST/SOAP APIs, Platform Events, Change Data Capture, and External Services.
  • Build/consume integrations via MuleSoft (preferred) or equivalent ESB; design RAML/JSON schemas.
  • Implement named credentials, OAuth flows, security patterns, and robust error handling/retries.
  • Optimize callouts, governor-limits compliance, and bulk patterns.
  • Support data migrations (ETL, staging, delta loads), ID strategy, and data quality controls.
  • Contribute to CI/CD (Gearset/Copado), automated tests for integration flows, and observability/alerts.


Required Skills

  • 6–9+ years overall; 4+ years Salesforce with deep API/integration experience.
  • Strong Apex (callouts, queueables, batch), Platform Events, Flow, and SOQL performance patterns.
  • Middleware experience (MuleSoft preferred) or Boomi/Informatica/SnapLogic.
  • Integration security (OAuth2/JWT/mTLS), error handling, idempotency, and monitoring.


Preferred / Nice to Have

  • Event-driven design, Kafka/Kinesis; caching strategies.
  • Experience with async processing and large data volumes (LDV).
  • Certifications: Integration Architect, MuleSoft Developer/Architect, Platform Developer I.


Interview Focus

API design, error handling patterns, bulk/async, LDV, observability, solution trade-offs.


Pay and Benefits

Pay Range Minimum: $77,000 per year

Pay Range Maximum: $188,000 per year


HCLTech is an equal opportunity employer, committed to providing equal employment opportunities to all applicants and employees regardless of race, religion, sex, color, age, national origin, pregnancy, sexual orientation, physical disability or genetic information, military or veteran status, or any other protected classification, in accordance with federal, state, and/or local law. Should any applicant have concerns about discrimination in the hiring process, they should provide a detailed report of those concerns for investigation.

A candidate’s pay within the range will depend on their skills, experience, education, and other factors permitted by law. This role may also be eligible for performance-based bonuses subject to company policies. In addition, this role is eligible for the following benefits subject to company policies: medical, dental, vision, pharmacy, life, accidental death & dismemberment, and disability insurance; employee assistance program; 401(k) retirement plan; 10 days of paid time off per year (some positions are eligible for need-based leave with no designated number of leave days per year); and 10 paid holidays per year.

How You’ll Grow

At HCLTech, we offer continuous opportunities for you to find your spark and grow with us. We want you to be happy and satisfied with your role and to really learn what type of work sparks your brilliance the best. Throughout your time with us, we offer transparent communication with senior-level employees, learning and career development programs at every level, and opportunities to experiment in different roles or even pivot industries. We believe that you should be in control of your career with unlimited opportunities to find the role that fits you best.

Python AI Engineer
🏢 Yochana
Salary not disclosed
Atlanta, GA 6 hours ago

Python AI Engineer (Prompt & Agentic Systems)

Location: Hybrid – Atlanta, GA (3 days a week onsite)

Client: Retail client

About the Role

We’re looking for a hands-on engineer who can build AI-enabled applications end-to-end using Python, with strong skills in prompt engineering and agentic system design (multi-agent/orchestrated AI workflows). You’ll design, develop, and productionize intelligent features—ranging from retrieval-augmented generation (RAG) to autonomous tasking agents integrated with internal tools and APIs.

Key Responsibilities

  • Design & Build AI Services: Develop Python-based back-end services that integrate LLMs for reasoning, extraction, summarization, and decision support.
  • Prompt Engineering: Craft, version, and evaluate prompts/system instructions; design guardrails, test prompt variants, and optimize for reliability, latency, and cost.
  • Agentic Systems: Architect and implement autonomous/multi-agent workflows—planning, tool-use, memory, error recovery, and human-in-the-loop controls.
  • RAG Pipelines: Implement document ingestion, chunking, embeddings, vector search (semantic/re-ranking), and grounding strategies.
  • Evaluation & Observability: Define metrics and build eval suites for quality (accuracy, factuality, safety), and establish tracing/telemetry for LLM calls.
  • API & Tool Integrations: Enable agents to use tools (internal APIs, search, databases, workflow engines); handle auth, rate limits, and fallbacks.
  • MLOps / AIOps: Package, containerize, and deploy services (Docker/K8s); manage keys, secrets, CI/CD; support canary rollouts and cost governance.
  • Security & Compliance: Apply data privacy principles, PII handling, redaction, prompt injection defenses, and audit logging.
  • Cross-Functional Collaboration: Partner with product, data, and security teams to translate requirements into reliable AI features.
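
At its core, the agentic-systems responsibility above is a control loop that dispatches model-chosen tool calls and recovers from bad ones. A minimal sketch with the LLM replaced by a hard-coded plan so the loop itself is visible; the tool names and registry are hypothetical:

```python
import json

# Tool registry: functions an agent is allowed to call (names illustrative).
TOOLS = {
    "lookup_sku": lambda sku: {"sku": sku, "in_stock": sku != "X-999"},
    "add": lambda a, b: a + b,
}

def run_agent(steps):
    """Execute a plan of tool calls with basic error recovery.

    In a real agentic system each step would come from an LLM's
    function/tool-calling output; here the plan is hard-coded JSON.
    """
    results = []
    for raw in steps:
        call = json.loads(raw)
        tool = TOOLS.get(call["tool"])
        if tool is None:
            # Error recovery: record the failure and keep going rather
            # than crashing the whole run.
            results.append({"error": f"unknown tool {call['tool']}"})
            continue
        results.append({"ok": tool(**call["args"])})
    return results

plan = [
    '{"tool": "lookup_sku", "args": {"sku": "A-123"}}',
    '{"tool": "summarize", "args": {}}',
    '{"tool": "add", "args": {"a": 2, "b": 3}}',
]
print(run_agent(plan))
```

Frameworks such as LangGraph or AutoGen wrap this same loop with planning, memory, and human-in-the-loop checkpoints.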

Required Qualifications

  • Strong Python (typing, async, testing, packaging) and experience building production APIs/services (FastAPI/Flask).
  • Hands-on with LLMs (OpenAI, Azure OpenAI, Anthropic, etc.) and embedding/RAG workflows.
  • Proven prompt engineering experience (few-shot strategies, tool-use instructions, output schemas, function/tool calling).
  • Experience with agent frameworks or custom agent orchestration (e.g., LangGraph/LangChain/AutoGen, or in-house equivalents).
  • Vector databases (e.g., FAISS, Chroma, Pinecone, Weaviate) and search relevance tuning.
  • Familiar with MLOps/DevOps: Docker, CI/CD, monitoring (Prometheus/Grafana), logging (OpenTelemetry), secrets management.
  • Testing & Evals: unit/integration tests, offline evals, golden datasets, regression checks.
  • Practical understanding of AI safety/guardrails (prompt injection, data leakage, jailbreak prevention).

Nice to Have

  • Experience with Azure (or AWS/GCP) AI services, key vaults, and networking.
  • Knowledge of Model Context Protocol (MCP) or tool-server patterns for secure tool access.
  • Experience with retrievers (BM25, hybrid search), re-rankers, or LlamaIndex/LangChain.
  • Familiarity with streaming UIs and structured outputs (JSON, Pydantic schemas).
  • Background in LLM finetuning, RLHF/DPO, or synthetic data generation.
  • Front-end basics for AI UX (React/Next.js) or chat UI patterns.
  • Domain knowledge in HR/ATS, customer support, or internal enterprise workflows.
Senior Java Developer - VP
🏢 Citi
Salary not disclosed
Irving, TX 1 day ago
Senior Java Developer - VP

Working at Citi is far more than just a job. A career with us means joining a team of more than 230,000 dedicated people from around the globe. At Citi, you'll have the opportunity to grow your career, give back to your community and make a real impact.

Job Overview

The Senior Java Developer is a senior level position responsible for establishing and implementing new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to lead applications systems analysis and programming activities.

Responsibilities:

  • Partner with multiple management teams to ensure appropriate integration of functions to meet goals as well as identify and define necessary system enhancements to deploy new products and process improvements
  • Resolve variety of high impact problems/projects through in-depth evaluation of complex business processes, system processes, and industry standards
  • Provide expertise in area and advanced knowledge of applications programming and ensure application design adheres to the overall architecture blueprint
  • Utilize advanced knowledge of system flow and develop standards for coding, testing, debugging, and implementation
  • Develop comprehensive knowledge of how areas of business, such as architecture and infrastructure, integrate to accomplish business goals
  • Provide in-depth analysis with interpretive thinking to define issues and develop innovative solutions
  • Serve as advisor or coach to mid-level developers and analysts, allocating work as necessary
  • Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.

Qualifications:

  • 10–13 years of relevant experience in an Apps Development or systems analysis role
  • Extensive experience in systems analysis and programming of software applications
  • Experience in managing and implementing successful projects
  • Subject Matter Expert (SME) in at least one area of Applications Development
  • Ability to adjust priorities quickly as circumstances dictate
  • Demonstrated leadership and project management skills
  • Consistently demonstrates clear and concise written and verbal communication

Backend Development (Required):

  • Strong hands-on core Java, functional programming, and Spring Boot microservices development experience.
  • Understanding of concurrent and parallel programming, including threads, processes, synchronization, and handling race conditions.
  • Knowledge of reactive programming for building asynchronous, event/message-driven systems in microservices based applications that are highly concurrent.
  • Proficient in containerizing applications, continuous integration, and continuous delivery in Java ecosystem.
  • Knowledge of distributed tracing and API Gateway integration for microservices architecture.
  • Proficient in functional programming concepts with Streams API, Lambda Expressions and Optional.
  • Understanding of secure coding practices, SSL/TLS, OAuth, and JWT token handling in Java-based applications.
  • Expertise in integrating Java with NoSQL databases such as MongoDB for scalable, high availability applications.
  • Strong experience in data modeling and with relational and NoSQL databases, Oracle and MongoDB.
  • Understanding of integrating APIs with third-party libraries/vendors and handling the security around them.
  • Understanding of the principles of distributed systems, including data partitioning, replication, and consistency models.
  • Strong grasp of data structures and algorithms, especially those relevant to distributed systems like distributed hash tables and load balancing techniques.
  • Understanding of microservices architecture, including service discovery, API gateways, and inter-service communication.
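
The distributed-hash-table and load-balancing bullet above is commonly met with consistent hashing, which keeps most keys in place when nodes join or leave. A minimal sketch (in Python for brevity, though the role is Java-centric); node names are illustrative:

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Minimal consistent-hash ring for partitioning keys across nodes.

    Each node is placed on the ring many times (virtual nodes) to
    smooth the load distribution.
    """

    def __init__(self, nodes, vnodes=100):
        self._ring = sorted(
            (self._hash(f"{n}#{i}"), n) for n in nodes for i in range(vnodes)
        )
        self._keys = [h for h, _ in self._ring]

    @staticmethod
    def _hash(s: str) -> int:
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def node_for(self, key: str) -> str:
        """Walk clockwise to the first vnode at or after the key's hash."""
        i = bisect.bisect(self._keys, self._hash(key)) % len(self._keys)
        return self._ring[i][1]

ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
print(ring.node_for("user:42"))  # one of the three nodes
```

The payoff: removing a node only remaps the keys that lived on it, unlike `hash(key) % n`, which reshuffles nearly everything.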

Other areas (Required):

  • Exceptional problem-solving and analytical skills to diagnose and resolve issues in distributed environments.
  • Above-average skills in monitoring, logging, and debugging distributed systems to ensure reliability and performance.
  • Expertise in fundamental concepts such as consistency, availability, partition tolerance, fault tolerance, and scalability.
  • Familiarity with container orchestration (e.g., Kubernetes), and distributed messaging systems (e.g., Kafka).
  • Experience using Git/BitBucket.
  • Good communication skills, both written and verbal.

Other areas (Good to have):

  • Unix shell scripting.
  • Knowledge of Elasticsearch and GraphQL.
  • Experience building highly performant, scalable applications.
  • Knowledge of Generative Artificial Intelligence (AI), Machine Learning (ML), and Large Language Models (LLMs).

Education:

  • Bachelor's degree/University degree or equivalent experience
  • Master's degree preferred
Headless CMS Consultant
Salary not disclosed
New York 1 day ago

Duration: Full Time Opportunity

Job Description:

  • We are seeking a CMS Consultant specializing in Headless CMS and Digital Experience Platforms (DXP) to design, implement, and optimize modern digital platforms that enable seamless and personalized customer experiences.
  • The ideal candidate will have strong experience with headless CMS platforms, content migration, API integrations, and information architecture, while also advising stakeholders on SEO strategy, content analytics, and digital experience optimization.
  • This role works closely with business, product, marketing, and engineering teams to ensure digital platforms align with business goals and deliver scalable, high-performance content solutions.

Responsibilities:

  • Design and implement Digital Experience Platforms (DXP) that deliver personalized and scalable digital customer experiences.
  • Work with stakeholders to analyze business requirements and translate them into CMS and content architecture solutions.
  • Lead CMS implementation, configuration, and optimization initiatives.
  • Define content models, taxonomies, and governance structures.
  • Execute content migration strategies during platform modernization initiatives.
  • Build and support API integrations between CMS platforms and enterprise services.
  • Provide guidance on SEO strategy, content optimization, and performance analytics.
  • Collaborate with marketing, product, engineering, and UX teams to ensure seamless content delivery across digital channels.
  • Support sales initiatives (proactive and reactive) by contributing to solution design and technical discussions.
  • Deliver value-based conversations with clients to expand engagement opportunities and grow accounts.

Experience:

  • Hands-on experience with Headless CMS platforms such as Optimizely, Contentful, Contentstack, Strapi, or similar solutions.
  • Strong understanding of content modeling, workflows, content governance, and Information Architecture (sitemaps, taxonomy, content hierarchy).
  • Experience with content migration, CMS upgrades, and re-platforming from legacy CMS to modern headless platforms.
  • Experience integrating CMS with enterprise systems using REST APIs, GraphQL, and ETL processes.
  • Familiarity with SPA (Single Page Applications), PWA (Progressive Web Applications), and API management platforms such as MuleSoft, Dell Boomi, or Apigee.
  • Understanding of SEO best practices and web/content analytics tools such as Google Analytics, Adobe Analytics, or DOMO to optimize content performance.

Skills:

  • Headless CMS
  • CMS Integration

Education:

  • Bachelor's degree or equivalent experience.

About US Tech Solutions:

US Tech Solutions is a global staff augmentation firm providing a wide range of talent on-demand and total workforce solutions. To learn more about US Tech Solutions, please visit our website. US Tech Solutions is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.

Recruiter Details:

Name: Deepak

Email:

Internal Id: 26-05821
