Suno API Reddit Jobs in USA

927 positions found — Page 5

Platform Security Architect
✦ New
Salary not disclosed
Dallas, Texas 15 hours ago

We are seeking a Platform Security Architect to help secure customer-facing platforms across web, mobile, and API environments within an AWS ecosystem. This role partners closely with software engineering and architecture teams to ensure secure-by-design solutions for modern cloud applications.

Key Responsibilities

  • Lead security architecture reviews for web, mobile, and API-driven platforms
  • Provide hands-on guidance to engineering teams on secure design patterns
  • Define and implement security guardrails across AWS cloud environments
  • Support secure authentication and authorization frameworks (OAuth2, OIDC, SAML)
  • Ensure secure integration across microservices, APIs, and CI/CD pipelines
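
As a point of reference for the OAuth2/OIDC work above: access and ID tokens are typically JWTs, and a common first debugging step is inspecting their claims. A minimal sketch follows; the token is fabricated for illustration, and a real service must verify the signature against the issuer's published JWKS before trusting any claim.

```python
import base64
import json

def decode_jwt_claims(token: str) -> dict:
    """Decode the payload of a JWT *without* verifying its signature.

    Illustrative only: a production OAuth2/OIDC flow must verify the
    signature against the issuer's JWKS before trusting these claims.
    """
    try:
        _header, payload, _signature = token.split(".")
    except ValueError:
        raise ValueError("not a JWT: expected header.payload.signature")
    # JWT segments are base64url-encoded without padding; restore it.
    padded = payload + "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

# Fabricated token carrying the claims {"sub": "user-123", "scope": "read"}.
claims = {"sub": "user-123", "scope": "read"}
segment = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
token = f"eyJhbGciOiJSUzI1NiJ9.{segment}.sig"
print(decode_jwt_claims(token)["sub"])  # user-123
```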

Required Experience

  • 7+ years of IT experience with a focus on cloud or application security
  • Background in software engineering, platform engineering, or cloud architecture
  • Strong experience securing API-driven applications and microservices
  • Hands-on AWS cloud security (IAM, encryption, networking, monitoring)
  • Experience with OAuth2, OpenID Connect, SAML, or CIAM architectures

Preferred

  • Experience supporting customer-facing platforms or payment environments
  • Familiarity with PCI DSS and modern security frameworks
  • AWS or security-related certifications

This role is ideal for a software engineer or cloud architect who transitioned into security, with experience securing modern API-based platforms in AWS.

Senior Developer — AI Evaluation & Cloud Infrastructure
✦ New
Salary not disclosed
Boston, Massachusetts 15 hours ago

Senior Developer, AI Evaluation & Cloud Infrastructure | Just Horizons Alliance

Join us to build the technical foundation for AI accountability.

The Role

Just Horizons Alliance is an 18-year-old applied research lab focused on ethics and technology. Our current focus is the AI Ethics Index, a measurement framework for evaluating AI systems on ethics, safety, and societal impact.

We need a senior engineer to own the technical infrastructure end-to-end: learn what exists, close critical gaps, and build something that lasts.

The evaluation methodology is validated and in use. We're now at the stage where the systems need to mature alongside the research. This is the first dedicated infrastructure hire for this work, and you'll shape how it scales.

What You'll Do

Months 1–3: Learn the System

Map the current architecture with Sophia Zitman (AIEI Team Lead). Understand the evaluation methodology, the data flows, and the infrastructure that supports them. Identify what needs to evolve for multi-domain benchmarking—reproducibility, security posture, test coverage, deployment pipeline. Begin implementing the highest-priority improvements.

Months 4–6: Build for Scale

Architect the infrastructure to support the next phase of the Index. CI/CD that maintains stability as the system grows. IAM and secret management built for a production environment. Experiment tracking that makes every evaluation run auditable. Documentation that enables the research team to work independently.

Months 7–12: Expand

Multi-domain benchmarking across education, healthcare, finance, and other sectors. Reproducibility standards that meet external scientific scrutiny. A system the research team can extend without engineering support for every change. At this point, the infrastructure should be stable enough that you're focused on capability, not maintenance.

Why This Role Is Difficult

This is infrastructure for a scientific standard, not a product feature.

Correctness and delivery both matter. A bug in the evaluation engine doesn't just break a feature; it invalidates a measurement. A flawed pipeline doesn't just slow things down; it compromises the credibility of the research. At the same time, methodology that never runs in production has no impact. The role requires both rigor and momentum.

You're translating between disciplines. Your stakeholders are researchers, ethicists, and governance specialists. You'll need to take concepts like "operationalizing an ethical construct" and turn them into data models and pipelines. This is a translation problem as much as an engineering problem.

The work is novel. There's no existing system to reference. The AI Ethics Index is defining what rigorous AI evaluation looks like. You'll be making architectural decisions in areas where best practices don't yet exist.

You'll have full ownership. This is not a role where you're executing someone else's technical vision. You're setting the direction. That means autonomy, but it also means accountability.

You're probably the right person if

You've built evaluation systems or data pipelines that other people depended on for correctness, not just uptime

You're comfortable with GCP and have deployed containerized workloads in a real production context

You've worked with LLM APIs and understand their reliability and reproducibility characteristics

You can read a paper about measurement methodology and turn it into a working data structure

You build for durability. Your code is still running 18 months later because you thought about the next person

You've worked somewhere between 5 and 50 people and you're comfortable being the person who figures things out without a playbook

You find working on AI ethics infrastructure more interesting than building another e-commerce checkout flow

You're probably not the right fit if

Enterprise environments make up most of your experience. This is not a large-team context

You need clearly defined requirements before you can start. The requirements here evolve through conversation with ethicists

You're based on the West Coast US or expect West Coast US working hours

You mainly build user-facing APIs and features — this is systems and infrastructure work

You're looking for a high-growth startup where shipping speed is everything. This is a scientific organization. Correctness is everything.

Hard Skills

These are the technical capabilities you need going in — or need to be able to build up fast using an AI coding agent. We're not looking for someone who ticks every box. We're looking for someone who closes gaps quickly and knows how to learn.

  • Python — strong enough to design systems architecture and reason about failure modes, even if you work with AI assistance for implementation details
  • Google Cloud Platform — specifically Cloud Run, IAM design, secret management, and containerized workload deployment in a real production context
  • API and model documentation — able to read, write, and navigate API specs and model documentation fluently; you know how to figure out how a system behaves from its documentation without needing someone to walk you through it
  • Structured step-by-step reasoning — when you hit a complex problem, you decompose it immediately and visibly into logical steps; you don't disappear into your head and come back with an answer, you think out loud and in sequence, which makes collaboration with the ethics and research team possible
  • LLM API integration — understanding the reliability, reproducibility, and failure characteristics of external model endpoints
  • Data pipeline architecture — building evaluation or measurement systems where correctness is non-negotiable, not just data-moving
  • Experiment tracking and reproducibility standards — always looking to improve the evaluation pipeline; you understand what needs to be tracked, why reproducibility matters scientifically, and you find the right approach for the context without being dogmatic about tooling
  • Statistical reliability concepts — enough to understand what inter-rater reliability means for evaluation output and why reproducibility matters scientifically
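
To make the experiment-tracking point above concrete, one hedged sketch of run auditability is fingerprinting each evaluation run by hashing its canonical configuration and inputs, so two runs can be checked for comparability. The field names below are illustrative, not the AIEI's actual schema.

```python
import hashlib
import json

def run_fingerprint(config: dict, inputs: list[str]) -> str:
    """Return a stable hash identifying an evaluation run.

    Hashing the canonical JSON of the run config and inputs makes every
    run auditable: the same setup always yields the same fingerprint.
    """
    canonical = json.dumps({"config": config, "inputs": inputs},
                           sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]

# Hypothetical run configuration; any changed parameter changes the fingerprint.
cfg = {"model": "example-model", "temperature": 0.0, "rubric_version": 3}
fp1 = run_fingerprint(cfg, ["case-001", "case-002"])
fp2 = run_fingerprint(dict(cfg, temperature=0.7), ["case-001", "case-002"])
print(fp1 == fp2)  # False: the runs are not comparable
```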

What you get

The role: You'll work directly with Sophia Zitman (AIEI Team Lead) as the technical backbone of the AI Ethics Index. Full engineering ownership of the evaluation engine.

The comp: Base salary $110,000.

The team: Small, split between ethicists and engineers. You will interview with Janet Kang (Executive Director) and Sophia Zitman (AIEI Team Lead).

The environment: Boston-based non-profit (501(c)(3)). East Coast US or Western Europe time zones. Collaborative but autonomous — Sophia won't micromanage, but she will hold you to a high standard of systems thinking.

The upside: You'll have built the technical foundation of what may become the globally referenced standard for AI system evaluation. That's a meaningful line on any CV — and a genuinely hard thing to have done.

Gemini Enterprise SME
✦ New
Salary not disclosed
Atlanta, GA 15 hours ago

Role: Gemini Enterprise SME

Location: Remote

Position Type: Contract


Role Summary

  • Seeking a Gemini Enterprise Experience Engineer to design, build, and operationalize enterprise‑grade Gemini‑powered solutions on Google Cloud Platform (GCP).
  • This role focuses on Gemini APIs, Vertex AI, and agentic AI frameworks to deliver secure, scalable, and production‑ready AI experiences for enterprise users.


Key Responsibilities

  • Design and implement Gemini Enterprise solutions using Gemini APIs and Vertex AI
  • Build and deploy agentic AI workflows using Agent Builder, ADK, and LangGraph‑style orchestration
  • Integrate Gemini with enterprise data sources, APIs, and business systems
  • Productionize AI experiences on GCP with strong focus on security, governance, and observability
  • Collaborate with engineering and customer teams to translate business needs into scalable AI experiences


Required Skills

  • Hands‑on experience with Gemini Enterprise / Gemini APIs
  • Strong experience with GCP, especially Vertex AI
  • Proficiency in Python and API‑based AI integration
  • Experience building enterprise‑grade GenAI applications


Nice to Have

  • Experience with Agent Builder, Agent Development Kit (ADK), or Agent Engine
  • Familiarity with RAG patterns, structured outputs, and tool‑calling
  • Experience with secure or privacy‑sensitive enterprise data
  • Exposure to CI/CD and cloud‑native deployment on GCP


Experience

  • 5+ years in cloud, AI/ML, or platform engineering
  • Prior experience delivering enterprise AI solutions on Google Cloud preferred



Best Regards,

Bismillah Arzoo (AB)

Backend Engineer
✦ New
Salary not disclosed
El Segundo, CA 1 day ago

About Wave Health

Wave Health, powered by Treatment Technologies & Insights, Inc. (TTI), aims to improve treatment experiences and outcomes for patients with cancer and chronic illnesses. We develop custom software and mobile applications to help patients manage their treatment and generate insights on their personal experiences with high-acuity or chronic conditions.


We have aggressive plans to continually enhance our infrastructure, and due to ongoing partnerships and strategic growth, we are seeking to grow our Engineering Team.


The Role

TTI is looking for a Senior Backend Engineer to join our platform team. This team member will play a critical role in designing and building the APIs and services that power the Wave Health platform.

As part of a team providing core system functionality that other engineers build upon, we are looking for someone who can not only tackle tough technical problems but also collaborate and evangelize best practices across teams. We value engineers who bring fresh ideas and are willing to own problems through to a solution. You should be comfortable working in a fast-paced startup culture and have experience architecting complex, production-grade healthcare systems.

This is a Hybrid (in-office 3x per week) role.


Required Qualifications

  • BS/BA degree in Computer Science or equivalent practical experience
  • Solid understanding of Computer Science fundamentals
  • 5+ years of relevant backend engineering experience
  • Proficiency in PHP and the Laravel framework
  • Strong experience with MySQL, including query optimization, indexing strategies, and database architecture
  • Experience designing and building RESTful APIs at scale
  • Experience in architecting complex systems with high security, reliability, and scalability requirements
  • Familiarity with healthcare data privacy requirements (e.g., HIPAA compliance)
  • Experience contributing to architecture decisions in production environments, handling sensitive data
  • Ability to mentor and support other engineers through code reviews and technical guidance


Nice to Have

  • Experience with AWS services (EC2, EKS, RDS, S3, CloudFormation, etc.)
  • Knowledge of HL7 FHIR standards and healthcare interoperability protocols
  • Experience integrating with Electronic Health Record (EHR) systems
  • Experience with containerization and orchestration (Docker, Kubernetes)
  • Familiarity with CI/CD pipelines and DevOps practices
  • Experience with microservices architecture patterns
  • Exposure to mobile API development for iOS and Android platforms


What You Will Do

  • Design, build, and maintain backend services and APIs that power the Wave Health platform
  • Architect scalable solutions for chronic condition management and patient data workflows
  • Collaborate with mobile (iOS and Android) teams to define and optimize API contracts
  • Drive technical decisions on system design, data modeling, and service architecture
  • Ensure platform security and compliance with healthcare regulations (HIPAA, GDPR)
  • Improve system reliability, performance, and observability across the platform
  • Participate in code reviews and contribute to engineering best practices and standards



Why Wave Health

  • Mission-driven work improving the lives of cancer and chronic illness patients
  • Small, collaborative engineering team where your contributions have an outsized impact
  • Opportunity to shape the technical direction of a growing healthcare platform
  • Work with modern tooling and cloud-native infrastructure
  • Fast-paced startup environment with room for growth and leadership


Expected Salary Range: $130k - $150k

Managing Technical Consultant
Salary not disclosed
San Antonio 6 days ago
Candidates who receive an offer will be required to successfully complete a background check and drug test as a condition of employment.

Oracle Fusion Technical Lead

San Antonio, TX (Remote) | 12+ Month Contract

Client is looking for an experienced Oracle Fusion Technical Lead to oversee and deliver complex Oracle Cloud ERP solutions.

The ideal candidate will have hands-on expertise across multiple Oracle Fusion tools and technologies including OTBI, BI Publisher (BIP), Financial Reporting Studio (FRS), SmartView, Fast Formula, REST/SOAP APIs, OIC, and various customization and personalization frameworks.

This role requires strong leadership, a solution-oriented mindset, and the ability to manage both project delivery and stakeholder expectations effectively.

Key Responsibilities

Client Engagement & Leadership

  • Serve as the primary technical lead for US enterprise clients.
  • Conduct design workshops and solution discussions.
  • Provide architectural guidance and integration strategy.
  • Lead offshore/onshore technical teams.

Oracle Fusion Development

  • Develop and support:
      ◦ OTBI, BI Publisher, FRS, SmartView
      ◦ BI Extracts and ESS Jobs
      ◦ Fast Formulas (Oracle HCM)
      ◦ Application Composer extensions
      ◦ Visual Builder custom UIs

Enterprise Integrations

  • Design integrations using:
      ◦ REST APIs
      ◦ SOAP Web Services
      ◦ API mediation frameworks
      ◦ Oracle Integration Cloud (OIC)
  • Implement:
      ◦ Synchronous (request/response) integrations
      ◦ Asynchronous integrations
      ◦ Batch/file-based integrations (FBDI)
  • Work with:
      ◦ XML, JSON, SOAP
      ◦ IDOC, RFC
      ◦ CSV/file-based exchanges
      ◦ Web service and API-based connectivity

Data & Middleware

  • Manage file-based integrations and batch processing.
  • Monitor integration health and resolve production issues.
  • Preferred: exposure to middleware platforms such as IBM ACE, MuleSoft, and other enterprise integration tools.

Governance & Compliance

  • Ensure adherence to Oracle Cloud best practices.
  • Work with security models and maintain OCI awareness.
  • Support production deployments and release cycles.

Required Qualifications

  • 15+ years of Oracle technical experience.
  • 5+ years in Oracle Fusion Cloud.
  • Strong hands-on expertise in:
      ◦ OIC
      ◦ REST/SOAP integrations
      ◦ OTBI/BIP/FRS
      ◦ Application Composer & Fast Formula
      ◦ FBDI and file-based integrations
  • Solid understanding of Oracle Cloud ERP (Financials, HCM, SCM).
  • Strong communication and executive presentation skills.
  • Experience working directly with US enterprise clients.
  • Ability to lead design workshops and technical architecture discussions.

Preferred

  • Oracle Cloud certifications.
  • Experience in multi-country or global ERP rollouts.
  • Knowledge of Oracle Utilities modules (WACS, CCS, OFS).
  • Experience in regulated industries (Healthcare, Financial Services, Manufacturing, Utilities).

Education

  • Bachelor’s or master’s degree in computer science or a related discipline.

Metasys Technologies is an equal opportunity employer.

All applicants will be considered for employment without attention to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status.
Enterprise Agentic Platform Specialist
Salary not disclosed
Santa Clara 3 days ago
Enterprise Agentic Platform Specialist

Santa Clara, CA (Onsite) | 12-Month Contract

Local candidates only; an onsite interview is required.

We are seeking an Enterprise Agentic Platform Specialist to lead the design, development, and delivery of enterprise-scale Data Science and Generative AI (GenAI) solutions.

This role will drive the implementation of AI agents, LLM orchestration frameworks, and enterprise automation pipelines, working cross-functionally with business stakeholders, data engineers, data scientists, DevOps teams, and UI developers.

The ideal candidate will combine hands-on GenAI engineering expertise with strong program delivery capabilities, ensuring solutions deliver measurable business outcomes while meeting enterprise standards for governance, security, and Responsible AI.

Key Responsibilities

AI Agent and GenAI Development

  • Lead the end-to-end delivery of enterprise data science and GenAI solutions.
  • Design, develop, and deploy AI agents using Microsoft Copilot Studio, Claude agent frameworks, and enterprise LLM orchestration patterns.
  • Implement prompt engineering strategies, grounding techniques, and Retrieval-Augmented Generation (RAG) pipelines.

Architecture and Integration

  • Define architecture standards for agentic systems, including tool calling schemas, prompt frameworks, grounding flows, and RAG pipelines.
  • Translate complex workflows into modular, automated, event-driven pipelines.
  • Integrate AI solutions with enterprise systems via REST APIs, Power Platform connectors, and enterprise data services.
  • Connect Copilot Studio agents to enterprise data sources such as SharePoint, Dataverse, SQL, and SAP.

Enterprise Platform Integration

  • Oversee system integration across enterprise platforms including ServiceNow, SharePoint, Microsoft Teams, Power Automate, and Azure APIs.
  • Design MCP-based agent architectures, integration layers, authentication flows (OAuth / Microsoft Entra), and system messaging frameworks.

Governance, Security and Responsible AI

  • Collaborate with AI architects, MLOps teams, and security teams to enforce Responsible AI standards, data governance policies, security and access control frameworks, and model safety guidelines.
  • Implement agent observability frameworks including logging, telemetry instrumentation, latency metrics, error tracking, and automated remediation workflows.

Delivery and Program Management

  • Lead cross-functional teams delivering AI solutions across data engineering, data science, DevOps, and UX teams.
  • Manage delivery using Agile / Scrum or hybrid PM methodologies.
  • Track dependencies, risks, sprint alignment, and release orchestration.

Metrics and Performance Monitoring

  • Define KPIs and operational dashboards for AI automation, including cycle time reduction, accuracy improvements, governance compliance, and agent uptime and reliability.

Required Qualifications

  • Hands-on experience delivering enterprise Data Science and GenAI solutions.

  • Experience designing and deploying AI agents using Microsoft Copilot Studio or similar agent frameworks.
  • Strong knowledge of LLMs, prompt engineering, and Retrieval-Augmented Generation (RAG).
  • Experience integrating AI solutions with enterprise platforms and APIs.
  • Understanding of MLOps, governance frameworks, and Responsible AI standards.
  • Experience working in Agile delivery environments with cross-functional teams.

Preferred Qualifications

  • Hands-on expertise building agents in Microsoft Copilot Studio.
  • Familiarity with agent frameworks such as LangChain, AutoGen, CrewAI, and the OpenAI Assistants / Functions APIs.
  • Experience implementing enterprise AI observability and monitoring frameworks.
  • Strong understanding of enterprise security, authentication, and governance models.

Thanks,

Sri Vardhan Chilakamukku

Infobahn SoftWorld Inc.
Sr. Generative AI Developer
✦ New
Salary not disclosed
Dallas 1 day ago
Sr. Generative AI Developer

Location: Dallas, TX / Tampa, FL / New Jersey (Hybrid)

Position Type: Fulltime/FTE

Salary: Market

Client: Bank

Role Overview

We are seeking an experienced Senior Generative AI Developer to design and implement cutting-edge AI solutions leveraging Retrieval-Augmented Generation (RAG) techniques.

The ideal candidate will have strong expertise in Python programming, FastAPI, and cloud platforms (AWS, Azure, or GCP).

This role requires a deep understanding of system architecture design, scalable APIs, and end-to-end AI solution development.

Key Responsibilities

Architect and develop Generative AI applications using RAG frameworks for enterprise-scale solutions.

Design and implement robust system architectures for AI-driven platforms ensuring scalability, security, and performance.

Build and optimize APIs using FastAPI for seamless integration with AI models and data pipelines.

Collaborate with cross-functional teams to integrate AI solutions into existing systems and workflows.

Implement data ingestion, preprocessing, and retrieval mechanisms for large-scale knowledge bases.

Ensure compliance with best practices for cloud deployment (AWS, Azure, or GCP).

Conduct performance tuning and optimization of AI models and APIs.

Stay updated with the latest advancements in Generative AI, LLMs, and RAG methodologies.
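
To illustrate the retrieval half of RAG that the responsibilities above describe: rank stored passages by embedding similarity to the query and hand the top hits to the model as context. The toy 3-dimensional vectors below are stand-ins for real embeddings from a model plus a vector database.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, corpus, k=2):
    """Return the k passages whose embeddings are most similar to the query."""
    ranked = sorted(corpus, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _vec in ranked[:k]]

# Toy corpus: (passage, embedding). Real embeddings have hundreds of dimensions.
corpus = [
    ("refund policy", [0.9, 0.1, 0.0]),
    ("office hours",  [0.0, 0.2, 0.9]),
    ("chargebacks",   [0.8, 0.3, 0.1]),
]
print(retrieve([1.0, 0.2, 0.0], corpus, k=2))  # ['refund policy', 'chargebacks']
```

The retrieved passages would then be prepended to the prompt before calling the model.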

Required Skills & Qualifications

8+ years of professional experience in software development and system design.

Strong proficiency in Python and experience with FastAPI for API development.

Hands-on experience with Generative AI frameworks and RAG architectures.

Solid understanding of system and architecture design principles for distributed applications.

Experience deploying solutions on any major cloud platform (AWS, Azure, GCP).

Familiarity with vector databases, embedding models, and retrieval pipelines.

Strong problem-solving skills and ability to work in a fast-paced environment.

Preferred Qualifications

Experience with LLM fine-tuning, prompt engineering, and model evaluation.

Knowledge of containerization (Docker) and orchestration (Kubernetes).

Exposure to CI/CD pipelines and DevOps practices.

Email:
Site Reliability Engineer
✦ New
Salary not disclosed
Austin, TX 1 day ago

Job Title: Site Reliability Engineer (SRE) – DataHub & GraphQL

Location: Austin, TX & Sunnyvale, CA


Independent visa holders only


Role Overview

We are seeking a highly skilled Site Reliability Engineer (SRE) with strong expertise in DataHub ingestion pipelines and GraphQL APIs. The ideal candidate will be responsible for designing, building, and maintaining scalable data ingestion frameworks, ensuring reliability and performance of enterprise data platforms, and enabling seamless integration with downstream applications. This role requires a balance of software engineering, systems reliability, and data platform knowledge.

Key Responsibilities

  • Design, implement, and optimize DataHub ingestion pipelines for large-scale enterprise data systems.
  • Develop and maintain GraphQL APIs to support data discovery, metadata management, and integration.
  • Ensure high availability, scalability, and performance of data services across cloud and on-prem environments.
  • Collaborate with data engineering, product, and infrastructure teams to deliver reliable data solutions.
  • Automate monitoring, alerting, and incident response processes to improve system resilience.
  • Drive best practices in observability, logging, and distributed system reliability.
  • Troubleshoot complex production issues and implement long-term fixes.

Must-Have Skills

  • 5+ years of experience as an SRE, DevOps Engineer, or Software Engineer with a focus on reliability and scalability.
  • Strong hands-on experience with DataHub ingestion frameworks and metadata pipelines.
  • Proficiency in GraphQL API design and implementation.
  • Solid understanding of cloud platforms (AWS, GCP, or Azure) and container orchestration (Kubernetes, Docker).
  • Expertise in monitoring tools (Prometheus, Grafana, ELK, Datadog, etc.).
  • Strong programming skills in Python, Java, or Go.
  • Experience with CI/CD pipelines and infrastructure-as-code (Terraform, Ansible).

Good-to-Have Skills

  • Familiarity with data governance and metadata management tools.
  • Experience integrating with data platforms like Kafka, Spark, or Snowflake.
  • Knowledge of REST APIs and microservices architecture.
  • Exposure to security and compliance practices in data systems.

Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
  • Proven track record of delivering reliable, scalable data infrastructure solutions.
React Native developer (Only w2)
✦ New
Salary not disclosed
Bentonville, AR 1 day ago

We are actively looking for a React Native Developer in Bentonville, AR.


Job Title: React Native Developer

Location: Bentonville, AR - Hybrid

Duration: 6 to 12+ Months

Rate: DOE

Only W2



Role Overview

We are seeking a skilled React Native Developer to build high‑performance, scalable, and user-friendly mobile applications for both iOS and Android platforms.

The ideal candidate has strong experience with modern JavaScript frameworks, mobile UI/UX patterns, and integrating mobile apps with backend APIs and cloud services.

Key Responsibilities

  • Develop, test, and deploy React Native applications for iOS and Android.
  • Collaborate with designers, backend engineers, and product teams to deliver seamless user experiences.
  • Build reusable components, implement mobile design patterns, and ensure high code quality.
  • Integrate mobile apps with REST/GraphQL APIs, third‑party SDKs, push notifications, and authentication systems.
  • Optimize app performance, responsiveness, and memory usage.
  • Debug and resolve issues related to performance, crashes, and compatibility.
  • Work with native modules for iOS (Swift/Objective‑C) and Android (Kotlin/Java) when required.
  • Participate in code reviews, sprint planning, and Agile ceremonies.
  • Maintain documentation and contribute to best practices, architecture standards, and reusable libraries.

Required Qualifications

  • 6+ years of experience in mobile application development.
  • Strong hands‑on experience with React Native, JavaScript (ES6+), and TypeScript.
  • Experience with state management tools such as Redux, MobX, Recoil, or Context API.
  • Knowledge of iOS and Android build processes, app store deployment, signing, and provisioning.
  • Proficiency with REST APIs, JSON, authentication flows, and error handling.
  • Experience with version control (Git) and CI/CD pipelines.
  • Strong debugging skills and familiarity with tools like Flipper, Chrome DevTools, or Xcode/Android Studio.
Headless CMS Consultant
✦ New
Salary not disclosed
New York, NY 15 hours ago

Duration: Full Time Opportunity


Job Description:

  • We are seeking a CMS Consultant specializing in Headless CMS and Digital Experience Platforms (DXP) to design, implement, and optimize modern digital platforms that enable seamless and personalized customer experiences.
  • The ideal candidate will have strong experience with headless CMS platforms, content migration, API integrations, and information architecture, while also advising stakeholders on SEO strategy, content analytics, and digital experience optimization.
  • This role works closely with business, product, marketing, and engineering teams to ensure digital platforms align with business goals and deliver scalable, high-performance content solutions.


Responsibilities:

  • Design and implement Digital Experience Platforms (DXP) that deliver personalized and scalable digital customer experiences.
  • Work with stakeholders to analyze business requirements and translate them into CMS and content architecture solutions.
  • Lead CMS implementation, configuration, and optimization initiatives.
  • Define content models, taxonomies, and governance structures.
  • Execute content migration strategies during platform modernization initiatives.
  • Build and support API integrations between CMS platforms and enterprise services.
  • Provide guidance on SEO strategy, content optimization, and performance analytics.
  • Collaborate with marketing, product, engineering, and UX teams to ensure seamless content delivery across digital channels.
  • Support sales initiatives (proactive and reactive) by contributing to solution design and technical discussions.
  • Deliver value-based conversations with clients to expand engagement opportunities and grow accounts.
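
To ground the content-modeling work above, here is a minimal sketch of validating an entry against a declared content model, the kind of check a headless CMS runs before accepting content; the `ARTICLE_MODEL` fields are hypothetical, not any specific platform's schema.

```python
# Hypothetical minimal content model for a headless CMS article entry.
ARTICLE_MODEL = {
    "title": {"type": str, "required": True},
    "slug":  {"type": str, "required": True},
    "body":  {"type": str, "required": True},
    "tags":  {"type": list, "required": False},
}

def validate_entry(entry: dict, model: dict = ARTICLE_MODEL) -> list[str]:
    """Return a list of validation errors; an empty list means the entry conforms."""
    errors = []
    for field, rule in model.items():
        if field not in entry:
            if rule["required"]:
                errors.append(f"missing required field: {field}")
        elif not isinstance(entry[field], rule["type"]):
            errors.append(f"wrong type for {field}")
    return errors

print(validate_entry({"title": "Hello", "slug": "hello", "body": "text"}))  # []
```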


Experience:

  • Hands-on experience with Headless CMS platforms such as Optimizely, Contentful, Contentstack, Strapi, or similar solutions.
  • Strong understanding of content modeling, workflows, content governance, and Information Architecture (sitemaps, taxonomy, content hierarchy).
  • Experience with content migration, CMS upgrades, and re-platforming from legacy CMS to modern headless platforms.
  • Experience integrating CMS with enterprise systems using REST APIs, GraphQL, and ETL processes.
  • Familiarity with SPA (Single Page Applications), PWA (Progressive Web Applications), and API management platforms such as MuleSoft, Dell Boomi, or Apigee.
  • Understanding of SEO best practices and web/content analytics tools such as Google Analytics, Adobe Analytics, or DOMO to optimize content performance.


Skills:

  • Headless CMS
  • CMS Integration


Education:

  • Bachelor’s degree or equivalent experience.


About US Tech Solutions:

US Tech Solutions is a global staff augmentation firm providing a wide range of talent on-demand and total workforce solutions. US Tech Solutions is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.


Recruiter Details:

Name: Deepak

Email:

Internal Id: 26-05821
