Cloudera Data Platform Jobs in Usa
12,118 positions found — Page 6
- Onsite 12 Months Contract Looking only Locals who can do Onsite Interview We are seeking an Enterprise Agentic Platform Specialist to lead the design, development, and delivery of enterprise-scale Data Science and Generative AI (GenAI) solutions.
This role will drive the implementation of AI agents, LLM orchestration frameworks, and enterprise automation pipelines, working cross-functionally with business stakeholders, data engineers, data scientists, DevOps teams, and UI developers.
The ideal candidate will combine hands-on GenAI engineering expertise with strong program delivery capabilities, ensuring solutions deliver measurable business outcomes while meeting enterprise standards for governance, security, and Responsible AI.
Key Responsibilities AI Agent and GenAI Development Lead the end-to-end delivery of enterprise data science and GenAI solutions.
Design, develop, and deploy AI agents using Microsoft Copilot Studio, Claude agent frameworks, and enterprise LLM orchestration patterns.
Implement prompt engineering strategies, grounding techniques, and Retrieval-Augmented Generation (RAG) pipelines.
Architecture and Integration Define architecture standards for agentic systems, including tool calling schemas, prompt frameworks, grounding flows, and RAG pipelines.
Translate complex workflows into modular, automated, event-driven pipelines.
Integrate AI solutions with enterprise systems via REST APIs, Power Platform connectors, and enterprise data services.
Connect Copilot Studio agents to enterprise data sources such as: SharePoint Dataverse SQL SAP Enterprise Platform Integration Oversee system integration across enterprise platforms including: ServiceNow SharePoint Microsoft Teams Power Automate Azure APIs Design MCP-based agent architectures, integration layers, authentication flows (OAuth / Microsoft Entra), and system messaging frameworks.
Governance, Security and Responsible AI Collaborate with AI architects, MLOps teams, and security teams to enforce: Responsible AI standards Data governance policies Security and access control frameworks Model safety guidelines Implement agent observability frameworks including logging, telemetry instrumentation, latency metrics, error tracking, and automated remediation workflows.
Delivery and Program Management Lead cross-functional teams delivering AI solutions across data engineering, data science, DevOps, and UX teams.
Manage delivery using Agile / Scrum or hybrid PM methodologies.
Track dependencies, risks, sprint alignment, and release orchestration.
Metrics and Performance Monitoring Define KPIs and operational dashboards for AI automation, including: Cycle time reduction Accuracy improvements Governance compliance Agent uptime and reliability Required Qualifications Hands-on experience delivering enterprise Data Science and GenAI solutions.
Experience designing and deploying AI agents using Microsoft Copilot Studio or similar agent frameworks.
Strong knowledge of LLMs, prompt engineering, and Retrieval-Augmented Generation (RAG).
Experience integrating AI solutions with enterprise platforms and APIs.
Understanding of MLOps, governance frameworks, and Responsible AI standards.
Experience working in Agile delivery environments with cross-functional teams.
Preferred Qualifications Hands-on expertise building agents in Microsoft Copilot Studio.
Familiarity with agent frameworks such as: LangChain AutoGen CrewAI OpenAI Assistants / Functions APIs Experience implementing enterprise AI observability and monitoring frameworks.
Strong understanding of enterprise security, authentication, and governance models.
Thanks Sri Vardhan Chilakamukku Infobahn SoftWorld Inc.
Title: Lead Software Engineer - AI Application Platform
Mode of interview 1 round in person
Location: Must be in Charlotte, NC to work Hybrid Model
Main Skill set: Python, AI and Angular
Description:
Lead Software Engineer - AI Application Platform
The Opportunity
We are seeking a Lead Software Engineer to guide the architectural development and execution of the client, a sophisticated AI-powered application generation platform. This role suits a proven technical leader with deep, hands-on expertise across the full software stack who finds enabling a team to build better software deeply satisfying.
You will shape critical systems, mentor senior and junior developers through complex technical decisions, conduct rigorous code reviews across multiple technology domains, and directly influence the platform's trajectory through strategic engineering leadership.
This is for someone who:
- Engages thoughtfully when a junior developer asks targeted architectural questions—because you see an opportunity to shape how someone thinks about systems
- Takes time to explain subtle type-safety issues in code review, understanding that feedback is a teaching moment
- Can present architecture clearly to executives and confidently explain both what we're building and why it matters
- Finds more energy in the code your team ships than in the code you write individually
- Has proven depth across the full stack and a track record of developing engineers into stronger contributors
This is not a single-language codebase. The role requires the ability to make informed decisions on TypeScript design patterns, Python FastAPI architecture, AWS security posture, and Terraform state management in context with one another.
The Platform Challenge
The client is fundamentally a Platform-as-a-Service (PaaS) for dynamic application generation. This differs from building a traditional SaaS product. Rather than building one application, you're building infrastructure that enables users to build their own applications.
What this means architecturally:
- Dynamic Content Generation at Scale: Unlike traditional development where code is fixed, AppGen generates JSON form schemas, validation rules, and UI layouts on demand. The FormBuilder component doesn't know what fields will exist until runtime. The layout engine renders user-designed screens from configuration, not hardcoded templates.
- Multi-Tenant Isolation & Data Segregation: Each user gets their own generated app, potentially deployed to their own AWS environment. The architecture must account for data isolation, namespace management, and cross-tenant security considerations.
- User-Defined Data Structures: Traditional applications are built with predetermined database schemas. AppGen works differently—form structures, field types, and validation rules emerge from user conversations with Claude. This brings engineering challenges: How do you safely execute validation logic that users define? When users modify existing forms that have thousands of submissions, how do you maintain backward compatibility? How do you version schemas?
- Content Rendering, Not Code Generation: Unlike traditional no-code platforms where users drag-and-drop to build, AppGen uses AI instead. Users chat with Claude, Claude generates a form schema, and your platform renders that schema reliably across diverse field types, validation patterns, and workflows. The system renders configurations for immediate use, rather than generating code for later deployment.
Experience that directly transfers:
- You've contributed to or led development of low-code/no-code platforms (visual builders, workflow engines, configuration-driven systems)
- You've worked on SaaS platforms with multi-tenant architecture and understand isolation strategies, rate limiting, and per-customer customization
- You've built dynamic rendering systems that handle unknown/arbitrary schemas at runtime
- You've addressed the unique challenges of treating data configurations as user-created content (form builders, report designers, automation workflows)
- You understand the difference between platform infrastructure and applications built on that infrastructure—and the architectural implications of each
Core Responsibilities
1. Technical Architecture & Systems Thinking (40%)
- Shape architectural decisions across the full stack: How should the component layer handle dynamically generated forms? What's the right approach to validate complex cross-field dependencies in the FormBuilder? What separation of concerns makes sense between the Generator Lambda and the Parent Backend?
- Guide architecture discussions: Help senior developers think through design trade-offs. Should we use NgRx or Angular signals for this feature? When does a new Lambda function become worthwhile given cold-start costs?
- Identify and address system-wide bottlenecks: Work across layers to improve performance. Explore Lambda cold-start optimization, RDS query efficiency, and DynamoDB access patterns.
- Establish patterns and guide consistency: Define coding conventions that work across Python, TypeScript, and Terraform. Help new team members understand the reasoning behind architectural choices.
- What this looks like in practice: You're able to justify architectural decisions with technical reasoning. When someone questions an approach, you can explain the trade-offs you considered. You can write code in multiple languages to validate an approach if needed.
2. Code Review & Technical Guidance (30%)
- Full-stack PR reviews: Review Python FastAPI endpoints and Angular components with equal depth, understanding how they interact.
- Deep technical review: Catch issues thoughtful code review can surface:
- RxJS Observable lifecycle and potential memory patterns in Angular
- Query efficiency and data loading patterns in SQLAlchemy
- Terraform module organization and state management implications
- Type safety and TypeScript coverage gaps
- AWS security and IAM configurations
- Educational feedback: Your code reviews help the team learn. When you identify an issue, reviewees understand not just what changed, but how to think about similar problems in the future.
- Define quality expectations: Work with the team to establish what "production-ready" means for this platform and support consistent application of those standards.
- What this requires: Experience reviewing code across teams and multiple languages. You know how to write feedback that resonates—clear, constructive, and focused on helping people improve.
3. Mentorship & Team Development (20%)
- Expand specialist capabilities: Help backend specialists learn to contribute to the forms-engine. Support frontend experts in understanding FastAPI patterns.
- Accelerate junior developers: Pair on complex problems. Explain the reasoning behind patterns like DataState. Connect architectural choices to implementation details and performance implications.
- Identify and address gaps: Recognize when someone is struggling with a technology and provide targeted support—training, pair programming, or guidance through architectural decisions.
- Create growth opportunities: Stretch the team into new areas. A backend engineer working on their first Terraform contribution. A frontend specialist implementing an AWS Lambda authorizer.
- What this requires: Genuine investment in people's growth. You've walked developers through major transitions (generalist to specialist, specialist to full-stack, or into new technology areas). You understand that team strength grows when individuals expand their capabilities.
4. Stakeholder Communication & Technical Leadership (10%)
- Explain to diverse audiences: Translate architectural choices and trade-offs for product managers, executives, and business stakeholders. Connect "optimizing DynamoDB queries" to "improving form submission latency by 30%."
- Shape technical direction: Contribute the engineering perspective on feasibility, risk, and what unlocks future capabilities.
- Support release confidence: You understand the code changes, comprehend the risks, and know what to monitor. You can stand behind releases.
Required Qualifications
Technical Skills
Frontend (Production Experience)
- 5+ years of Angular (including handling version migrations, optimizing change detection, and guiding teams through reactive patterns)
- Strong TypeScript skills with generics, discriminated unions, and strict mode
- RxJS depth: You understand hot vs. cold observables, unsubscription patterns, and can identify potential memory issues in reviews
- NgRx state management: You've designed stores at scale, optimized selectors, and evaluated architectural implications
- CSS Grid & Responsive Design: You can assess component hierarchy and layout decisions
- Material Design: You've worked within it and know when and how to extend it
Backend (Production Experience)
- 5+ years of Python (async/await, type hints, data modeling)
- FastAPI production experience: session management, dependency injection, middleware
- SQL and ORMs (SQLAlchemy): You write efficient queries and review them critically
- AWS services: Understanding of Lambda behavior, IAM least-privilege patterns, VPC networking
- REST API design: Versioning, error handling, idempotency
- Testing frameworks: pytest, testing st
Remote working/work at home options are available for this role.
Job Title: Sr AI Platform Engineer- AI Platform Engineer (Guardrails, Observability & Evaluation Infrastructure)
Location, Charlotte, NC, USA (3 days onsite)
Role Overview
AI Platform Engineer to design and build the foundational components that power enterprise scale GenAI applications. This includes data guardrails, model safety tooling, observability pipelines, evaluation harnesses, and standardized logging/monitoring frameworks. This role is critical for enabling safe, reliable, and compliant AI development across multiple use cases, teams, and business units. Idea is to create the common platform services that AI team will build upon.
Key Responsibilities
1. Guardrails, Safety & Governance
• Design and implement data guardrail frameworks (pre processing, redaction, PII/PHI filtering, DLP integration, prompt defenses).
• Build “Model Armor” components such as:
o Input validation & sanitization
o Prompt injection defenses
o Harmful content detection & policy enforcement
o Output filtering, fact checking, grounding checks
• Integrate safety tooling (policy engines, classifiers, DLP APIs, safety models).
• Collaborate with Security, Compliance, and Data Privacy teams to ensure frameworks meet enterprise governance requirements.
2. Observability Frameworks
• Build and maintain observability pipelines using tools like Arize AI (tracing, quality metrics, dataset drift/hallucination tracking, embedding monitoring).
• Define and enforce platform wide standards for:
o Tracing LLM calls
o Token usage and cost monitoring
o Latency and reliability metrics
o Prompt/model version tracking
• Provide reusable SDKs or middleware for engineering teams to adopt observability with minimal friction.
3. Logging, Monitoring & Telemetry
• Design standardized LLM-specific logging schemas, including:
o Inputs/outputs
o Model metadata
o Retrieval metadata
o Safety flags
o User context and attribution
• Build monitoring dashboards for performance, cost, anomalies, errors, and safety events.
• Implement alerting and SLOs/SLIs for LLM inference systems.
4. Evaluation Infrastructure
• Architect and maintain evaluation harnesses for GenAI systems, including:
o RAG evaluation (faithfulness, relevance, hallucination risk)
o Summarization/QA evaluation
o Human-in-the-loop review workflows
o Automated eval pipelines integrated into CI/CD
• Support frameworks such as RAGAS, G Eval, rubric scoring, pairwise comparisons, and test case generation.
• Build reusable tooling for teams to write, run, and track model evaluations.
5. Platform Engineering & Reusable Components
• Develop shared libraries, APIs, and services for:
o Prompt management/versioning
o Embedding pipelines and model wrappers
o Retrieval adapters
o Common data loaders and document preprocessing
o Tool/function schemas
• Drive consistency across teams with standards, reference architectures, and best practices.
• Review system designs across use cases to ensure alignment to platform patterns.
6. Collaboration & Enablement
• Partner with AI engineers, product teams, and data scientists to understand cross cutting needs and convert them into reusable platform features.
• Create documentation, onboarding guides, examples, and developer tooling.
• Provide internal training (brown bags, workshops) on guardrails, observability, and evaluation frameworks.
Required Qualifications
Technical Skills
• 5–10+ years software engineering or ML infrastructure experience.
• Strong Python engineering fundamentals (FastAPI, async, typing/Pydantic, testing).
• Experience with model safety/guardrails approaches (prompt injection defense, PII redaction, toxicity filters, policy enforcement).
• Hands on with Arize AI, LangSmith, or similar LLM observability platforms.
• Experience creating evaluation frameworks using RAGAS, G Eval, or custom rubric systems.
• Strong familiarity with vector databases (Pinecone, Weaviate, Milvus), embeddings, and retrieval pipelines.
• Solid understanding of LLM architectures, tokenization, embeddings, context limits, and RAG patterns.
• Experience in cloud (GCP preferred), Kubernetes/GKE, containers, and CI/CD.
• Strong understanding of security, governance, DLP, data privacy, RBAC, and enterprise compliance requirements.
Soft Skills
• Strong documentation and communication skills.
• Ability to influence engineering teams and standardize best practices.
• Comfortable working across multiple stakeholders—platform, security, ML engineering, product.
Nice to Have
• Experience with LangChain/LangGraph or LlamaIndex orchestrations.
• Experience with , Rebuff, Protect AI, or similar LLM security tooling.
• Experience with GCP Vertex AI pipelines, Model Monitoring, and Vector Search.
• Familiarity with knowledge graphs, grounding models, fact checking models.
• Building SDKs or developer frameworks adopted across multiple teams.
• On prem or hybrid AI deployment experience.
Overall Responsibility:
This role supports the design, development, and optimization of Arora’s enterprise data and ERP systems. This role reports directly under the Data Analytics Manager to improve financial reporting, support platform integrations, and build scalable data architecture that enables informed decision-making across the organization.
The position combines technical execution (SQL, automation, system configuration) with financial reporting support and cross-platform integration work to ensure accuracy, efficiency, and long-term system sustainability.
Essential Functions:
- Execute reporting and system requests in alignment with established data governance standards and reporting frameworks under the direction of the Data Analytics Manager.
- Contribute to the design of data models and system workflows that reduce manual processes and improve cross-functional data visibility.
- Support internal dashboards by creating backend data solutions and integrating with Vision.
- Provide system-level troubleshooting and ensure data consistency and reliability across platforms.
- Collaborate with teams to streamline processes through automation and data tools.
- Maintain documentation of data procedures, workflows, and system modifications.
- Support financial reporting and analysis by developing standardized, scalable reporting solutions aligned with company-wide data architecture.
- Assist in translating financial and operational requirements into structured reporting outputs and automation workflows.
- Assist in platform integrations (ERP, CRM, BI tools, and other enterprise systems) to support long-term architectural alignment and scalability.
Needed Skills:
- Ability to program in SQL at an expert level to assist data processes. Potential need for other programming language knowledge (Java, Python, etc.).
- Ability to create and maintain productive relationships with employees, clients, and vendors.
Education/Experience Minimum:
- 3-5 years of experience
- Strong programming skills having the ability to write complex queries.
- Preferred familiarity with all Microsoft platforms, including but not limited to Excel, Power BI, SharePoint, and SQL Server.
- Preferred experience with Deltek Vision v7.6 and VantagePoint
- Experience in building automated processes and data workflows.
- Strong problem-solving and attention to detail.
Company Description
Spiras Health delivers personalized healthcare services at home, focusing on patients at high risk of emergency events or hospitalizations. By addressing the unique needs of individuals with chronic diseases, multiple health conditions, and other social determinants of health, Spiras Health enhances quality of life and reduces healthcare costs. Through collaboration with health plan providers and care management teams, the company employs local Clinical Care Teams to deliver tailored home-based care. Leveraging innovative technology, Spiras Health ensures efficient and effective program delivery. For more information, visit .
Who We Are: Excellence, Innovation, Passion, Compassion, Communication
Spiras Health is a value-based, nurse practitioner led clinical provider of care-at-home and other health-related services to individuals with complex and polychronic needs. Spiras’ comprehensive approach to care delivery includes a combination of home-based services, telehealth, two-way digital communications, and remote patient monitoring. Proprietary predictive modeling identifies and assesses individuals with an elevated probability of avoidable costs. Spiras Health then develops actionable plans of care, addresses barriers including social determinants of health and delivers high quality patient care in collaboration with the patient’s treating physicians. Spiras’ innovative multi-modal care approach delivers improved satisfaction and clinical metrics as well as financial savings to its partners, through a geographically and economically scalable delivery model. Our culture is anchored on a promise of full accountability and integrity in everything we do.
How We Serve:
The Senior Data Analyst, in partnership with the Chief Financial Officer and Chief Commercial Officer, will design and develop market and opportunity analysis, monitor and measure contract performance, and model operational strategies and initiatives. Additionally, this role offers opportunity to engage with senior management and business unit leaders, contributing key insights that inform strategic decision making that ultimately supports the company’s growth.
Job Summary:
The Sr Analyst - Data & Analytics owns the integrity, analysis, and interpretation of payer claims data to support performance measurement, utilization management, and value-based care initiatives. This role is hands-on and accountable for claims QA, utilization metrics, and analytics outputs used by clinical, operations, and executive stakeholders.
This is an individual contributor role with strong technical depth in payer claims data and healthcare analytics platforms, including MedeAnalytics.
Key Responsibilities
Claims Data Quality & Governance
- Own QA processes for medical and pharmacy claims data, including eligibility, provider, diagnosis, procedure, and financial fields
- Identify, quantify, and resolve data anomalies (e.g., lag, duplication, missing claims, inconsistent coding)
- Partner with data engineering, vendors, and payers to remediate data quality issues
- Define and maintain claims data validation rules and documentation
Analytics & Utilization Management
- Analyze and report on core utilization metrics, including:
- Inpatient admits / 1,000
- ED visits / 1,000
- Readmissions
- PMPM cost trends
- Length of stay
- Avoidable admissions
- Apply payer benchmarks (national, regional, risk-adjusted where applicable) to contextualize performance
- Support matched cohort, pre/post, and trend analyses
- Translate claims data into actionable insights for clinical and operations teams
Platform Ownership (MedeAnalytics)
- Serve as a power user and analytics owner of the MedeAnalytics platform
- Build, validate, and QA dashboards, reports, and analytic views
- Ensure alignment between platform outputs and internal data models
- Act as internal SME for MedeAnalytics capabilities and limitations
Stakeholder & Team Collaboration
- Partner closely with:
- Clinical leadership
- Operations
- Finance
- Growth
- Present findings to senior leadership in clear, non-technical language
Required Qualifications
- 5–8 years of healthcare analytics experience
- Deep experience working with payer medical and pharmacy claims data
- Strong understanding of utilization metrics and healthcare cost drivers
- Hands-on experience with MedeAnalytics (required)
- Advanced SQL/SAS skills; experience with data visualization tools (Tableau, Power BI, or equivalent)
- Proven ability to QA complex healthcare datasets
- Bachelor’s degree in Analytics, Statistics, Health Informatics, or related field
Preferred Qualifications
- Experience supporting value-based care, MA, D-SNP, or ACO programs
- Familiarity with HEDIS, Stars, or CMS reporting concepts
- Experience with risk adjustment (HCCs)
Physical Requirements:
- Prolonged periods of sitting at a desk and working on a computer.
- This job operates in a hybrid professional environment free from noise and distraction. This role routinely uses standard office equipment such as computers and phones.
- The physical demands described here are representative of those that must be met by an employee to successfully perform the essential functions of this job.
- Specific vision abilities required by this job include close vision, distance vision, color vision, peripheral vision, depth perception and ability to adjust focus.
- While performing the duties of this job, the employee is regularly required to talk and hear.
- This position requires the ability to occasionally lift office products and supplies, up to twenty pounds.
EEOC STATEMENT:
- All qualified candidates will receive consideration for employment without regard to age, race, color, national origin, gender (including pregnancy, childbirth or medical conditions related to pregnancy or childbirth), gender identity or expression, religion, physical or mental disability, medical condition, legally protected genetic information, marital status, veteran status, or sexual orientation.
Senior Platform Architect
Reports To: Director of Engineering
Department: Engineering
Location: Hybrid - Atlanta, GA
What makes MTech different:
Purpose-Driven Work – Build technology that solves real problems for the world
Casual & Collaborative – No corporate bureaucracy, direct access to senior leadership
Innovation-Focused – Healthy innovation pipeline expanding into new segments and technologies
Transparent & Data-Driven – Clear metrics, objectives, and visibility into company performance
Modern Development – Robust development tools, training programs, and technical excellence
Flexibility & Balance – Flexible work environment that values results over presenteeism
Job Summary
The Senior Platform Architect will lead the technical architecture, design, and modernization of large-scale, multi-tenant enterprise SaaS platforms built on Azure and the .NET stack. This role requires mastery of distributed systems, cloud-native design, and advanced engineering practices to deliver highly available, performant, and secure solutions for global consumer-facing SaaS and Agentic AI products.
Responsibilities and Duties
Architectural Design & Transformation
- Lead migration from monolithic systems to modular monolith and microservices architectures using domain-driven design, bounded contexts, and decomposition strategies.
- Design multi-tenant SaaS platforms with advanced tenant isolation, resource partitioning, and elastic scaling using Azure services.
- Define and enforce architectural standards for .NET (C#), TypeScript, Angular, SQL Server, and Azure, including dependency injection, SOLID principles, asynchronous programming, and reactive patterns.
- Design and implement distributed systems: service orchestration, API gateway management, IoT, edge computing, distributed transactions, eventual consistency, CQRS, and event sourcing.
- Architect for cloud-native resiliency: circuit breakers, bulkheads, retries, failover, geo-redundancy, and disaster recovery using Azure App Services, Azure Functions, Service Bus, Cosmos DB, and Azure SQL.
- Develop and maintain architecture documentation, reference models, and decision records using industry frameworks (TOGAF, Zachman, C4 Model).
Performance Engineering & Observability
- Establish and monitor platform SLOs (latency, throughput, error rates, availability) mapped to customer SLAs.
- Architect and implement advanced caching strategies, indexing, and query optimization for SQL Server and NoSQL stores in coordination with Senior Data Architect, Data Engineers, and Database Admins.
- Design and implement telemetry pipelines: distributed tracing (OpenTelemetry), structured logging, metrics collection, and real-time dashboards for system health and diagnostics.
- Conduct performance profiling, load testing, and capacity planning for backend services and frontend applications.
Automation, Quality, and DevOps
- Architect and implement CI/CD pipelines with automated build, test, security scanning, and deployment workflows.
- Integrate static code analysis, code coverage, and quality gates into the development lifecycle.
- Design and enforce automated testing strategies: unit, integration, contract, and end-to-end tests for backend and frontend components.
- Develop infrastructure as code (IaC) solutions for repeatable, scalable cloud provisioning.
- Create incident response playbooks for rollback, failover, and recovery, drive down MTTR and automate remediation where possible.
Security, Compliance, and Governance
- Architect for multi-tenant security: authentication/authorization (OAuth2, OpenID Connect), encryption at rest and in transit, secrets management, and compliance with SOC 1, SOC 2, GDPR, and other regulatory standards.
- Implement secure software development lifecycle (SSDLC) practices, threat modeling, and vulnerability management, including ZDR, DLP, No Model Training policies with AI Models.
- Ensure architectural governance and alignment with enterprise frameworks (TOGAF, Zachman), maintain architecture decision records, and participate in architecture review boards.
Technical Leadership & Collaboration
- Mentor engineering teams in advanced architectural concepts, distributed systems, cloud-native development, and best practices.
- Collaborate with Data Architect, DevOps, IT Services, Engineering and Product Management teams to ensure platform extensibility, integration, and support for complex business requirements.
- Evaluate and integrate AI/ML services, advanced analytics, and developer productivity tools to enhance platform capabilities.
- Champion a culture of technical excellence, continuous improvement, and innovation.
Required Experience & Skills
- Minimum 10+ years in software/platform engineering, with at least 8 years in platform architecture for enterprise SaaS on Azure and .NET tech stack.
- Proven experience architecting and delivering large-scale, multi-tenant SaaS platforms for global consumer-facing products.
- Deep expertise in .NET (C#), Azure cloud services (App Services, Functions, Service Bus, Cosmos DB, SQL Server), Azure Open AI, Microsoft Agent Framework, TypeScript, Angular, CI/CD, automated testing, and observability.
- Mastery of distributed systems, cloud-native patterns, event-driven architectures, and microservices.
- Demonstrated success in technical debt reduction, performance engineering, and architectural modernization.
- Experience with architectural frameworks (TOGAF, Zachman, C4 Model), architectural governance, and compliance.
- Strong understanding of platform security, regulatory compliance, and multi-tenant SaaS challenges.
Success Metrics (First 12 Months)
- Reduction in platform-related incidents/support tickets.
- Improvement in deployment speed and release velocity.
- Reduction in MTTR for platform incidents.
- Achievement of modularization milestones (monolith decomposition, service rollout, platform observability in production).
- Increase in automated test coverage, code quality, and system performance metrics.
Preferred Skills & Certifications
- TOGAF, Zachman, or similar architecture certification.
- Advanced knowledge of event sourcing, CQRS, service mesh, and cloud-native security.
- Familiarity with semantic technologies, knowledge graphs, and AI/ML integration.
- Hands-on experience with infrastructure as code, automated testing tools, and modern DevOps practices.
- Strong background in platform security, compliance, and multi-tenant SaaS challenges.
EEO Statement
Integrated into our shared values is MTech’s commitment to diversity and equal employment opportunity. All qualified applicants will receive consideration for employment without regard to sex, age, race, color, creed, religion, national origin, disability, sexual orientation, gender identity, veteran status, military service, genetic information, or any other characteristic or conduct protected by law. MTech aims to maintain a global inclusive workplace where every person is regarded fairly, appreciated for their uniqueness, advanced according to their accomplishments, and encouraged to fulfill their highest potential. We
Job Opportunity: Data Product Manager
Location: Cleveland, Ohio/ Pittsburgh, Pennsylvania Hybrid
Duration: Full-Time
Key Responsibility
Product Ownership & Strategy
- Own end-to-end lifecycle of assigned data products, including vision, strategy, roadmap, and delivery.
- Collaborate with business units, analytics teams, and technology partners to prioritize features and enhancements.
- Define and track key product success metrics and adoption KPIs.
- Advocate for data products across the organization, ensuring alignment with enterprise data governance and cloud/data strategy initiatives.
- Stakeholder Engagement
- Act as the primary liaison between business stakeholders and data engineering/analytics teams.
- Gather and translate business requirements into actionable data product specifications.
- Facilitate cross-functional collaboration to resolve trade-offs and dependencies.
Data Governance & Quality
- Ensure data products comply with regulatory, security, and privacy requirements.
- Define and enforce data quality standards, lineage, and observability metrics.
- Collaborate with Data Governance, Risk, and IT Security teams to maintain compliance and audit readiness.
Technical Leadership
- Understand and leverage modern data technologies (e.g., relational databases, data warehouses, data lakes, ETL pipelines, cloud platforms, APIs, BI tools).
- Collaborate with data engineering teams on architecture, modeling, and platform decisions.
- Evaluate emerging technologies and recommend innovations to improve data products and processes.
Execution & Delivery
- Drive delivery of data products using agile methodologies.
- Prioritize backlog, manage sprints, and ensure timely delivery of features.
- Monitor and measure product performance, adoption, and business impact.
- Thought Leadership
- Contribute to the overall data product management framework and best practices within the bank.
- Promote a culture of data-driven decision-making and product-centric thinking.
Required Qualifications
- 8+ years of experience in data product management, data strategy, or analytics roles; experience in banking/financial services preferred.
- Strong understanding of core banking products (e.g., deposits, loans, payments) and associated operational data flows.
- Solid knowledge of data architecture, warehousing, BI, analytics, and cloud platforms.
- Proven ability to manage multiple data products simultaneously.
- Excellent communication, stakeholder management, and leadership skills.
- Experience with Agile/Scrum methodologies and data governance frameworks.
- Bachelor’s degree in Computer Science, Information Systems, Finance, or related field; advanced degree preferred.
Preferred Qualifications
- Hands-on experience with HiveQL, SQL, Tableau
- Understanding of regulatory reporting requirements (e.g., CCAR, FR Y-14, Basel).
- Exposure to semantic layers, or enterprise data product management frameworks.
About Pinterest:
Millions of people around the world come to our platform to find creative ideas, dream about new possibilities and plan for memories that will last a lifetime. At Pinterest, we're on a mission to bring everyone the inspiration to create a life they love, and that starts with the people behind the product.
Discover a career where you ignite innovation for millions, transform passion into growth opportunities, celebrate each other's unique experiences and embrace theflexibility to do your best work. Creating a career you love? It's Possible.
At Pinterest, AI isn't just a feature, it's a powerful partner that augments our creativity and amplifies our impact, and we're looking for candidates who are excited to be a part of that. To get a complete picture of your experience and abilities, we'll explore your foundational skills and how you collaborate with AI.
Through our interview process, what matters most is that you can always explain your approach, showing us not just what you know, but how you think. You can read more about our AI interview philosophy and how we use AI in our recruiting process here.
About tvScientific
tvScientific is the first and only CTV advertising platform purpose-built for performance marketers. We leverage massive data and cutting-edge science to automate and optimize TV advertising to drive business outcomes. Our solution combines media buying, optimization, measurement, and attribution in one, efficient platform. Our platform is built by industry leaders with a long history in programmatic advertising, digital media, and ad verification who have now purpose-built a CTV performance platform advertisers can trust to grow their business.
As a key member of our Data Science team, you'll be responsible for turning data into actionable insights, driving business decisions, and owning analytics projects from inception to completion. Day to day, your role will involve developing customized reporting and analytics tools to meet the unique needs of our clients.
What you'll do:
- Write production code in Python.
- Design, launch, and analyze experiments to optimize ad campaigns.
- Create reporting and analytics tools for the Data Science Team's customers.
- Translate insights from reporting tools into tvScientific's Data Science Product.
What we're looking for:
- Ability to write and review production-level code in Python.
- Strong statistics and ML fundamentals.
- Desire to work in a fast-growing environment-comfortable with ambiguity, ownership, scaling new products, and an experimental, iterative development process.
- Experience defining and measuring the utility of new reporting and analytics tools.
- Ability to quickly translate between DS Analytics & DS Product team.
- Strong track record of providing high-quality service to the DS team's customers/stakeholders.
- Adtech or CTV experience.
- Teaching experience is a plus.
- Big data experience with Scala, Apache Spark, Apache Beam, and AWS Athena is a plus.
In-Office Requirement Statement:
- We recognize that the ideal environment for work is situational and may differ across departments. What this looks like day-to-day can vary based on the needs of each organization or role.
Relocation Statement:
- This position is not eligible for relocation assistance. Visit ourPinFlexpage to learn more about our working model.
#LI-SM4
#LI-REMOTE
At Pinterest we believe the workplace should be equitable, inclusive, and inspiring for every employee. In an effort to provide greater transparency, we are sharing the base salary range for this position. The position is also eligible for equity. Final salary is based on a number of factors including location, travel, relevant prior experience, or particular skills and expertise.
Information regarding the culture at Pinterest and benefits available for this position can be found here.
US based applicants only$139,764—$287,749 USDOur Commitment to Inclusion:
Pinterest is an equal opportunity employer and makes employment decisions on the basis of merit. We want to have the best qualified people in every job. All qualified applicants will receive consideration for employment without regard to race, color, ancestry, national origin, religion or religious creed, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, age, marital status, status as a protected veteran, physical or mental disability, medical condition, genetic information or characteristics (or those of a family member) or any other consideration made unlawful by applicable federal, state or local laws. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. If you require a medical or religious accommodation during the job application process, please completethis formfor support.
About US Solar
US Solar is a developer, owner, operator, and financier of solar and solar + storage projects, with a focus on emerging state markets, community solar programs, distributed generation and small-scale utility projects nationwide.
US Solar is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We believe diverse teams and diverse perspectives lead to better outcomes and breakthrough thinking, which are differentiators in any business and fundamental to our long-term success.
About Sunscription
is US Solar’s platform for managing community solar subscriptions, billing, and customer operations across multiple markets. The platform supports both residential and commercial subscribers, enabling them to participate in community solar projects and receive savings on their electric bills.
The Subscription Data Operations Lead will join the Sunscription team and play a critical role in supporting contract execution, allocation accuracy, and financial closings by serving as the central owner of subscription data and documentation.
Position Description
The Subscription Data Operations Lead serves as the primary data input and coordination point for community solar subscriptions. This role owns the accuracy and flow of information across allocation spreadsheets, executed contracts, utility documentation, and internal systems.
The position requires strong execution within US Solar’s current Excel based allocation and mail merge workflows, while also supporting improvements to automation, documentation, and reporting processes over time. The successful candidate will be detail oriented, systems minded, and comfortable operating in a fast paced environment where processes continue to evolve.
Responsibilities
- Serve as the primary owner of subscription data across allocation spreadsheets, contracts, utility documentation, and internal platforms.
- Execute and maintain Excel based allocation models and mail merge workflows used to generate contracts and supporting documentation.
- Ensure consistency and accuracy between modeled allocations, executed agreements, and utility records through regular validation and reconciliation.
- Administer the execution and recording of commercial subscription agreements and associated costs to support long term contract management, cost, and revenue tracking.
- Track and analyze residential subscriber acquisition activity to monitor program progress, validate enrollment data, and support allocation planning
- Organize and maintain allocation lists, contracts, utility bills, and utility documentation required for enrollment, billing, and ongoing management.
- Create and maintain subscription summaries and documentation required for program and project financial closings.
- Track additional documentation requirements as projects move toward COD and financial close.
- Migrate deal information and documentation accurately and completely into internal subscriber billing and management platform
- Standardize documentation and reporting formats to improve consistency and accessibility for internal stakeholders.
- Identify opportunities to streamline manual processes and improve efficiency within existing Excel and document generation workflows.
- Collaborate with accounting, finance, asset management, and the Sunscription team to support data needs across the customer lifecycle.
- Create and deliver customer onboarding communication to support billing setup and closing requirements.
- Perform process improvement and administrative tasks to support the overall success of community solar subscriptions.
Requirements
- Bachelor’s degree and five or more years of professional experience in operations, data management, finance, or a related field.
- Exceptional attention to detail with strong organizational skills.
- Advanced proficiency in Microsoft Excel and experience managing complex spreadsheets.
- Experience executing document generation or mail merge workflows tied to structured data.
- Comfort working with contracts, utility documentation, and operational data.
- Ability to learn new tools and contribute to the gradual improvement of existing systems and processes.
- Strong communication skills and ability to collaborate across teams.
- Self directed and comfortable working independently in a fast paced environment.
- Interest in renewable energy and community solar programs.
- US Solar seeks individuals who are flexible, motivated, responsible, and eager to contribute to a collaborative team environment.
The Data Protection Software Engineering team delivers next-generation data protection and data availability enhancements and new products for a changing world. Working at the cutting edge, we design and develop software to protect data hosted across On-Prem, Public Cloud, Hybrid Cloud - all with the most advanced technologies, tools, software engineering methodologies and the collaboration of internal and external partners. Join us as a Software Principal Engineer on our Engineering Development team in Hopkinton, Massachusetts Development Center to do the best work of your career and make a profound social impact.
What you’ll achieve
As a Software Principal Engineer, you will develop next-generation cyber resiliency and data protection software for Dell's Data Protection team. You will be responsible for developing sophisticated software systems and solutions safeguarding enterprise-level customer data against data loss, cyber threats, and ransomware attacks—while driving through AI-powered solutions for enhanced cyber resiliency.
You will:
Develop next generations products and will have an opportunity to shape the best client technologies in the world
Contribute to the design and architecture of high-quality, complex systems and software/storage environments
Contribute to the development and implementation of test strategies for complex software products and systems
Prepare, review and evaluate software specifications based on the product requirements, and contribute to the designs and implement them as product features with specific focus on device and serviceability of client platforms
Take the first step towards your dream career
Every Dell Technologies team member brings something unique to the table. Here’s what we are looking for with this role:
Essential Skills:
5 - 8 Years of Software Development experience working in Agile SDLC, Bachelors or Masters in Computer Science
C/C++, Golang, Win 32/Storage API, Windows/Linux/Unix Programming, experience in Windows, Linux, Aix operating systems, Systems Programming, Networking, File systems and block layers
Strong understanding of CPU architecture, Multi-Threaded Environments, Concurrency Databases, Storage Technologies, stack and I/O data path, hands on exposure with AI technologies and proficient usage of AI tools for all facets of SDLC
Experience in Data Protection domain, Scalable Architecture, virtualization platforms like ESXI, Hyper-V and other hypervisors, excellent code detective and root cause analysis skills on a variety of platforms and languages
Experience in feature requirements, development and design of applications which interact closely with business, excellent problem solving & multi-tasking skills
Quality first mindset and attitude to take full ownership of the delivery from development to unit tests to end-to-end tests, should model behaviours to be adaptable to pick up new technologies and stay curious to drive innovation, profiling and Benchmarking techniques, good communication and technical leadership abilities to communicate the design effectively and mentor junior engineers
Desired Skill:
Experience in Operating system Clusters, Databases clusters, experience with Device drivers, and system architecture such as SCSI, cache, and message subsystem
Knowledge of AI/ML, GenAI and prompt engineering, knowledge of cloud application security & gateways
Compensation
Dell is committed to fair and equitable compensation practices. The salary range for this position is $150k - $194k.
Benefits and Perks of working at Dell Technologies
Your life. Your health. Supported by your benefits. You can explore the overall benefits experience that awaits you as a Dell Technologies team member — right now at
Who we are
We believe that each of us has the power to make an impact. That’s why we put our team members at the center of everything we do. If you’re looking for an opportunity to grow your career with some of the best minds and most advanced tech in the industry, we’re looking for you.
Dell Technologies is a unique family of businesses that helps individuals and organizations transform how they work, live and play. Join us to build a future that works for everyone because Progress Takes All of Us.
Dell Technologies is committed to the principle of equal employment opportunity for all employees and to providing employees with a work environment free of discrimination and harassment. Read the full Equal Employment Opportunity Policy here.
#LI-Onsite
Job ID: R282864