Cloudera Data Platform (CDP) Jobs in USA
9,951 positions found — Page 5
About Pinterest:
Millions of people around the world come to our platform to find creative ideas, dream about new possibilities and plan for memories that will last a lifetime. At Pinterest, we're on a mission to bring everyone the inspiration to create a life they love, and that starts with the people behind the product.
Discover a career where you ignite innovation for millions, transform passion into growth opportunities, celebrate each other's unique experiences and embrace the flexibility to do your best work. Creating a career you love? It's Possible.
At Pinterest, AI isn't just a feature, it's a powerful partner that augments our creativity and amplifies our impact, and we're looking for candidates who are excited to be a part of that. To get a complete picture of your experience and abilities, we'll explore your foundational skills and how you collaborate with AI.
Through our interview process, what matters most is that you can always explain your approach, showing us not just what you know, but how you think. You can read more about our AI interview philosophy and how we use AI in our recruiting process here.
About tvScientific
tvScientific is the first and only CTV advertising platform purpose-built for performance marketers. We leverage massive data and cutting-edge science to automate and optimize TV advertising to drive business outcomes. Our solution combines media buying, optimization, measurement, and attribution in one, efficient platform. Our platform is built by industry leaders with a long history in programmatic advertising, digital media, and ad verification who have now purpose-built a CTV performance platform advertisers can trust to grow their business.
As a Senior Data Engineer at tvScientific, you will be a key player in implementing the robust data infrastructure to power our data-heavy company. You will collaborate with our cross-functional teams to evolve our core data pipelines, design for efficiency as we scale, and store data in optimal engines and formats. This is an individual contributor role, where you will work to define and implement a strategic vision for data engineering within the organization.
What you'll do:
- Implement robust data infrastructure in AWS, using Spark with Scala
- Evolve our core data pipelines to efficiently scale for our massive growth
- Store data in optimal engines and formats
- Collaborate with our cross-functional teams to design data solutions that meet business needs
- Build out fault-tolerant batch and streaming pipelines
- Leverage and optimize AWS resources while designing for scale
- Collaborate closely with our Data Science and Product teams
How we'll define success:
- Successful implementation of scalable and efficient data infrastructure
- Timely delivery and optimization of data assets and APIs
- High attention to detail in implementation of automated data quality checks
- Effective collaboration with cross-functional teams
What we're looking for:
- Production data engineering experience
- Proficiency in Spark and Scala, with proven experience building data infrastructure in Spark using Scala
- Familiarity with data lakes, cloud warehouses, and storage formats
- Strong proficiency in AWS services
- Expertise in SQL for data manipulation and extraction
- Excellent written and verbal communication skills
- Bachelor's degree in Computer Science or a related field
Nice-to-haves:
- Experience in adtech
- Experience implementing data governance practices, including data quality, metadata management, and access controls
- Strong understanding of privacy-by-design principles and handling of sensitive or regulated data
- Familiarity with data table formats like Apache Iceberg, Delta
In-Office Requirement Statement:
- We recognize that the ideal environment for work is situational and may differ across departments. What this looks like day-to-day can vary based on the needs of each organization or role.
Relocation Statement:
- This position is not eligible for relocation assistance. Visit our PinFlex page to learn more about our working model.
#LI-SM4
#LI-REMOTE
At Pinterest we believe the workplace should be equitable, inclusive, and inspiring for every employee. In an effort to provide greater transparency, we are sharing the base salary range for this position. The position is also eligible for equity. Final salary is based on a number of factors including location, travel, relevant prior experience, or particular skills and expertise.
Information regarding the culture at Pinterest and benefits available for this position can be found here.
US-based applicants only: $123,696–$254,667 USD
Our Commitment to Inclusion:
Pinterest is an equal opportunity employer and makes employment decisions on the basis of merit. We want to have the best qualified people in every job. All qualified applicants will receive consideration for employment without regard to race, color, ancestry, national origin, religion or religious creed, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, age, marital status, status as a protected veteran, physical or mental disability, medical condition, genetic information or characteristics (or those of a family member) or any other consideration made unlawful by applicable federal, state or local laws. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. If you require a medical or religious accommodation during the job application process, please complete this form for support.
Be the one who makes a difference!
At Vertex Education we are a team of high achievers, courageous leaders, and passionate believers in changing lives through education. As a purpose-led education services provider, our mission is destined to benefit many and yet it starts with just one person inspired to work together with us to make a memorable and meaningful difference for our clients, schools, students, and communities. Be the one who makes a difference—with us.
The Marketing Analytics Analyst supports Legacy Traditional Schools by transforming marketing and enrollment data into actionable insights that improve student recruitment and family engagement. This role integrates data from multiple platforms, develops clear and effective dashboards, and delivers analysis that helps the marketing team make smarter, faster decisions.
Reporting to the Director of Business Intelligence, the Marketing Analytics Analyst serves as a strategic partner to marketing leadership by improving data quality, clarifying performance metrics, and identifying opportunities to optimize campaigns, resource allocation, and enrollment outcomes. This role helps ensure marketing efforts are measurable, efficient, and continuously improving so more families can find and connect with the educational opportunities Legacy provides.
Essential Functions:
1. Marketing Data Management and Governance:
- Collect, integrate, and validate data from web analytics, CRM, paid media, SIS, application, and marketing automation platforms.
- Own and maintain marketing data integrations and reporting workflows across tools such as Google Analytics, HubSpot, SchoolMint, and student information systems.
- Define, document, and maintain standardized marketing metrics, reporting logic, and data governance practices.
- Ensure marketing data is accurate, consistent, and reliable across platforms and reporting outputs.
2. Marketing Analytics and Insights:
- Analyze campaign performance, audience behavior, lead flow, and enrollment conversion trends to identify actionable opportunities.
- Design, support, and evaluate A/B tests to improve campaign effectiveness and inform future strategy.
- Develop forecasts related to lead volume, conversion, enrollment trends, and marketing performance.
- Track and interpret key performance metrics such as cost per lead, conversion rates, application yield, and enrollment outcomes.
- Translate complex data into clear insights and practical recommendations for marketing and business leaders.
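The performance metrics named above reduce to simple ratios. As a minimal illustration (the figures and function names here are hypothetical, not Legacy's actual reporting logic):

```python
def cost_per_lead(spend: float, leads: int) -> float:
    """Total media spend divided by leads generated."""
    if leads == 0:
        raise ValueError("no leads recorded")
    return spend / leads

def conversion_rate(conversions: int, leads: int) -> float:
    """Fraction of leads that converted (e.g. submitted an application)."""
    return conversions / leads if leads else 0.0

def application_yield(enrollments: int, applications: int) -> float:
    """Fraction of applications that resulted in enrollment."""
    return enrollments / applications if applications else 0.0

# Example: $12,000 spend, 480 leads, 96 applications, 60 enrollments
print(cost_per_lead(12_000, 480))   # 25.0
print(conversion_rate(96, 480))     # 0.2
print(application_yield(60, 96))    # 0.625
```

In practice these ratios would be computed per channel and per campaign so that budget pacing and ROI comparisons stay apples-to-apples.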
3. Reporting and Visualization:
- Build, maintain, and enhance dashboards and reports that communicate marketing performance to stakeholders.
- Automate recurring reporting processes to improve efficiency, reduce manual effort, and increase accuracy.
- Tailor reporting views and analyses to meet the needs of marketing leadership and cross-functional partners.
- Present findings in a clear, compelling, and decision-oriented manner.
4. Financial and Performance Analysis:
- Monitor campaign budgets, pacing, and performance against plan.
- Evaluate the return on investment of paid media and broader marketing initiatives.
- Identify opportunities to improve efficiency and maximize enrollment impact per dollar spent.
- Partner with marketing leaders to refine strategy based on financial, operational, and performance data.
5. Continuous Improvement and Innovation:
- Stay current on marketing analytics tools, trends, and best practices.
- Recommend and implement process improvements, tools, and analytical approaches that strengthen marketing decision-making.
- Identify opportunities to streamline internal workflows, improve reporting usability, and increase data accessibility.
- Support ongoing innovation in marketing measurement and analysis to better advance student recruitment goals.
Required Qualifications:
- Bachelor’s degree in Marketing, Data Analytics, Statistics, Business, or a related field.
- Minimum of 3 years of experience in marketing analytics, campaign analysis, business intelligence, or a related data-focused role.
- Proficiency in SQL and at least one programming language, such as Python or R.
- Hands-on experience with web analytics platforms, CRM systems, and marketing automation tools.
- Experience with data visualization and reporting tools such as Tableau, Power BI, Looker, or similar platforms.
- Strong understanding of data quality, governance, and metric standardization best practices.
- Demonstrated ability to synthesize data into actionable business insights and communicate findings effectively to non-technical stakeholders.
Preferred Qualifications:
- Certifications in Google Analytics, HubSpot, or related marketing analytics platforms.
- Experience with student information systems such as Infinite Campus or PowerSchool.
- Experience with application or enrollment platforms such as SchoolMint.
- Familiarity with paid media, programmatic advertising, and digital campaign measurement.
- Advanced Excel skills, including modeling, scenario analysis, and data manipulation
Be excited to be a part of our team and grow your career with us!
Be the one who enables us to positively impact over 258,000 students across multiple states while driving our growth forward so we can enrich even more lives. Be the one who helps us achieve excellence for over 226 schools that we support with academics, finance, technology, human resources, communications, marketing, facilities, construction, and food services. Be the one who is a diverse thinker, a team player, a smart risk taker, an innovator, and a difference maker by encouraging others to climb higher and reach farther to further education.
- Be yourself surrounded by wonderful people who care about you, value your unique skills, and lift you up.
- Be supported in your work by caring leaders and team members who want you to succeed.
- Be empowered to make a difference and climb higher and reach farther to change lives through education.
- Be well in all aspects of your life from your physical, mental, and emotional wellbeing to your finances.
- Enjoy industry-leading pay, rewards, and referral bonuses, with unlimited flexible paid time off for performance.
- Be able to care for your health and your family with comprehensive medical, dental and vision benefits and invest in your future with 401(k) plans with a 6% employer match on your contributions.
- Enhance your growth and development with mentoring and money to take training classes.
- Thrive in a welcoming, supportive, and inclusive environment where we treat others with fairness and respect, celebrate diversity, and elevate equality and inclusion as an equal opportunity employer.
Be the one who makes a difference!
With an innovative mind, a hungry heart, and engaging spirit you can change lives through education. Be a part of Vertex Education and let’s make a difference together. Apply Today!
Onsite, 12-month contract. Local candidates only; must be able to attend an onsite interview.
We are seeking an Enterprise Agentic Platform Specialist to lead the design, development, and delivery of enterprise-scale Data Science and Generative AI (GenAI) solutions.
This role will drive the implementation of AI agents, LLM orchestration frameworks, and enterprise automation pipelines, working cross-functionally with business stakeholders, data engineers, data scientists, DevOps teams, and UI developers.
The ideal candidate will combine hands-on GenAI engineering expertise with strong program delivery capabilities, ensuring solutions deliver measurable business outcomes while meeting enterprise standards for governance, security, and Responsible AI.
Key Responsibilities
AI Agent and GenAI Development
- Lead the end-to-end delivery of enterprise data science and GenAI solutions.
- Design, develop, and deploy AI agents using Microsoft Copilot Studio, Claude agent frameworks, and enterprise LLM orchestration patterns.
- Implement prompt engineering strategies, grounding techniques, and Retrieval-Augmented Generation (RAG) pipelines.
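The grounding-plus-RAG flow named above can be sketched in a few lines. This is a simplified illustration only: it uses naive word-overlap retrieval in place of embedding search, and the corpus documents are invented.

```python
from collections import Counter

def score(query: str, doc: str) -> int:
    """Naive lexical relevance: count of shared word tokens."""
    q = Counter(query.lower().split())
    d = Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k most relevant documents to ground the answer on."""
    return sorted(corpus, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Assemble a grounded prompt: retrieved context first, then the question."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, corpus))
    return (
        "Answer using ONLY the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

corpus = [
    "Expense reports are approved by the cost-center owner.",
    "VPN access requests go through the ServiceNow portal.",
    "Travel bookings require manager approval in SAP.",
]
print(build_prompt("Who approves expense reports", corpus))
```

A production pipeline would swap the lexical scorer for embedding similarity and add chunking, re-ranking, and citation of retrieved sources.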
Architecture and Integration
- Define architecture standards for agentic systems, including tool calling schemas, prompt frameworks, grounding flows, and RAG pipelines.
- Translate complex workflows into modular, automated, event-driven pipelines.
- Integrate AI solutions with enterprise systems via REST APIs, Power Platform connectors, and enterprise data services.
- Connect Copilot Studio agents to enterprise data sources such as SharePoint, Dataverse, SQL, and SAP.
Enterprise Platform Integration
- Oversee system integration across enterprise platforms including ServiceNow, SharePoint, Microsoft Teams, Power Automate, and Azure APIs.
- Design MCP-based agent architectures, integration layers, authentication flows (OAuth / Microsoft Entra), and system messaging frameworks.
Governance, Security and Responsible AI
- Collaborate with AI architects, MLOps teams, and security teams to enforce Responsible AI standards, data governance policies, security and access control frameworks, and model safety guidelines.
- Implement agent observability frameworks including logging, telemetry instrumentation, latency metrics, error tracking, and automated remediation workflows.
Delivery and Program Management
- Lead cross-functional teams delivering AI solutions across data engineering, data science, DevOps, and UX teams.
- Manage delivery using Agile / Scrum or hybrid PM methodologies.
- Track dependencies, risks, sprint alignment, and release orchestration.
Metrics and Performance Monitoring
- Define KPIs and operational dashboards for AI automation, including cycle time reduction, accuracy improvements, governance compliance, and agent uptime and reliability.
Required Qualifications
- Hands-on experience delivering enterprise Data Science and GenAI solutions.
- Experience designing and deploying AI agents using Microsoft Copilot Studio or similar agent frameworks.
- Strong knowledge of LLMs, prompt engineering, and Retrieval-Augmented Generation (RAG).
- Experience integrating AI solutions with enterprise platforms and APIs.
- Understanding of MLOps, governance frameworks, and Responsible AI standards.
- Experience working in Agile delivery environments with cross-functional teams.
Preferred Qualifications
- Hands-on expertise building agents in Microsoft Copilot Studio.
- Familiarity with agent frameworks such as LangChain, AutoGen, CrewAI, and OpenAI Assistants / Functions APIs.
- Experience implementing enterprise AI observability and monitoring frameworks.
- Strong understanding of enterprise security, authentication, and governance models.
Thanks,
Sri Vardhan Chilakamukku
Infobahn SoftWorld Inc.
Job Title: Sr AI Platform Engineer- AI Platform Engineer (Guardrails, Observability & Evaluation Infrastructure)
Location: Charlotte, NC, USA (3 days onsite)
Role Overview
We are seeking an AI Platform Engineer to design and build the foundational components that power enterprise-scale GenAI applications. This includes data guardrails, model safety tooling, observability pipelines, evaluation harnesses, and standardized logging/monitoring frameworks. This role is critical for enabling safe, reliable, and compliant AI development across multiple use cases, teams, and business units. The goal is to create the common platform services that AI teams will build upon.
Key Responsibilities
1. Guardrails, Safety & Governance
• Design and implement data guardrail frameworks (pre-processing, redaction, PII/PHI filtering, DLP integration, prompt defenses).
• Build "Model Armor" components such as:
o Input validation & sanitization
o Prompt injection defenses
o Harmful content detection & policy enforcement
o Output filtering, fact checking, grounding checks
• Integrate safety tooling (policy engines, classifiers, DLP APIs, safety models).
• Collaborate with Security, Compliance, and Data Privacy teams to ensure frameworks meet enterprise governance requirements.
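The pre-processing guardrails above (redaction, input validation, prompt-injection defense) can be illustrated with a minimal sketch. The regex patterns and injection phrases here are deliberately simplistic stand-ins; a real deployment would use a DLP service and classifier-based detection.

```python
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

INJECTION_MARKERS = ("ignore previous instructions", "disregard your system prompt")

def redact_pii(text: str) -> str:
    """Replace matched PII spans with typed placeholders before text reaches a model."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

def check_injection(text: str) -> bool:
    """Flag inputs containing known prompt-injection phrases (heuristic, not exhaustive)."""
    lowered = text.lower()
    return any(marker in lowered for marker in INJECTION_MARKERS)

def guard_input(text: str) -> str:
    """Pre-processing guardrail: reject suspected injections, redact PII otherwise."""
    if check_injection(text):
        raise ValueError("input rejected: possible prompt injection")
    return redact_pii(text)

print(guard_input("Contact jane.doe@example.com about SSN 123-45-6789"))
# Contact [EMAIL] about SSN [SSN]
```

Output-side checks (harmful content, grounding verification) would mirror this shape on the model's response before it reaches the user.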
2. Observability Frameworks
• Build and maintain observability pipelines using tools like Arize AI (tracing, quality metrics, dataset drift/hallucination tracking, embedding monitoring).
• Define and enforce platform-wide standards for:
o Tracing LLM calls
o Token usage and cost monitoring
o Latency and reliability metrics
o Prompt/model version tracking
• Provide reusable SDKs or middleware for engineering teams to adopt observability with minimal friction.
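The "minimal-friction middleware" idea above can be sketched as a tracing decorator that teams wrap around their inference calls. This is a hypothetical sketch: the model name is a stub and token counts use whitespace splitting in place of a real tokenizer.

```python
import time
from functools import wraps

TRACES: list[dict] = []  # stand-in for an observability backend

def trace_llm(model: str, version: str):
    """Decorator capturing latency, rough token counts, and model version per call."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(prompt: str, **kwargs):
            start = time.perf_counter()
            output = fn(prompt, **kwargs)
            TRACES.append({
                "model": model,
                "version": version,          # prompt/model version tracking
                "latency_ms": (time.perf_counter() - start) * 1000,
                # Whitespace split as a stand-in for real tokenizer counts.
                "prompt_tokens": len(prompt.split()),
                "output_tokens": len(output.split()),
            })
            return output
        return wrapper
    return decorator

@trace_llm(model="stub-model", version="2025-01")
def complete(prompt: str) -> str:
    return "stub response to: " + prompt   # placeholder for a real inference call

complete("Summarize the quarterly report")
print(TRACES[0]["prompt_tokens"], TRACES[0]["output_tokens"])  # 4 7
```

Because adoption is one decorator line, engineering teams get tracing, cost accounting, and version attribution without touching their call sites' logic.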
3. Logging, Monitoring & Telemetry
• Design standardized LLM-specific logging schemas, including:
o Inputs/outputs
o Model metadata
o Retrieval metadata
o Safety flags
o User context and attribution
• Build monitoring dashboards for performance, cost, anomalies, errors, and safety events.
• Implement alerting and SLOs/SLIs for LLM inference systems.
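A standardized schema covering the fields listed above might look like the following dataclass. Field names here are illustrative, not an established standard.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class LLMLogRecord:
    """Standardized per-call log record for LLM-specific telemetry."""
    request_id: str
    user_id: str                      # user context and attribution
    model: str
    model_version: str                # model metadata
    prompt: str                       # input (post-redaction)
    output: str
    retrieved_doc_ids: list[str] = field(default_factory=list)  # retrieval metadata
    safety_flags: list[str] = field(default_factory=list)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = LLMLogRecord(
    request_id="req-001", user_id="u-42",
    model="stub-model", model_version="2025-01",
    prompt="[EMAIL] asked about PTO policy", output="PTO accrues monthly.",
    retrieved_doc_ids=["hr-handbook-p12"],
)
print(asdict(record)["retrieved_doc_ids"])  # ['hr-handbook-p12']
```

Fixing the schema once lets dashboards, anomaly detection, and safety-event alerting key off stable field names across every team's pipelines.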
4. Evaluation Infrastructure
• Architect and maintain evaluation harnesses for GenAI systems, including:
o RAG evaluation (faithfulness, relevance, hallucination risk)
o Summarization/QA evaluation
o Human-in-the-loop review workflows
o Automated eval pipelines integrated into CI/CD
• Support frameworks such as RAGAS, G-Eval, rubric scoring, pairwise comparisons, and test case generation.
• Build reusable tooling for teams to write, run, and track model evaluations.
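A minimal evaluation harness for the faithfulness/hallucination checks named above might look like this sketch. The token-overlap metric is a crude stand-in for the LLM-graded metrics a framework like RAGAS provides, and the test cases are invented.

```python
def faithfulness(answer: str, sources: list[str]) -> float:
    """Fraction of answer tokens that appear in the retrieved sources.
    A crude proxy for groundedness; low scores suggest hallucination risk."""
    source_tokens = set(" ".join(sources).lower().split())
    answer_tokens = answer.lower().split()
    if not answer_tokens:
        return 0.0
    supported = sum(1 for t in answer_tokens if t in source_tokens)
    return supported / len(answer_tokens)

def run_eval(cases: list[dict], threshold: float = 0.5) -> dict:
    """Run a batch of test cases and report pass/fail counts for CI gating."""
    results = [faithfulness(c["answer"], c["sources"]) >= threshold for c in cases]
    return {"passed": sum(results), "failed": len(results) - sum(results)}

cases = [
    {"answer": "refunds take 5 days", "sources": ["refunds take 5 business days"]},
    {"answer": "we offer lifetime warranties", "sources": ["90-day warranty only"]},
]
print(run_eval(cases))  # {'passed': 1, 'failed': 1}
```

Wiring `run_eval` into CI turns regressions in groundedness into failing builds rather than production incidents.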
5. Platform Engineering & Reusable Components
• Develop shared libraries, APIs, and services for:
o Prompt management/versioning
o Embedding pipelines and model wrappers
o Retrieval adapters
o Common data loaders and document preprocessing
o Tool/function schemas
• Drive consistency across teams with standards, reference architectures, and best practices.
• Review system designs across use cases to ensure alignment to platform patterns.
6. Collaboration & Enablement
• Partner with AI engineers, product teams, and data scientists to understand cross-cutting needs and convert them into reusable platform features.
• Create documentation, onboarding guides, examples, and developer tooling.
• Provide internal training (brown bags, workshops) on guardrails, observability, and evaluation frameworks.
Required Qualifications
Technical Skills
• 5–10+ years software engineering or ML infrastructure experience.
• Strong Python engineering fundamentals (FastAPI, async, typing/Pydantic, testing).
• Experience with model safety/guardrails approaches (prompt injection defense, PII redaction, toxicity filters, policy enforcement).
• Hands-on experience with Arize AI, LangSmith, or similar LLM observability platforms.
• Experience creating evaluation frameworks using RAGAS, G-Eval, or custom rubric systems.
• Strong familiarity with vector databases (Pinecone, Weaviate, Milvus), embeddings, and retrieval pipelines.
• Solid understanding of LLM architectures, tokenization, embeddings, context limits, and RAG patterns.
• Experience in cloud (GCP preferred), Kubernetes/GKE, containers, and CI/CD.
• Strong understanding of security, governance, DLP, data privacy, RBAC, and enterprise compliance requirements.
Soft Skills
• Strong documentation and communication skills.
• Ability to influence engineering teams and standardize best practices.
• Comfortable working across multiple stakeholders—platform, security, ML engineering, product.
Nice to Have
• Experience with LangChain/LangGraph or LlamaIndex orchestrations.
• Experience with Rebuff, Protect AI, or similar LLM security tooling.
• Experience with GCP Vertex AI pipelines, Model Monitoring, and Vector Search.
• Familiarity with knowledge graphs, grounding models, fact checking models.
• Building SDKs or developer frameworks adopted across multiple teams.
• On prem or hybrid AI deployment experience.
Title: Lead Software Engineer - AI Application Platform
Mode of interview: 1 round, in person
Location: Must be in Charlotte, NC to work Hybrid Model
Main Skill set: Python, AI and Angular
Description:
Lead Software Engineer - AI Application Platform
The Opportunity
We are seeking a Lead Software Engineer to guide the architectural development and execution of the client, a sophisticated AI-powered application generation platform. This role suits a proven technical leader with deep, hands-on expertise across the full software stack who finds enabling a team to build better software deeply satisfying.
You will shape critical systems, mentor senior and junior developers through complex technical decisions, conduct rigorous code reviews across multiple technology domains, and directly influence the platform's trajectory through strategic engineering leadership.
This is for someone who:
- Engages thoughtfully when a junior developer asks targeted architectural questions—because you see an opportunity to shape how someone thinks about systems
- Takes time to explain subtle type-safety issues in code review, understanding that feedback is a teaching moment
- Can present architecture clearly to executives and confidently explain both what we're building and why it matters
- Finds more energy in the code your team ships than in the code you write individually
- Has proven depth across the full stack and a track record of developing engineers into stronger contributors
This is not a single-language codebase. The role requires the ability to make informed decisions on TypeScript design patterns, Python FastAPI architecture, AWS security posture, and Terraform state management in context with one another.
The Platform Challenge
The client is fundamentally a Platform-as-a-Service (PaaS) for dynamic application generation. This differs from building a traditional SaaS product. Rather than building one application, you're building infrastructure that enables users to build their own applications.
What this means architecturally:
- Dynamic Content Generation at Scale: Unlike traditional development where code is fixed, AppGen generates JSON form schemas, validation rules, and UI layouts on demand. The FormBuilder component doesn't know what fields will exist until runtime. The layout engine renders user-designed screens from configuration, not hardcoded templates.
- Multi-Tenant Isolation & Data Segregation: Each user gets their own generated app, potentially deployed to their own AWS environment. The architecture must account for data isolation, namespace management, and cross-tenant security considerations.
- User-Defined Data Structures: Traditional applications are built with predetermined database schemas. AppGen works differently—form structures, field types, and validation rules emerge from user conversations with Claude. This brings engineering challenges: How do you safely execute validation logic that users define? When users modify existing forms that have thousands of submissions, how do you maintain backward compatibility? How do you version schemas?
- Content Rendering, Not Code Generation: Unlike traditional no-code platforms where users drag-and-drop to build, AppGen uses AI instead. Users chat with Claude, Claude generates a form schema, and your platform renders that schema reliably across diverse field types, validation patterns, and workflows. The system renders configurations for immediate use, rather than generating code for later deployment.
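The schema-as-data idea described above can be sketched in a few lines. This is a hypothetical illustration: the schema shape, field types, and validator set are invented for the example, not the platform's actual format.

```python
FIELD_VALIDATORS = {
    "text": lambda v: isinstance(v, str),
    "number": lambda v: isinstance(v, (int, float)) and not isinstance(v, bool),
    "email": lambda v: isinstance(v, str) and "@" in v,
}

def validate_submission(schema: dict, submission: dict) -> list[str]:
    """Validate a submission against a runtime-generated form schema.
    The schema is data, not code: fields and rules are unknown until runtime."""
    errors = []
    for f in schema["fields"]:
        name, ftype = f["name"], f["type"]
        value = submission.get(name)
        if value is None:
            if f.get("required", False):
                errors.append(f"{name}: required")
            continue
        if not FIELD_VALIDATORS[ftype](value):
            errors.append(f"{name}: expected {ftype}")
    return errors

# Hypothetical schema, as an AI assistant might emit it:
schema = {"fields": [
    {"name": "email", "type": "email", "required": True},
    {"name": "age", "type": "number"},
]}
print(validate_submission(schema, {"age": "thirty"}))
# ['email: required', 'age: expected number']
```

Because validation is driven by a whitelisted registry of field types rather than user-supplied code, the platform can evolve schemas safely; versioning then becomes a question of migrating stored schemas, not redeploying code.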
Experience that directly transfers:
- You've contributed to or led development of low-code/no-code platforms (visual builders, workflow engines, configuration-driven systems)
- You've worked on SaaS platforms with multi-tenant architecture and understand isolation strategies, rate limiting, and per-customer customization
- You've built dynamic rendering systems that handle unknown/arbitrary schemas at runtime
- You've addressed the unique challenges of treating data configurations as user-created content (form builders, report designers, automation workflows)
- You understand the difference between platform infrastructure and applications built on that infrastructure—and the architectural implications of each
Core Responsibilities
1. Technical Architecture & Systems Thinking (40%)
- Shape architectural decisions across the full stack: How should the component layer handle dynamically generated forms? What's the right approach to validate complex cross-field dependencies in the FormBuilder? What separation of concerns makes sense between the Generator Lambda and the Parent Backend?
- Guide architecture discussions: Help senior developers think through design trade-offs. Should we use NgRx or Angular signals for this feature? When does a new Lambda function become worthwhile given cold-start costs?
- Identify and address system-wide bottlenecks: Work across layers to improve performance. Explore Lambda cold-start optimization, RDS query efficiency, and DynamoDB access patterns.
- Establish patterns and guide consistency: Define coding conventions that work across Python, TypeScript, and Terraform. Help new team members understand the reasoning behind architectural choices.
- What this looks like in practice: You're able to justify architectural decisions with technical reasoning. When someone questions an approach, you can explain the trade-offs you considered. You can write code in multiple languages to validate an approach if needed.
2. Code Review & Technical Guidance (30%)
- Full-stack PR reviews: Review Python FastAPI endpoints and Angular components with equal depth, understanding how they interact.
- Deep technical review: Catch issues thoughtful code review can surface:
- RxJS Observable lifecycle and potential memory-leak patterns in Angular
- Query efficiency and data loading patterns in SQLAlchemy
- Terraform module organization and state management implications
- Type safety and TypeScript coverage gaps
- AWS security and IAM configurations
- Educational feedback: Your code reviews help the team learn. When you identify an issue, reviewees understand not just what changed, but how to think about similar problems in the future.
- Define quality expectations: Work with the team to establish what "production-ready" means for this platform and support consistent application of those standards.
- What this requires: Experience reviewing code across teams and multiple languages. You know how to write feedback that resonates—clear, constructive, and focused on helping people improve.
3. Mentorship & Team Development (20%)
- Expand specialist capabilities: Help backend specialists learn to contribute to the forms-engine. Support frontend experts in understanding FastAPI patterns.
- Accelerate junior developers: Pair on complex problems. Explain the reasoning behind patterns like DataState. Connect architectural choices to implementation details and performance implications.
- Identify and address gaps: Recognize when someone is struggling with a technology and provide targeted support—training, pair programming, or guidance through architectural decisions.
- Create growth opportunities: Stretch the team into new areas. A backend engineer working on their first Terraform contribution. A frontend specialist implementing an AWS Lambda authorizer.
- What this requires: Genuine investment in people's growth. You've walked developers through major transitions (generalist to specialist, specialist to full-stack, or into new technology areas). You understand that team strength grows when individuals expand their capabilities.
4. Stakeholder Communication & Technical Leadership (10%)
- Explain to diverse audiences: Translate architectural choices and trade-offs for product managers, executives, and business stakeholders. Connect "optimizing DynamoDB queries" to "improving form submission latency by 30%."
- Shape technical direction: Contribute the engineering perspective on feasibility, risk, and what unlocks future capabilities.
- Support release confidence: You understand the code changes, comprehend the risks, and know what to monitor. You can stand behind releases.
Required Qualifications
Technical Skills
Frontend (Production Experience)
- 5+ years of Angular (including handling version migrations, optimizing change detection, and guiding teams through reactive patterns)
- Strong TypeScript skills with generics, discriminated unions, and strict mode
- RxJS depth: You understand hot vs. cold observables, unsubscription patterns, and can identify potential memory issues in reviews
- NgRx state management: You've designed stores at scale, optimized selectors, and evaluated architectural implications
- CSS Grid & Responsive Design: You can assess component hierarchy and layout decisions
- Material Design: You've worked within it and know when and how to extend it
Backend (Production Experience)
- 5+ years of Python (async/await, type hints, data modeling)
- FastAPI production experience: session management, dependency injection, middleware
- SQL and ORMs (SQLAlchemy): You write efficient queries and review them critically
- AWS services: Understanding of Lambda behavior, IAM least-privilege patterns, VPC networking
- REST API design: Versioning, error handling, idempotency
- Testing frameworks: pytest, testing st
Remote working/work at home options are available for this role.
Overall Responsibility:
This role supports the design, development, and optimization of Arora’s enterprise data and ERP systems. This role reports directly under the Data Analytics Manager to improve financial reporting, support platform integrations, and build scalable data architecture that enables informed decision-making across the organization.
The position combines technical execution (SQL, automation, system configuration) with financial reporting support and cross-platform integration work to ensure accuracy, efficiency, and long-term system sustainability.
Essential Functions:
- Execute reporting and system requests in alignment with established data governance standards and reporting frameworks under the direction of the Data Analytics Manager.
- Contribute to the design of data models and system workflows that reduce manual processes and improve cross-functional data visibility.
- Support internal dashboards by creating backend data solutions and integrating with Vision.
- Provide system-level troubleshooting and ensure data consistency and reliability across platforms.
- Collaborate with teams to streamline processes through automation and data tools.
- Maintain documentation of data procedures, workflows, and system modifications.
- Support financial reporting and analysis by developing standardized, scalable reporting solutions aligned with company-wide data architecture.
- Assist in translating financial and operational requirements into structured reporting outputs and automation workflows.
- Assist in platform integrations (ERP, CRM, BI tools, and other enterprise systems) to support long-term architectural alignment and scalability.
Needed Skills:
- Expert-level SQL programming skills to support data processes; knowledge of other programming languages (Java, Python, etc.) may also be needed.
- Ability to create and maintain productive relationships with employees, clients, and vendors.
Education/Experience Minimum:
- 3-5 years of experience
- Strong programming skills, including the ability to write complex queries.
- Familiarity with Microsoft platforms, including but not limited to Excel, Power BI, SharePoint, and SQL Server, preferred.
- Experience with Deltek Vision v7.6 and VantagePoint preferred.
- Experience in building automated processes and data workflows.
- Strong problem-solving skills and attention to detail.
Senior Platform Architect
Reports To: Director of Engineering
Department: Engineering
Location: Hybrid - Atlanta, GA
What makes MTech different:
Purpose-Driven Work – Build technology that solves real problems for the world
Casual & Collaborative – No corporate bureaucracy, direct access to senior leadership
Innovation-Focused – Healthy innovation pipeline expanding into new segments and technologies
Transparent & Data-Driven – Clear metrics, objectives, and visibility into company performance
Modern Development – Robust development tools, training programs, and technical excellence
Flexibility & Balance – Flexible work environment that values results over presenteeism
Job Summary
The Senior Platform Architect will lead the technical architecture, design, and modernization of large-scale, multi-tenant enterprise SaaS platforms built on Azure and the .NET stack. This role requires mastery of distributed systems, cloud-native design, and advanced engineering practices to deliver highly available, performant, and secure solutions for global consumer-facing SaaS and Agentic AI products.
Responsibilities and Duties
Architectural Design & Transformation
- Lead migration from monolithic systems to modular monolith and microservices architectures using domain-driven design, bounded contexts, and decomposition strategies.
- Design multi-tenant SaaS platforms with advanced tenant isolation, resource partitioning, and elastic scaling using Azure services.
- Define and enforce architectural standards for .NET (C#), TypeScript, Angular, SQL Server, and Azure, including dependency injection, SOLID principles, asynchronous programming, and reactive patterns.
- Design and implement distributed systems: service orchestration, API gateway management, IoT, edge computing, distributed transactions, eventual consistency, CQRS, and event sourcing.
- Architect for cloud-native resiliency: circuit breakers, bulkheads, retries, failover, geo-redundancy, and disaster recovery using Azure App Services, Azure Functions, Service Bus, Cosmos DB, and Azure SQL.
- Develop and maintain architecture documentation, reference models, and decision records using industry frameworks (TOGAF, Zachman, C4 Model).
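The resiliency patterns named above (retries, circuit breakers, failover) can be sketched briefly. This posting's stack is .NET, where a library such as Polly typically provides these policies; Python is used here only for a compact illustration, and `flaky_call` is a stand-in for any transient remote dependency:

```python
# Illustrative retry-with-exponential-backoff sketch, not production code.
import time

def retry(call, attempts=3, base_delay=0.01):
    """Retry `call` with exponential backoff, re-raising on final failure."""
    for attempt in range(attempts):
        try:
            return call()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, ...

failures = {"left": 2}

def flaky_call():
    # Fails twice, then succeeds -- simulating a transient outage.
    if failures["left"] > 0:
        failures["left"] -= 1
        raise ConnectionError("transient")
    return "ok"

result = retry(flaky_call)
```

A circuit breaker adds one more piece on top of this: after repeated failures it stops calling the dependency entirely for a cooldown window, protecting both caller and callee.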
Performance Engineering & Observability
- Establish and monitor platform SLOs (latency, throughput, error rates, availability) mapped to customer SLAs.
- Architect and implement advanced caching strategies, indexing, and query optimization for SQL Server and NoSQL stores in coordination with Senior Data Architect, Data Engineers, and Database Admins.
- Design and implement telemetry pipelines: distributed tracing (OpenTelemetry), structured logging, metrics collection, and real-time dashboards for system health and diagnostics.
- Conduct performance profiling, load testing, and capacity planning for backend services and frontend applications.
Automation, Quality, and DevOps
- Architect and implement CI/CD pipelines with automated build, test, security scanning, and deployment workflows.
- Integrate static code analysis, code coverage, and quality gates into the development lifecycle.
- Design and enforce automated testing strategies: unit, integration, contract, and end-to-end tests for backend and frontend components.
- Develop infrastructure as code (IaC) solutions for repeatable, scalable cloud provisioning.
- Create incident response playbooks for rollback, failover, and recovery; drive down MTTR and automate remediation where possible.
Security, Compliance, and Governance
- Architect for multi-tenant security: authentication/authorization (OAuth2, OpenID Connect), encryption at rest and in transit, secrets management, and compliance with SOC 1, SOC 2, GDPR, and other regulatory standards.
- Implement secure software development lifecycle (SSDLC) practices, threat modeling, and vulnerability management, including ZDR, DLP, and no-model-training policies for AI models.
- Ensure architectural governance and alignment with enterprise frameworks (TOGAF, Zachman), maintain architecture decision records, and participate in architecture review boards.
Technical Leadership & Collaboration
- Mentor engineering teams in advanced architectural concepts, distributed systems, cloud-native development, and best practices.
- Collaborate with Data Architect, DevOps, IT Services, Engineering and Product Management teams to ensure platform extensibility, integration, and support for complex business requirements.
- Evaluate and integrate AI/ML services, advanced analytics, and developer productivity tools to enhance platform capabilities.
- Champion a culture of technical excellence, continuous improvement, and innovation.
Required Experience & Skills
- 10+ years in software/platform engineering, with at least 8 years in platform architecture for enterprise SaaS on the Azure and .NET tech stack.
- Proven experience architecting and delivering large-scale, multi-tenant SaaS platforms for global consumer-facing products.
- Deep expertise in .NET (C#), Azure cloud services (App Services, Functions, Service Bus, Cosmos DB, SQL Server), Azure OpenAI, Microsoft Agent Framework, TypeScript, Angular, CI/CD, automated testing, and observability.
- Mastery of distributed systems, cloud-native patterns, event-driven architectures, and microservices.
- Demonstrated success in technical debt reduction, performance engineering, and architectural modernization.
- Experience with architectural frameworks (TOGAF, Zachman, C4 Model), architectural governance, and compliance.
- Strong understanding of platform security, regulatory compliance, and multi-tenant SaaS challenges.
Success Metrics (First 12 Months)
- Reduction in platform-related incidents/support tickets.
- Improvement in deployment speed and release velocity.
- Reduction in MTTR for platform incidents.
- Achievement of modularization milestones (monolith decomposition, service rollout, platform observability in production).
- Increase in automated test coverage, code quality, and system performance metrics.
Preferred Skills & Certifications
- TOGAF, Zachman, or similar architecture certification.
- Advanced knowledge of event sourcing, CQRS, service mesh, and cloud-native security.
- Familiarity with semantic technologies, knowledge graphs, and AI/ML integration.
- Hands-on experience with infrastructure as code, automated testing tools, and modern DevOps practices.
- Strong background in platform security, compliance, and multi-tenant SaaS challenges.
EEO Statement
Integrated into our shared values is MTech’s commitment to diversity and equal employment opportunity. All qualified applicants will receive consideration for employment without regard to sex, age, race, color, creed, religion, national origin, disability, sexual orientation, gender identity, veteran status, military service, genetic information, or any other characteristic or conduct protected by law. MTech aims to maintain a global inclusive workplace where every person is regarded fairly, appreciated for their uniqueness, advanced according to their accomplishments, and encouraged to fulfill their highest potential.
Job Opportunity: Data Product Manager
Location: Cleveland, Ohio/ Pittsburgh, Pennsylvania Hybrid
Duration: Full-Time
Key Responsibility
Product Ownership & Strategy
- Own end-to-end lifecycle of assigned data products, including vision, strategy, roadmap, and delivery.
- Collaborate with business units, analytics teams, and technology partners to prioritize features and enhancements.
- Define and track key product success metrics and adoption KPIs.
- Advocate for data products across the organization, ensuring alignment with enterprise data governance and cloud/data strategy initiatives.
Stakeholder Engagement
- Act as the primary liaison between business stakeholders and data engineering/analytics teams.
- Gather and translate business requirements into actionable data product specifications.
- Facilitate cross-functional collaboration to resolve trade-offs and dependencies.
Data Governance & Quality
- Ensure data products comply with regulatory, security, and privacy requirements.
- Define and enforce data quality standards, lineage, and observability metrics.
- Collaborate with Data Governance, Risk, and IT Security teams to maintain compliance and audit readiness.
Technical Leadership
- Understand and leverage modern data technologies (e.g., relational databases, data warehouses, data lakes, ETL pipelines, cloud platforms, APIs, BI tools).
- Collaborate with data engineering teams on architecture, modeling, and platform decisions.
- Evaluate emerging technologies and recommend innovations to improve data products and processes.
Execution & Delivery
- Drive delivery of data products using agile methodologies.
- Prioritize backlog, manage sprints, and ensure timely delivery of features.
- Monitor and measure product performance, adoption, and business impact.
Thought Leadership
- Contribute to the overall data product management framework and best practices within the bank.
- Promote a culture of data-driven decision-making and product-centric thinking.
Required Qualifications
- 8+ years of experience in data product management, data strategy, or analytics roles; experience in banking/financial services preferred.
- Strong understanding of core banking products (e.g., deposits, loans, payments) and associated operational data flows.
- Solid knowledge of data architecture, warehousing, BI, analytics, and cloud platforms.
- Proven ability to manage multiple data products simultaneously.
- Excellent communication, stakeholder management, and leadership skills.
- Experience with Agile/Scrum methodologies and data governance frameworks.
- Bachelor's degree in Computer Science, Information Systems, Finance, or related field; advanced degree preferred.
Preferred Qualifications
- Hands-on experience with HiveQL, SQL, Tableau
- Understanding of regulatory reporting requirements (e.g., CCAR, FR Y-14, Basel).
- Exposure to semantic layers, or enterprise data product management frameworks.
Company Description
Spiras Health delivers personalized healthcare services at home, focusing on patients at high risk of emergency events or hospitalizations. By addressing the unique needs of individuals with chronic diseases, multiple health conditions, and other social determinants of health, Spiras Health enhances quality of life and reduces healthcare costs. Through collaboration with health plan providers and care management teams, the company employs local Clinical Care Teams to deliver tailored home-based care. Leveraging innovative technology, Spiras Health ensures efficient and effective program delivery.
Who We Are: Excellence, Innovation, Passion, Compassion, Communication
Spiras Health is a value-based, nurse practitioner-led clinical provider of care-at-home and other health-related services to individuals with complex and polychronic needs. Spiras’ comprehensive approach to care delivery includes a combination of home-based services, telehealth, two-way digital communications, and remote patient monitoring. Proprietary predictive modeling identifies and assesses individuals with an elevated probability of avoidable costs. Spiras Health then develops actionable plans of care, addresses barriers including social determinants of health, and delivers high-quality patient care in collaboration with the patient’s treating physicians. Spiras’ innovative multi-modal care approach delivers improved satisfaction and clinical metrics as well as financial savings to its partners, through a geographically and economically scalable delivery model. Our culture is anchored on a promise of full accountability and integrity in everything we do.
How We Serve:
The Senior Data Analyst, in partnership with the Chief Financial Officer and Chief Commercial Officer, will design and develop market and opportunity analyses, monitor and measure contract performance, and model operational strategies and initiatives. Additionally, this role offers the opportunity to engage with senior management and business unit leaders, contributing key insights that inform strategic decision-making and ultimately support the company’s growth.
Job Summary:
The Sr Analyst - Data & Analytics owns the integrity, analysis, and interpretation of payer claims data to support performance measurement, utilization management, and value-based care initiatives. This role is hands-on and accountable for claims QA, utilization metrics, and analytics outputs used by clinical, operations, and executive stakeholders.
This is an individual contributor role with strong technical depth in payer claims data and healthcare analytics platforms, including MedeAnalytics.
Key Responsibilities
Claims Data Quality & Governance
- Own QA processes for medical and pharmacy claims data, including eligibility, provider, diagnosis, procedure, and financial fields
- Identify, quantify, and resolve data anomalies (e.g., lag, duplication, missing claims, inconsistent coding)
- Partner with data engineering, vendors, and payers to remediate data quality issues
- Define and maintain claims data validation rules and documentation
Analytics & Utilization Management
- Analyze and report on core utilization metrics, including:
- Inpatient admits / 1,000
- ED visits / 1,000
- Readmissions
- PMPM cost trends
- Length of stay
- Avoidable admissions
- Apply payer benchmarks (national, regional, risk-adjusted where applicable) to contextualize performance
- Support matched cohort, pre/post, and trend analyses
- Translate claims data into actionable insights for clinical and operations teams
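The utilization metrics listed above follow standard payer conventions. As a hedged sketch, the "per 1,000" rates are annualized events per 1,000 members and PMPM is total cost over member-months; the inputs below are made-up illustrative numbers, not real benchmarks:

```python
# Illustrative utilization-metric arithmetic on fabricated inputs.
member_months = 12_000          # total member-months in the period
inpatient_admits = 110
ed_visits = 450
total_paid = 4_800_000.00       # total paid dollars in the period

# Per-1,000 rates annualize to a 1,000-member population:
# events / member_months * 12 * 1,000
admits_per_1000 = inpatient_admits / member_months * 12 * 1000
ed_per_1000 = ed_visits / member_months * 12 * 1000

# PMPM (per member per month): total cost divided by member-months.
pmpm = total_paid / member_months
```

With exactly 12,000 member-months (1,000 members for a full year), the annualized rates equal the raw event counts, which is a quick sanity check on the formula.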
Platform Ownership (MedeAnalytics)
- Serve as a power user and analytics owner of the MedeAnalytics platform
- Build, validate, and QA dashboards, reports, and analytic views
- Ensure alignment between platform outputs and internal data models
- Act as internal SME for MedeAnalytics capabilities and limitations
Stakeholder & Team Collaboration
- Partner closely with:
- Clinical leadership
- Operations
- Finance
- Growth
- Present findings to senior leadership in clear, non-technical language
Required Qualifications
- 5–8 years of healthcare analytics experience
- Deep experience working with payer medical and pharmacy claims data
- Strong understanding of utilization metrics and healthcare cost drivers
- Hands-on experience with MedeAnalytics (required)
- Advanced SQL/SAS skills; experience with data visualization tools (Tableau, Power BI, or equivalent)
- Proven ability to QA complex healthcare datasets
- Bachelor’s degree in Analytics, Statistics, Health Informatics, or related field
Preferred Qualifications
- Experience supporting value-based care, MA, D-SNP, or ACO programs
- Familiarity with HEDIS, Stars, or CMS reporting concepts
- Experience with risk adjustment (HCCs)
Physical Requirements:
- Prolonged periods of sitting at a desk and working on a computer.
- This job operates in a hybrid professional environment free from noise and distraction. This role routinely uses standard office equipment such as computers and phones.
- The physical demands described here are representative of those that must be met by an employee to successfully perform the essential functions of this job.
- Specific vision abilities required by this job include close vision, distance vision, color vision, peripheral vision, depth perception and ability to adjust focus.
- While performing the duties of this job, the employee is regularly required to talk and hear.
- This position requires the ability to occasionally lift office products and supplies, up to twenty pounds.
EEOC STATEMENT:
- All qualified candidates will receive consideration for employment without regard to age, race, color, national origin, gender (including pregnancy, childbirth or medical conditions related to pregnancy or childbirth), gender identity or expression, religion, physical or mental disability, medical condition, legally protected genetic information, marital status, veteran status, or sexual orientation.
About US Solar
US Solar is a developer, owner, operator, and financier of solar and solar + storage projects, with a focus on emerging state markets, community solar programs, distributed generation and small-scale utility projects nationwide.
US Solar is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We believe diverse teams and diverse perspectives lead to better outcomes and breakthrough thinking, which are differentiators in any business and fundamental to our long-term success.
About Sunscription
Sunscription is US Solar’s platform for managing community solar subscriptions, billing, and customer operations across multiple markets. The platform supports both residential and commercial subscribers, enabling them to participate in community solar projects and receive savings on their electric bills.
The Subscription Data Operations Lead will join the Sunscription team and play a critical role in supporting contract execution, allocation accuracy, and financial closings by serving as the central owner of subscription data and documentation.
Position Description
The Subscription Data Operations Lead serves as the primary data input and coordination point for community solar subscriptions. This role owns the accuracy and flow of information across allocation spreadsheets, executed contracts, utility documentation, and internal systems.
The position requires strong execution within US Solar’s current Excel-based allocation and mail merge workflows, while also supporting improvements to automation, documentation, and reporting processes over time. The successful candidate will be detail-oriented, systems-minded, and comfortable operating in a fast-paced environment where processes continue to evolve.
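The mail merge workflow described above amounts to filling document templates from structured rows. A minimal sketch, assuming a CSV export of the allocation spreadsheet (the field names and template text here are hypothetical, not US Solar's actual documents):

```python
# Hypothetical mail-merge sketch: generate contract text from subscriber rows.
import csv
import io
from string import Template

contract_template = Template(
    "Subscription Agreement\n"
    "Subscriber: $name\n"
    "Allocation: $allocation_kw kW\n"
    "Utility: $utility\n"
)

# In practice rows would come from the allocation spreadsheet's CSV export.
raw_csv = "name,allocation_kw,utility\nAcme Bakery,25.0,Xcel Energy\n"
rows = list(csv.DictReader(io.StringIO(raw_csv)))

contracts = [contract_template.substitute(row) for row in rows]
```

Real workflows add validation before the merge (e.g., flagging rows with missing utility account numbers), which is where the reconciliation responsibilities below come in.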
Responsibilities
- Serve as the primary owner of subscription data across allocation spreadsheets, contracts, utility documentation, and internal platforms.
- Execute and maintain Excel-based allocation models and mail merge workflows used to generate contracts and supporting documentation.
- Ensure consistency and accuracy between modeled allocations, executed agreements, and utility records through regular validation and reconciliation.
- Administer the execution and recording of commercial subscription agreements and associated costs to support long term contract management, cost, and revenue tracking.
- Track and analyze residential subscriber acquisition activity to monitor program progress, validate enrollment data, and support allocation planning.
- Organize and maintain allocation lists, contracts, utility bills, and utility documentation required for enrollment, billing, and ongoing management.
- Create and maintain subscription summaries and documentation required for program and project financial closings.
- Track additional documentation requirements as projects move toward COD and financial close.
- Migrate deal information and documentation accurately and completely into the internal subscriber billing and management platform.
- Standardize documentation and reporting formats to improve consistency and accessibility for internal stakeholders.
- Identify opportunities to streamline manual processes and improve efficiency within existing Excel and document generation workflows.
- Collaborate with accounting, finance, asset management, and the Sunscription team to support data needs across the customer lifecycle.
- Create and deliver customer onboarding communication to support billing setup and closing requirements.
- Perform process improvement and administrative tasks to support the overall success of community solar subscriptions.
Requirements
- Bachelor’s degree and five or more years of professional experience in operations, data management, finance, or a related field.
- Exceptional attention to detail with strong organizational skills.
- Advanced proficiency in Microsoft Excel and experience managing complex spreadsheets.
- Experience executing document generation or mail merge workflows tied to structured data.
- Comfort working with contracts, utility documentation, and operational data.
- Ability to learn new tools and contribute to the gradual improvement of existing systems and processes.
- Strong communication skills and ability to collaborate across teams.
- Self-directed and comfortable working independently in a fast-paced environment.
- Interest in renewable energy and community solar programs.
- US Solar seeks individuals who are flexible, motivated, responsible, and eager to contribute to a collaborative team environment.