Alibaba Cloud Logo Vector Jobs in USA

2,007 positions found — Page 13

Regional Vice President of Sales
✦ New
Salary not disclosed
San Francisco Bay 15 hours ago

Job Title: Regional Vice President of Sales (East Coast)

Department: Business Development

Location: Remote (Located in San Diego area)

Job Type: Full-time


About Cinnamon

Cinnamon is a healthcare technology company dedicated to improving patient access to care by automating and streamlining patient assistance and affordability workflows. We partner with healthcare organizations and life sciences companies to reduce friction in financial assistance processes, improve data integrity, and ensure secure, compliant exchange of healthcare data. Our mission is to help patients access the care they need faster, with less administrative burden across the healthcare ecosystem.


Role Summary

Cinnamon is seeking a Regional Vice President of Sales focused on direct pharmaceutical manufacturer relationships to drive enterprise growth across a defined territory.

This role is ideal for a senior sales leader with deep experience selling patient access, affordability, adherence, hub services, or healthcare workflow technology to pharmaceutical companies.

The Regional VP will own a regional enterprise quota and be responsible for new logo acquisition and expansion within existing pharmaceutical accounts. The role requires a consultative sales approach and the ability to navigate complex buying groups across brand teams, market access, patient services, and commercial operations.

This is a highly visible role that partners closely with the CEO, Chief Revenue Officer, and product leadership to shape Cinnamon’s direct pharma go-to-market strategy.


Key Responsibilities

Enterprise Sales Leadership

  • Own a regional enterprise quota focused on pharmaceutical manufacturers.
  • Lead complex consultative sales cycles involving brand teams, market access leaders, patient services organizations, and commercial operations stakeholders.
  • Drive new logo acquisition while expanding relationships with existing pharma clients.
  • Build and maintain a strong pipeline aligned with revenue targets.

Strategic Account Development

  • Develop executive relationships within pharmaceutical companies across commercial, brand, and access functions.
  • Identify opportunities where Cinnamon’s platform can improve patient affordability, access workflows, and data exchange across the patient journey.
  • Partner with internal leadership on strategic opportunities, pricing strategy, and deal structuring.

Go-To-Market Execution

  • Execute Cinnamon’s direct pharma sales strategy within an assigned territory.
  • Identify priority accounts and develop targeted account strategies.
  • Provide ongoing market intelligence and competitive insights to leadership.

Cross-Functional Collaboration

  • Partner with Product, Implementation, and Customer Success teams to ensure successful client onboarding and long-term account growth.
  • Collaborate with peer sales leaders to refine messaging, positioning, and sales strategy.
  • Maintain disciplined CRM management and accurate revenue forecasting.


Required Qualifications

  • 10+ years of enterprise sales experience in life sciences or healthcare technology.
  • Proven success selling solutions directly to pharmaceutical manufacturers.
  • Experience selling solutions related to patient access, affordability programs, hub services, specialty pharmacy, adherence, or healthcare workflow automation.
  • Strong relationships with stakeholders across brand teams, market access, patient services, and commercial operations.
  • Track record of closing complex enterprise deals with multi-stakeholder buying groups.
  • Experience selling SaaS, technology platforms, or healthcare services into pharma organizations.
  • Exceptional executive communication and presentation skills.


What We Offer

  • Competitive base salary plus performance-based commission.
  • Opportunity to shape and lead Cinnamon’s enterprise pharma sales strategy from the ground up.
  • High visibility and close partnership with executive leadership.
  • A mission-driven culture focused on improving patient access to care.
  • Significant growth and leadership development opportunities as the company scales.


How to Apply

Please submit your resume and a brief cover letter outlining your relevant experience and interest in the role to .

Not Specified
Performance Engineer -- Non Functional QE
Salary not disclosed
San Jose, CA 3 days ago

Business Area:

Engineering

Seniority Level:

Associate

Job Description:

At Cloudera, we empower people to transform complex data into clear and actionable insights. With as much data under management as the hyperscalers, we're the preferred data partner for the top companies in almost every industry. Powered by the relentless innovation of the open source community, Cloudera advances digital transformation for the world's largest enterprises.

At Cloudera, our Data Services Pillar is the heart of data innovation. We don't just work with technology; we build it. Our mission is to empower data practitioners by creating seamless, enterprise-grade experiences for data engineering, warehousing, streaming, operational databases, and AI.

You will be a key member of the NFQE (Non Functional QE) team that drives the performance reliability of Cloudera's Kubernetes-hosted data services. The role blends deep technical knowledge of performance testing, distributed data workloads, and container orchestration with a data-driven mindset. You'll design, automate, run, and analyze performance tests for Cloudera's flagship services, ensuring they meet or exceed customer-defined SLOs/SLAs at scale.

As a Performance Engineer, you will:

  • Work with internal development teams and the open source community to proactively drive performance improvements/optimizations across our data warehouse and Data Engineering stack.

  • Work with product managers, developers and the field team to understand performance and scale requirements, and develop benchmarks based on these requirements.

  • Develop automation to execute benchmarks, collect and aggregate metrics and profiles, and report results, trends, and regressions.

  • Analyze performance and scalability characteristics to identify bottlenecks in large-scale distributed systems.

  • Perform root cause analysis of performance issues identified by internal testing and from customers and suggest corrective actions.

  • Evaluate performance of systems and provide related guidance to the team.

We are excited about you if you have:

  • 3+ years of industry experience in performance-related work, ideally on large-scale distributed systems.

  • Understanding of DBMS algorithms and data structure fundamentals.

  • Understanding of hardware trends and full-stack systems performance: CPU, RAM, storage, network, Linux kernel, JVM, and distributed systems performance.

  • Understanding of performance analysis tools and techniques.

  • Strong design, coding skills, and test automation skills (Java/C++/Golang/Python preferred)

  • Knowledge of relevant frameworks, cloud provider knowledge, K8s, etc.

  • Ability to work in a distributed setting with team members spread in multiple geographies

  • Demonstrated ability to work on large cross-functional projects, including strong written communication skills and a collaborative mindset, as you will be working with many teams inside and outside of Cloudera.

  • Experience with benchmark and performance test design. You should understand basic concepts of performance testing, including the different types of performance tests (microbenchmarks, end-to-end benchmarks, concurrency and scale testing) and how to reduce (or deal with) noise in test results.

  • Experience designing performance tests that provide useful insights into specific aspects of performance.

  • Solid understanding of basic performance theory - in particular a very good understanding of latency, throughput, and concurrency and how they relate to each other.

  • Strong understanding of the types of workloads you'll be testing. Ideally, specific experience creating performance tests for the product area you'll be working on (SQL, ML, etc.).

  • B.S. or M.S. in Computer Science or equivalent experience.
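The relationship among latency, throughput, and concurrency mentioned above is captured by Little's Law (average concurrency = throughput × latency). A minimal sketch of the kind of sanity check a performance engineer might apply when sizing a benchmark; the function names are illustrative, not part of any Cloudera tooling:

```python
# Little's Law: average in-flight requests = throughput * latency.
# Useful for checking whether a benchmark's concurrency setting can
# actually saturate a target throughput.

def in_flight(throughput_qps: float, latency_s: float) -> float:
    """Average number of concurrent requests (Little's Law)."""
    return throughput_qps * latency_s

def max_throughput(concurrency: int, latency_s: float) -> float:
    """Throughput ceiling achievable at a fixed concurrency level."""
    return concurrency / latency_s

# 200 qps at 0.5 s mean latency implies ~100 requests in flight:
print(in_flight(200, 0.5))       # 100.0
# 64 concurrent clients at 0.25 s latency can drive at most 256 qps:
print(max_throughput(64, 0.25))  # 256.0
```

If a load generator is configured with fewer concurrent clients than the law requires, observed throughput will plateau below the system's true capacity — a common source of misleading benchmark results.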

You might also have:

  • Experience with the Hadoop ecosystem (e.g., Hive, Impala, Spark), in particular prior work on large-scale data lakehouse or data warehouse performance

  • Hands-on experience with containerization, Kubernetes, public cloud infrastructure (AWS, Azure and/or GCP) and mesh-networks

  • Certifications: CKA/CKAD, AWS Solutions Architect, GCP Cloud Architect, Azure Solutions Architect, or equivalent.

  • Security & Compliance: Experience writing performance tests that also verify data privacy and audit compliance (e.g., GDPR, HIPAA).

Why this role matters:

This is your opportunity to build cloud-native solutions that are deployable anywhere whether in massive clusters on any cloud provider or in private data centers. You'll work with cutting-edge technologies like Trino, Spark, Airflow, and advanced AI inferencing systems to shape the future of analytics. Your code will directly influence how data engineers, analysts, and developers worldwide find value in their data.

We believe in the power of open source. You'll collaborate with project committers, contributing upstream to keep technologies like Apache Hive and Impala evolving. You'll harden these engines for rock-solid security, optimize them for peak performance, and make them effortlessly run across all environments. Join us and help build the trusted, cloud-native platform that powers insights for the most data-intensive companies on the planet.

This position is not eligible for sponsorship.

The expected base salary range for this role in:

  • California is $124,000 - $155,000

The salary will vary depending on your job-related skills, experience and location.


What you can expect from us:

  • Generous PTO Policy

  • Support work life balance with Unplugged Days

  • Flexible WFH Policy

  • Mental & Physical Wellness programs

  • Phone and Internet Reimbursement program

  • Access to Continued Career Development

  • Comprehensive Benefits and Competitive Packages

  • Paid Volunteer Time

  • Employee Resource Groups

EEO/VEVRAA

#LI-SZ1

#LI-HYBRID

Not Specified
Netcool Developer
Salary not disclosed
Basking Ridge, NJ 2 days ago

Netcool Developer with AIOps Cloud Pak Expertise


• Responsible for integrating and migrating traditional IBM Netcool Operations Insight (NOI) environments into IBM Cloud Pak for AIOps

• Connects on‑prem Netcool/OMNIbus and Netcool/Impact systems with Cloud Pak for AIOps using native connectors

• Migrates existing event filters, automations, and runbook policies into the AIOps platform

• Ensures seamless bidirectional synchronization of event data between Netcool and Cloud Pak for AIOps

• Configures event and alert data mapping and transformation rules (e.g., JSONata) for consistent processing

• Develops automation policies and runbooks using Netcool/Impact, and potentially Python or Bash scripting

• Supports the AIOps platform by supplying and validating high‑quality data for ML models (event grouping, log anomaly detection, metric anomaly detection, change risk assessment)

• Leverages Cloud Pak for AIOps topology and resource management features to build application‑centric infrastructure views

• Collaborates with DevOps, SRE, and operations teams to integrate third‑party tools such as Splunk, ServiceNow, Slack, and others

• Troubleshoots and resolves complex hybrid‑cloud issues arising during integration and ongoing operations

• Possesses deep expertise in the IBM Netcool suite, including Netcool/OMNIbus, Netcool/Impact, probes, gateways, and Web GUI
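The event mapping and transformation work described above (e.g., JSONata rules) boils down to reshaping raw Netcool/OMNIbus event fields into the alert schema the AIOps platform expects. A hypothetical sketch of that transformation in Python; the field names and severity mapping are illustrative, not the actual product schema:

```python
# Illustrative sketch of a Netcool -> AIOps field mapping, of the kind
# a JSONata transformation rule would express declaratively.
# Field names (Node, Summary, Severity, AlertKey) follow OMNIbus
# conventions; the output shape here is hypothetical.

SEVERITY_MAP = {5: "critical", 4: "major", 3: "minor", 2: "warning", 1: "info"}

def map_event(omnibus_event: dict) -> dict:
    """Transform a raw OMNIbus-style event into a normalized alert."""
    return {
        "resource": omnibus_event["Node"],
        "summary": omnibus_event["Summary"],
        "severity": SEVERITY_MAP.get(omnibus_event["Severity"], "info"),
        # Deduplication key so repeated events collapse into one alert:
        "dedup_key": f'{omnibus_event["Node"]}:{omnibus_event["AlertKey"]}',
    }

alert = map_event({"Node": "router7", "Summary": "Link down",
                   "Severity": 5, "AlertKey": "ifDown.Gi0/1"})
```

Keeping this mapping consistent in both directions is what makes the bidirectional synchronization mentioned above reliable: the same dedup key must be derivable on either side.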
Not Specified
Databricks Architect/ Senior Data Engineer
🏢 OZ
Salary not disclosed
Boca Raton, FL 2 days ago

OZ – Databricks Architect/ Senior Data Engineer


Note: Only applications from U.S. citizens or lawful permanent residents (Green Card Holders) will be considered.


We believe work should be innately rewarding and a team-building venture. Working with our teammates and clients should be an enjoyable journey where we can learn, grow as professionals, and achieve amazing results. Our core values revolve around this philosophy. We are relentlessly committed to helping our clients achieve their business goals, leapfrog the competition, and become leaders in their industry. What drives us forward is the culture of creativity combined with a disciplined approach, passion for learning & innovation, and a ‘can-do’ attitude!


What We're Looking For:

We are seeking a highly experienced Databricks professional with deep expertise in data engineering, distributed computing, and cloud-based data platforms. The ideal candidate is both an architect and a hands-on engineer who can design scalable data solutions while actively contributing to development, optimization, and deployment.


This role requires strong technical leadership, a deep understanding of modern data architectures, and the ability to implement best practices in DataOps, performance optimization, and data governance.


Experience with modern AI/GenAI-enabled data platforms and real-time data processing environments is highly desirable.


Position Overview:

The Databricks Senior Data Engineer will play a critical role in designing, implementing, and optimizing enterprise-scale data platforms using the Databricks Lakehouse architecture. This role combines architecture leadership with hands-on engineering, focusing on building scalable, secure, and high-performance data pipelines and platforms. The ideal candidate will establish coding standards, define data architecture frameworks such as the Medallion Architecture, and guide the end-to-end development lifecycle of modern data solutions.


This individual will collaborate with cross-functional stakeholders, including data engineers, BI developers, analysts, and business leaders, to deliver robust data platforms that enable advanced analytics, reporting, and AI-driven decision-making.


Key Responsibilities:

  • Architecture & Design: Architect and design scalable, reliable data platforms and complex ETL/ELT and streaming workflows for the Databricks Lakehouse Platform (Delta Lake, Spark).
  • Hands-On Development: Write, test, and optimize code in Python, PySpark, and SQL for data ingestion, transformation, and processing.
  • DataOps & Automation: Implement CI/CD, monitoring, and automation (e.g., with Azure DevOps, DBX) for data pipelines.
  • Stakeholder Collaboration: Work with BI developers, analysts, and business users to define requirements and deliver data-driven solutions.
  • Performance Optimization: Tune delta tables, Spark jobs, and SQL queries for maximum efficiency and scalability.
  • GenAI Applications Development: Experience developing GenAI applications is a big plus.
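The Medallion Architecture referenced above layers data as bronze (raw as ingested), silver (deduplicated and validated), and gold (aggregated for consumption). A minimal pure-Python sketch of that flow; in practice each step would be PySpark reading and writing Delta tables, and all names here are illustrative:

```python
# Bronze -> silver -> gold, sketched with plain Python structures.
# In a real Databricks pipeline each layer would be a Delta table.

bronze = [  # raw ingested records: duplicates and bad rows included
    {"order_id": 1, "amount": "19.99", "region": "east"},
    {"order_id": 1, "amount": "19.99", "region": "east"},  # duplicate
    {"order_id": 2, "amount": "bad",   "region": "west"},  # malformed
    {"order_id": 3, "amount": "5.00",  "region": "west"},
]

def to_silver(rows):
    """Deduplicate on order_id and drop rows that fail type checks."""
    seen, out = set(), []
    for r in rows:
        try:
            amount = float(r["amount"])
        except ValueError:
            continue  # a real pipeline would quarantine these rows
        if r["order_id"] not in seen:
            seen.add(r["order_id"])
            out.append({**r, "amount": amount})
    return out

def to_gold(rows):
    """Aggregate cleaned rows into per-region revenue."""
    totals = {}
    for r in rows:
        totals[r["region"]] = totals.get(r["region"], 0.0) + r["amount"]
    return totals

gold = to_gold(to_silver(bronze))  # {"east": 19.99, "west": 5.0}
```

The design point is that each layer is independently queryable and reproducible from the layer below it, which is what makes the pattern attractive for governance and incremental reprocessing.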


Requirements:

  • 8+ years of experience in data engineering, with strong hands-on expertise in Databricks and Apache Spark.
  • Proven experience designing and implementing scalable ETL/ELT pipelines in cloud environments.
  • Strong programming skills in Python and SQL; experience with PySpark required.
  • Hands-on experience with Databricks Lakehouse, Delta Lake, and distributed data processing.
  • Experience working with cloud platforms such as Microsoft Azure, AWS, or GCP (Azure preferred).
  • Experience with CI/CD pipelines, Git, and DevOps practices for data engineering.
  • Strong understanding of data architecture, data modeling, and performance optimization.
  • Experience working with cross-functional teams to deliver enterprise data solutions.
  • Ability to tackle complex data challenges while ensuring data quality and reliable delivery.


Qualifications:

  • Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related field.
  • Experience designing enterprise-scale data platforms and modern data architectures.
  • Experience with data integration tools such as Azure Data Factory or similar platforms.
  • Familiarity with cloud data warehouses such as Databricks, Snowflake, or Azure Fabric.
  • Experience supporting analytics, reporting, or AI/ML workloads is highly desirable.
  • Databricks, Azure, or cloud certifications are preferred.
  • Strong problem-solving, communication, and technical leadership skills.


Technical Proficiency in:

  • Databricks, Apache Spark, PySpark, Delta Lake
  • Python, SQL, Scala (preferred)
  • Cloud platforms: Azure (preferred), AWS, or GCP
  • Azure Data Factory, Kafka, and modern data integration tools
  • Data warehousing: Databricks, Snowflake, or Azure Fabric
  • DevOps tools: Git, Azure DevOps, CI/CD pipelines
  • Data architecture, ETL/ELT design, and performance optimization


What You’re Looking For:

Join a fast-growing organization that thrives on innovation and collaboration. You’ll work alongside talented, motivated colleagues in a global environment, helping clients solve their most critical business challenges. At OZ, your contributions matter – you’ll have the chance to be a key player in our growth and success. If you’re driven, bold, and eager to push boundaries, we invite you to join a company where you can truly make a difference.


About Us:

OZ is a 28-year-old global technology consulting, services, and solutions leader specializing in creating business-focused solutions for our clients by leveraging disruptive digital technologies and innovation.


OZ is committed to creating a continuum between work and life by allowing people to work remotely. We offer competitive compensation and a comprehensive benefits package. You’ll enjoy our work style within an incredible culture. We’ll give you the tools you need to succeed so you can grow and develop with us and become part of a team that lives by its core values.

Not Specified
Senior Java Architect (Banking Domain)
Salary not disclosed
Dallas, TX 2 days ago

Position: Java Solution Architect

Location: TX/NJ

Duration: Long Term



As a Solution Architect, you will be an integral part of shaping the future of technology. This role requires deep technical expertise to translate complex business requirements into scalable, secure, and compliant technical solutions. You will serve as a bridge between business stakeholders and development teams, ensuring the delivery of high-quality, resilient systems that drive significant business impact.

Key Responsibilities

  • Solution Design & Architecture: Lead the design and development of end-to-end enterprise solutions, including high-level and low-level design documents and architecture diagrams.
  • Technology & Platform Selection: Select the appropriate technology stack, leveraging expertise in Java, Spring Boot, microservices architecture, and cloud platforms (AWS and Azure) to build robust, scalable, and cost-efficient applications.
  • Cloud Migration & Integration: Drive cloud transformation initiatives, including migrating on-premises applications to the cloud and integrating complex systems.
  • Security & Compliance: Ensure all solutions comply with the regulatory requirements (e.g., data privacy, security standards) and implement robust security measures, including identity and access management, encryption, and network security.
  • Technical Leadership & Collaboration: Provide technical guidance and mentorship to development teams, conducting code and architecture reviews to ensure alignment with architectural principles and best practices. Collaborate with cross-functional teams, including business analysts and project managers, to align technical solutions with business goals.
  • Innovation & Problem Solving: Evaluate new and emerging technologies, conducting proofs-of-concept (PoCs) to validate assumptions and drive continuous improvement in products, processes, and tools.

Qualifications and Skills

  • Experience:
  • 5+ years of relevant experience in a solution architecture or a lead engineering role within financial services or a related regulated industry.
  • Proven experience in designing and delivering large-scale IT projects with hands-on experience in Java-based systems.
  • Demonstrated experience running production applications in public cloud environments (AWS and/or Azure).
  • Technical Skills:
  • Proficiency in Java and Java frameworks (Spring, Spring Boot).
  • Strong DB Design (RDBMS, NoSQL) abilities
  • Strong knowledge of microservices, event-driven architecture (e.g., Kafka), and RESTful API design.
  • Experience with cloud services (compute, networking, databases, security) and containerization technologies (Docker, Kubernetes).
  • Familiarity with DevOps practices and CI/CD pipelines.
  • Soft Skills:
  • Excellent communication, presentation, and stakeholder management skills, with the ability to translate complex technical concepts for non-technical audiences.
  • Strong analytical, problem-solving, and decision-making abilities.
  • A proactive, self-motivated mindset with the ability to work through ambiguous requirements in an agile environment.

Preferred Certifications

  • AWS Certified Solutions Architect (Associate or Professional)
  • Microsoft Certified: Azure Solutions Architect Expert




Best Regards,

Deepak Gulia

Sr. Talent Acquisition-USA


100 Campus Drive, Suite 420, Florham Park, NJ 07932


Not Specified
Infrastructure Project Manager
Salary not disclosed
Sacramento, CA 2 days ago

** Infrastructure PM Role **

** Candidate Should have Public Sector Experience **


Mandatory Qualifications:

  1. Bachelor’s degree in Information Technology, Computer Science, Engineering, Business Administration, or a related field.
  2. Seven (7) years of experience managing IT infrastructure projects, including planning, execution, monitoring, and successful delivery.
  3. Three (3) years of experience working with State of California departments, agencies, or public sector organizations.
  4. Five (5) years of experience managing infrastructure modernization projects, including one or more of the following:
  • Cloud migration initiatives
  • Data center modernization
  • Server and storage infrastructure upgrades
  • Network infrastructure projects
  • Disaster recovery and business continuity planning

  5. Three (3) years of experience managing AWS cloud infrastructure projects, including:

  • Cloud migration strategy
  • AWS architecture coordination
  • Infrastructure automation
  • Cloud security and compliance

  6. Five (5) years of experience using formal project management methodologies, such as:

  • PMBOK
  • Agile / Scrum


Desirable Qualifications

  1. PMP (Project Management Professional) Certification.
  2. AWS Certifications, such as:
  • AWS Certified Solutions Architect
  • AWS Certified Cloud Practitioner
  • AWS Certified DevOps Engineer
  3. Experience working with California Department of Technology (CDT) Project Approval Lifecycle (PAL).
  4. Experience preparing and managing State project documentation, including:
  • Feasibility Study Report (FSR)
  • Special Project Report (SPR)
  • Project Approval Lifecycle (PAL) documentation
  • Statement of Work (SOW)
  • Request for Offer (RFO)
  5. Experience with IT Service Management frameworks (ITIL).
  6. Experience managing large-scale cloud transformation programs within government environments.
Not Specified
Sr. Technology Engineer
✦ New
🏢 Luxoft
Salary not disclosed

Project description

This technology engineer is responsible for ensuring the reliability, supportability, and continuous improvement of key infrastructure monitoring and management platforms, with primary ownership of tools such as SolarWinds and Azure Sentinel. The role requires a developer mindset, and this person will also provide operational systems administration support for hands-on Linux and Windows systems. The role partners closely with internal teams across operations, monitoring, and security to strengthen platform health, improve signal quality, and enable effective incident response workflows. The engineer will support a hybrid environment with a strong emphasis on Microsoft Azure monitoring and logging, contribute to platform lifecycle activities (patching, upgrades, onboarding, documentation), and continuously learn and apply modern capabilities (including analytics and emerging AI features) across event management, observability, and SIEM tooling to reduce operational friction and accelerate time to value.

Responsibilities

Platform Ownership

Network & Monitoring Tools (must have)

Familiar with tools such as SolarWinds (including NetPath). As a platform owner, ensure platform stability, upgrades, patching, and day to day support.

Has knowledge about network centric monitoring capabilities including SNMP polling, traps, and device visibility etc. Ensure new sites and devices are properly onboarded

Partner with platform and cloud teams to ensure migrated workloads meet monitoring standards.

Systems Administration (must have)

Provide sysadmin support for Linux and Windows servers, including:

Agent deployment and upgrades (SolarWinds, Datadog, Dynatrace)

OS level troubleshooting and configuration

Monitoring and logging enablement

Support hybrid environments spanning on prem and Azure infrastructure.

A developer mindset with experience in Dev workflow, GitHub, PowerShell etc.

Observability & Event Management Support (should have)

Has experience with tools such as Datadog and Dynatrace. The person will be responsible for collaborating with platform owners to support integrations, data quality, and alerting hygiene.

Assist with event management workflows, ensuring alerts are actionable and routed correctly.

Participate in efforts to reduce alert noise and repeat incidents.

SIEM & Security Visibility (nice to have)

Develop a working understanding of SIEM concepts and platforms such as Azure Sentinel and CRIBL.

Support log ingestion, troubleshooting, and collaboration with security and incident response teams.

Ensure infrastructure and network telemetry supports security detection requirements.

Cloud Monitoring & Azure Integration (should have)

Has experience with Azure cloud platform. Have either directly supported or is familiar with Azure based monitoring and logging, including:

Azure Monitor and Log Analytics integrations

Observability for Azure hosted workloads

Automation, AI & Continuous Improvement (nice to have)

Explore and apply AI assisted features within monitoring, event management, and SIEM tools to:

Improve signal quality / reduce alert fatigue

Support faster incident triage

Contribute to documentation, runbooks, and operational improvements focused on small, incremental wins.

Knowledge Transfer & Operational Resilience

Participate in knowledge transfer activities related to platform transitions and retirements. Maintain documentation.

Support on call or escalation rotations as needed.

Skills

Must have

Minimum 4-5 years of experience in infrastructure operations, monitoring, observability, or platform operations roles, supporting enterprise environments

Hands on experience with systems administration for Linux and Windows servers, including troubleshooting, configuration, and deployment of monitoring or management agents (e.g., SolarWinds, Datadog, Dynatrace).

Foundational networking knowledge, including concepts such as SNMP, network monitoring, LAN/WAN fundamentals, firewalls, and telemetry collection, sufficient to support network centric monitoring platforms like SolarWinds

Nice to have (not a must): experience with platforms like StruxureWare.

Experience with observability or monitoring platforms, such as SolarWinds, Datadog, Dynatrace, or similar tools, with an understanding of alerting, dashboards, and signal quality.

Exposure to cloud environments, preferably Microsoft Azure, including familiarity with monitoring and logging concepts (e.g., cloud based telemetry, logs, metrics, and integrations).

Basic understanding of incident and event management practices, including alert triage, escalation, and collaboration with incident response or operations teams.

Demonstrated willingness and ability to learn new technologies quickly, with examples of picking up new platforms, tools, or domains outside of prior core expertise.

Familiarity with Agile or SAFe ways of working, including collaboration in sprint based delivery models, and cross functional team engagement is a plus.

Strong communication and collaboration skills, with the ability to work effectively with platform owners, operations teams, security teams, and external stakeholders.

Experience working in a modern Dev workflow using GitHub (branches, pull requests, code reviews, and CI/CD) to manage and deploy scripts/automation used for platform operations

Working proficiency in scripting languages such as PowerShell, Python, BASH, or similar scripting languages.

Knowledge with Azure, Azure Active Directory (AD), and hybrid cloud environments is a plus.

Exposure to SIEM concepts or platforms such as Azure Sentinel, CRIBL, or similar is a plus.

Experience with change management practices in an enterprise IT environment is beneficial

Not Specified
Sr. Technology Engineer (Operations)
✦ New
🏢 Luxoft
Salary not disclosed
Deerfield Beach, FL 1 day ago

This technology engineer is responsible for ensuring the reliability, supportability, and continuous improvement of key infrastructure monitoring and management platforms, with primary ownership of tools such as SolarWinds and Azure Sentinel. The role requires a developer mindset, and this person will also provide operational systems administration support for hands-on Linux and Windows systems. The role partners closely with internal teams across operations, monitoring, and security to strengthen platform health, improve signal quality, and enable effective incident response workflows. The engineer will support a hybrid environment with a strong emphasis on Microsoft Azure monitoring and logging, contribute to platform lifecycle activities (patching, upgrades, onboarding, documentation), and continuously learn and apply modern capabilities (including analytics and emerging AI features) across event management, observability, and SIEM tooling to reduce operational friction and accelerate time to value.


Responsibilities:

Platform Ownership - Network & Monitoring Tools (must have)

• Familiar with tools such as SolarWinds (including NetPath). As a platform owner, ensure platform stability, upgrades, patching, and day to day support.

• Has knowledge about network centric monitoring capabilities including SNMP polling, traps, and device visibility etc. Ensure new sites and devices are properly onboarded

• Partner with platform and cloud teams to ensure migrated workloads meet monitoring standards.


Systems Administration (must have)

• Provide sysadmin support for Linux and Windows servers, including:

• Agent deployment and upgrades (SolarWinds, Datadog, Dynatrace)

• OS level troubleshooting and configuration

• Monitoring and logging enablement

• Support hybrid environments spanning on prem and Azure infrastructure.

• A developer mindset with experience in Dev workflow, GitHub, PowerShell etc.


Observability & Event Management Support (should have)

• Has experience with tools such as Datadog and Dynatrace. The person will be responsible for collaborating with platform owners to support integrations, data quality, and alerting hygiene.

• Assist with event management workflows, ensuring alerts are actionable and routed correctly.

• Participate in efforts to reduce alert noise and repeat incidents.


SIEM & Security Visibility (nice to have)

• Develop a working understanding of SIEM concepts and platforms such as Azure Sentinel and CRIBL.

• Support log ingestion, troubleshooting, and collaboration with security and incident response teams.

• Ensure infrastructure and network telemetry supports security detection requirements.

Cloud Monitoring & Azure Integration (should have)

• Apply experience with the Azure cloud platform, having either directly supported or being familiar with Azure-based monitoring and logging, including:

• Azure Monitor and Log Analytics integrations

• Observability for Azure-hosted workloads


Automation, AI & Continuous Improvement (nice to have)

• Explore and apply AI-assisted features within monitoring, event management, and SIEM tools to:

- Improve signal quality and reduce alert fatigue

- Support faster incident triage

• Contribute to documentation, runbooks, and operational improvements focused on small, incremental wins.

Knowledge Transfer & Operational Resilience

• Participate in knowledge transfer activities related to platform transitions and retirements; maintain documentation.

• Support on-call or escalation rotations as needed.


Mandatory Skills Description:

• Minimum 4–5 years of experience in infrastructure operations, monitoring, observability, or platform operations roles supporting enterprise environments.

• Hands-on experience with systems administration for Linux and Windows servers, including troubleshooting, configuration, and deployment of monitoring or management agents (e.g., SolarWinds, Datadog, Dynatrace).

• Foundational networking knowledge, including concepts such as SNMP, network monitoring, LAN/WAN fundamentals, firewalls, and telemetry collection, sufficient to support network-centric monitoring platforms like SolarWinds.

• Experience with platforms like StruxureWare is nice to have but not required.

• Experience with observability or monitoring platforms, such as SolarWinds, Datadog, Dynatrace, or similar tools, with an understanding of alerting, dashboards, and signal quality.

• Exposure to cloud environments, preferably Microsoft Azure, including familiarity with monitoring and logging concepts (e.g., cloud-based telemetry, logs, metrics, and integrations).

• Basic understanding of incident and event management practices, including alert triage, escalation, and collaboration with incident response or operations teams.

• Demonstrated willingness and ability to learn new technologies quickly, with examples of picking up new platforms, tools, or domains outside of prior core expertise.

• Familiarity with Agile or SAFe ways of working, including collaboration in sprint-based delivery models and cross-functional team engagement, is a plus.

• Strong communication and collaboration skills, with the ability to work effectively with platform owners, operations teams, security teams, and external stakeholders.

• Experience working in a modern development workflow using GitHub (branches, pull requests, code reviews, and CI/CD) to manage and deploy scripts and automation used for platform operations.

• Working proficiency in scripting languages such as PowerShell, Python, or Bash.

• Knowledge of Azure, Azure Active Directory (AD), and hybrid cloud environments is a plus.

• Exposure to SIEM concepts or platforms such as Azure Sentinel, CRIBL, or similar is a plus.

• Experience with change management practices in an enterprise IT environment is beneficial.
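As a concrete example of the scripting-for-platform-operations work the requirements above describe, here is a minimal sketch of an agent-health check: flag hosts whose monitoring agent has not checked in recently. The host names, data shape, and the ten-minute staleness threshold are illustrative assumptions, not any vendor's API.

```python
from datetime import datetime, timedelta

# Assumed threshold: an agent is "stale" after 10 minutes without a heartbeat.
STALE_AFTER = timedelta(minutes=10)

def stale_agents(heartbeats, now):
    """Return host names whose last check-in is older than STALE_AFTER.

    heartbeats: dict mapping host name -> datetime of last heartbeat.
    """
    return sorted(host for host, last_seen in heartbeats.items()
                  if now - last_seen > STALE_AFTER)

now = datetime(2024, 1, 1, 12, 0)
heartbeats = {
    "web01": now - timedelta(minutes=2),   # healthy
    "db01": now - timedelta(minutes=45),   # stale: needs attention
}
print(stale_agents(heartbeats, now))  # ['db01']
```

In practice a script like this would pull heartbeat data from the monitoring platform's API and feed its output into alerting or a runbook, and would live in GitHub behind the branch/PR/CI workflow the listing calls for.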


Nice-to-Have Skills Description:

Agile Methodologies

Lead Dotnet Developer
✦ New
Salary not disclosed
Fort Mill, SC 9 hours ago

Job Title: Tech Lead – .NET Microservices (Kafka & AWS)

Location: Fort Mill, SC (Onsite) [Local Only]

Experience: 10–12+ Years


Job Summary:

We are seeking a highly skilled Tech Lead – .NET Microservices Engineer with strong expertise in Apache Kafka and AWS cloud services. The ideal candidate will lead the design, development, and deployment of scalable, high-performance distributed systems using modern cloud-native architectures.

Additionally, the role involves driving strategic initiatives leveraging Generative AI, Terraform, and .NET Core to enhance enterprise platforms and communication frameworks.


Top Skills Required:

  • .NET Microservices
  • Apache Kafka
  • AWS Cloud Services


Technical Skills:

  • Strong experience in .NET Core / .NET Microservices architecture
  • Hands-on experience with Apache Kafka (event-driven systems, streaming)
  • Expertise in AWS services (compute, storage, serverless, integrations)
  • Experience with Terraform (Infrastructure as Code)
  • Exposure to Generative AI concepts and applications
  • Strong understanding of distributed systems and cloud-native design
  • Experience with REST APIs, messaging systems, and scalable architectures
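To ground the event-driven requirement above, here is a toy in-memory stand-in for the pattern Kafka enables: producers append events to a named topic log, and each consumer reads independently at its own offset. This is a conceptual sketch only; topic names and payloads are invented, and a real system would use a Kafka client library and a broker cluster.

```python
from collections import defaultdict

class MiniBroker:
    """Toy stand-in for a Kafka-style log: append-only topics, offset-based reads."""

    def __init__(self):
        self._topics = defaultdict(list)

    def produce(self, topic, event):
        # Append to the topic's log; existing events are never mutated.
        self._topics[topic].append(event)

    def consume(self, topic, offset):
        """Return (events after offset, new offset). Each consumer keeps its own offset."""
        log = self._topics[topic]
        return log[offset:], len(log)

broker = MiniBroker()
broker.produce("orders", {"id": 1, "status": "created"})
broker.produce("orders", {"id": 2, "status": "created"})

events, offset = broker.consume("orders", 0)       # consumer reads from the start
print(len(events), offset)                         # 2 2
events, offset = broker.consume("orders", offset)  # nothing new since last read
print(events)                                      # []
```

The key property this mimics is decoupling: producers do not know who consumes, and consumers can replay from any offset, which is what makes Kafka-based microservices resilient to consumer downtime.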


Responsibilities:

  • Lead design and development of scalable .NET microservices-based applications
  • Architect and implement event-driven solutions using Kafka
  • Deploy and manage applications on AWS cloud platforms
  • Provide technical leadership and mentor development teams
  • Implement Infrastructure as Code using Terraform
  • Drive adoption of Generative AI solutions for business use cases
  • Collaborate with cross-functional teams to align technical solutions with business goals
  • Ensure high performance, scalability, and reliability of systems
  • Monitor system performance and troubleshoot production issues
  • Analyze metrics and continuously improve system efficiency
  • Ensure adherence to best practices, security, and compliance standards


Qualifications:

  • 10–12+ years of overall IT experience
  • Strong expertise in .NET Core, Microservices, Kafka, and AWS
  • Proven experience leading enterprise-scale application development
  • Hands-on experience with cloud-native and distributed architectures
  • Familiarity with Generative AI and modern automation tools
  • Excellent communication and stakeholder management skills
Salesforce Solution Architect (Remote)
Salary not disclosed
Atlanta, Remote 2 days ago
DivIHN (pronounced “divine”) is a CMMI ML3-certified Technology and Talent solutions firm.

Driven by a unique Purpose, Culture, and Value Delivery Model, we enable meaningful connections between talented professionals and forward-thinking organizations.

Since our formation in 2002, organizations across commercial and public sectors have been trusting us to help build their teams with exceptional temporary and permanent talent.

Visit us at to learn more and view our open positions.

Please apply or call one of us to learn more. For further inquiries regarding the following opportunity, please contact our Talent Specialist, Lavanya, at (224) 369-0873.

Title: Salesforce Solution Architect (Remote)

Duration: 6 Months

Location: Remote

Only W2 candidates are eligible for this position.

Third-party or C2C candidates will not be considered.

Job Description: We are looking for a Salesforce Architect with deep, hands-on experience in the Salesforce platform.

The ideal candidate should have hands-on expertise in designing and implementing Salesforce solutions and a strong technical background.

Key Requirements: Strong experience in Salesforce with deep platform knowledge.

Experience working with Sales Cloud, Service Cloud, B2B Commerce, and Experience Cloud.

A technical background is preferred (for example, someone who started as a Salesforce Developer and moved into an Architect role).

Exposure to AI capabilities within Salesforce is a plus, as the organization is currently in the early stages of AI adoption.

Salesforce certifications are helpful and considered an advantage.

Additional Information: This individual will work with three other solution architects and report to their Application Development Director.

About us: DivIHN, the 'IT Asset Performance Services' organization, provides Professional Consulting, Custom Projects, and Professional Resource Augmentation services to clients in the Mid-West and beyond.

The strategic characteristics of the organization are Standardization, Specialization, and Collaboration.

DivIHN is an equal opportunity employer.

DivIHN does not and shall not discriminate against any employee or qualified applicant on the basis of race, color, religion (creed), gender, gender expression, age, national origin (ancestry), disability, marital status, sexual orientation, or military status.

Remote working/work at home options are available for this role.