Deploy Jobs: Remote Jobs in USA
Senior Software Engineer – Deployment & Reliability (Digital Pathology / Medical Imaging)
A fast-growing technology company operating in the digital pathology and medical imaging space is seeking a Senior Software Engineer to support the deployment, configuration, and long-term reliability of advanced imaging and AI-driven software systems.
This role sits at the intersection of software deployment, infrastructure engineering, and site reliability, ensuring complex software platforms are successfully installed, integrated with customer IT environments, and maintained at high levels of performance and stability.
You will work closely with engineering, customer support, and monitoring teams to ensure a smooth transition from system deployment to ongoing operational support while contributing to improvements that make deployments more scalable and reliable over time.
Key Responsibilities
Deployment & Configuration
- Lead end-to-end deployments of imaging, AI, and data management software systems at customer environments
- Configure and integrate servers, clusters, and storage systems within hospital or laboratory IT infrastructures
- Work with networking, authentication, storage, and security configurations to ensure successful installations
- Collaborate with field engineering teams during system installation and commissioning
- Develop standardized deployment playbooks, documentation, and validation checklists
System Reliability & Upgrades
- Manage software version rollouts, upgrades, and patching across deployed customer environments
- Work with monitoring and observability teams to track system performance and health
- Troubleshoot complex issues across multi-component systems including imaging software, AI inference pipelines, and storage layers
- Improve automation around upgrades, rollbacks, and maintenance processes
Engineering Collaboration & Continuous Improvement
- Identify recurring deployment or performance challenges and work with R&D teams to design long-term solutions
- Provide structured feedback from field deployments to improve product architecture and deployment workflows
- Validate new deployment tools, frameworks, and configuration approaches prior to wider rollout
- Contribute to improving the scalability and resilience of the overall platform
Customer IT & Cross-Functional Collaboration
- Serve as a technical liaison with customer IT teams regarding networking, infrastructure, security, and data access
- Ensure deployments comply with institutional IT policies and healthcare regulatory requirements
- Collaborate closely with support and monitoring teams to align escalation processes and root cause investigations
- Participate in post-deployment reviews to improve operational processes and reliability
Documentation & Knowledge Sharing
- Maintain detailed installation and configuration documentation
- Develop deployment guides, troubleshooting documentation, and internal knowledge resources
- Support and mentor field teams on standardized deployment and configuration practices
Requirements
- Bachelor's or Master's degree in Computer Science, Computer Engineering, or related discipline
- 5+ years of experience in software deployment, DevOps, infrastructure engineering, or systems engineering
- Strong Linux (Ubuntu) administration and scripting skills
- Experience with containerization and orchestration technologies (Docker, Kubernetes)
- Experience with database technologies such as PostgreSQL or MongoDB
- Familiarity with web service configuration (Nginx or Apache)
- Solid understanding of networking concepts including VPNs, firewalls, and authentication systems
- Ability to troubleshoot complex distributed systems across software, infrastructure, and data layers
- Strong communication and collaboration skills when working with cross-functional teams and customer IT stakeholders
Preferred Experience
- Exposure to medical imaging systems, digital pathology, or healthcare technology environments
- Familiarity with DICOM or PACS systems
- Experience deploying or supporting AI/ML models in production environments
- Experience with observability and monitoring tools (Prometheus, Grafana, ELK)
- Knowledge of regulated environments and healthcare compliance frameworks (HIPAA, GDPR, IVDR)
- Experience supporting hardware and software integrated systems
Why This Role
This position offers the opportunity to work on advanced digital pathology and imaging technologies that support clinical diagnostics and research globally. The role combines hands-on technical deployment with the chance to influence how complex systems are designed, automated, and scaled across a growing global customer base.
CSI Companies is seeking an Epic Technical Deployment Specialist to work with one of the top hospital systems in the country!
Title: Technical Deployment Specialist
Location: New York, NY
Type: Hybrid (some weeks 2-3 days onsite, other weeks 1-2 days onsite)
Duration: 12+ Month Contract
Pay: $65 - $75/hr
Hours: 40-hour work weeks (occasional OT)
Description:
Position Overview
Under the direction of EITS Application Technical Leads and project leadership, the Senior Epic Technical Deployment Consultant will support the Epic deployment at Maimonides by leading hardware planning, workstation and printer build, print mapping, and Technical Dress Rehearsal (TDR) coordination.
This role requires strong onsite presence and hands-on collaboration across technical, operational, and clinical teams to ensure deployment readiness and successful go-live execution.
Key Responsibilities
- Conduct future-state workflow walkthroughs with department leaders and stakeholders to define hardware and technical requirements.
- Translate gathered requirements into detailed deployment specifications for hardware procurement, configuration, and testing teams.
- Design, build, and validate Epic workstation and printer mappings.
- Lead coordination efforts across infrastructure, networking, desktop, application, and deployment teams to ensure readiness.
- Develop Technical Dress Rehearsal (TDR) test plans and supporting documentation.
- Track TDR progress, manage issue resolution, and provide structured status updates to leadership.
- Present technical designs and deployment concepts clearly to both technical and non-technical audiences.
- Coordinate meetings across user groups, project teams, and leadership; manage agendas, minutes, and follow-up activities.
- Ensure deployment milestones are met within timelines and resource constraints defined by the PMO.
- Maintain current Epic certification and remain aligned with Epic best practices.
Qualifications
- Active Epic Certification (any major module; current or one version prior).
- Strong understanding of Epic workstation configuration, device integration, and printing workflows.
- Experience supporting Epic deployments, go-lives, or large-scale implementations.
- Proven experience building and validating printer mapping within Epic.
- Ability to work onsite with flexibility to report within 48 hours when required.
- Excellent written and verbal communication skills.
- Highly detail-oriented with strong organizational and tracking capabilities.
- Prior experience leading Technical Dress Rehearsal (TDR) activities.
- Experience coordinating cross-functional IT teams in healthcare environments.
- Strong documentation and reporting skills for executive-level updates.
- Experience working within structured PMO governance frameworks.
Property Deployment Strategist
Location: Chicago; must be willing to travel (approximately 30%)
Employment Type: Full Time Onsite
Position Summary
The Property Deployment Strategist plays a critical role in the successful deployment, adoption, and optimization of our self-guided touring solutions across multifamily communities. This role is primarily responsible for designing, building, and configuring self-guided tours, ensuring each property delivers a seamless, intuitive, and conversion-optimized prospect experience.
Working hands-on with our proprietary software and design tools such as Figma and Canva, the Property Deployment Strategist creates and deploys tours, conducts regular on-site property visits to walk communities and validate tour paths, collaborates with client teams, monitors early performance KPIs, and executes quality assurance to support long-term adoption by leasing staff and prospects.
Key Responsibilities
Client Onboarding & Tour Deployment
- Lead end-to-end onboarding for new communities, ensuring smooth system setup, CRM/data integrations, and feature activation.
- Design self-guided tours for multifamily properties using Figma and Canva, following established templates and brand guidelines.
- Build and configure tours within our proprietary platform, ensuring all required steps, checkpoints, and configurations are completed accurately prior to launch.
- Validate tour path logic on-site, confirming a seamless, intuitive, and branded prospect experience.
- Train client leasing teams, marketing staff, and leadership on platform functionality, messaging, and adoption best practices.
- Partner with client stakeholders to customize workflows and ensure alignment with community leasing goals.
Quality Assurance, Optimization & Reporting
- Conduct post-launch quality assurance testing of tours as built, validating flow, logic, and system reliability.
- Identify and correct configuration errors, incorporating client feedback and on-site observations.
- Track and report on key performance metrics, including time-to-launch, adoption rates, CSAT, first-tour success, and feature utilization.
- Provide actionable insights and recommendations to the Client Success Manager to promote active conversions.
- Collaborate with internal teams to continuously refine the onboarding playbook based on lessons learned in the field.
Client Relationship & Collaboration
- Act as the primary onboarding liaison, building strong partnerships with client leadership and on-site teams.
- Partner with the Property Success Manager to help clients achieve maximum conversion potential by analyzing lead behavior, tour outcomes, and follow-up strategies within the first 45 days of deployment.
- Participate in onboarding check-ins, adoption reviews, and performance presentations to client stakeholders.
Key Performance Indicators (KPIs)
Success in this role is measured by:
- Implementation Efficiency: % of launches completed on time and error-free.
- Adoption Rates: % of on-site staff trained and % of features activated during onboarding.
- Client Satisfaction: CSAT scores following onboarding and QA visits.
- Conversion Potential: Prospect engagement and utilization metrics within first 45 days.
Qualifications
- 1-2 years of experience in frontline multifamily leasing preferred.
- 1+ years of client onboarding, training, project management, or implementation experience preferred (real estate technology, SaaS, or multifamily housing).
- Proven ability to manage multiple client projects with tight timelines and high accountability.
- Strong analytical skills with experience interpreting KPI dashboards and generating actionable insights.
- Excellent communication, presentation, and relationship-building skills.
- Ability to thrive in a fast-paced, travel-heavy role.
Employee Benefits
- Health Insurance
- 401(k) + company match
- Generous PTO and paid holidays
- Competitive salary and performance-based discretionary bonus
- Growth opportunities in a high-growth startup
Why Join Us
- Play a pivotal role in revolutionizing the leasing journey for thousands of prospects.
- Collaborate with innovative property management leaders and forward-thinking technology teams.
- Career growth in a rapidly scaling prop tech environment.
Employment Contingencies
- Must be legally authorized to work in the U.S. (no visa sponsorship available at this time).
- Employment contingent on background check and reference verification.
- Compliance with Illinois and New York employment laws regarding criminal history disclosure.
Director – ISP Operations and Deployment
Location: On-site | Near Wolf Trap, VA
Industry: Mission-critical network and data-center infrastructure solutions
A high-growth leader in network infrastructure solutions is expanding its presence in the Mid-Atlantic region and is seeking a driven, hands-on operations leader to launch and lead a new Inside-Plant (ISP) construction and deployment division. This is a career-defining opportunity for a Director-level operator to build a team, stand up executional processes, and play a key role in the next wave of hyperscale data center development. The role is based on-site at a cutting-edge facility just minutes from the Ashburn fiber corridor.
Key Responsibilities:
- Oversee day-to-day execution of structured cabling and inside-plant construction projects across both operational and greenfield data center campuses.
- Directly manage field teams and project resources, including technicians, foremen, and PMs, to ensure delivery on scope, safety, quality, and schedule.
- Act as a key client-facing leader for hyperscale and enterprise deployments, guiding change orders, addressing project challenges, and maintaining strong relationships.
- Build, coach, and scale a high-performing field team, while reinforcing a culture of safety, ownership, and continuous improvement.
- Develop and enforce division SOPs for ISP deployment, testing, and documentation to meet or exceed client expectations and industry codes (BICSI, ANSI/TIA, NFPA).
- Manage project budgets, workforce planning, and materials procurement with accountability to divisional revenue and margin goals.
- Support the new VP by helping lay the operational foundation for long-term growth, including hiring estimators, PMs, and other support functions.
Required Background:
- 10+ years of experience in data center construction with a strong track record in ISP (structured cabling, fiber, copper, containment, power) delivery.
- Proven ability to lead teams on-site and manage complex inside-plant deployments at scale.
- Deep knowledge of hyperscaler environments and standards, ideally with direct exposure to the Ashburn market.
- Familiar with project tracking tools such as SiteTracker, Procore, or similar.
- OSHA-30 certification required; BICSI RCDD certification preferred.
- Bachelor's degree in Construction Management, Engineering, or equivalent field experience.
Compensation & Benefits:
- Competitive base salary aligned with director-level responsibilities.
- Performance-based bonus structure tied to execution and financial metrics.
- Full benefits package including health coverage, PTO, 401(k) with match, company phone, and vehicle stipend.
Why This Role:
- Lead from the Ground Floor: Launch a division in the most critical data center market in the U.S.
- Build Your Team: Hire the field leaders and PMs you trust to deliver.
- High Visibility: Work shoulder-to-shoulder with the VP and C-suite to shape strategy and scale growth.
- Field-Driven Impact: Stay close to execution while driving operational excellence in a fast-paced, high-reward environment.
Ready to take the next step? Submit your résumé or reach out confidentially to learn more about this career-defining opportunity.
About Blue Signal:
Blue Signal is an award-winning executive search firm. Our recruiters have a proven track record of placing top-tier talent across industry verticals, with deep expertise in numerous professional services. Learn more at /46Gs4yS
Our client, a global retail company known for its supermarket chains and e-commerce platforms, is hiring a Systems Deployment Engineer to join their team in Salisbury, North Carolina. This is an initial 5-month hybrid contract opportunity.
As their Systems Deployment Engineer, you will be responsible for the planning and engineering of their systems infrastructure, including the implementation and design of both hardware and software. You will focus on implementing and supporting POS systems, working with engineering and product teams to translate business needs into technical deployments, managing rollouts, and ensuring systems are tested and functioning properly.
Contract: 5 months (possibility of extension)
Responsibilities:
- Technical SME for multiple assigned systems, services, and applications within an identified functional area
- Responsible for leading small to mid-size project solution delivery activities, including:
  - Translating business needs identified by business and/or production owners into agile stories or waterfall business requirements
  - Partnering with Solution Engineers to build technical specifications that deliver on identified business requirements and outcomes
  - Working with the business and Quality Assurance to build test cases/matrices that ensure proper testing of solutions prior to production deployment
  - Executing assigned tasks during System Unit review and the build turnover process to QA
- Partners with Engineers, Product Teams, and business groups to deliver standard to complex configuration changes and routine operational changes for the services/applications within established standards
- Partners with Engineers, Developers, Suppliers, and Product teams to determine integration needs, design improvements, and design patterns
- Takes on small to medium projects from start to finish and works independently on these efforts with minimal direction required
- Works on problems of moderate to complex scope where analysis of situations or data requires a review of a variety of factors
Required Qualifications:
- 3 to 5 years of overall experience
- POS knowledge: NCR POS EX (2.0) or NCR POS Emerald 1.0
- Strong analytical skills
- Strong Excel skills
- Strong communication skills
- Knowledge of SQL
- Batch scripting
- Experience managing projects and deployments
- Wireless Android application experience
- API knowledge
- Experience working through projects with little supervision; must be a self-starter
- Hardware lab environment work
BioTalent is partnering with a leading life sciences manufacturer to appoint an Associate Director, Lean Deployment, to lead and elevate the organisation's Continuous Improvement strategy across its New Castle, Delaware operations.
This is a high-impact role responsible for driving a culture of sustainable change by developing, embedding, and championing Lean and Continuous Improvement methodologies. The successful candidate will collaborate closely with site leadership and a global continuous improvement peer network to improve manufacturing and back-office processes while shaping long-term Lean strategy and deployment.
Key Responsibilities
- Partner with site leadership to develop and execute a site-wide Continuous Improvement roadmap within the organisation's Lean Operating System.
- Lead transformation initiatives across critical operational areas.
- Facilitate Structured Problem Solving and Value Stream Mapping to guide teams through analysis, planning, and implementation.
- Build and enhance tiered visual and daily management systems that enable effective operational oversight.
- Plan and facilitate kaizen events that drive measurable, sustainable improvements.
- Identify and eliminate waste across transactional and manufacturing processes to increase efficiency and reduce cost.
- Deliver both formal and informal training on Lean and CI tools including Daily Management, Problem Solving, 6S, SMED, Kanban, OEE, and line efficiency.
- Coach and develop employees at all levels to expand Lean capability and CI mindsets.
- Challenge existing processes to elevate performance and drive continuous, sustainable improvement.
- Support improvements across other sites or functions as needed.
Qualifications
- 10+ years of progressive experience in a manufacturing environment.
- Bachelorβs degree required; advanced degree preferred.
- Proven ability to engage leaders and shop-floor teams in Lean deployment.
- Demonstrated history of delivering sustainable results through CI initiatives.
- Practical experience in Lean Manufacturing and the deployment of a Lean Operating System.
- Strong knowledge of value stream improvement tools (e.g., SMED, 6S, Visual Management, Daily Management, standard work).
- Lean/Six Sigma Black Belt certification or equivalent preferred.
- Strong leadership presence with the ability to influence at all levels.
- Proficiency in advanced statistical and Six Sigma techniques is an advantage.
- PMP certification or similar project management credentials preferred.
- Skilled in Microsoft Office and Visio.
- Excellent communication, facilitation, coaching, and problem-solving skills.
Reach out to for more information.
Type: Contract
Duration: 2 months to start
Job Description
- Responsibilities will include assisting with Rover deployments leading up to Epic Wave 2: deploying the phones in designated areas, deploying while rounding through the hospitals, deploying docks for shared devices, and providing charging cabinet demonstrations and education.
Responsibilities
- Provides first-line end user support for mobile device issues and hands-on support for device depot operations.
- Assists users with device setup, troubleshooting, and enrollment while also handling receiving, kitting, imaging, and shipping of mobile devices.
- Maintains physical device inventory and performs basic configuration tasks under supervision.
- Provide first-line end user support for mobile device issues including connectivity, email configuration, app access, and basic troubleshooting via walk-up, phone, and ticketing channels.
- Assist end users with initial device setup, enrollment, and onboarding including email, VPN, and enterprise application configuration.
- Enroll and configure devices in the MDM/UEM platform following established procedures; escalate complex issues to senior staff.
- Receive, unbox, and inventory mobile devices; perform kitting, imaging, and preparation per standard configurations.
- Process device returns, perform data wipes, and prepare devices for redeployment or disposal.
- Maintain accurate inventory records in asset management systems.
- Package and ship devices to end users and remote locations according to established processes.
- Operate with direct guidance; work assignments are generally straightforward and routine.
- Perform related duties as required.
Qualifications
- High School Diploma or equivalent required.
- 0–2 years of relevant experience required.
- Basic familiarity with mobile devices (iOS, Android) preferred.
Florida Solar Energy Center:
The Florida Solar Energy Center (FSEC) is the state's premier energy research institution. Created by the Florida Legislature in 1975 to advance research, development, and education in solar energy, FSEC's focus includes renewable energy, energy efficiency, and sustainable transportation research, demonstration, and education. FSEC is administered by the University of Central Florida and is located in Cocoa, Florida.
The Opportunity:
FSEC invites applications for an Undergraduate Summer Research Internship focused on low-carbon methanol production from fugitive methane resources. This internship will take place at M2X Energy Inc. (Rockledge, Florida), where the intern will collaborate with both researchers and industry engineers. All research and hands-on activities will occur at this off-campus sponsor facility.
Responsibilities:
This program focuses on supporting activities related to the process development, optimization, and design of portable, modular gas-to-methanol technologies in partnership with M2X Energy Inc., which involves, but is not limited to:
Development and review of Process Flow Diagrams (PFDs) and Piping & Instrumentation Diagrams (P&IDs).
Creation and support of CAD drawings for equipment layouts, skid assemblies, and process components.
Design, assembly, and experimentation with benchtop instruments, including data acquisition and analysis.
Basic process control support and maintenance of engineering and manufacturing documentation.
Assembly, modification, and operation of flow reactors and adsorption testing systems (such as breakthrough rigs) to generate data for pressure swing adsorption (PSA) modeling and validation.
Participation in chemical laboratory experiments and processes, including but not limited to testing adsorbent materials to enhance methane recovery from renewable natural gas streams.
Preparation of written and oral reports summarizing experimental results, processes, and findings.
Minimum Qualifications:
Enrollment in, or recent completion of, an engineering or science undergraduate program, and basic proficiency in one or more of the following: chemical engineering, mechanical engineering, electrical engineering, or chemistry with scientific lab work.
Preferred Qualifications:
Experience with process diagrams and CAD software (AutoCAD, SolidWorks, or similar) and interpreting or drafting Piping and Instrumentation Diagrams (P&ID).
Hands-on experience in bench-scale instrumentation development and experimentation.
Familiarity with piping assembly, fittings, and gas cylinder handling.
Track record of detailed documentation of experimental procedures and SOPs, plus good data management skills.
Familiarity with data processing, analysis, and computing tools such as Excel, Origin, MATLAB, and Python.
Interest in manufacturing, process engineering, and clean technology processes.
Hands-on aptitude and comfort working in a shop, laboratory, and/or manufacturing environment.
Strong communication skills when working with other technical staff, external partners, equipment/instrument manufacturers, etc.
Special Instructions to the Applicants:
This internship requires on-site participation at M2X Energy's site located at 270 Barnes Boulevard, Rockledge, FL 32955.
Desired starting date: Early May 2026.
Applicant must be authorized to work for any U.S. employer, as sponsorship is not available for this position now or in the future.
For questions regarding the position, please contact Dr. Francis Chukwunta () and/or Dr. Veronica Rigo ( ).
Are you ready to unleash YOUR potential?
As a next-generation public research university and Forbes-ranked top employer in Florida, we are a community of thinkers, doers, creators, innovators, healers, and leaders striving to create broader prosperity and help shape a better future. No matter what your role is, when you join Knight Nation, you'll play an integral role at one of the most impactful universities in the country. You'll be met with opportunities to connect and collaborate with talented faculty, staff, and students across 12 colleges and multiple campuses, engaging in impactful work that makes a positive difference. Your time at UCF will provide you with many meaningful opportunities to grow; you'll work alongside talented colleagues on complex projects that will challenge you and help you gain new skills, and you'll have countless rewarding experiences that go well beyond a paycheck.
Are Benefits Important to You?
State Benefits eligibility for OPS employees is subject to criteria established by the State of Florida. The state's benefits administrator, People First, determines eligibility and coordinates enrollment. If this position becomes eligible for state benefits, the employee will be notified directly by People First. OPS positions are not entitled to paid time off.
Unless explicitly stated on the job posting, it is UCF's expectation that an employee of UCF will reside in Florida as of the date the employment begins.
Additional Requirements related to Research Positions:
Pursuant to Florida State Statute 1010.35, prior to offering employment to certain individuals in research-related positions, UCF is required to conduct additional screening. Applicants subject to additional screening include any citizen of a foreign country who is not a permanent resident of the U.S., or who is a citizen or permanent resident but is affiliated with or has had at least 1 year of education, employment, or training in China, Cuba, Iran, Russia, North Korea, Syria, or Venezuela.
The additional screening requirements only apply to research-related positions, including, but not limited to, faculty, graduate positions, individuals compensated by research grants or contract funds, postdoctoral positions, undergraduate positions, visiting assistant professors, and visiting research associates.
Department: Office of Research - Florida Solar Energy Center (FSEC) - OPS
Hours of Work: Full time
Work Schedule: Monday through Friday, 8:00 am to 5:00 pm
Type of Appointment: Fixed Term
Hourly Rate: $18.00 to negotiable
As a Florida public university, the University of Central Florida makes all application materials and selection procedures available to the public upon request.
UCF is proud to be a smoke-free campus and an E-Verify employer.
If an accommodation due to a disability is needed to apply for this position, please call or email .
For general application or posting questions, please email .
DEPLOY has been retained to find a Reporting & Data Architect Lead, a role that combines advanced reporting development with enterprise-level data governance and architectural leadership. In this role, you will own our client's enterprise reporting platform: designing robust Power BI solutions, managing shared data models, and ensuring the reporting environment remains secure, scalable, and high-performing.
You will also own our client's enterprise reporting standards and governance framework, ensuring reporting across all departments is consistent, trusted, and aligned with best practices. This includes defining reporting conventions, reviewing changes, onboarding departmental report creators, and stewarding enterprise reporting assets such as certified datasets and endorsed reports.
At the enterprise level, you will architect our client's data framework: defining how data is structured, named, documented, and shared across ERP, operational, manufacturing, and corporate systems. You will own the enterprise data dictionary, the centralized semantic model, and key architectural decisions around Microsoft Fabric and other data tooling. This role interacts frequently with executives to align data strategy with organizational growth and reporting needs.
Key Responsibilities
Enterprise Reporting (Hands-On Development)
- Build, optimize, and maintain enterprise-grade Power BI reports, dashboards, datasets, and data models.
- Develop and govern shared semantic models and reusable datasets that power enterprise-wide reporting.
- Use Microsoft Fabric, Dataverse, and related ETL/data management tools to shape and integrate reporting data sources.
- Manage dataset refresh schedules, performance tuning, workspace organization, gateway configuration, and reporting system reliability.
- Implement row-level security (RLS), workspace access patterns, and enterprise reporting permissions (Responsible, with the Director of Technology Accountable).
- Manage reporting governance artifacts including certified datasets, endorsed reports, and enterprise workspace standards.
- Support reporting scalability as our client grows (new factories, new business units, new product lines).
Enterprise Reporting Standards & Governance
- Own our client's enterprise reporting standards framework, covering naming conventions, modeling patterns, documentation practices, lifecycle management, visual design standards, and change control.
- Govern reporting development and deployment across the organization to ensure consistency and prevent duplicate or conflicting models.
- Review and approve reporting change requests, data model modifications, and access requests.
- Lead documentation and enablement for departmental report creators through training, guidance, and structured onboarding.
- Provide strategic direction around reporting maturity, sustainability, and enterprise alignment.
Enterprise Data Architecture
- Design and maintain our client's enterprise data architecture framework across ERP, operational, manufacturing, and corporate systems.
- Own the enterprise data dictionary, defining canonical field names, table structures, business definitions, and version control practices.
- Build and govern the centralized semantic model that powers reporting across the company.
- Advise and strongly influence enterprise-level decisions around Microsoft Fabric, data modeling strategy, and long-term architectural direction, and own the work that follows those decisions.
- Collaborate with engineering and system owners to coordinate schema changes, data integrations, and cross-system alignment.
Leadership & Collaboration
- Partner with C-suite and senior leaders to define reporting roadmaps, enterprise priorities, and data strategy.
- Communicate complex architectural concepts in clear, business-friendly terms.
- Lead cross-functional initiatives that require unified data structures or scalable reporting.
- Apply automation (Power Automate, Fabric pipelines) and AI tools to improve reporting efficiency, data quality, and governance workflows.
Ideal Candidate Profile
- Deep hands-on expertise with Power BI, Microsoft Fabric, data modeling, and cloud data platforms.
- Track record of establishing and enforcing enterprise reporting standards and governance.
- Strong architectural intuition: semantic modeling, master data definition, cross-system alignment, and scalable design.
- Able to operate as both an individual contributor and a strategic leader.
- Experience managing reporting governance artifacts (certified datasets, endorsed reports, workspace strategy).
- Comfortable influencing architectural decisions and guiding technical execution.
- Strong command of foundational tools and languages such as:
- DAX
- Power Query / M
- SQL
- Fabric pipelines / ETL tooling
- Experience with automation and AI-assisted analytics workflows.
Company Description
PG Forsta is the leading experience measurement, data analytics, and insights provider for complex industries, a status we earned over decades of deep partnership with clients to help them understand and meet the needs of their key stakeholders. Our earliest roots are in U.S. healthcare, perhaps the most complex of all industries. Today we serve clients around the globe in every industry to help them improve the Human Experiences at the heart of their business. We serve our clients through an unparalleled offering that combines technology, data, and expertise to enable them to pinpoint and prioritize opportunities, accelerate improvement efforts, and build lifetime loyalty among their customers and employees.
Like all great companies, our success is a function of our people and our culture. Our employees have world-class talent, a collaborative work ethic, and a passion for the work that have earned us trusted advisor status among the world's most recognized brands. As a member of the team, you will help us create value for our clients, you will make us better through your contribution to the work and your voice in the process. Ours is a path of learning and continuous improvement; team efforts chart the course for corporate success.
Our Mission:
We empower organizations to deliver the best experiences. With industry expertise and technology, we turn data into insights that drive innovation and action.
Our Values:
To put Human Experience at the heart of organizations so every person can be seen and understood.
- Energize the customer relationship: Our clients are our partners. We make their goals our own, working side by side to turn challenges into solutions.
- Success starts with me: Personal ownership fuels collective success. We each play our part and empower our teammates to do the same.
- Commit to learning: Every win is a springboard. Every hurdle is a lesson. We use each experience as an opportunity to grow.
- Dare to innovate: We challenge the status quo with creativity and innovation as our true north.
- Better together: We check our egos at the door. We work together, so we win together.
Duties & Responsibilities
Design and implement processes, systems and automation to streamline the development and deployment of AI solutions.
Architect robust, reliable solutions for specific AI applications using appropriate cloud-based and open source technologies.
Design and automate data pipelines to deliver complex data products to power training and online inference of AI systems.
Deploy ML models, LLMs and GenAI systems into production, ensuring reliability, efficiency, and scalability across cloud or hybrid environments.
Build and maintain robust CI/CD pipelines tailored to ML model lifecycle management, ensuring a streamlined and agile deployment process.
Monitor model performance, identify potential improvements, and integrate feedback loops for continuous learning and adaptation.
Integrate models with chat interfaces and conversational platforms to create responsive, user-centric applications.
Investigate and implement agent-based architectures that support conversational intelligence and interaction modeling.
Collaborate with cross-functional teams to design AI-driven features that enhance user experience and interaction within chat interfaces.
Work closely with data scientists, product managers, and engineers to ensure alignment on project goals, data requirements, and system constraints.
Mentor junior engineers and provide guidance on best practices in ML model development, deployment, and maintenance.
Create and maintain comprehensive documentation for model architectures, code implementations, data workflows, and deployment procedures to ensure reproducibility, transparency, and ease of collaboration.
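The monitoring and feedback-loop duties above can be pictured with a small sketch: a rolling-accuracy monitor that flags drift when recent prediction quality degrades. The window size and threshold below are illustrative assumptions, not values prescribed by the role:

```python
# Minimal sketch of a model-performance feedback loop: track accuracy over
# a rolling window of recent predictions and flag drift when it falls
# below a threshold. Window and threshold values are assumed for illustration.
from collections import deque

class AccuracyMonitor:
    def __init__(self, window: int = 100, threshold: float = 0.9):
        self.outcomes = deque(maxlen=window)  # True = correct prediction
        self.threshold = threshold

    def record(self, predicted, actual) -> None:
        """Record whether one prediction matched the observed outcome."""
        self.outcomes.append(predicted == actual)

    def rolling_accuracy(self) -> float:
        if not self.outcomes:
            return 1.0  # no evidence yet; treat as healthy
        return sum(self.outcomes) / len(self.outcomes)

    def drift_detected(self) -> bool:
        return self.rolling_accuracy() < self.threshold
```

A production system would typically feed such a signal into alerting and retraining pipelines rather than act on it directly.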
Technical Skills
Experience with large-scale deployment tools and environments, including Docker, Kubernetes, and cloud platforms like AWS, Azure, or GCP.
Experience deploying and managing a variety of database technologies.
Experience deploying ML models at scale and optimizing models for low-latency, high-availability environments.
Strong programming skills in Python and proficiency in libraries such as NumPy, Pandas, and Scikit-learn.
Experience with data pipelines, ETL processes, and experience with distributed data frameworks like Apache Spark or Dask.
Familiarity with machine learning frameworks such as TensorFlow, PyTorch, and Hugging Face Transformers.
Knowledge of conversational AI, agent-based systems, and chat interface development.
Proven track record in deploying and maintaining ML and AI solutions in a production setting.
Experience with version control (e.g., Git) and CI/CD tools tailored to ML workflows.
Experience with MLOps.
Experience with Databricks is a plus.
Qualifications
Minimum Qualifications
5+ years of experience in platform engineering with a focus on data and ML systems.
Bachelor's degree in Computer Science, Engineering, Data Science, or a related field.
Don't meet every single requirement? Studies have shown that women and people of color are less likely to apply to jobs unless they meet every single qualification. At Press Ganey we are dedicated to building a diverse, inclusive, and authentic workplace, so if you're excited about this role but your past experience doesn't align perfectly with every qualification in the job description, we encourage you to apply anyway. You may be just the right candidate for this or other roles.
Additional Information for US based jobs:
Press Ganey Associates LLC is an Equal Employment Opportunity/Affirmative Action employer committed to a diverse workforce. We do not discriminate against any employee or applicant for employment because of race, color, sex, age, national origin, religion, sexual orientation, gender identity, veteran status, disability, or any other federal, state, or local protected class.
Pay Transparency Non-Discrimination Notice - Press Ganey will not discharge or in any other manner discriminate against employees or applicants because they have inquired about, discussed, or disclosed their own pay or the pay of another employee or applicant. However, employees who have access to the compensation information of other employees or applicants as a part of their essential job functions cannot disclose the pay of other employees or applicants to individuals who do not otherwise have access to compensation information, unless the disclosure is (a) in response to a formal complaint or charge, (b) in furtherance of an investigation, proceeding, hearing, or action, including an investigation conducted by the employer, or (c) consistent with the contractor's legal duty to furnish information.
The expected base salary for this position ranges from $100,000 to $140,000. It is not typical for offers to be made at or near the top of the range. Salary offers are based on a wide range of factors including relevant skills, training, experience, education, and, where applicable, licensure or certifications obtained. Market and organizational factors are also considered. In addition to base salary and a competitive benefits package, successful candidates are eligible to receive a discretionary bonus or commission tied to achieved results.
All your information will be kept confidential according to EEO guidelines.
Our privacy policy can be found here: legal-privacy/
Contract - Tier 2 Help Desk Technician
12 Months ++
W-2: Includes Benefits, PTO
Medix is looking for a seasoned Tier 2 Help Desk Technician to support a public healthcare system serving Harris County, Texas, operating multiple hospitals, specialty clinics, and community care locations across the Houston area.
We are seeking technical deployment resources who can assist with endpoint setup, infrastructure readiness, and device validation for new and remodeled facilities. These resources will play a critical role in ensuring technology infrastructure is ready prior to operational go-live for facility projects across the system.
Daily Responsibilities:
- Install and configure desktop computers, laptops, and endpoint devices
- Deploy and configure printers, scanners, and peripheral hardware
- Support device deployments across multiple healthcare facilities
- Assist with workstation configuration and endpoint validation
- Troubleshoot hardware, operating system, and connectivity issues
- Validate that devices are properly connected to network resources and enterprise systems
- Support device testing activities prior to operational go-live
- Work alongside vendor installation teams during infrastructure deployments
- Escalate issues that may impact deployment timelines or system functionality
- Document device installations and support activities as needed
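As a rough illustration of the endpoint-validation work described above, here is a minimal Python sketch of a pre-go-live device checklist. The required field names are hypothetical stand-ins, not taken from the actual system:

```python
# Hypothetical pre-go-live device validation sketch: confirm each deployed
# endpoint record carries the fields a go-live checklist would require.
# Field names are invented for illustration.
REQUIRED_FIELDS = {"hostname", "ip_address", "location", "joined_domain"}

def validate_device(record: dict) -> list[str]:
    """Return a sorted list of required fields that are missing or falsy."""
    return sorted(f for f in REQUIRED_FIELDS if not record.get(f))
```

An empty result means the device record passes the checklist; anything else is a list of items to remediate before go-live.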
Public - Required Skills
- 4+ years of experience in desktop support, field services, or IT deployment roles, preferably in a hospital setting
- Experience installing and troubleshooting PC hardware and endpoint devices
- Experience configuring printers and peripheral devices
- Strong troubleshooting skills related to hardware and operating systems
- Ability to work onsite across multiple locations
- Experience supporting large-scale device deployments or infrastructure rollouts
- Strong communication and collaboration skills
Public - Preferred Skills
- Experience supporting healthcare IT environments
- Experience supporting Epic or other EHR environments
- Experience participating in Epic Technical Dress Rehearsal (TDR) or go-live support
- Experience supporting technology deployments during facility openings or relocations
- Experience coordinating work with vendor installation teams
- CompTIA A+ or Network+ certifications
RESPONSIBILITIES:
* Architect, design, and maintain scalable CI/CD pipelines using Azure/AWS DevSecOps.
* Build and optimize Docker-based microservices, images, and deployment pipelines.
* Lead deployments across Docker Swarm, Kubernetes/EKS, and multi-location environments.
* Develop infrastructure automation using Ansible, Bash scripting, Terraform, and Git-based workflows.
* Manage release pipelines using container registries, artifact feeds, template pipelines, and multi-stage workflows.
* Design multi-environment strategies for dev, QA, staging, and production deployment.
* Implement cloud-native services with AWS & Azure cloud platforms.
* Implement basic security practices, including IAM roles, secrets management, and access controls.
* Develop secure, modular, reusable build and release systems.
* Work closely with full-stack engineering teams (Angular, Java, Python, backend APIs, database engineers).
* Mentor junior DevOps engineers and lead DevOps roadmap decisions.
KNOWLEDGE REQUIREMENTS:
DevOps Expertise:
Azure DevOps pipelines, YAML templating, CI/CD strategy, Git branching models.
Containerization & Orchestration:
Docker images, Docker Compose, Docker Swarm, multi-node/multi-location deployments.
Cloud Technologies:
Azure deployments & infrastructure, AWS (IAM, Lambda, S3, CloudWatch).
Programming / Scripting Languages:
Python, Bash, Linux/Unix administration, awk, shell automation, Groovy.
Infrastructure Automation:
Ansible playbooks, tasks/roles, inventory design, configuration management.
Distributed Deployment Architecture:
Multi-site replication, node selection by IP, dynamic service routing.
Database Stack Experience:
PostgreSQL, MySQL, MariaDB operations & migrations.
Observability & Logging:
CloudWatch monitoring, log collection, Prometheus, Grafana, reporting & metrics.
Version Control & Build Systems:
Azure DevOps, Git, Git submodules, artifact storage, registry solutions, secrets management.
Nice to have: AI knowledge/experience and a willingness to learn.
EDUCATION & EXPERIENCE REQUIREMENTS
* BS degree in Electrical/Computer Engineering, Computer Science, or a related field. MS preferred.
* 7+ years of experience in a software DevOps/development/test capacity with enterprise server, storage, or networking products.
When our customers are looking for consultative IT expertise, where else would they turn but to the company driving human progress through technology? Our Solutions Architecture team within Professional Services are specialists in package customization and integration as well as total, end-to-end solutions in targeted industry segments. After detailed consultation with our customers and careful analysis, we develop new IT systems or replace existing systems that support customers' strategic, operational and financial goals.
Join us to do the best work of your career and make a profound social impact as a Principal Engineer, Solutions Architect - Liquid Cooling (RDHx & CDU Systems) on our Solutions Architecture Team in Austin, Texas, or remote in the United States (with the ability to travel to customer locations).
What you'll achieve
We are seeking a Senior Solutions Architect (SA) specializing in Data Center Liquid Cooling, with emphasis on rear-door heat exchangers (RDHx) and cooling distribution units (CDUs). This is a new capability within our organization, and you will serve as our primary expert in liquid-based thermal solutions for high-density compute environments.
In this post-sales architecture role, you will work directly with customers to understand their requirements, design full liquid cooling solutions, support field teams during deployments, and help shape our internal best practices around liquid cooling technologies.
You will:
Customer Engagement & Post-Sales Architecture
Lead technical discovery sessions with customers to gather thermal, environmental, mechanical, and operational requirements
Design end-to-end liquid cooling solutions featuring rear-door heat exchangers, CDUs, manifolds, hoses, fittings, and facility water loop considerations
Produce detailed solution designs including schematics, BOMs, system flow diagrams, thermal performance expectations, and installation guidelines
Translate customer requirements into scalable, supportable, and reliable architectures
Deployment & Field Support
Support field engineering teams during installation, commissioning, and validation of RDHx and CDU systems
Provide technical oversight on-site or remotely, ensuring the deployed system aligns with the approved design
Troubleshoot flow rates, pressure drops, temperature deltas, coolant quality, sensor behavior, and CDU operational parameters
Assist with acceptance testing, monitoring configurations, and integration with facility cooling infrastructure
Internal SME & Cross-Functional Collaboration
Serve as the internal authority on liquid cooling within engineering, operations, sales, and product
Develop internal documentation: reference architectures, best practices, safety guidelines, and deployment playbooks
Train field teams and adjacent groups unfamiliar with liquid cooling practices
Collaborate with OEMs and vendors to stay aligned with the latest RDHx and CDU technologies
Practice Development
Define standards and repeatable processes for liquid cooling implementations.
Contribute to service offerings that support deployment, maintenance, and ongoing optimization of liquid cooling systems
Help shape long-term strategy and roadmap as our liquid cooling practice grows
Take the first step towards your dream career
Every Dell Technologies team member brings something unique to the table. Hereβs what we are looking for with this role:
Essential Requirements
7+ years in data center infrastructure, solutions architecture, mechanical/thermal engineering, or HPC environments
Hands-on experience with: Rear-door heat exchangers (enclosed or active systems)
Cooling Distribution Units (CDUs), Secondary Fluids Network (SFN) design and fabrication
Coolant loop design, manifolds, flexible hose routing, connectors, drip-less fittings, sensors, etc.
Rack and system level liquid cooling technologies, including multiple cooling loops, and direct-to-chip cooling
Strong understanding of: Heat transfer, thermodynamics, and fluid mechanics
Facility water loops and integration points
Flow balancing, delta-T analysis, and pump performance curves
Leak detection and safety best practices
Familiarity with data center power/cooling concepts (rack-level thermals, airflow management).
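The delta-T analysis called out above reduces to the sensible-heat relation Q = m·cp·ΔT. A back-of-the-envelope Python sketch, assuming nominal water properties (density of roughly 1 kg/L and cp of roughly 4.186 kJ/kg·K; real secondary loops often use glycol mixes with different properties):

```python
# Back-of-the-envelope delta-T analysis for a water loop: heat removed (kW)
# from volumetric flow (L/min) and coolant temperature rise (K).
# Nominal water properties are assumed; glycol mixes differ.
WATER_DENSITY_KG_PER_L = 1.0
WATER_CP_KJ_PER_KG_K = 4.186

def heat_removed_kw(flow_l_per_min: float, delta_t_k: float) -> float:
    """Q = mass_flow * cp * delta_T, with flow converted from L/min to kg/s."""
    mass_flow_kg_s = flow_l_per_min * WATER_DENSITY_KG_PER_L / 60.0
    return mass_flow_kg_s * WATER_CP_KJ_PER_KG_K * delta_t_k
```

For example, a loop moving 100 L/min with a 10 K rise rejects roughly 70 kW, which is the kind of quick sanity check used when sizing an RDHx or CDU against rack load.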
Architecture & Customer-Facing Skills
Experience conducting customer requirement gathering and converting needs into detailed solution architectures
Ability to write clear architectural documentation, diagrams, and BOMs
Comfort supporting deployments handsβon and resolving technical issues in the field
Ability to travel to customer locations
Soft Skills
Excellent communication and customer engagement skills
Ability to simplify complex engineering topics for non-technical audiences
Self-starter comfortable defining processes and building practice maturity
Working cross-functionally across different teams
Desirable Requirements
Bachelor's degree or higher in Mechanical Engineering, Thermal Engineering, or a similar field
Proficiency with Thermal Simulation tools (ANSYS Icepak, FloTHERM, FloEFD, ANSYS Mechanical)
Experience with high-density compute environments (AI/ML, HPC, GPU racks)
Data center-related certifications (CDCP, CDCS, DCEP Generalist, DCEP-HVAC Specialist, etc.)
Familiarity with RDHx and CDU vendors such as Vertiv, Schneider Electric, Motivair, Liebert, Rittal, nVent, CoolIT, etc.
Knowledge of monitoring and control systems (Modbus, BACnet, SNMP, CDU controllers)
Compensation: Dell is committed to fair and equitable compensation practices. The base salary range for this role is $170,850 to $221,100.
Benefits and Perks of working at Dell Technologies
Your life. Your health. Supported by your benefits. You can explore the overall benefits experience that awaits you as a Dell Technologies team member right now at
Who we are
We believe that each of us has the power to make an impact. That's why we put our team members at the center of everything we do. If you're looking for an opportunity to grow your career with some of the best minds and most advanced tech in the industry, we're looking for you.
Dell Technologies is a unique family of businesses that helps individuals and organizations transform how they work, live and play. Join us to build a future that works for everyone because Progress Takes All of Us.
Dell Technologies is committed to the principle of equal employment opportunity for all employees and to providing employees with a work environment free of discrimination and harassment. Read the full Equal Employment Opportunity Policy here.
Job ID: R286406
We are seeking a highly skilled Senior Splunk Enterprise Security Engineer to join our dynamic Security Engineering & Architecture team in Irving, TX. This is an exciting opportunity to lead the deployment, optimization, and administration of our Splunk Enterprise Security (ES) platform within a cloud-based environment. The ideal candidate will have extensive hands-on experience with Splunk ES, cloud security platforms, and a deep understanding of SIEM operations at an enterprise scale. This role offers a chance to work in a complex, high-volume environment and make a significant impact on the organization's security infrastructure.
Job Title: Senior Splunk Security Engineer
Job Location: Irving, TX 75063
Job Duration: 12 Months with possible extension
Comments: The candidate must have hands-on Splunk expertise, with a minimum of 5+ years specific to Splunk Enterprise Security.
Additional Skills: Splunk Enterprise Security Administrator, Splunk Cloud Administrator.
Job Description:
We are seeking a Senior Splunk Enterprise Security Engineer to join our Security Engineering & Architecture team in Irving, TX. In this high-impact individual contributor role, you will own the deployment, optimization, and day-to-day administration of our Splunk Enterprise Security (ES) platform across a cloud-based environment supporting one of the largest retail operations in the country.
You will be the go-to subject matter expert for Splunk ES, partnering with SOC analysts, threat intelligence teams, compliance stakeholders, and IT leadership to ensure our security monitoring platform delivers maximum visibility, reliability, and value. This is a hands-on, technically deep role for someone who thrives in complex, high-volume environments and takes pride in building resilient security infrastructure.
Responsibilities:
- Lead the end-to-end administration of Splunk Enterprise Security across a cloud-hosted (AWS/Azure/GCP) deployment, including architecture decisions, capacity planning, performance tuning, and version upgrades.
- Design, implement, and maintain ES frameworks including notable event configurations, risk-based alerting, asset and identity correlation, and threat intelligence integrations.
- Develop and optimize correlation searches, dashboards, and investigation workflows to reduce alert fatigue and accelerate analyst response times.
- Drive data source onboarding and ensure CIM (Common Information Model) compliance for new and existing log sources across the enterprise.
- Partner with compliance teams to ensure Splunk ES configurations directly support PCI DSS, SOX, and NIST CSF audit and reporting requirements.
- Establish and maintain health monitoring for the Splunk environment, including search performance, indexing throughput, forwarder connectivity, and license utilization.
- Create and maintain operational documentation, runbooks, and knowledge base articles for Splunk ES administration and troubleshooting.
- Serve as the escalation point for complex Splunk issues and participate in incident response efforts during critical security events as needed.
- Evaluate and recommend new Splunk apps, add-ons, and integrations that strengthen the organization's security posture.
- Collaborate with Security Architecture peers to align Splunk ES capabilities with the broader security tooling ecosystem and long-term technology roadmap.
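The risk-based alerting responsibility above follows a simple pattern: accumulate risk per entity and raise a notable only when a threshold is crossed, which is what reduces alert fatigue. A hedged Python sketch of the idea; the field names echo Splunk's risk_object/risk_score convention, but the code is illustrative and not SPL:

```python
# Sketch of the risk-based alerting idea behind Splunk ES: rather than
# alerting on every event, sum risk scores per entity and surface only
# entities whose accumulated risk crosses a threshold. Event data here
# is invented for illustration.
from collections import defaultdict

def entities_over_threshold(events: list[dict], threshold: int) -> set[str]:
    """Return entities whose total risk score meets or exceeds the threshold."""
    totals: dict[str, int] = defaultdict(int)
    for e in events:
        totals[e["risk_object"]] += e["risk_score"]
    return {entity for entity, total in totals.items() if total >= threshold}
```

In Splunk ES itself, the equivalent is a risk incident rule running a tstats/stats aggregation over the risk index rather than application code.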
Required Skills:
- 5+ years of hands-on experience with Splunk platform administration, with significant depth in Splunk Enterprise Security.
- Active Splunk certifications required: Splunk Enterprise Certified Admin and/or Splunk ES Certified Admin.
- Proven experience managing Splunk deployments in cloud environments (AWS, Azure, or GCP).
- Deep understanding of security monitoring, log management, SIEM operations, and event correlation at enterprise scale.
- Working knowledge of PCI DSS, SOX, and NIST CSF compliance frameworks and how they translate into SIEM use cases and reporting requirements.
- Strong SPL (Search Processing Language) proficiency, including complex statistical commands, lookups, macros, and data models.
- Experience with Splunk infrastructure components: indexers, search heads, heavy/universal forwarders, deployment servers, and cluster management.
- Excellent communication skills with the ability to translate complex technical concepts for non-technical stakeholders.
Desired Skills:
- Experience in large-scale retail or similarly complex, high-transaction-volume environments.
- Familiarity with Splunk SOAR (formerly Phantom) and security automation/orchestration workflows.
- Background in detection engineering, threat hunting, or SOC operations.
- Additional certifications such as CISSP, GIAC (GCIA, GCIH), or cloud security credentials (AWS Security Specialty, AZ-500).
- Experience with Infrastructure as Code (Terraform, Ansible) for Splunk deployment management.
- Scripting proficiency in Python, Bash, or PowerShell for automation and custom integrations.
At Rite-Hite, your work makes an impact. As the global leader in loading dock and door equipment, we design and deliver solutions that keep our customers safe, secure, and productive. Here, you'll find innovation, stability, and the chance to grow your career as part of a team that's always looking ahead.
Rite-Hite is a global leader in industrial safety and efficiency solutions. As part of our digital transformation journey, we are seeking a Systems Architect, Edge AI/ML to help build and scale intelligent systems that deliver real-time insights, automation support, and customer value at the edge of industrial environments.
The Systems Architect, Edge AI/ML is responsible for defining and guiding the architecture for artificial intelligence and machine learning deployed on devices and edge platforms. This role ensures that AI/ML solutions are scalable, reliable, secure, and aligned with enterprise product and platform objectives.
The architect partners closely with product management, device and edge software teams, data and cloud teams, and digital solution leaders to translate business and customer requirements into robust AI/ML foundations that enable consistent behavior, efficient lifecycle management, and high-quality user experiences across industrial environments.
CORE RESPONSIBILITIES
Edge AI/ML Architecture Strategy
- Define and maintain reference architectures, design patterns, and standards for deploying AI and ML models on edge devices and platforms.
- Establish common approaches for data ingestion, feature pipelines, model inference, performance optimization, and lifecycle management at the edge.
- Ensure architectural decisions support real-time operation, scalability, reliability, safety, and security.
- Drive the strategic use of AI and ML at the edge by establishing repeatable patterns for model development, deployment, monitoring, and update.
- Own a portion of the internal technology radar related to AI/ML, edge analytics, and sensing technologies.
- Foster a culture of responsible, ethical, and transparent AI practices aligned with cybersecurity, privacy, and safety requirements.
- Partner with product management and engineering teams to enable AI-driven capabilities such as condition monitoring, vision-based sensing, anomaly detection, and predictive insights.
- Guide architectural decisions related to edge compute constraints, sensor integration, and real-time inference workloads.
- Promote reuse and consistency of AI/ML components across products and platforms.
- Define patterns for integration between edge AI/ML solutions, device software, hybrid mobile applications, and enterprise or cloud-based platforms.
- Ensure solutions support online, offline, and hybrid deployment models common in industrial environments.
- Promote interoperability with internal systems and approved third-party technologies.
- Define secure-by-design and safety-aligned principles for AI/ML model deployment and operation at the edge.
- Ensure alignment with applicable safety, quality, and industrial cybersecurity standards.
- Participate in architecture and design reviews to assess risk, resilience, and compliance.
- Collaborate across product management, device and edge software, data science, cloud platforms, manufacturing, service, and mobile application teams.
- Provide technical leadership, mentorship, and guidance to engineers and teams working with AI/ML technologies.
- Serve as a technical advisor in strategic customer conversations and enterprise implementations when appropriate.
- This role does not have direct reports but is expected to provide technical leadership and architectural guidance across multiple teams.
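The condition-monitoring and anomaly-detection capabilities mentioned above are often built from lightweight statistics that fit edge compute constraints. A minimal sketch, assuming a rolling z-score over sensor readings (the window size and threshold are illustrative, not a Rite-Hite design):

```python
# Minimal edge-side anomaly detector using a rolling z-score.
# Window size and z-threshold are illustrative assumptions.
from collections import deque
import statistics

class RollingAnomalyDetector:
    def __init__(self, window: int = 20, z_threshold: float = 3.0):
        self.samples = deque(maxlen=window)
        self.z_threshold = z_threshold

    def update(self, value: float) -> bool:
        """Return True if `value` is anomalous relative to recent history."""
        if len(self.samples) >= 2:
            mean = statistics.mean(self.samples)
            stdev = statistics.stdev(self.samples)
            is_anomaly = stdev > 0 and abs(value - mean) / stdev > self.z_threshold
        else:
            is_anomaly = False  # not enough history yet
        self.samples.append(value)
        return is_anomaly

detector = RollingAnomalyDetector(window=10, z_threshold=3.0)
readings = [10.0, 10.1, 9.9, 10.2, 10.0, 9.8, 10.1, 50.0]
flags = [detector.update(r) for r in readings]
print(flags)  # only the final spike is flagged
```

A detector like this runs in constant memory per sensor, which is what makes it viable on resource-constrained devices before any heavier ML model is involved.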
QUALIFICATIONS
Education & Experience
- Bachelor's degree in Computer Science, Software Engineering, Electrical Engineering, Data Science, or a related technical field required; Master's degree preferred.
- 8+ years of experience delivering AI/ML-enabled solutions, with demonstrated experience deploying models in edge or resource-constrained environments.
- Experience with industrial, connected, or safety-critical systems preferred.
Knowledge / Skill Requirements
- Strong understanding of AI/ML architectures, model lifecycle management, and real-time inference at the edge.
- Experience designing systems that balance performance, accuracy, latency, and resource constraints.
- Ability to think systemically across sensors, devices, edge platforms, mobile applications, and cloud services.
- Strong communication skills and ability to influence cross-functional stakeholders.
- Familiarity with industrial automation, IoT ecosystems, or computer vision is a plus.
Leadership & Collaboration:
- Visionary technical leader who can inspire cross-functional teams and align stakeholders to a shared product vision.
- Exceptional communicator across technical and non-technical audiences.
- Committed to data-driven decision-making and delivery excellence.
Why Join Rite-Hite Digital Solutions?
As a technology leader within a globally trusted industrial brand, you will play a pivotal role in shaping how AI and machine learning are applied safely and effectively in real-world environments. Your work will help customers improve safety, productivity, and uptime through intelligent, edge-driven solutions that deliver measurable operational impact.
What We Offer
At Rite-Hite, we take care of our people - because when you're supported, you can do your best work. Our benefits are designed to support your health, your future and your life outside of work:
Health & Well-being: Comprehensive medical, dental, and vision coverage, plus life and disability insurance. A robust well-being program with an opportunity to receive an extra day off and more.
Financial Security: A strong retirement savings program with 401(k), company match, and profit sharing.
Time for You: Paid holidays, vacation time, and personal/sick days each year.
Join us and build a career where you're supported - at work and beyond.
Rite-Hite is proud to be an Equal Opportunity Employer. We consider all qualified applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, veteran status, or any other protected characteristic under federal, state, or local law. In accordance with VEVRAA, we are committed to providing equal employment opportunities for protected veterans. We are also committed to maintaining a drug-free workplace for the safety of our employees and customers.
Granite delivers advanced communications and technology solutions to businesses and government agencies throughout the United States and Canada. We provide exceptional customized service with an emphasis on reliability and outstanding customer support, and our customers include over 85 of the Fortune 100. Granite has over $1.85 billion in revenue with more than 2,100 employees and is headquartered in Quincy, MA. Our mission is to be the leading telecommunications company wherever we offer services, and to provide an environment where the value of each individual is recognized and each person has the opportunity to further their growth and achieve success.
Granite has been recognized by the Boston Business Journal as one of the "Healthiest Companies" in Massachusetts for the past 15 consecutive years.
Our offices have onsite, fully equipped, state-of-the-art gyms available to employees at no cost.
Granite's philanthropy is unparalleled with over $300 million in donations to organizations such as Dana Farber Cancer Institute, The ALS Foundation and the Alzheimer's Association to name a few.
We have been consistently rated a "Fastest Growing Company" by Inc. Magazine.
Granite was named to Forbes List of America's Best Employers 2022, 2023 and 2024.
Granite was recently named One of Forbes Best Employers for Diversity.
Our company's insurance package includes health, dental, vision, life, disability coverage, 401K retirement with company match, childcare benefits, tuition assistance, and more.
If you are a highly motivated individual who wants to grow your career with a fast-paced and progressive company, Granite has countless opportunities for you.
EOE/M/F/Vets/Disabled
Job Description
We are seeking a highly skilled and experienced Lead AI Engineer to join our dynamic team. The ideal candidate will excel at identifying and articulating complex business problems, and will develop innovative, scalable, and robust AI/ML solutions to address these challenges. Responsibilities will include designing, building, and deploying enterprise-grade AI systems, specifically focused on:
- Agentic AI solutions to automate operational processes (e.g., interpreting trouble tickets, performing basic troubleshooting, interacting with online portals, data entry).
- Retrieval-Augmented Generation (RAG) and ColBERTv2 pipelines for parsing, indexing, and querying enterprise documents to facilitate answers related to process guidelines, product knowledge, and training materials.
- Function calling solutions leveraging Large Language Models (LLMs) to automate and perform precise actions in enterprise workflows.
- Developing and applying reinforcement learning strategies to optimize and automate decision-making processes within enterprise operations.
This role requires hands-on expertise with model fine-tuning, training pipelines, post-training optimization techniques (e.g., model distillation), classification models, and integrating AI systems within complex enterprise environments.
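The function-calling pattern referenced above follows a common shape: the LLM emits a tool name plus JSON arguments, and application code validates and dispatches the call. A hedged sketch of that dispatch layer (the tool, its schema, and the simulated model output are illustrative assumptions; in production the tool call comes back from the LLM API):

```python
# Sketch of the LLM function-calling dispatch pattern: the model emits a
# tool name plus JSON arguments; a dispatcher runs the matching function.
# Tool names and the simulated model output are illustrative assumptions.
import json

def create_ticket(summary: str, priority: str = "normal") -> dict:
    """Hypothetical enterprise action exposed to the model as a tool."""
    return {"status": "created", "summary": summary, "priority": priority}

TOOLS = {"create_ticket": create_ticket}

def dispatch(tool_call: str) -> dict:
    """Parse a model-emitted tool call and execute the matching function."""
    call = json.loads(tool_call)
    fn = TOOLS.get(call["name"])
    if fn is None:
        raise ValueError(f"unknown tool: {call['name']}")
    return fn(**call.get("arguments", {}))

# Simulated model output (in production this comes from the LLM API):
model_output = '{"name": "create_ticket", "arguments": {"summary": "VPN outage", "priority": "high"}}'
print(dispatch(model_output))
# {'status': 'created', 'summary': 'VPN outage', 'priority': 'high'}
```

Keeping the registry explicit (rather than letting the model call arbitrary code) is what makes the "precise actions in enterprise workflows" requirement auditable.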
Duties and Responsibilities
- Develop and implement AI solutions leveraging fine-tuned Large Language Models (e.g., OpenAI models, LLaMA, Mistral).
- Design, develop, and optimize Retrieval-Augmented Generation (RAG) pipelines using advanced vector databases (e.g., FAISS, Pinecone, Milvus).
- Build and enhance agentic AI systems utilizing frameworks like LangChain, AutoGPT, or similar automation frameworks.
- Deploy scalable ColBERTv2 architectures for semantic retrieval and classification.
- Create robust pre-processing and post-processing pipelines to enhance model performance, accuracy, and interpretability.
- Collaborate closely with cross-functional teams, including product managers, business stakeholders, data scientists, and software engineers.
- Implement best practices in model distillation, quantization, and optimization for deployment in production environments.
- Ensure compliance with enterprise-grade security, privacy standards, and data governance practices.
- Provide leadership and mentorship to team members, supporting their technical development and career growth through coaching, training, and performance feedback.
- Drive timely and successful completion of AI/ML projects by setting clear milestones, tracking progress, removing blockers, and aligning resources
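At its core, the retrieval step of the RAG pipelines listed above ranks documents by vector similarity to a query. Real systems use learned embeddings and a vector store such as FAISS or Pinecone; in this self-contained sketch, simple word-count vectors stand in for embeddings so the ranking logic is visible:

```python
# Minimal sketch of the retrieval step in a RAG pipeline. Word-count
# vectors stand in for learned embeddings; production systems would use
# an embedding model and a vector store (e.g., FAISS, Pinecone).
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy stand-in for an embedding model: bag-of-words counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

documents = [
    "escalation process for priority one trouble tickets",
    "employee travel reimbursement policy",
    "product knowledge base for router configuration",
]

def retrieve(query: str, docs: list, k: int = 1) -> list:
    """Return the top-k documents ranked by similarity to the query."""
    return sorted(docs, key=lambda d: cosine(embed(query), embed(d)), reverse=True)[:k]

print(retrieve("how do I escalate a trouble ticket", documents))
```

The retrieved passages would then be injected into the LLM prompt as grounding context, which is the "augmented generation" half of the pattern.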
Required Qualifications
- Bachelor's degree in Computer Science, Data Science, Machine Learning, AI, or related fields; advanced degree strongly preferred.
- 5+ years of proven experience developing and deploying production-grade AI/ML systems.
- Strong programming skills in Python, familiarity with libraries/frameworks such as PyTorch, TensorFlow, Hugging Face, and LangChain.
- Demonstrated expertise with LLM fine-tuning (e.g., LoRA, PEFT), distillation, and model optimization.
- Practical experience implementing RAG pipelines with embedding technologies and vector stores (e.g., FAISS, Pinecone).
- Proven track record building agentic AI systems capable of interacting with multiple enterprise applications and platforms.
- Solid understanding of NLP techniques, Transformer architectures, semantic search, and document retrieval technologies (e.g., ColBERT).
- Hands-on experience with reinforcement learning techniques, including designing, training, and deploying reinforcement learning models.
Preferred Qualifications:
- Master's or Ph.D. in Computer Science, Machine Learning, Artificial Intelligence, or related field.
- Familiarity with cloud-based AI services (e.g., AWS SageMaker, Azure ML, Google Vertex AI).
- Experience with containerization (Docker, Kubernetes) and deployment pipelines (CI/CD).
- Knowledge of advanced AI frameworks and model inference engines such as Triton Inference Server, TensorRT, and ONNX.
- Familiarity with model monitoring, observability tools, and techniques to ensure long-term reliability and performance.
- Strong communication and interpersonal skills with the ability to clearly articulate complex technical solutions to non-technical stakeholders.
- Experience in regulated industries or environments requiring strict compliance and data governance standards.
Position Title: Applied AI Systems Engineer
Location: Orange County, California (Hybrid)
Reports To: Head of Operations
Position Summary
This role is responsible for architecting, building, and deploying a production-grade AI operating system that automates core workflows across leasing, property management, accounting, construction coordination, and asset management.
The engineer will design and implement AI agents, document intelligence systems, and workflow automation pipelines that reduce manual processing, improve accuracy, and increase operational scalability across a commercial real estate portfolio.
This position requires strong systems thinking, rigorous technical execution, and the ability to translate complex operational processes into reliable automation.

Core Objectives
- Build an internal AI platform that automates high-volume operational workflows
- Reduce manual processing time and administrative overhead
- Improve accuracy, speed, and decision visibility across departments
- Establish scalable systems that support portfolio growth without proportional staffing increases
Primary Responsibilities
AI Platform Architecture & Development
- Design and deploy AI agents to automate operational and administrative workflows
- Build LLM-powered systems for document review, data extraction, and decision support
- Develop retrieval-based systems leveraging leases, financial data, contracts, and SOPs
- Implement evaluation, monitoring, and continuous improvement frameworks
Lease & Document Intelligence Automation
- Build tools to extract key lease terms, obligations, and risk clauses
- Automate lease abstraction and document comparison workflows
- Develop compliance and deadline tracking systems
- Enable searchable knowledge retrieval across lease and legal documents
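Lease-term extraction of the kind described above often pairs an LLM with deterministic patterns for the fields that follow predictable language. A rule-based sketch under that assumption (the clause text and field patterns are hypothetical):

```python
# Illustrative rule-based lease-term extraction. The field patterns and
# sample clause are hypothetical; production document-intelligence
# systems typically combine patterns like these with LLM review.
import re

LEASE_PATTERNS = {
    "base_rent": re.compile(r"base rent of \$([\d,]+(?:\.\d{2})?)", re.I),
    "term_years": re.compile(r"term of (\d+) years?", re.I),
    "renewal_notice_days": re.compile(r"renewal notice.*?(\d+) days", re.I),
}

def extract_terms(clause: str) -> dict:
    """Pull structured fields out of free-text lease language."""
    terms = {}
    for field, pattern in LEASE_PATTERNS.items():
        match = pattern.search(clause)
        if match:
            terms[field] = match.group(1)
    return terms

clause = ("Tenant shall pay base rent of $12,500.00 per month for a term of 5 years. "
          "Renewal notice must be delivered at least 180 days before expiration.")
print(extract_terms(clause))
# {'base_rent': '12,500.00', 'term_years': '5', 'renewal_notice_days': '180'}
```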
Leasing & Asset Management Automation
- Automate LOI comparison and deal workflow summaries
- Build dashboards summarizing tenant performance, lease milestones, and risk exposure
- Support market intelligence and tenant prospecting research
- Develop underwriting support and reporting tools
Property Management & Financial Workflow Automation
- Automate CAM reconciliation data processing and variance detection
- Streamline tenant reporting and communication workflows
- Track vendor contracts, compliance deadlines, and service obligations
- Extract and structure financial data from operational documents
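The CAM reconciliation variance detection mentioned above can be reduced to comparing budgeted against actual expense lines and flagging categories that drift beyond a tolerance. A simple sketch; the categories and the 10% threshold are illustrative assumptions:

```python
# Sketch of CAM reconciliation variance detection: flag expense
# categories whose actual-vs-budget variance exceeds a threshold.
# Categories and the 10% threshold are illustrative assumptions.

def flag_variances(budgeted: dict, actual: dict, threshold: float = 0.10) -> dict:
    """Return {category: variance_ratio} for lines over the threshold."""
    flagged = {}
    for category, budget in budgeted.items():
        spent = actual.get(category, 0.0)
        if budget == 0:
            continue  # avoid division by zero; zero-budget lines need separate handling
        ratio = (spent - budget) / budget
        if abs(ratio) > threshold:
            flagged[category] = round(ratio, 3)
    return flagged

budgeted = {"landscaping": 20000.0, "security": 50000.0, "janitorial": 30000.0}
actual = {"landscaping": 23500.0, "security": 51000.0, "janitorial": 29000.0}
print(flag_variances(budgeted, actual))
# {'landscaping': 0.175}
```

Flagged lines would feed the tenant reporting workflow rather than replace human review, since CAM disputes turn on lease language as much as arithmetic.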
Data Infrastructure & Knowledge Systems
- Structure internal documents and data for AI retrieval and automation
- Build document ingestion, indexing, and retrieval pipelines
- Implement vector search and knowledge retrieval systems
- Maintain data integrity, access control, and auditability
Systems Integration & Deployment
- Integrate AI tools with property management, accounting, CRM, and document platforms
- Deploy systems within secure cloud environments
- Implement logging, monitoring, performance, and cost controls
- Ensure reliability and scalability of deployed systems
Collaboration & Implementation
- Translate operational workflows into technical automation solutions
- Work directly with leadership to prioritize automation opportunities
- Train teams and implement adoption workflows
- Establish standards for responsible and secure AI usage
Required Qualifications
- Bachelor's or advanced degree in Computer Science, Engineering, Mathematics, Statistics, or a related quantitative discipline
- Demonstrated success in a rigorous academic or research environment
- 3 to 7+ years building production software, automation systems, or applied AI solutions
- Strong Python development and API integration experience
- Experience working with structured and unstructured data
- Experience deploying systems in cloud environments
- Strong understanding of system architecture and data pipelines
- Exceptional analytical and problem-solving ability
Preferred Qualifications
- Experience building document intelligence or contract analysis systems
- Experience with retrieval systems and vector databases
- Experience automating financial or operational workflows
- Experience integrating AI into business operations environments
- Experience in real estate, finance, logistics, or operations-heavy industries
- Evidence of research, technical publications, competitive programming, or open-source contributions
Technical Environment (Representative)
- Python and API-based architectures
- LLM platforms and agent orchestration frameworks
- Cloud infrastructure (AWS, Azure, or GCP)
- SQL and vector databases
- Workflow orchestration and automation tools
- Version control, logging, and monitoring systems
Success Metrics
Performance in this role will be evaluated by:
- Reduction in manual administrative workload
- Automation coverage across operational workflows
- Accuracy and reliability of AI-driven outputs
- Adoption and usage across departments
- Operational efficiency gains and cost reductions
Work Environment
- Hybrid work model with in-person collaboration in Orange County
- Direct collaboration with executive leadership and operational teams
- High autonomy in system architecture and implementation decisions