Telus Digital AI Data Solutions Jobs in USA
13,218 positions found — Page 2
Duration: 12 Months (Temp to Hire)
Location: Newark, NJ 07102
Job Description:
Are you interested in building capabilities that enable the organization with innovation, speed, agility, scalability and efficiency? When you join our organization at Prudential, you'll unlock an exciting and impactful career - all while growing your skills and advancing your profession at one of the world's leading financial services institutions.
As a Data Scientist on the US Businesses PruAdvisors Data Science Team, you will partner with Machine Learning Engineers, Data Engineers, Business Leaders, and other professionals to build GenAI and ML models that improve the advisor experience, perform lead scoring, and increase sales revenue. You will implement AI and machine learning models that deliver stability, scalability, and integration with other advisor products and services. You will implement capabilities to solve sophisticated business problems and deploy innovative products, services, and experiences to delight our customers. In addition to deep technical expertise and experience, you will bring excellent problem-solving, communication, and teamwork skills, along with agile ways of working, strong business insight, an inclusive leadership attitude, and a continuous learning focus to all that you do.
Responsibilities:
- Provide deep technical leadership to a portfolio of high-impact data science initiatives involving sales and advisor experience. Identify the optimal sets of data, models, training, and testing techniques required for successful product delivery. Remove complex technical impediments.
- Leverage your experience and skills to identify new opportunities where data science and AI can improve experiences, gain efficiencies, and generate sales.
- Manage team members in AI/ML and model development, testing, training, and tuning. Apply hands-on experience to ensure best-in-class model development. Mentor team members in technical skill development and product ownership.
- Communicate clearly and concisely, in writing and verbally, all facets of model design and development. Continuously look for insights in models developed and generate new ideas for model improvement.
- Manage external vendors in the execution of parts of the data science development process as needed.
- Leverage continuous integration and continuous deployment best practices, including test automation and monitoring, to ensure successful deployment of ML models and application code on Prudential's AI/ML platform.
- Bring a deep understanding of relevant and emerging technologies, give technical direction to team members and embed learning and innovation in the day-to-day.
- Work on significant and unique issues where analysis of situations or data requires an evaluation of intangible variables and may impact future concepts, products or technologies.
- Familiarity with Python, SQL, AWS, and JIRA.
- Familiarity with LLMs, deployment of LLMs, RAG, LangChain, LangGraph, and Agentic AI concepts.
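For illustration, the retrieval step of a RAG pipeline of the kind listed above can be sketched in plain Python. Everything here is hypothetical: the documents, query, and prompt template are invented, and a production system would use learned embeddings and a vector store rather than bag-of-words similarity.

```python
# Toy retrieval-augmented generation (RAG) sketch: rank a small document
# store against a query using bag-of-words cosine similarity, then assemble
# the top hit into a prompt for an LLM. All data here is illustrative.
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    """Bag-of-words term counts for a piece of text."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = vectorize(query)
    return sorted(docs, key=lambda d: cosine(q, vectorize(d)), reverse=True)[:k]

docs = [
    "Advisors can reset client portal passwords from the admin console.",
    "Lead scores are refreshed nightly from the CRM pipeline.",
    "Quarterly sales reports are archived in the shared drive.",
]
context = retrieve("how are lead scores updated", docs, k=1)
prompt = f"Answer using only this context:\n{context[0]}\nQuestion: how are lead scores updated?"
```

In a real deployment the `retrieve` step would be backed by a vector index and the assembled prompt sent to an LLM; the grounding-in-context pattern is the same.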
The Skills and expertise you bring:
- A degree in Applied Statistics, Computer Science, or Engineering, or experience in related fields, with a focus on machine learning, AI, and LLMs.
- Junior-level industry experience with responsibility for developing and delivering advanced quantitative, AI/ML, analytical, and statistical solutions.
- Ability to lead a small team with minimal guidance and effectively leverage diverse ideas, experiences, thoughts and perspectives to the benefit of the organization to deliver AI products.
- Ability to influence business stakeholders and to drive adoption of AI/ML solutions.
- Experience with agile development methodologies, Test-Driven Development (TDD), and product management.
- Knowledge of business concepts, tools, and processes needed for making sound decisions in the context of the company's business.
- Demonstrated ability to mentor and operationally manage a data science team based on project requirements, resourcing requirements, and planning dependencies as appropriate; anticipates risks and bottlenecks and proactively takes action.
- Excellent problem-solving, communication, collaboration, and stakeholder-management skills.
- Significant experience and/or deep expertise with several of the following:
- Machine Learning and AI: Understanding of machine learning theory, including the mathematics underlying machine learning algorithms. Expertise in the application of machine learning theory to building, training, testing, interpreting and monitoring machine learning models. Expertise in traditional machine learning models (unsupervised, XGBoost, etc.) and Large Language Models (OpenAI, Claude).
- Model Deployment: Understanding of model development life cycle, CI/CD/CT pipelines (using tools like Jenkins, CloudBees, Harness, etc.), A/B testing, and pipeline frameworks such as AWS SageMaker, and newer AWS/Azure Agentic AI infrastructure products.
- Data Acquisition and Transformation: Acquiring data from disparate data sources using APIs and SQL. Transforming data using SQL and Python. Visualizing data using a diverse tool set including but not limited to Python.
- Database Management Systems: Knowledge of how databases are structured and function in order to use them efficiently. May include multiple data environments, cloud/AWS, primary and foreign key relationships, table design, database schemas, etc.
- Data Analysis and Insights: Analyzing structured and unstructured data using data visualization, manipulation, and statistical methods to identify patterns, anomalies, relationships, and trends.
- Programming Languages: Python and SQL
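As a toy illustration of the lead-scoring work this posting describes: a score can be produced by passing a weighted sum of activity features through the logistic function. The feature names and weights below are invented for the sketch; a production model would be trained from data (e.g., with XGBoost), not hand-set.

```python
# Hypothetical lead-scoring sketch: combine a few advisor-activity features
# into a probability-like score with a hand-set logistic model.
# Feature names and weights are invented for illustration only.
import math

WEIGHTS = {"emails_opened": 0.8, "meetings_held": 1.5, "days_since_contact": -0.05}
BIAS = -2.0

def lead_score(features: dict[str, float]) -> float:
    """Score in (0, 1) via the logistic (sigmoid) function."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# An engaged lead scores high; a stale one scores near zero.
hot = lead_score({"emails_opened": 3, "meetings_held": 2, "days_since_contact": 1})
cold = lead_score({"emails_opened": 0, "meetings_held": 0, "days_since_contact": 90})
```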
The Data Protection Software Engineering team delivers next-generation data protection and data availability enhancements and new products for a changing world. Working at the cutting edge, we design and develop software to protect data hosted across On-Prem, Public Cloud, Hybrid Cloud - all with the most advanced technologies, tools, software engineering methodologies and the collaboration of internal and external partners. Join us as a Software Principal Engineer on our Engineering Development team in Hopkinton, Massachusetts Development Center to do the best work of your career and make a profound social impact.
What you’ll achieve
As a Software Principal Engineer, you will develop next-generation cyber resiliency and data protection software for Dell's Data Protection team. You will be responsible for developing sophisticated software systems and solutions safeguarding enterprise-level customer data against data loss, cyber threats, and ransomware attacks, while driving AI-powered solutions for enhanced cyber resiliency.
You will:
Develop next-generation products and have the opportunity to shape the best client technologies in the world
Contribute to the design and architecture of high-quality, complex systems and software/storage environments
Contribute to the development and implementation of test strategies for complex software products and systems
Prepare, review, and evaluate software specifications based on product requirements; contribute to designs and implement them as product features, with a specific focus on device and serviceability aspects of client platforms
Take the first step towards your dream career
Every Dell Technologies team member brings something unique to the table. Here’s what we are looking for with this role:
Essential Skills:
5-8 years of software development experience working in an Agile SDLC; Bachelor's or Master's in Computer Science
C/C++, Golang, Win32/Storage APIs, Windows/Linux/Unix programming; experience with Windows, Linux, and AIX operating systems, systems programming, networking, file systems, and block layers
Strong understanding of CPU architecture, multi-threaded environments, concurrency, databases, storage technologies, and the stack and I/O data path; hands-on exposure to AI technologies and proficient usage of AI tools across all facets of the SDLC
Experience in the Data Protection domain, scalable architecture, and virtualization platforms such as ESXi, Hyper-V, and other hypervisors; excellent code-detective and root cause analysis skills on a variety of platforms and languages
Experience gathering feature requirements and developing and designing applications that interact closely with the business; excellent problem-solving and multitasking skills
Quality-first mindset and the attitude to take full ownership of delivery from development to unit tests to end-to-end tests; adaptable in picking up new technologies and curious to drive innovation
Profiling and benchmarking techniques; good communication and technical leadership abilities to communicate designs effectively and mentor junior engineers
Desired Skill:
Experience with operating system clusters, database clusters, device drivers, and system architecture such as SCSI, cache, and message subsystems
Knowledge of AI/ML, GenAI and prompt engineering, knowledge of cloud application security & gateways
Compensation
Dell is committed to fair and equitable compensation practices. The salary range for this position is $150k - $194k.
Benefits and Perks of working at Dell Technologies
Your life. Your health. Supported by your benefits. You can explore the overall benefits experience that awaits you as a Dell Technologies team member — right now at
Who we are
We believe that each of us has the power to make an impact. That’s why we put our team members at the center of everything we do. If you’re looking for an opportunity to grow your career with some of the best minds and most advanced tech in the industry, we’re looking for you.
Dell Technologies is a unique family of businesses that helps individuals and organizations transform how they work, live and play. Join us to build a future that works for everyone because Progress Takes All of Us.
Dell Technologies is committed to the principle of equal employment opportunity for all employees and to providing employees with a work environment free of discrimination and harassment. Read the full Equal Employment Opportunity Policy here.
#LI-Onsite
Job ID: R282864
The Data Protection Software Engineering team delivers next-generation data protection and data availability enhancements and new products for a changing world. Working at the cutting edge, we design and develop software to protect data hosted across On-Prem, Public Cloud, Hybrid Cloud - all with the most advanced technologies, tools, software engineering methodologies and the collaboration of internal and external partners. Join us as a Software Principal Engineer on our Engineering Development team in Hopkinton, Massachusetts, Development Center to do the best work of your career and make a profound social impact.
What you’ll achieve
As a Software Principal Engineer, you will develop next-generation cyber resiliency and data protection software for Dell's Data Protection team. You will be responsible for developing sophisticated software systems and solutions safeguarding enterprise-level customer data against data loss, cyber threats, and ransomware attacks, while driving AI-powered solutions for enhanced cyber resiliency.
You will:
Develop next-generation products and have the opportunity to shape the best client technologies in the world
Contribute to the design and architecture of high-quality, complex systems and software/storage environments
Contribute to the development and implementation of test strategies for complex software products and systems
Prepare, review, and evaluate software specifications based on product requirements; contribute to designs and implement them as product features, with a specific focus on device and serviceability aspects of client platforms
Take the first step towards your dream career
Every Dell Technologies team member brings something unique to the table. Here’s what we are looking for with this role:
Essential Skills:
8-12 years of software development experience working in an Agile SDLC; Bachelor's or Master's in Computer Science
C/C++, Golang, Win32/Storage APIs, Windows/Linux/Unix programming; experience with Windows, Linux, and AIX operating systems, systems programming, networking, file systems, and block layers
Strong understanding of CPU architecture, multi-threaded environments, concurrency, databases, storage technologies, and the stack and I/O data path; hands-on exposure to AI technologies and proficient usage of AI tools across all facets of the SDLC
Experience in the Data Protection domain, scalable architecture, and virtualization platforms such as ESXi, Hyper-V, and other hypervisors; excellent code-detective and root cause analysis skills on a variety of platforms and languages
Experience gathering feature requirements and developing and designing applications that interact closely with the business; excellent problem-solving and multitasking skills
Quality-first mindset and the attitude to take full ownership of delivery from development to unit tests to end-to-end tests; adaptable in picking up new technologies and curious to drive innovation.
Profiling and benchmarking techniques; good communication and technical leadership abilities to communicate designs effectively and mentor junior engineers
Desired Skill:
Experience with operating system clusters, database clusters, device drivers, and system architecture such as SCSI, cache, and message subsystems
Knowledge of AI/ML, GenAI and prompt engineering, knowledge of cloud application security & gateways
Compensation
Dell is committed to fair and equitable compensation practices. The salary range for this position is $179k - $231k.
Benefits and Perks of working at Dell Technologies
Your life. Your health. Supported by your benefits. You can explore the overall benefits experience that awaits you as a Dell Technologies team member — right now at
Who we are
We believe that each of us has the power to make an impact. That’s why we put our team members at the center of everything we do. If you’re looking for an opportunity to grow your career with some of the best minds and most advanced tech in the industry, we’re looking for you.
Dell Technologies is a unique family of businesses that helps individuals and organizations transform how they work, live and play. Join us to build a future that works for everyone because Progress Takes All of Us.
Dell Technologies is committed to the principle of equal employment opportunity for all employees and to providing employees with a work environment free of discrimination and harassment. Read the full Equal Employment Opportunity Policy here.
#LI-Onsite
Job ID: R284177
Sr. Data Engineer (Hybrid)
Chicago, IL
The American Medical Association (AMA) is the nation's largest professional association of physicians and a non-profit organization. We are a unifying voice and powerful ally for America's physicians, the patients they care for, and the promise of a healthier nation. To be part of the AMA is to be part of our Mission to promote the art and science of medicine and the betterment of public health.
At AMA, our mission to improve the health of the nation starts with our people. We foster an inclusive, people-first culture where every employee is empowered to perform at their best. Together, we advance meaningful change in health care and the communities we serve.
We encourage and support professional development for our employees, and we are dedicated to social responsibility. We invite you to learn more about us and we look forward to getting to know you.
We have an opportunity at our corporate offices in Chicago for a Sr. Data Engineer (Hybrid) on our Information Technology team. This is a hybrid position reporting into our Chicago, IL office, requiring 3 days a week in the office.
As a Sr. Data Engineer, you will play a key role in implementing and maintaining AMA's enterprise data platform to support analytics, interoperability, and responsible AI adoption. This role partners closely with platform engineering, data governance, data science, IT security, and business stakeholders to deliver high-quality, reliable, and secure data products. This role contributes to AMA's modern lakehouse architecture, optimizing data operations and embedding governance and quality standards into engineering workflows. This role serves as a senior technical contributor within the team, providing mentorship to junior engineers and implementing engineering best practices within the data platform function, in alignment with architectural direction set by leadership.
RESPONSIBILITIES:
Data Engineering & AI Enablement
- Build and maintain scalable data pipelines and ETL/ELT workflows supporting analytics, operational reporting, and AI/ML use cases.
- Implement best-practice patterns for ingestion, transformation, modeling, and orchestration within a modern lakehouse environment (e.g., Databricks, Delta Lake, Azure Data Lake).
- Develop high-performance data models and curated datasets with strong attention to quality, usability, and interoperability; create reusable engineering components and automation.
- Collaborate with the Architecture Team, the Data Platform Lead, and federated IT teams to optimize storage, compute, and architectural patterns for performance and cost efficiency.
- Build model-ready datasets and feature pipelines to support AI/ML use cases; serve as a technical coordination point supporting business units' AI-related infrastructure needs.
- Collaborate with data scientists and the AI Working Group to operationalize models responsibly and maintain ongoing monitoring signals.
Governance, Quality & Compliance
- Embed data governance, metadata standards, lineage tracking, and quality controls directly into engineering workflows, ensuring sound technical implementation and alignment.
- Work with the Data Governance Lead and business stakeholders to operationalize stewardship, classification, validation, retention, and access standards.
- Implement privacy-by-design and security-by-design principles, ensuring compliance with internal policies and regulatory obligations.
- Maintain documentation for pipelines, datasets, and transformations to support transparency and audit requirements.
Platform Reliability, Observability & Optimization
- Monitor and troubleshoot pipeline failures, performance bottlenecks, data anomalies, and platform-level issues.
- Implement observability tooling, alerts, logging, and dashboards to ensure end-to-end reliability.
- Support cost governance by optimizing compute resources, refining job schedules, and advising on efficient architecture.
- Collaborate with the Data Platform Lead on scaling, configuration management, CI/CD pipelines, and environment management.
- Collaborate with business units to understand data needs, translate them into engineering requirements, and deliver fit-for-purpose data solutions; share and apply best practices and emerging technologies within assigned initiatives.
- Work with IT Security and Legal/Compliance to ensure platform and datasets meet risk and regulatory standards.
Staff Management
- Lead, mentor, and provide management oversight for staff.
- Set objectives, evaluate employee performance, and foster a collaborative team environment.
- Develop staff knowledge and skills to support career development.
May include other responsibilities as assigned
REQUIREMENTS:
- Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field preferred; equivalent work experience with an HS diploma/equivalent education required.
- 5+ years of experience in data engineering within cloud environments.
- Experience in people management preferred.
- Demonstrated hands-on experience with modern data platforms (Databricks preferred).
- Proficiency in Python, SQL, and data transformation frameworks.
- Experience designing and operationalizing ETL/ELT pipelines, orchestration workflows (Airflow, Databricks Workflows), and CI/CD processes.
- Solid understanding of data modeling, structured/unstructured data patterns, and schema design.
- Experience implementing governance and quality controls: metadata, lineage, validation, stewardship workflows.
- Working knowledge of cloud architecture, IAM, networking, and security best practices.
- Demonstrated ability to collaborate across technical and business teams.
- Exposure to AI/ML engineering concepts, feature stores, model monitoring, or MLOps patterns.
- Experience with infrastructure-as-code (Terraform, CloudFormation) or DevOps tooling.
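As a minimal sketch of the governance-aware pipeline pattern these requirements describe: extract records, validate them against quality rules, and load only clean rows while quarantining failures for stewardship review. The schema and validation rules below are invented for illustration.

```python
# Toy ETL-with-quality-checks sketch: parse raw CSV rows, validate each
# record against simple rules, and split clean vs. quarantined records.
# The schema, sample data, and rules are hypothetical.
import csv
import io

RAW = """member_id,specialty,state
1001,Cardiology,IL
1002,,NY
1003,Oncology,ZZ
"""

VALID_STATES = {"IL", "NY", "CA"}

def validate(row: dict) -> list[str]:
    """Return a list of rule violations for one record (empty = clean)."""
    errors = []
    if not row["specialty"]:
        errors.append("missing specialty")
    if row["state"] not in VALID_STATES:
        errors.append(f"unknown state {row['state']}")
    return errors

clean, quarantined = [], []
for row in csv.DictReader(io.StringIO(RAW)):
    errs = validate(row)
    (quarantined if errs else clean).append((row, errs))
```

The same split-and-quarantine shape carries over to lakehouse tooling, where the validation rules would live in declarative expectations rather than a hand-written function.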
The American Medical Association is located at 330 N. Wabash Avenue, Chicago, IL 60611 and is convenient to all public transportation in Chicago.
This role is an exempt position, and the salary range for this position is $115,523.42-$150,972.44. This is the lowest to highest salary we believe we would pay for this role at the time of this posting. An employee's pay within the salary range will be determined by a variety of factors including but not limited to business consideration and geographical location, as well as candidate qualifications, such as skills, education, and experience. Employees are also eligible to participate in an incentive plan. To learn more about the American Medical Association's benefits offerings, please click here.
We are an equal opportunity employer, committed to diversity in our workforce. All qualified applicants will receive consideration for employment. As an EOE/AA employer, the American Medical Association will not discriminate in its employment practices due to an applicant's race, color, religion, sex, age, national origin, sexual orientation, gender identity and veteran or disability status.
THE AMA IS COMMITTED TO IMPROVING THE HEALTH OF THE NATION
Remote working/work at home options are available for this role.
Onsite AI Engineer - Construction Industry Focus
New Haven, CT - Onsite 5 days per week
- Initial Assignment: Fully onsite 5 days per week at a construction site in Ft. Myers (FL) or New Haven (CT) for 1 year
- Post-Assignment: Relocation to one of the corporate offices for hybrid employment: Boston, MA (preferred), New York City (NY), New Haven (CT), Herndon (VA), West Palm Beach (FL), or Estero (FL)
Role Summary
You will be the on-site catalyst who turns AI ideas into working reality. Partnering with each project’s AI Champion (Project Manager or Superintendent), you’ll uncover pain points, redesign workflows, and deploy AI agents that cut down reporting, accelerate RFIs, and simplify lookahead planning, progress updates, materials tracking, and more. When needed, you will develop user stories and coordinate development with the central AI Studio. You’ll help advance the vision of the “Construction Site of the Future,” showing how agentic AI will transform project operations.
Responsibilities
- Workflow discovery and redesign: Lead Lean/Six Sigma workshops; map value streams; log high-impact AI agent opportunities that improve field efficiency.
- AI agent development: Build and deploy multiple production-ready AI agents using Copilot Studio, Power Apps/Automate, ChatGPT Enterprise, or code-first frameworks. Integrate agents into Teams/SharePoint on the front end and Databricks Lakehouse or other enterprise data sources on the back end.
- RAG pipelines and LLMOps: Design and operate retrieval-augmented generation (RAG) pipelines with Databricks Delta Tables, Unity Catalog, and Vector Search (or Spark/Hadoop equivalents). Monitor cost, latency, adoption, and model drift.
- Cross-cloud orchestration: Blend OpenAI, Azure OpenAI, and AWS Bedrock services through secure custom connectors to maximize flexibility and adoption.
- Data integration: Partner with Data Engineering to deliver ETL/ELT pipelines, API integrations, and event-driven connectors that feed RAG pipelines and AI agents.
- Change management and adoption: Train field teams, gather feedback, iterate quickly, and embed agents into SOPs. Track usage and ROI with adoption metrics and behavior-change KPIs.
- Stakeholder communication: Translate technical results into business value for leadership and clients. Contribute use cases and playbooks for the “Construction Site of the Future.”
- Compliance and hand-offs: Ensure all AI solutions meet the company’s data governance and security standards. Draft clear user stories and specs for escalation to central AI/Data Engineering teams when necessary.
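The agent pattern these responsibilities reference can be sketched, under heavy simplification, as a tool-calling loop: a model picks a tool, the tool runs, and the observation is fed back until the agent can answer. The tool name, stub decision policy, and sample data below are hypothetical stand-ins for an actual LLM and real project systems.

```python
# Illustrative tool-calling agent loop. stub_model() stands in for an LLM
# call; TOOLS stands in for integrations with project systems. All names
# and data are invented for this sketch.
def lookup_rfi(rfi_id: str) -> str:
    rfis = {"RFI-101": "Awaiting structural engineer response"}
    return rfis.get(rfi_id, "Unknown RFI")

TOOLS = {"lookup_rfi": lookup_rfi}

def stub_model(question: str, observations: list[str]) -> dict:
    """Stand-in for an LLM: decide the next action from simple keyword rules."""
    if "RFI-101" in question and not observations:
        return {"action": "tool", "tool": "lookup_rfi", "arg": "RFI-101"}
    return {"action": "answer", "text": "; ".join(observations) or "No data found"}

def run_agent(question: str, max_steps: int = 3) -> str:
    """Loop: ask the model for an action, run tools, stop on an answer."""
    observations: list[str] = []
    for _ in range(max_steps):
        step = stub_model(question, observations)
        if step["action"] == "answer":
            return step["text"]
        observations.append(TOOLS[step["tool"]](step["arg"]))
    return "; ".join(observations)

status = run_agent("What is the status of RFI-101?")
```

Frameworks like Copilot Studio or code-first agent libraries implement this same loop with a real model and managed connectors in place of the stubs.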
Qualifications
- 4+ years in AI engineering, data science, or ML-focused software engineering.
- Proven experience building multiple AI agents in production environments.
- 2+ years of hands-on experience with LLMs, RAG pipelines, and LLMOps practices.
- Must have a strong traditional software engineering background in Python.
Bonus Points
- Experience in construction, manufacturing, or other process-heavy industries.
- Advanced degree in a technical field.
The Lead AI Partner Engineer will serve as the primary technical interface between Dell Technologies and Independent Software Vendors (ISVs) joining the Dell AI Technical Partner Program. This highly technical, hands‑on role guides new AI partners through onboarding, self‑validation, architecture review, blueprint creation, and compliance with Dell’s AI Factory and Self‑Validation criteria.
This person ensures partners have a seamless technical journey and continuously improves the partner‑facing validation process to accelerate partner success.
Join us to do the best work of your career and make a profound social impact as a Lead AI Partner Engineer on our Product team in Hopkinton, Massachusetts, or Round Rock, TX.
What you’ll achieve
You will be the subject matter expert on Dell’s AI platforms—including application architectures, model pipelines, AI runtimes, and infrastructure stacks—and will directly assist partners as they validate and optimize their AI solutions on Dell. This role combines developer relations, partner engineering, technical program management, and solution validation.
You will:
Lead technical onboarding and enablement of AI ISV partners, serving as the primary technical point of contact and guiding partners through AI Factory self‑validation, working with business leaders and developers.
Provide hands‑on AI full-stack architectural guidance and validation, working directly in partner environments to review solutions, optimize performance, and ensure compliance with Dell Self‑Validation criteria.
Develop and refine Dell Automation Platform blueprints, delivering prescriptive guidance to enable repeatable, scalable AI deployments.
Drive program and process optimization, reducing partner time‑to‑validation through improved workflows, tooling, documentation, and automation.
Collaborate cross‑functionally with Product, Engineering, Solutions, and Partner teams to align partner capabilities with Dell’s AI strategy and advocate for partner needs.
Take the first step towards your dream career
Every Dell Technologies team member brings something unique to the table. Here’s what we are looking for with this role:
Essential Requirements
8–12+ years in AI engineering, solutions architecture, developer relations, or partner engineering
Deep technical understanding of AI/ML solution architecture, including:
Model training and inference patterns
AI Frameworks (PyTorch, TensorFlow, Triton, ONNX Runtime)
AI runtimes and acceleration libraries (CUDA, cuDNN, TensorRT)
Containerization and orchestration (Docker, Kubernetes, Slurm, OpenShift AI)
Hands-on experience deploying and validating workloads across complete infrastructure stacks (compute, GPU, networking, storage, observability)
Experience working with or supporting software partners or developers in a technical capacity
Strong ability to communicate complex technical concepts to both engineering and business audiences
Ability to work across multiple teams and manage partner-facing technical programs end-to-end
Desirable Requirements
Bachelor’s degree in Computer Science, Computer Engineering, Data Science, or similar; Master's degree preferred
Experience with full AI solution stacks, infrastructure automation, or similar enterprise AI platforms (NVIDIA DGX, AWS SageMaker, GKE, Azure ML)
Background in Software Development, Developer Relations, Technical Evangelism, or Technical Partner Management is a strong plus
Compensation
Dell is committed to fair and equitable compensation practices. The salary range for this position is $229,500.00 - $297,000.00.
Benefits and Perks of working at Dell Technologies
Your life. Your health. Supported by your benefits. You can explore the overall benefits experience that awaits you as a Dell Technologies team member — right now at
Who we are
We believe that each of us has the power to make an impact. That’s why we put our team members at the center of everything we do. If you’re looking for an opportunity to grow your career with some of the best minds and most advanced tech in the industry, we’re looking for you.
Dell Technologies is a unique family of businesses that helps individuals and organizations transform how they work, live and play. Join us to build a future that works for everyone because Progress Takes All of Us.
Dell Technologies is committed to the principle of equal employment opportunity for all employees and to providing employees with a work environment free of discrimination and harassment. Read the full Equal Employment Opportunity Policy here.
Job ID: R285357
Overview:
The Solution Architect will be focused on customer data, personalization, and enterprise digital experience platforms. This person shapes the tech vision, translates business needs into technical blueprints, and guides delivery teams across marketing tech and core enterprise systems.
Must Haves:
- 5+ years of experience as a Solution Architect
- Extensive experience implementing a CDP or integrating with other MarTech
- Experience developing architecture blueprints, strategies, and roadmaps
- Experience delivering presentations to senior-level executives and technical audiences
- Ability to work with developers in an outsourced environment
- Good understanding of product management, agile principles, and development methodologies; capable of supporting agile teams by providing advice and guidance on opportunities, impact, and risks, taking account of technical and architectural debt
Plusses:
- Adobe Experience Platform
- Adobe Journey Optimizer
- Adobe Real-Time CDP
- Bachelor's degree in computer science, information technology, engineering, systems analysis, or a related field
Job Description:
The Solution Architect, Personalization leads and supports architecture activities for a portfolio of enterprise-level solutions. This includes systems such as customer data platforms, personalization engines, recommendation engines, loyalty and discount engines, promotional tools, communication platforms, CMS, DAM, mobile apps, master data solutions, in-store digital screens, ERP, HRMS, and POS systems.
You will provide architectural leadership, design oversight, and technology guidance to ensure solutions meet business requirements and comply with enterprise architecture governance. Responsibilities span five dimensions:
Responsibilities:
1. Interpret Business Needs
- Translate customer journeys and business requirements into capability maps, value streams, technical requirements, and architectural blueprints
- Collaborate with business owners, CX technology, product owners, and product managers
- Determine enterprise solution designs that support future business capabilities
2. Technical Leadership
- Guide development & engineering teams with technical expertise and architectural vision
3. Assess Technology
- Analyze current-state solutions for aging tech, misalignment, or deficiencies
- Support product lifecycle decisions (maintain/refresh/retire)
- Evaluate emerging technologies and market trends
- Identify and recommend solutions for legacy systems and technical debt
- Support product and project teams in selecting and configuring software
4. Apply Technology
- Lead evaluation, design, and evolution of solution architecture across applications
- Drive broader-scope architecture efforts across multiple projects/products
- Develop strategic roadmaps for transitioning from current to future-state architecture
- Act as a consultant across technologies, platforms, and vendor solutions
- Guide execution of architectural plans throughout the product lifecycle
- Ensure alignment with enterprise architecture across agile teams
5. Provide Enterprise Guidance
- Deliver reference models, standards, and architectural documentation
- Support governance, compliance, and assurance processes
- Help guide a community of practice (CoP) across technical teams
- Define principles, guidelines, standards, and patterns for enterprise‑wide architecture
Compensation:
Up to $150k annual salary + 5% annual bonus
Exact compensation may vary based on several factors, including skills, experience, and education.
Benefits:
- Competitive salary plus annual bonus
- Competitive benefits packages (medical, dental, 401k, employee stock plan, etc.)
- People Perks which allow for great discounts on food and fuel
- Work for a leading, innovative, and growing company in convenience store operations
- Fortune 500 company and a two-time Gallup Exceptional Workplace Award winner
- Tuition reimbursement of $5,000 per year
- Learning opportunities to develop new skills and to evolve professionally in a fast-growing company
Primary Skills: Prompt Engineering (Expert), AI Automation (Advanced), AI Agents (Expert), Supply Chain (Intermediate), No-Code & Low-Code (Proficient).
Contract Type: W2
Duration: 6 Months with possible extension
Location: Boston, MA
Pay Range: $50.00-$58.49 Per Hour
Job Summary:
This is a dynamic role for a Business Analyst III, focusing on translating supply chain use cases into automated workflows and AI agents using enterprise no-code/low-code platforms. The ideal candidate will design, build, and maintain AI-powered solutions to streamline processes within a $1.8B supply chain operation, working directly with supply chain teams to co-develop solutions and conduct user acceptance testing. Expectations include managing 5-8 projects concurrently with high autonomy, optimizing AI agent performance, and ensuring solution longevity through detailed documentation.
Key Responsibilities:
- Design and implement automated workflows and AI agents for supply chain tasks.
- Conduct iterative testing and user acceptance testing with supply chain teams.
- Configure workflow logic, decision trees, automation sequences, and integration points for AI functionality.
- Develop hybrid solutions integrating analytics dashboards with AI workflows for process automation.
- Document workflow configurations, prompt patterns, and decisions in detail for non-technical user maintenance.
Required Skills:
- Expertise in prompt engineering and AI platform management
- Proficiency in no-code/low-code workflow automation tools
- Deep understanding of AI agent training, context windows, model limitations, and hallucination mitigation.
- Basic technical understanding (APIs, data structures, integrations)
Knowledge of supply chain operations (procurement, inventory management, logistics) is strongly preferred.
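The workflow logic, decision trees, and automation sequences called for above can be sketched in plain Python as a declarative rule table evaluated top-down, which is roughly how a no-code/low-code platform represents such logic internally. All rule names and thresholds here are illustrative assumptions, not taken from any specific product.

```python
# Minimal sketch of decision-tree workflow logic for a supply-chain
# automation: (condition, action) pairs evaluated in order, first
# match wins. Thresholds and field names are made up for illustration.

RULES = [
    (lambda item: item["on_hand"] <= item["safety_stock"], "reorder_urgent"),
    (lambda item: item["on_hand"] <= item["reorder_point"], "reorder_standard"),
    (lambda item: item["on_hand"] > 2 * item["reorder_point"], "flag_overstock"),
]

def decide(item: dict) -> str:
    """Return the first matching action for an inventory record."""
    for condition, action in RULES:
        if condition(item):
            return action
    return "no_action"

sku = {"on_hand": 12, "safety_stock": 15, "reorder_point": 40}
print(decide(sku))  # -> reorder_urgent
```

Keeping the rules as data rather than hard-coded branches is what lets non-technical users maintain the workflow, which is the documentation goal the posting describes.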
ABOUT AKRAYA
Akraya is an award-winning IT staffing firm consistently recognized for our commitment to excellence and a thriving work environment. Most recently, we were recognized as Stevie Employer of the Year 2025, an SIA Best Staffing Firm to Work For 2025, one of the Inc. 5000 Best Workplaces in the US (2025 & 2024), and one of Glassdoor's Best Places to Work (2023 & 2022)!
Industry Leaders in Tech Staffing
As a talent solutions provider for Fortune 100 organizations, Akraya's industry recognitions solidify our leadership position in the IT staffing space. We don't just connect you with great jobs; we connect you with a workplace that inspires!
Join Akraya Today!
Let us lead you to your dream career and experience the Akraya difference. Browse our open positions and join our team!
Duration: 11 Months (Contract to hire)
Location: Columbia, SC
Onsite Requirements: Partially onsite 3 days per week (Tue, Wed, Thurs) and as needed.
Standard work hours: 8:00 AM - 5:00 PM
**Credit check will be required**
Job Summary:
Day to Day:
- A typical day will involve a mix of hands-on coding, architectural design, and research.
- The engineer will spend a significant portion of their time in Python, building and optimizing agentic AI systems using frameworks like LangChain.
- This includes integrating these agents with our backend services and deploying them using CI/CD pipelines into our cloud environment.
- They will also be responsible for researching and testing new agentic models and frameworks, monitoring agent behavior in production, and collaborating with data scientists and business stakeholders to refine requirements and ensure the ethical deployment of AI solutions.
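The day-to-day above centers on the agent loop that frameworks like LangChain implement: the model proposes a tool call, the runtime executes it, and the observation is fed back until the model produces a final answer. A framework-free sketch of that pattern follows; `fake_llm` is a stand-in for a real hosted model call and is purely illustrative.

```python
# Framework-agnostic sketch of an agentic AI loop (the pattern behind
# LangChain-style agents). A real system would call a hosted LLM where
# fake_llm is; here it is scripted so the loop is runnable end to end.

def get_time(_: str) -> str:
    return "14:05 UTC"

TOOLS = {"get_time": get_time}

def fake_llm(history: list[str]) -> str:
    # Stand-in for a model call: request a tool first, then answer.
    if not any(h.startswith("OBSERVATION") for h in history):
        return "ACTION: get_time | "
    return "FINAL: The current time is 14:05 UTC"

def run_agent(question: str, max_steps: int = 5) -> str:
    history = [f"QUESTION: {question}"]
    for _ in range(max_steps):
        reply = fake_llm(history)
        if reply.startswith("FINAL:"):
            return reply.removeprefix("FINAL:").strip()
        name, arg = reply.removeprefix("ACTION:").split("|")
        history.append(f"OBSERVATION: {TOOLS[name.strip()](arg.strip())}")
    return "max steps reached"

print(run_agent("What time is it?"))
```

The `max_steps` cap and the explicit observation history are the pieces production monitoring hooks into when watching agent behavior, as the role describes.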
Team: The team is an innovative, collaborative, and empowering environment. We are building the next generation of AI solutions for the enterprise in a fast-paced, project-oriented setting. This is a multi-platformed environment that values creativity, continuous learning, and a customer-focused mindset. The new engineer will play a crucial role in shaping our AI strategy and building foundational tools and accelerators that will drive innovation across the company.
Job Requirements:
**This is a new role to establish a core competency in agentic AI systems. This engineer will be pivotal in designing and deploying advanced AI agents and will build the foundational frameworks for future AI use cases across the organization.**
Required Experience:
Required Software and Tools (Hands on experience required):
- Python
- JavaScript/TypeScript
- AI Tools and Libraries (e.g. LangGraph, LangChain, Deep Agents, Claude Skills, etc.)
- AI Models (e.g. Claude, OpenAI, etc.)
- AI Concepts (e.g. Prompt Engineering, RAG, Agentic AI, etc.)
- Distributed SDLC/DevOps (e.g. GitHub, pipelines, VS Code, testing frameworks, etc.)
- Platforms (Container Platforms, Cloud Platforms, Document Databases, AWS)
- API Design
Python & AI/ML Libraries:
- Deep hands-on experience in Python for AI/ML development.
- Generative AI Development: Proven experience developing Gen AI or AI/ML solutions, from use case conceptualization to production deployment.
- Infrastructure & DevOps: Strong understanding of cloud environments (AWS preferred), LLM hosting, CI/CD pipelines, Docker, and Kubernetes.
- Agentic AI Concepts: Knowledge of agentic/autonomous systems (e.g., reasoning, planning, tool use).
Minimum Required Education: Bachelor's degree in Computer Science, Information Technology, or another job-related field; or 4 years of relevant experience; or an Associate's degree plus 2 years of relevant experience.
Minimum Required Work Experience: 6 years of application development, systems testing, or other job-related experience.
Required Technologies: 3-6 years of hands-on experience in Artificial Intelligence, Machine Learning, or related fields.
Nice to have/Preferred skills:
- Proficiency in Python development and FastAPI/Flask frameworks, along with SQL.
- Familiarity with agentic AI frameworks and concepts such as LangChain, LangGraph, AutoGen, Model Context Protocol (MCP), Chain of Thought prompting, knowledge stores, and embeddings.
- Experience developing autonomous agents using cloud-based AI services.
- Experience with prompt engineering techniques and model fine-tuning.
- Strong understanding of reinforcement learning, planning algorithms, and multi-agent systems.
- Experience working across cloud platforms (AWS, Azure, GCP) and deploying AI solutions at scale.
Job Summary:
Our client is seeking a Senior Project Manager - AI Implementation (ServiceNow / Now Assist) to join their team! This position is located in Minneapolis, Minnesota or Denver, Colorado.
Duties:
- Lead end-to-end implementation of AI solutions within the ServiceNow platform, including Now Assist
- Manage project scope, timeline, budget, risks, and deliverables across multiple stakeholders
- Develop detailed project plans and ensure successful execution from initiation through go-live and stabilization
- Partner with business leaders, technical teams, and vendors to align AI capabilities with business objectives
- Facilitate stakeholder meetings, status reporting, and executive communications
- Identify risks and proactively implement mitigation strategies
- Ensure adherence to PMO standards, governance, and best practices
- Support roadmap planning and contribute to long-term AI program strategy
Desired Skills/Experience:
- 7+ years of project management experience, including large-scale technology implementations
- Proven experience leading ServiceNow implementations
- Experience delivering AI-enabled solutions or enterprise AI initiatives
- Strong understanding of core project management methodologies such as Agile, Waterfall, and Hybrid
- Excellent communication, stakeholder management, and executive presentation skills
- Demonstrated ability to manage complex, cross-functional programs
- Experience implementing Now Assist or other AI capabilities within ServiceNow
- PMP or other relevant project management certification
- Experience working in enterprise environments with governance structures
- Prior experience mentoring junior PMs or leading multi-project programs
Benefits:
- Medical, Dental, & Vision Insurance Plans
- Employee-Owned Profit Sharing (ESOP)
- 401K offered
The approximate pay range for this position is between $56.00 and $80.00. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
At KellyMitchell, our culture is world class. We’re movers and shakers! We don’t mind a bit of friendly competition, and we reward hard work with unlimited potential for growth. This is an exciting opportunity to join a company known for innovative solutions and unsurpassed customer service. We're passionate about helping companies solve their biggest IT staffing & project solutions challenges. As an employee-owned, women-led organization serving Fortune 500 companies nationwide, we deliver expert service at a moment's notice.
By applying for this job, you agree to receive calls, AI-generated calls, text messages, or emails from KellyMitchell and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy at
Description
The Data Engineer is responsible for designing, building, and maintaining scalable data pipelines to support the bank's analytics, reporting, and decision-making processes. The role works closely with analysts, reporting and integration teams, and business stakeholders to ensure high-quality, secure, and efficient data solutions that comply with financial regulations and industry standards.
Below is a list of the essential functions of this position; additional responsibilities may be assigned.
KEY RESPONSIBILITIES
- Build and maintain data models, schemas, and databases (e.g., data warehouses, data lakes) to support business intelligence, machine learning, and reporting needs.
- Ensure data is optimized for performance, reliability, and scalability, minimizing latency and maximizing throughput.
- Build required infrastructure for optimal extraction, transformation and loading of data from various data sources using cloud and SQL technologies
- Implement data quality checks, monitoring, and validation processes to ensure accuracy, consistency, and compliance with regulatory requirements.
- Partner with business analysts, data integration, automation, and IT teams to understand data requirements and deliver solutions that align with business goals.
- Ensure data adherence to strict security protocols and regulatory standards including encryption, access controls, and audit trails.
- Champion data governance, quality standards, and performance optimization.
- Create and maintain comprehensive documentation for data schemas, processes and systems to ensure transparency and reproducibility.
ATTITUDES
Builds positive relationships with internal and external clients by valuing others' feelings and rights in both words and actions, and by embracing others' unique beliefs, backgrounds, and perspectives by demonstrating:
- Respect - treat every client and colleague with dignity and respect.
- Client Focus - Design scalable and reliable data pipelines that directly support the client's business goals and decision-making needs. Actively engage with stakeholders to understand evolving requirements and deliver solutions that provide timely, actionable insights
- Inclusion - Support a diverse work environment by building data systems that are accessible, equitable, and considerate of user needs, while actively seeking input from voices across all backgrounds and roles.
BEHAVIORS
Demonstrates strong business ethics and honest behaviors and the ability to positively influence and work with others to achieve excellent results by demonstrating:
- Leadership - Proactively drives data strategy, mentoring peers, and sets high standards for quality, innovation, and collaboration across teams.
- Integrity - Establish and enforce program governance frameworks, including change control and release management.
- Collaboration - Works with stakeholders across all departments to drive data efforts. Serves as a key contributor between business stakeholders and technical teams.
- Volunteerism - Use your skills beyond the role by mentoring others, helping teammates, and supporting meaningful causes.
COMPETENCIES
Reflects skill, good judgement, positive conduct, and personal responsibility for assigned areas. Seeks to implement and leverage services and technologies that create efficiencies by demonstrating:
- Accountability - Takes ownership of work, ensuring data systems are reliable and accurate. Promptly addresses issues or errors with transparency and responsibility.
- Innovation - Embrace new ideas, new tools, and bold thinking; challenge the status quo.
- Professionalism - consistently demonstrates courteous behavior, integrity, and strong work ethic while representing the bank with a polished appearance and clear communication.
POSITION LEVEL(S) EXPECTATIONS
- Strong understanding of Data Models, databases, schemas, and security methodologies.
- Excellent leadership, strategic thinking, and stakeholder management skills.
SEEKS PROFESSIONAL DEVELOPMENT OPPORTUNITIES
Actively participate in expanding skill sets and career paths by attending training programs, workshops, certifications, and educational resources relevant to the role. Seek stretch assignments and cross-functional opportunities that foster growth and learning.
Requirements
QUALIFICATIONS, EDUCATION, & EXPERIENCE
To perform this position successfully, an individual must be able to perform each essential position requirement satisfactorily, and a skills inventory is listed below.
- Bachelor's degree in a technology-related program, or 3-5 years' experience in a data-related field.
- Strong understanding of data architecture and database design principles.
- Strong leadership and communication skills across technical and non-technical audiences.
- 3-5 years' experience in data roles.
- Proficiency in languages such as Python, Java, Scala, or SQL.
- Experience in financial services (banking, insurance, wealth management).
- Excellent problem-solving and communication skills, with a collaborative mindset.
- Demonstrated leadership and self-direction.
- A background screening will be conducted.
LANGUAGE SKILLS: Ability to read, comprehend, and interpret documents. Possesses professional communication and interpersonal skills to write and speak effectively both one-on-one and before groups of clients or employees of the organization. Ability to communicate to clients directly and effectively.
TECHNOLOGY SKILLS: Ability to utilize telephone systems and possess good digital literacy including email, internet and intranet use. Strong understanding of Salesforce platform capabilities and implementation methodologies.
MATHEMATICAL SKILLS: Ability to add, subtract, multiply, and divide in all units of measure.
REASONING ABILITY: Ability to apply common sense understanding to carry out instructions furnished in written, oral, or diagram form. Ability to solve challenging problems involving several variables in a standardized situation.
PHYSICAL DEMANDS AND WORK ENVIRONMENT: The physical demands and work environment described here are representative of those that must be met by an employee to successfully perform the essential functions of this position.
This position operates in a professional office environment with considerable time spent at a desk using office equipment such as computers, phones, and printers. Requires the ability to travel on occasion to all market areas and to attend off-site seminars, training sessions, and employee meetings.
Reasonable accommodation may be made to enable individuals with disabilities to perform the essential functions.
DISCLAIMER: This job description is not an exclusive list of responsibilities and duties. They may change at any time without notice.
BENEFITS
- Medical, Dental, Vision & Life Insurance
- 401K with company match
- Paid Time Off & Recognized Holidays
- Leave policies
- Voluntary Benefit Options (Life, Accident, Critical Illness, Hospital Indemnity & Pet)
- Employee Assistance Program
- Employee Health & Wellness Program
- Special Loan and Deposit Rates
- Gradifi Student Loan Paydown Plan
- Rewards & Recognition Programs and much more!
Eligibility requirements apply.
CNB Bank is an equal opportunity employer and all applicants are considered based on qualifications without regard to sex, race, color, ancestry, religious creed, national origin, sexual orientation, gender identity, physical disability, mental disability, age, marital status, disabled veteran or Vietnam era veteran status. CNB Financial Corporation is an Affirmative Action Employer and is committed to fostering, cultivating and preserving a culture of diversity and inclusion.
Our Ideal Candidate
We are looking for a Senior Data Engineer who is a self-starter and detail-oriented with a strong blend of technical expertise and business acumen. The ideal candidate has a strong foundation in data engineering, experience working with healthcare data, and the ability to build scalable data-driven solutions. You are a proactive problem-solver who takes ownership of your work, continuously seeks to improve data quality and accessibility, and is committed to delivering high-quality data solutions.
Responsibilities
- Lead data modeling efforts to create optimized data structures for reporting and analytical purposes.
- Design, develop, and maintain end-to-end data pipelines that transform raw source data into high-quality, actionable datasets.
- Build the company's data infrastructure and data catalog, from data ingestion through the semantic layer, ensuring a robust, scalable architecture on AWS.
- Collaborate with cross-functional teams (product, technology, operations, etc.) to understand data needs, align them with business goals, and translate them into technical solutions.
Qualifications
- Bachelor's or Master's (preferred) degree in Computer Science, Engineering, or a related quantitative field (Data Science).
- 5+ years of experience as a Data Engineer, Analytics Engineer, or similar role, with a strong focus on the development of end-to-end data solutions and products.
- 5+ years of hands-on experience with AWS cloud technologies is required, including designing, building, and maintaining cloud-based data infrastructure and Infrastructure as Code (IaC) with tools such as CDK or Terraform.
- Proficiency in building and managing data infrastructure and ETL pipelines within AWS, leveraging services like AWS Glue, Athena, Redshift, Aurora, RDS, DynamoDB, EMR, Lambda, IAM, S3, EC2, CLI.
- Demonstrated experience in designing and implementing robust data models for analytical purposes.
- Strong proficiency in SQL and experience with various database systems (e.g., MySQL, NoSQL, Snowflake, Vector Databases).
- Strong proficiency in Python for data engineering and analytics, and extensive experience with data pipeline development and orchestration tools (e.g., Airflow, dbt).
- Experience with Power BI or Tableau for data reporting and dashboard development.
- Experience shipping data products to production and understanding software development lifecycle best practices.
- Strong problem-solving skills, the ability to work independently, and good communication and collaboration skills.
- Ability to learn new technologies and adapt to a fast-paced environment.
- Awareness of HIPAA, PHI, and other healthcare-specific regulations related to data and AI.
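The orchestration tools listed in the qualifications (e.g., Airflow) schedule pipelines as dependency graphs (DAGs) and run tasks in topological order. A minimal, standard-library sketch of that idea follows; the task names are hypothetical and stand in for real ingestion and transform steps.

```python
# Sketch of the DAG scheduling idea behind orchestration tools like
# Airflow: each task declares its upstream dependencies, and the
# scheduler derives a valid execution order. Task names are illustrative.

from graphlib import TopologicalSorter

# task -> set of upstream tasks it depends on
DAG = {
    "extract_claims": set(),
    "extract_members": set(),
    "transform_join": {"extract_claims", "extract_members"},
    "load_warehouse": {"transform_join"},
    "refresh_dashboard": {"load_warehouse"},
}

def execution_order(dag: dict) -> list[str]:
    """Return one valid run order (dependencies always come first)."""
    return list(TopologicalSorter(dag).static_order())

print(execution_order(DAG))
```

Real orchestrators add scheduling, retries, and backfills on top of this core ordering, but the mental model of "declare dependencies, let the scheduler order the work" is the same.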
This person ensures partners have a seamless technical journey and continuously improves the partner‑facing validation process to accelerate partner success.
Join us to do the best work of your career and make a profound social impact as a Senior AI Partner Engineer on our Product team in Hopkinton, Massachusetts or Round Rock, TX.
What you’ll achieve
You will be the subject matter expert on Dell’s AI platforms—including application architectures, model pipelines, AI runtimes, and infrastructure stacks—and will directly assist partners as they validate and optimize their AI solutions on Dell. This role combines developer relations, partner engineering, technical program management, and solution validation.
You will:
Lead technical onboarding and enablement of AI ISV partners, serving as the primary technical point of contact and guiding partners through AI Factory self-validation, working with business leaders and developers
Provide hands-on AI full-stack architectural guidance and validation, working directly in partner environments to review solutions, optimize performance, and ensure compliance with Dell Self-Validation criteria
Develop and refine Dell Automation Platform blueprints, delivering prescriptive guidance to enable repeatable, scalable AI deployments
Collaborate cross-functionally with Product, Engineering, Solutions, and Partner teams to align partner capabilities with Dell’s AI strategy and advocate for partner needs
Take the first step towards your dream career. Every Dell Technologies team member brings something unique to the table. Here’s what we are looking for with this role.
Essential Requirements
7+ years in AI engineering, solutions architecture, developer relations, or partner engineering .
Deep technical understanding of AI/ML solution architecture, including: model training and inference patterns; AI frameworks (PyTorch, TensorFlow, Triton, ONNX Runtime); AI runtimes and acceleration libraries (CUDA, cuDNN, TensorRT); containerization and orchestration (Docker, Kubernetes, Slurm, OpenShift AI)
Hands-on experience deploying and validating workloads across complete infrastructure stacks (compute, GPU, networking, storage, observability)
Experience working with or supporting software partners or developers in a technical capacity.
Strong ability to communicate complex technical concepts to both engineering and business audiences
Ability to work across multiple teams and manage partner facing technical programs end-to-end
Compensation
Dell is committed to fair and equitable compensation practices. The salary range for this position is $199,750 to $258,500.
Benefits and Perks of working at Dell Technologies
Your life. Your health. Supported by your benefits. You can explore the overall benefits experience that awaits you as a Dell Technologies team member — right now at
Who we are
We believe that each of us has the power to make an impact. That’s why we put our team members at the center of everything we do. If you’re looking for an opportunity to grow your career with some of the best minds and most advanced tech in the industry, we’re looking for you.
Dell Technologies is a unique family of businesses that helps individuals and organizations transform how they work, live and play. Join us to build a future that works for everyone because Progress Takes All of Us.
Dell Technologies is committed to the principle of equal employment opportunity for all employees and to providing employees with a work environment free of discrimination and harassment. Read the full Equal Employment Opportunity Policy here.
Job ID: R285360
The Data Engineering Manager is responsible for leading and developing a team of Data Architects and Data Solutions Engineers while actively contributing to hands-on technical projects. This role will manage the data warehouse in Snowflake, engineering automations in Alteryx and/or other solutions, while ensuring efficient project intake and prioritization. The ideal candidate combines strong technical expertise with proven technical leadership skills to drive innovation and operational excellence across the data engineering function.
As a Data Engineering Manager, you will:
- Set the technical strategy for data engineering solutions and data architecture, including end-to-end data pipeline strategy, consumption management, project scoping, and data automation.
- Design, develop, and optimize data engineering solutions using Snowflake, DBT, Azure Data Factory, and Alteryx.
- Continuously assess and optimize the data engineering technology stack to ensure scalability, performance, and alignment with industry best practices.
- Implement best practices for data modeling, ETL/ELT processes, and automation.
- Own and maintain the Snowflake data warehouse roadmap and engineering standards.
- Lead data project scoping, prioritization, and resource allocation to ensure timely delivery of data engineering solutions.
- Ensure data integrity, security, and compliance across all engineering solutions.
- Collaborate with IT and the rest of the data teams to align solutions with enterprise architecture.
- Establish documentation and governance standards for data engineering workflows ensuring completeness, audit readiness, and traceability in alignment with enterprise architecture.
- Directly supervise the Data Architecture & Data Engineering team in accordance with Nicolet's policies and applicable laws. Responsibilities include interviewing, hiring, and training employees; planning, assigning, and directing work; appraising performance; coaching, mentoring and development planning; rewarding and disciplining employees; addressing complaints and resolving problems.
Qualifications:
- Bachelor's degree in Computer Science, Data Engineering, Data Analytics or related field.
- 7+ years in data engineering or related data roles required.
- 3+ years in leadership or management positions required.
- Strong technical expertise in Snowflake, DBT, Azure Data Factory and SQL or like systems.
- Familiarity with Alteryx, UiPath, Tableau, Power BI and Salesforce is preferred.
- Ability to design and implement scalable data solutions.
- Excellent leadership, communication, and organizational skills
- Ability to balance hands-on development with team development.
- Must be able to work fully in-office. This position does not allow for remote work.
Benefits:
- Medical, Dental, Vision, & Life Insurance
- 401(k) with a company match
- PTO & 11 1/2 Paid Holidays
The above statements are intended to describe the general nature and level of work being performed. They are not intended to be construed as an exhaustive list of all responsibilities and skills required for the position.
Equal Opportunity Employer/Veterans/Disabled
We’re hiring a B2B SaaS Account Executive to sell AI-powered solutions to growing and enterprise businesses. This role is designed for someone who is genuinely AI-obsessed—the kind of seller who actively follows new models, experiments with AI tools, and wants to be at the front line of how AI is changing the way companies operate.
As an AI Account Executive at Commercient, you’ll own the full sales cycle for our AI automation and chatbot solutions, from prospecting and demos to closing complex B2B SaaS deals. You’ll work directly with customers to understand real business problems and translate cutting-edge AI—LLMs, intelligent automation, and ERP–CRM integrations—into practical, high-impact outcomes. This is a SaaS sales role for someone excited to sell sophisticated AI technology, engage senior stakeholders, and help shape the next generation of AI-driven sales motions.
If you’re the kind of person who keeps up with AI breakthroughs, tests new tools for fun, and wants your sales career tightly aligned with the future of AI, Commercient is where you’ll thrive.
Location: Atlanta, GA (Hybrid) — open to fully remote candidates based in the United States
What You’ll Do
As our AI Sales Representative, you’ll be on the front lines driving our growth:
- Prospect, pitch, and close deals for our AI technology solutions, such as our chatbot.
- Build and nurture strong client relationships with Salesforce, HubSpot, Zoho, etc.
- Represent Commercient at meetings, demos, and events across the US
- Gather insights from the market to help shape our product and sales strategy
- Hit and exceed sales targets while growing your career in a fast-moving company
- Travel to several conferences per year in the US
Who You Are
- Sales hunter with a passion for building relationships and closing deals
- Energetic, ambitious, and motivated by results
- AI enthusiast who likes to learn about AI and stays current with the trends
- Comfortable meeting clients and thriving in a dynamic, less-structured environment
- Bachelor’s degree or equivalent experience in Sales, Business Development, or related fields (optional if you have killer sales results!)
- 3-7 years of experience in SaaS or AI solution sales (ERP, CRM, or automation experience strongly preferred)
- Familiarity with Salesforce, HubSpot, or ERP ecosystems
- Understanding of AI chatbots, RAG systems, or natural language interfaces (bonus if you can explain GPT, embeddings, or vector databases in plain English)
- Consultative, high-EQ selling style with technical curiosity
- Comfortable engaging at C-level and VP-level
- Self-starter with strong pipeline discipline and storytelling ability
- Excited about shaping a next-generation AI sales motion
- Experience with any chatbot or LLM tech stack: Google Gemini, Google AI Studio, OpenAI, LivePerson, Drift chat, Microsoft Copilot, Agents, Agentforce, HubSpot AI, support desk or helpdesk AI assistants, Slack AI assistants, etc.
- Comfortable working independently in a remote team environment
- Applicants must have near-native English proficiency. A short written and verbal English evaluation will be part of the selection process.
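The "explain embeddings or vector databases in plain English" point above boils down to this: texts become vectors, and a vector database ranks stored texts by how close their vectors are to the query's. A toy sketch follows; the 3-number vectors are made up for illustration, since real embeddings come from a model.

```python
# Toy illustration of embeddings + vector search: a tiny "vector
# database" mapping text to a hand-made vector, queried by cosine
# similarity. Vectors here are fabricated, not real model embeddings.

import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

DB = {
    "reset my password": [0.9, 0.1, 0.0],
    "update billing address": [0.1, 0.9, 0.1],
    "cancel subscription": [0.0, 0.2, 0.9],
}

def nearest(query_vec: list[float]) -> str:
    """Return the stored text whose vector is closest to the query."""
    return max(DB, key=lambda text: cosine(query_vec, DB[text]))

# A query like "I forgot my login" would embed near the first entry:
print(nearest([0.8, 0.2, 0.1]))  # -> reset my password
```

That's the whole plain-English story: similar meanings land near each other in vector space, so "nearest vector" finds the most relevant answer even when no keywords match.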
Not for you if: you dislike rejection or ambitious goals.
Why Join Us?
- Be a key player in our expansion — your impact is direct and visible
- Work closely with founders and an international team
- Learn and grow in a tech-driven, fast-moving environment
- We have an engaging, collaborative culture focused on succeeding together
Compensation & Perks
- Competitive base starting at $55k (based on experience) plus uncapped, performance-driven commission toward annual On-Target Earnings (OTE)
- Our compensation plan puts you in control of what you make: the base is a great start, and uncapped commission remains available throughout your career with us (both your base and commission will increase as you grow with the company).
- Comprehensive Benefits Package
- 401k program with generous company match
- PTO
- Hybrid role based in Atlanta, GA with fully remote option for US-based candidates
About Commercient
Commercient helps growing companies streamline Sales, Marketing, and Customer Service by seamlessly connecting ERP and CRM systems through our AI-driven integration platform. Over 50,000 users rely on Commercient SYNC daily to automate key business processes—sales, billing, invoicing, and payments—across top CRMs like Salesforce, HubSpot, and Microsoft Dynamics. We’re an innovative, global SaaS company with 20+ years of experience and customers in 1,000+ organizations worldwide.
Why Work With Us
- Work remotely with a diverse, supportive, and fun global team
- Be part of an innovative company that embraces cutting-edge technology
- Enjoy learning and development opportunities to grow your career
- Flexible work-life balance and an environment where ideas thrive
Ready to join an innovative team building the world’s leading ERP–CRM integration platform? Apply today and grow your career with Commercient.
Remote working/work-at-home options are available for this role.
This role requires a strong background in data engineering, hands-on experience building cloud data solutions, and a talent for communicating complex designs through clear diagrams and documentation.
Core Responsibilities
Cloud Data Architecture Design & Strategy: Design and implement secure, scalable cloud-based data pipelines, data warehouses, and data lakes.
Drive the selection and integration of cloud data services (e.g., storage, databases, analytics tools).
Develop comprehensive cloud data strategies in alignment with business goals.
Diagramming & Documentation: Produce clear and informative visual diagrams (e.g., data flow diagrams, entity-relationship diagrams, system architecture diagrams) to guide implementation and knowledge sharing.
Maintain detailed documentation of data architecture, design decisions, and processes.
Hands-on Implementation & Optimization: Actively contribute to the hands-on implementation of cloud data solutions.
Proactively identify and implement performance optimization strategies for cloud data systems.
Troubleshoot and resolve issues related to data pipelines, data quality, and data accessibility.
Must Have: Bachelor of Engineering in Computer Science (engineering degrees in other branches, such as Electrical, Civil, Mechanical, or IT, will not be considered).
Minimum of 5 years of hands-on data engineering experience using distributed computing approaches (Spark, MapReduce, Databricks).
Proven track record of successfully designing and implementing cloud-based data solutions in Azure.
Deep understanding of data modeling concepts and techniques.
Strong proficiency with database systems (relational and non-relational).
Exceptional diagramming skills with tools like Visio, Lucidchart, or other data visualization software.
Preferred Qualifications
Advanced knowledge of cloud-specific data services (e.g., Databricks, Azure Data Lake).
Expertise in big data technologies (e.g., Hadoop, Spark).
Strong understanding of data security and governance principles.
Experience in scripting languages (Python, SQL).
Additional Skills
Communication: Exemplary written and verbal communication skills to collaborate effectively with all teams and stakeholders.
Problem-solving: Outstanding analytical and problem-solving skills for complex data challenges.
Teamwork & Leadership: Ability to work effectively in cross-functional teams and demonstrate potential for technical leadership.
This role involves leading complex technology projects, impacting business outcomes through innovative data solutions.
Candidates should have a strong background in data architecture, cloud technologies, and experience mentoring teams.
The successful applicant will engage with clients, ensuring effective delivery and quality management within a dynamic consulting environment.
Job: Data-MDM Architect (Profisee) with BA/PM experience
Location: Waukesha/Milwaukee, Wisconsin
Mode: Work from office, at least 3 days a week
Primary Purpose
- Responsible for designing and architecting data/MDM solutions and for analyzing, implementing, and deploying these solutions both on-premises and in the cloud. Collaborating with diverse business teams and drawing on extensive knowledge of big data tools and products, this role creates scalable, flexible, and comprehensive data solutions that tackle complex business challenges.
Major Responsibilities
- Manage the technical delivery of medium to large, moderately complex projects on time with a target of zero defects.
- Provide planning, estimation, scheduling, prioritization and coordination of technical activities related to enterprise-wide data solutions, both in the cloud and on premises.
- Ensure solutions alignment to Enterprise Architecture policies and best practices; ensure that process methodologies are followed in development.
- Accountable to business and technology management for end-to-end application scoping, planning, development and delivery that meets or exceeds quality standards.
- Identify and manage dependencies and downstream impacts of the project to minimize adverse effects on other projects and/or programs.
- Assist the project manager with the estimation of technical timelines and the allocation of technical resources to specific tasks.
- Communicate expectations, roles, and responsibilities to team members and hold them accountable for meeting them.
- Collaborate with IT partners to devise capacity plan and ensure appropriate infrastructure for the end-to-end system delivery.
- Supervise contingent workers and their daily tasks including onshore and offshore staff.
- Identify valuable data sources and automate collection processes.
- Maintain data accuracy and timeliness, a critical and highly visible aspect of the position: it impacts supply chain and sales effectiveness, the financial performance of the business, and customer perception through on-time delivery, working capital, financial reporting accuracy, and product quality.
- Architect and design master data to drive toward a “single source of truth”.
- Regularly monitor and measure performance of MDM standards.
- Perform problem and trend analyses to identify and correct problems and increase data quality.
- Review / Approve execution of data changes.
- Track and report changes through the Change Advisory Board (CAB).
- Develop SLAs and ensure they are met.
- Drive data mapping workshops for migrations.
- Coordinate and participate in the ETL (extract, transform, load) process for any migrations.
- Plan and architect M&A initiatives and integrations.
****We are not in a position to sponsor candidates for employment for this position nor can we work with anyone in a corp-to-corp arrangement. W2 only! No exceptions!****
Summary of Duties
The Analytics Engineer is responsible for supporting data integration, data management, analytics, and reporting initiatives across the organization. This role partners closely with project teams, business leaders, and technical teams to gather requirements, design data solutions, and deliver high-quality analytics products.
The position plays a key role in designing and developing reports, datasets, data cubes, and fact tables to support enterprise reporting and reference data management initiatives. The Analytics Engineer will also collaborate with analysts and technical teams to ensure successful delivery of analytics solutions that support business decision-making.
The successful candidate will possess strong analytical and communication skills and the ability to build effective working relationships across teams. This role requires the ability to manage multiple priorities while collaborating with project stakeholders to define requirements, identify risks, and ensure that project objectives are met.
This position also provides hands-on expertise with BI tools, helping guide the development of dashboards, reports, and supporting documentation. The role requires the ability to work independently while maintaining consistent communication with project teams, technical resources, external vendors, and end users.
A strong working knowledge of SQL, BI reporting tools, ETL data mapping, and reference data management concepts is required.
Key Responsibilities
Responsibilities include, but are not limited to:
- Support the design, development, and requirements gathering for analytics solutions in collaboration with healthcare service line leaders.
- Develop and maintain standardized enterprise reports and dashboards.
- Work with analytics platforms and tools including SQL, Cloudera, Teradata, and BI tools.
- Partner with service line leaders to ensure effective adoption and optimization of reports and analytics solutions.
- Collaborate with the Data Management team to develop queries, datasets, data tables, and data cubes.
- Manage multiple assignments and assist leadership in prioritizing analytics work.
- Work independently and collaboratively with cross-functional teams to achieve project goals.
- Participate throughout the product delivery lifecycle to ensure project milestones and analyst deliverables are completed on time.
Knowledge, Skills, and Abilities
- Professional demeanor with a strong customer service mindset
- Excellent written and verbal communication skills
- Strong interpersonal and collaboration skills
- Ability to quickly learn and apply new technologies and processes
- Working knowledge of Agile and Scrum delivery methodologies
- Strong analytical and problem-solving abilities
Technical Skills (Preferred)
- SQL
- Cloudera
- Teradata
- Business Intelligence tools (MicroStrategy, Power BI)
- Reference Data Management tools (Ataccama)
- Data modeling and ETL concepts
Education
Bachelor’s degree in Information Technology, Data Analytics, Computer Science, or a related field preferred.
Experience
- 2–4 years of experience working on increasingly complex data analytics or business intelligence projects
- Demonstrated success delivering analytics or reporting solutions
- Experience working in a fast-paced, results-oriented environment
- Experience supporting enterprise reporting or analytics initiatives preferred
Job Title: Senior Manager, Data Architecture (Ref: 195759)
Location: Charlotte, North Carolina – In-Office (5 Days Per Week)
Salary: Up to $175,000 + Bonus
Contact:
We’re looking for an experienced and forward-thinking Senior Manager, Data Architecture to define and lead the enterprise data architecture strategy within a large-scale, data-driven organization. This is a high-impact leadership role where you’ll shape the long-term data roadmap, modernize architecture standards, and guide the evolution of a cloud-based data platform.
In this role, you’ll lead a team of data architects and modelers while partnering closely with Data Engineering, Analytics, BI, Platform, and business stakeholders. You’ll ensure scalable, secure, and high-performing data solutions that enable advanced analytics, operational reporting, and strategic decision-making across the enterprise.
What You’ll Do
- Define and maintain the enterprise data architecture vision aligned to business and technology strategy
- Lead, mentor, and grow a team of data architects and modelers, establishing best practices and standards
- Design and govern scalable data platforms leveraging Azure, Snowflake, and Databricks
- Establish enterprise standards for data modeling (Dimensional, 3NF, Data Vault), integration, and storage
- Define architecture patterns for ingestion, transformation, and cross-domain data integration
- Drive architectural consistency across analytics, BI, and operational data products
- Partner with Data Governance teams to enforce data quality, lineage, metadata, and compliance standards
- Ensure solutions meet security, privacy, and regulatory requirements
- Collaborate with Engineering and Platform teams on cloud architecture and long-term technical roadmap
- Communicate complex architectural designs clearly to both technical and executive stakeholders
What You’ll Bring
- 7+ years of experience in data architecture or advanced data engineering roles
- 5+ years in a dedicated Data Architect or equivalent leadership capacity
- Deep experience designing enterprise-scale data platforms in cloud environments
- Strong expertise in Microsoft Azure data services
- Expert-level knowledge of Snowflake and Databricks
- Extensive experience with enterprise data modeling methodologies (Dimensional, 3NF, Data Vault)
- Experience with data modeling tools such as Erwin (preferred)
- Proven experience leading or mentoring architects or senior technical professionals
- Strong understanding of governance, security, and regulatory considerations in enterprise data environments
- Exceptional communication skills with the ability to influence senior stakeholders
Qualifications
- Bachelor’s degree in Computer Science, Engineering, Information Systems, or related field (or equivalent experience)
- 10+ years of progressive experience in data architecture, engineering, or enterprise data platform design