Colab Python Notebook Jobs in the USA
Selling Partner Trust and Store Integrity (TSI) creates a trustworthy shopping experience across Amazon stores worldwide by protecting customers, brands, selling partners, vendors, and Amazon from fraud, counterfeit, and abuse. The Special Projects and Investigations (SPI) team within TSI protects Amazon customers and stores by applying systems thinking to understand how networks of users interact with multiple services. We target large-scale ecosystems that pose store-level risks and mitigate those ecosystems through internal and external means. Our growth requires highly skilled candidates who move fast, have an entrepreneurial spirit to create new solutions, demonstrate tenacity to get things done, thrive in ambiguity and change, and can break down and solve complex problems.
We catch bad actors and stop online fraud. It’s fun. It’s hard. It matters. We are passionate about protecting our selling partners and customers from bad actors and want a candidate who shares that passion. Amazon is one of the world’s most trusted companies. Help us keep it that way. To achieve this, the ideal candidate should be passionate about using advanced data analytics and technology to identify patterns, establish connections, uncover process and technology gaps, and prevent fraud across Amazon stores worldwide. Your decisions will not only help protect customers and selling partners but will also help maintain the health of Amazon’s catalog and product listings ecosystem.
Key job responsibilities
• Complete risk analyses and manipulate complex data sets (SQL, Python, R, etc.)
• Use high-level judgment to inform our most complex enforcement decisions
• Identify gaps and risks in Amazon's current mechanisms and policies and recommend solutions to product/policy owning teams.
• Use data and technical skills to discover new ways to scale deep-dive signals, identifying bad actors at scale and sizing the issue (see the illustrative sketch after this list)
• Own the complete life cycle of one or more complex problems, from identification through scaling the solutions
• Break problems into manageable pieces, ruthlessly prioritize, and deliver results in an ambiguous environment
• Conduct large-scale deep dives to derive insights about tactics used to conduct abuse on our stores, identifying gaps and risks in Amazon's current mechanisms, systems, and policies
• Write documents for partner teams and executives that identify problems, propose technical solutions, and drive alignment among stakeholders
• Own partnerships with stakeholder teams, guide appropriate trade-offs, and clearly communicate goals, roles, and responsibilities
• Design and deploy agentic AI systems to automate complex workflows, scale pattern detection, and accelerate enforcement decisions across high-volume abuse scenarios
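A purely illustrative sketch of the kind of risk deep dive these responsibilities describe (the file, column names, and threshold below are hypothetical, not Amazon systems or data) might look like this in Python with pandas:

```python
import pandas as pd

# Hypothetical export of account-level signals (all names invented for illustration)
accounts = pd.read_csv("account_signals.csv")  # columns: account_id, payment_token, refund_rate

# Flag clusters of accounts sharing a payment instrument -- a common linkage signal
cluster_sizes = accounts.groupby("payment_token")["account_id"].nunique()
suspicious_tokens = cluster_sizes[cluster_sizes >= 5].index

flagged = accounts[accounts["payment_token"].isin(suspicious_tokens)]

# Size the issue: how many accounts the clusters represent and how abusive they look
print(f"{flagged['account_id'].nunique()} accounts across {len(suspicious_tokens)} clusters")
print(f"median refund rate within clusters: {flagged['refund_rate'].median():.2%}")
```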
A day in the life
Your day might involve diving deep into data to uncover emerging fraud patterns, collaborating with teams across Amazon to implement protective solutions, or developing new detection methods. You'll balance independent analytical work with team collaboration, sharing insights and supporting colleagues in our shared mission.
About the team
Our team is made up of practitioners of fraud and abuse investigation, working to understand bad-actor ecosystems using threat intelligence, analytics, and technical skills. We complement specialized industry skills with broad risk experience gathered over years of practice to deliver results - we wear a lot of hats and take ownership of hard-to-solve problem areas whenever possible. We speak 12 languages, write code in 3 (mostly self-taught, on the job), and celebrate learning and taking risks. We encourage experimentation and curiosity while supporting each other to constantly learn and grow.
Our work is to solve hard puzzles and identify what hasn’t already been discovered - typically with data and always with a lot of persistence and curiosity. If you like the sound of that, come join us.
- Bachelor’s or postgraduate degree in Information Security, Computer Science, Data Science/Analytics, Engineering, Mathematics, Statistics, or a related discipline
- 3+ years of relevant industry experience in risk or fraud investigations, regulatory compliance, ecommerce, analytics, or security
- Proficient in deriving insights from big data using SQL, with experience manipulating and processing data in Python
- Proven ability to deliver complex projects across multiple teams
Preferred qualifications:
- Experience working in e-commerce organizations
- Experience working within fraud, compliance, law enforcement, or intelligence organizations
- Experience with AWS services such as Redshift, Neptune, or SageMaker
- Master's degree in, or practical experience with, data science or machine learning
- Excellent written and verbal communication skills to convey security and business risk to a broad range of technical and non-technical audiences
- High level of integrity and discretion to handle confidential information
- Exceptional ownership and bias for action: willing to move quickly and decisively
- Proven ability to problem-solve in large, complex technical systems
Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status.
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
The base salary range for this position is listed below. Your Amazon package will include sign-on payments and restricted stock units (RSUs). Final compensation will be determined based on factors including experience, qualifications, and location. Amazon also offers comprehensive benefits, including health insurance (medical, dental, vision, prescription, basic life and AD&D insurance, optional supplemental life plans, EAP, mental health support, a medical advice line, flexible spending accounts, and adoption and surrogacy reimbursement coverage), 401(k) matching, paid time off, and parental leave. Learn more about our benefits.
Location: Seattle, WA - 102. Base salary: ,400.00 USD annually
OZ – Databricks Architect / Senior Data Engineer
Note: Only applications from U.S. citizens or lawful permanent residents (Green Card Holders) will be considered.
We believe work should be innately rewarding and a team-building venture. Working with our teammates and clients should be an enjoyable journey where we can learn, grow as professionals, and achieve amazing results. Our core values revolve around this philosophy. We are relentlessly committed to helping our clients achieve their business goals, leapfrog the competition, and become leaders in their industry. What drives us forward is the culture of creativity combined with a disciplined approach, passion for learning & innovation, and a ‘can-do’ attitude!
What We're Looking For:
We are seeking a highly experienced Databricks professional with deep expertise in data engineering, distributed computing, and cloud-based data platforms. The ideal candidate is both an architect and a hands-on engineer who can design scalable data solutions while actively contributing to development, optimization, and deployment.
This role requires strong technical leadership, a deep understanding of modern data architectures, and the ability to implement best practices in DataOps, performance optimization, and data governance.
Experience with modern AI/GenAI-enabled data platforms and real-time data processing environments is highly desirable.
Position Overview:
The Databricks Senior Data Engineer will play a critical role in designing, implementing, and optimizing enterprise-scale data platforms using the Databricks Lakehouse architecture. This role combines architecture leadership with hands-on engineering, focusing on building scalable, secure, and high-performance data pipelines and platforms. The ideal candidate will establish coding standards, define data architecture frameworks such as the Medallion Architecture, and guide the end-to-end development lifecycle of modern data solutions.
This individual will collaborate with cross-functional stakeholders, including data engineers, BI developers, analysts, and business leaders, to deliver robust data platforms that enable advanced analytics, reporting, and AI-driven decision-making.
Key Responsibilities:
- Architecture & Design: Architect and design scalable, reliable data platforms and complex ETL/ELT and streaming workflows for the Databricks Lakehouse Platform (Delta Lake, Spark).
- Hands-On Development: Write, test, and optimize code in Python, PySpark, and SQL for data ingestion, transformation, and processing (a minimal sketch follows this list).
- DataOps & Automation: Implement CI/CD, monitoring, and automation (e.g., with Azure DevOps, DBX) for data pipelines.
- Stakeholder Collaboration: Work with BI developers, analysts, and business users to define requirements and deliver data-driven solutions.
- Performance Optimization: Tune delta tables, Spark jobs, and SQL queries for maximum efficiency and scalability.
- GenAI Application Development: Experience building GenAI applications is a strong plus.
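A minimal sketch of the hands-on pipeline work listed above, assuming a Databricks runtime with Delta Lake and a Medallion-style layout (the storage paths and column names are hypothetical):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # already provided in Databricks notebooks

# Hypothetical bronze -> silver refinement of raw order events
bronze = spark.read.format("delta").load("/mnt/lake/bronze/orders")

silver = (
    bronze
    .dropDuplicates(["order_id"])                         # de-duplicate replayed events
    .withColumn("order_ts", F.to_timestamp("order_ts"))   # enforce typed timestamps
    .filter(F.col("amount") > 0)                          # drop invalid rows
)

(
    silver.write.format("delta")
    .mode("overwrite")
    .option("overwriteSchema", "true")
    .save("/mnt/lake/silver/orders")
)
```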
Requirements:
- 8+ years of experience in data engineering, with strong hands-on expertise in Databricks and Apache Spark.
- Proven experience designing and implementing scalable ETL/ELT pipelines in cloud environments.
- Strong programming skills in Python and SQL; experience with PySpark required.
- Hands-on experience with Databricks Lakehouse, Delta Lake, and distributed data processing.
- Experience working with cloud platforms such as Microsoft Azure, AWS, or GCP (Azure preferred).
- Experience with CI/CD pipelines, Git, and DevOps practices for data engineering.
- Strong understanding of data architecture, data modeling, and performance optimization.
- Experience working with cross-functional teams to deliver enterprise data solutions.
- Ability to tackle complex data challenges while ensuring data quality and reliable delivery.
Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related field.
- Experience designing enterprise-scale data platforms and modern data architectures.
- Experience with data integration tools such as Azure Data Factory or similar platforms.
- Familiarity with cloud data warehouses such as Databricks, Snowflake, or Azure Fabric.
- Experience supporting analytics, reporting, or AI/ML workloads is highly desirable.
- Databricks, Azure, or cloud certifications are preferred.
- Strong problem-solving, communication, and technical leadership skills.
Technical Proficiency in:
- Databricks, Apache Spark, PySpark, Delta Lake
- Python, SQL, Scala (preferred)
- Cloud platforms: Azure (preferred), AWS, or GCP
- Azure Data Factory, Kafka, and modern data integration tools
- Data warehousing: Databricks, Snowflake, or Azure Fabric
- DevOps tools: Git, Azure DevOps, CI/CD pipelines
- Data architecture, ETL/ELT design, and performance optimization
What You’re Looking For:
Join a fast-growing organization that thrives on innovation and collaboration. You’ll work alongside talented, motivated colleagues in a global environment, helping clients solve their most critical business challenges. At OZ, your contributions matter – you’ll have the chance to be a key player in our growth and success. If you’re driven, bold, and eager to push boundaries, we invite you to join a company where you can truly make a difference.
About Us:
OZ is a 28-year-old global technology consulting, services, and solutions leader specializing in creating business-focused solutions for our clients by leveraging disruptive digital technologies and innovation.
OZ is committed to creating a continuum between work and life by allowing people to work remotely. We offer competitive compensation and a comprehensive benefits package. You’ll enjoy our work style within an incredible culture. We’ll give you the tools you need to succeed so you can grow and develop with us and become part of a team that lives by its core values.
Hi, Rameez here from BeaconFire. I hope you're doing well! We’re currently hiring for an exciting MERN/MEAN Developer role, and I wanted to reach out to see if you or someone in your network might be interested. This is a fantastic opportunity to work on high-impact projects using modern technologies in a collaborative and growth-oriented environment.
About the Company
BeaconFire is based in Central NJ and specializes in Software Development, Web Development, and Business Intelligence. We are looking for candidates with a strong background in Software Engineering or Computer Science for a Python/Node Developer position.
About the Role
The role involves developing websites and writing scalable, secure, maintainable code while collaborating with team members to achieve project goals.
Responsibilities
- Develop websites using HTML, CSS, Node.js, React.js, and Angular2+, among other tools;
- Write scalable, secure, maintainable code that powers our clients’ platforms;
- Create, deploy, and maintain automated system tests;
- Work with Testers to understand opened defects and resolve them in a timely manner;
- Support continuous improvement by investigating alternatives and technologies and presenting these for architectural review;
- Collaborate effectively with other team members to accomplish shared user story and sprint goals;
- Invest time in constant professional development to stay up to date with new technological developments and programming languages;
- Discover and fix programming bugs;
- Other duties as assigned.
Qualifications
- Proficient understanding of HTML and CSS;
- Experience in programming language JavaScript or similar (e.g. Java, Python, C, C++, C#, etc.) and understanding of the software development life cycle;
- Basic knowledge of code versioning (e.g. Git, SVN);
- A passion for coding pixel perfect web pages;
- Good verbal communication and interpersonal skills.
Preferred Skills
- Bachelor's degree or higher in Computer Science or related fields;
- 0-1 year of practical experience in JavaScript coding;
- Familiarity with at least one JavaScript framework (Angular2+, React.js, Express.js);
- Experience with unit and integration testing of code, with an understanding of JavaScript testing frameworks like Jasmine, Cucumber, Mocha, and Karma;
- Experience providing REST/SOAP APIs for user interface consumption;
- Experience working within an Agile (Scrum) development methodology.
BeaconFire is an E-verified company and provides equal employment opportunities (visa sponsorship provided).
Job Description
Role: QA Automation Engineer
Location: Mount Laurel, NJ (Onsite)
We are looking for a highly skilled SDET / QA Automation Engineer with strong experience in Python, JavaScript, and modern automation frameworks to support automation solutions and end-to-end network validation.
Key Skills Required:
Python Automation
JavaScript
SDET / QA Automation
Automation Frameworks (PyTest / Selenium / Playwright / Cypress)
Microservices Testing
API Testing
Networking / Cable Technologies Knowledge
End-to-End System Validation
Responsibilities:
• Develop automation solutions and test scripts for network platforms
• Build and maintain automation frameworks using Python & JavaScript (see the sketch after this list)
• Validate end-to-end network components and behavior
• Develop automation microservices for testing infrastructure
• Collaborate with cross-functional teams and clients to ensure quality delivery
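As an illustrative sketch of the automation described above (the base URL, endpoint, and response fields are hypothetical, not the client's actual services), an API test written with PyTest and the requests library might look like:

```python
import pytest
import requests

BASE_URL = "https://api.example.internal"  # hypothetical service under test


@pytest.fixture(scope="session")
def api_session():
    # Shared HTTP session so tests reuse connections and headers
    s = requests.Session()
    s.headers.update({"Accept": "application/json"})
    return s


def test_device_status_endpoint(api_session):
    # End-to-end check of a hypothetical network-device status endpoint
    resp = api_session.get(f"{BASE_URL}/v1/devices/42/status", timeout=10)
    assert resp.status_code == 200

    body = resp.json()
    assert body["device_id"] == 42
    assert body["link_state"] in {"up", "down", "degraded"}
```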
About the Role
We are seeking a Backend Engineer to help build and maintain the backend services and APIs that power our proprietary AI SaaS CRM and LMS platforms.
You will work directly with our CTO, collaborate with the engineering team, and partner closely with our Product Manager to design, implement, and maintain scalable backend systems.
Our backend services are built primarily with:
- NestJS (TypeScript)
- Python
- Deployed across multiple AWS environments
This is a hands-on backend engineering role focused on API development, cloud deployment, distributed systems, and production-grade reliability. The role has meaningful ownership - not just ticket execution.
What You’ll Do
- Work directly with the CTO on backend design and implementation decisions
- Partner closely with a Product Manager on sprint planning, backlog grooming, translating product requirements into technical solutions, and prioritizing customer-impacting improvements
- Design, build, and maintain backend API services using NestJS (TypeScript)
- Build and support backend services in Python
- Develop and maintain production-grade RESTful APIs
- Contribute to multi-environment deployments across AWS
- Use Terraform to manage our infrastructure as code (IaC)
- Work with CI/CD workflows and structured deployment procedures
- Follow and contribute to engineering documentation including development guidelines, environment configuration standards, security practices, and versioning and changelog management
- Implement and support asynchronous and event-driven systems
- Write clean, maintainable, well-tested code
- Participate in code reviews and maintain high engineering standards
- Debug and resolve production issues across distributed cloud environments
What We’re Looking For (Required)
- 5+ years of backend engineering experience
- Strong proficiency in TypeScript and experience with NestJS
- Strong proficiency in Python
- Experience designing and implementing RESTful APIs
- Experience deploying and maintaining applications in AWS
- Familiarity with multi-environment deployments (dev, staging, UAT, production)
- Experience working with CI/CD pipelines
- Experience with relational databases (PostgreSQL)
- Familiarity with Docker or containerized workflows
- Experience working in GitHub-based workflows in a collaborative environment (pull requests, code reviews, branching strategies, and issue tracking)
- Comfortable working in an agile environment with JIRA and Monday
- Strong communication and problem-solving skills
- Experience building SaaS or multi-tenant platforms
Nice to Have / Strong Plus
- Familiarity with C# & C++
- Experience with Dentrix, OpenDental, or other dental practice management system (PMS) integrations
- Experience building greenfield SaaS or B2B software
- Experience building on a healthcare platform
- Familiarity with AI-enabled products or LLM integrations
- Experience with Redis or caching strategies
- Experience integrating third-party APIs
Why This Role Is Different
- Direct collaboration with the CTO on backend system design
- Close partnership with Product Management
- Opportunity to help shape a modern, AI SaaS platform for the healthcare industry
About Us
We're continuing to build a transformative healthcare accreditation platform that is revolutionizing how our clients and new hospitals manage compliance, quality improvement, and regulatory processes. Our platform combines cutting-edge technology with deep healthcare domain expertise to solve real problems for healthcare organizations nationwide.
The Opportunity
The goal is for interns to convert into full-time employees, so you will be given full-time responsibilities from day one. You will also be working in a high-velocity growth startup and will be required to move fast. You'll work directly with our engineering team on a production healthcare platform, gaining hands-on experience with enterprise-grade systems while making real contributions that impact our product and customers.
Compensation Structure: The base position is unpaid; however, qualified candidates may receive upfront equity compensation based on their experience level and demonstrated capabilities. We evaluate each applicant individually and offer equity packages commensurate with their potential contribution.
What You'll Do
During the internship, you may choose which of the following areas to focus on:
Application Testing & Quality Assurance
- Design and execute comprehensive test plans for our healthcare portal
- Perform manual testing across web applications, APIs, and integrations
- Identify and document bugs, usability issues, and edge cases
- Test healthcare compliance features (HIPAA, document security, audit trails)
Test Automation Development
- Build automated test suites using modern testing frameworks
- Develop API testing scripts for healthcare data integrations
- Create performance testing scenarios for document upload/processing
- Implement continuous testing pipelines with CI/CD integration
AI/ML Quality Support
- Collaborate with our AI team on document processing accuracy testing
- Help validate machine learning models for healthcare document extraction
- Design test datasets for training and validation of AI systems
- Analyze and report on AI/ML model performance and data quality
Data Engineering Quality Assurance
- Develop data quality monitoring and validation processes
- Create automated checks for data integrity across MongoDB systems (see the sketch after this list)
- Build dashboards and alerts for data quality metrics
- Support ETL pipeline testing and validation
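As a hedged illustration of the data-integrity checks mentioned above (the connection string, database, collections, and field names are hypothetical), automated checks against MongoDB using PyTest and pymongo could look like:

```python
from pymongo import MongoClient

# Hypothetical connection details; in practice these would come from environment config
client = MongoClient("mongodb://localhost:27017")
db = client["accreditation"]


def test_documents_have_required_fields():
    # Every stored document record should carry an owner and an audit timestamp
    missing = db.documents.count_documents(
        {"$or": [{"owner_id": {"$exists": False}}, {"uploaded_at": {"$exists": False}}]}
    )
    assert missing == 0, f"{missing} documents missing required fields"


def test_no_orphaned_documents():
    # Spot-check referential integrity between documents and facilities
    facility_ids = set(db.facilities.distinct("_id"))
    orphans = [
        d["_id"]
        for d in db.documents.find({}, {"facility_id": 1})
        if d.get("facility_id") not in facility_ids
    ]
    assert not orphans, f"orphaned documents: {orphans[:5]}"
```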
Process Improvement & Strategy
- Analyze current QA processes and identify optimization opportunities
- Research and recommend new testing tools and methodologies
- Participate in technical decision-making and sprint planning
- Document QA best practices and create team knowledge base
What We're Looking For
Required Qualifications:
- Currently pursuing or recently completed degree in Computer Science, Engineering, or related field
- Strong understanding of software testing principles and methodologies
- Experience with at least one programming language (Python, JavaScript, Java, etc.)
- Basic knowledge of databases (SQL/NoSQL) and API testing
- Excellent problem-solving skills and attention to detail
- Strong communication skills and ability to work in a collaborative environment
Preferred Qualifications:
- Experience with test automation frameworks (Selenium, Pytest, Jest, etc.)
- Knowledge of healthcare IT, compliance requirements, or regulated industries
- Familiarity with cloud platforms (AWS) and DevOps practices
- Experience with data analysis, ETL processes, or machine learning
- Previous internship or project experience in QA/testing roles
Technical Skills We'd Love to See:
- Testing Tools: Selenium, Postman, Jest, Pytest, Cypress
- Programming: Python, JavaScript, SQL
- Databases: MongoDB, SQL databases
- Cloud/DevOps: AWS, Docker, CI/CD pipelines
- Data Tools: Pandas, data validation frameworks
- Version Control: Git, GitHub
What You'll Gain
Professional Development:
- Real Impact: Your work directly affects a production healthcare platform used by hospitals
- Mentorship: Work closely with senior engineers and receive structured feedback
- Healthcare Domain Knowledge: Learn about healthcare compliance, accreditation, and regulatory requirements
- Enterprise Technology: Gain experience with production-grade systems, security, and scalability
Technical Skills:
- Advanced testing methodologies and automation frameworks
- Healthcare data processing and compliance requirements
- AI/ML model testing and validation techniques
- Data engineering and quality assurance practices
- Modern DevOps and CI/CD practices
Career Opportunities:
- Immediate Value: Potential upfront equity compensation based on qualifications
- Strong potential for full-time conversion based on performance
- Network with healthcare technology professionals
- Portfolio of real-world healthcare technology projects
- Experience that's highly valued in the growing healthtech sector
Our Tech Stack
- Frontend: React, Modern CSS
- Backend: Node.js, TypeScript, Python, RESTful APIs
- Database: MongoDB, with future SQL integrations
- Cloud: AWS (EC2, S3, Lambda, RDS)
- AI/ML: Document processing, natural language processing
- Security: HIPAA compliance, encryption, audit logging
- DevOps: Docker, GitHub Actions, automated testing
Compensation & Equity
- Base Position: Unpaid educational internship
- Equity/Stock Compensation: Available upfront based on applicant qualifications and experience level
Our Hiring Process
We believe in a transparent and thorough selection process that respects your time while ensuring mutual fit:
- Initial Screening Call: We'll discuss your background, experience, and career goals, while providing an overview of the role and our team culture.
- Technical Interview: We'll have an in-depth discussion about your experience and explore related technical concepts. You should be prepared to walk through every aspect of quality assurance as it pertains to your resume.
Ready to apply? We look forward to hearing from you!
MedLaunch is an equal opportunity employer committed to diversity and inclusion.
We’re hiring a Data Insights Analyst to join a growing analytics team focused on turning large, complex datasets into clear, actionable insights that drive business decisions. This is a hands-on role for someone who enjoys digging into data, working with Python and SQL, and partnering with leaders to understand what’s really happening in the business. You’ll work across multiple functions and contribute directly to high-impact initiatives around forecasting, performance analysis, and strategic decision-making.
Keys to an Interview: Data Insights Analyst | CPG Manufacturing
- 2-5 years' Data Science and/or Business Analyst experience
- Master's Degree preferred
- Strong working experience with Python for data analysis (and exposure to machine learning is a major plus)
- Advanced SQL skills with the ability to pull and manipulate data from large data warehouses
- Ability to interpret existing dashboards and datasets and identify meaningful insights
- Clear communication skills and comfort explaining technical findings to non-technical stakeholders
- Comfortable working on-site, with flexibility
Key Responsibilities: Data Insights Analyst | CPG Manufacturing
- Analyze large, complex datasets to identify trends, opportunities, and risks across the business
- Leverage Python, SQL, Excel, and Power BI to deliver actionable insights and recommendations (a short sketch follows this list)
- Build and enhance analytical models to support forecasting, budgeting, and strategic planning
- Develop, maintain, and improve dashboards and reporting used by leadership
- Clean, transform, and validate data to ensure accuracy and consistency
- Partner cross-functionally to understand business questions and translate them into data-driven solutions
- Present findings clearly and concisely to senior stakeholders
- Support automation and process improvements to increase analytical efficiency
- Contribute to high-visibility initiatives that influence growth and long-term strategy
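As a minimal sketch of this kind of analysis, assuming a hypothetical monthly sales extract pulled from the warehouse (the file and column names are illustrative, not a client dataset), a trend check in Python with pandas might look like:

```python
import pandas as pd

# Hypothetical monthly sales extract
sales = pd.read_csv("monthly_sales.csv", parse_dates=["month"])

# Revenue by month and product category
pivot = sales.pivot_table(index="month", columns="category", values="revenue", aggfunc="sum")

# Year-over-year growth per category
yoy = pivot.pct_change(periods=12)

# Flag categories whose latest year-over-year growth is negative -- candidates for a deeper dive
latest = yoy.iloc[-1]
declining = latest[latest < 0].sort_values()
print("Categories declining year over year:")
print(declining.round(3))
```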
As part of the KUBRA HQ team, the Analytics Engineer plays a key role in turning KHQ’s data into meaningful insights for both internal teams and external clients. This role helps shape how business leaders, operations teams, and clients make data-driven decisions using trusted, real-time information from the KUBRA HQ platform.
This is a hybrid opportunity in Tempe, AZ.
How You’ll Contribute
Own a curated catalog of business metrics and KPIs, ensuring consistent definitions and alignment across products and clients.
Design, build, and automate dashboards and reports in Power BI and Looker, backed by robust data models and clear analytical logic.
Partner with Data Engineering to define reporting datasets, enforce data quality checks, and uphold governance standards.
Deliver accurate, timely reports and dashboards to stakeholders with high reliability and attention to detail.
Conduct in-depth analysis to identify trends, drivers, and opportunities, providing actionable recommendations to business leaders.
Automate recurring reporting processes (e.g., QBRs, client packages) using reusable datasets, templates, and semantic layers.
Model, query, and transform data using SQL; maintain performant data pipelines, refresh schedules, and access controls.
Collaborate with Data Science to support experiments and track ML/AI outcomes through production dashboards.
Partner with cross-functional teams (Product, CXT, Finance, Client Success) to align metrics with company and client goals.
Document analytical logic, KPI definitions, data lineage, and assumptions to enable self-service and knowledge sharing.
Implement QA for data assets, including validation, regression testing, and monitoring for anomalies.
Stay current with BI and analytics tools and best practices; recommend and adopt improvements that enhance reliability, performance, or usability.
Strengths That Shine in This Role
3–5 years of experience in data analysis and reporting, preferably within product or SaaS environments.
Hands‑on proficiency with SQL and Looker.
Experience building end‑to‑end reporting solutions from data modeling to dashboard deployment and support.
Familiarity with LookML, model/view development and performance tuning, data warehousing concepts and ETL/ELT practices; exposure to cloud platforms (AWS/Azure) is preferred.
Experience collaborating with Data Engineering and Data Science teams; Python/ML/AI experience is an asset.
Undergraduate degree in a related discipline (e.g., Computer Science, Statistics, Analytics, Mathematics, Engineering, Economics) or equivalent experience.
Advanced certifications in analytics or BI tools are preferred.
Skills That Matter in This Role
Excellent problem-solving, communication (oral and written), and data storytelling skills; ability to translate analysis into clear narratives and actionable business recommendations.
Strong analytical skills with experience working with large and complex datasets; meticulous attention to detail.
Familiarity with Python for advanced analysis and data manipulation.
Ability to define and maintain KPIs and translate business questions into analytical requirements.
Solid understanding of data governance, cataloging, and metric standardization.
Excellent organizational and time management skills; able to manage multiple priorities under tight deadlines.
Proactive, collaborative, and client-focused mindset with strong influence and impact skills.
Why You’ll Love Working Here
Thrive in an award-winning culture that champions growth, embraces diversity, and fosters inclusion for all. See our awards
Earn annual performance-based bonuses recognizing your contributions
Enjoy generous benefit coverage with low premiums, plus a Healthcare Spending Account and Wellness Spending Account
Invest in your future with RRSP matching
Take time to recharge with paid vacation and sick days, and enjoy a paid day off for your birthday
Make a difference with two paid volunteer days to support causes you care about
Keep learning with free access to LinkedIn Learning and our education reimbursement program for continued development
Feel appreciated through our employee recognition programs
Support your mental health with a free premium Headspace membership
Stay refreshed with unlimited access to fully stocked beverage stations
Save more with exclusive Perkopolis retail discounts
KUBRA is an equal opportunity employer dedicated to building an inclusive and diverse workforce. We will provide accommodations during the recruitment process upon request by email. Information received relating to accommodation will be addressed confidentially. We thank all applicants for their interest; however, only candidates under consideration will be contacted.
While we value the skills and experiences listed in our job requirements, we also recognize that talent comes in many forms, and welcome applications from candidates who meet most but not all specified requirements. If you possess a strong desire to learn and grow in a dynamic work environment, apply now!
KUBRA is a fast-growing company that delivers customer communications solutions to some of the largest utility, insurance, and government entities across North America. KUBRA offers billing and payments, mapping, mobile apps, proactive communications, and artificial intelligence solutions for customers. With more than 1.5 billion customer interactions annually, KUBRA services reach over 40% of households in the U.S. and Canada. KUBRA is an operating subsidiary of Hearst.
Our office is small enough to allow creative individuals to flourish, yet large enough to provide long-term stability. We place a tremendous amount of responsibility on our team members to be productive, focused and self-motivated. We offer a casual work environment, competitive compensation and a stellar benefits program.
KUBRA does not typically provide immigration-related assistance, including employment-based work visa (e.g., H-1B) sponsorship, work permit applications and extensions, permanent residence (green card) sponsorship, LMIA applications, or permanent residency nominations. Candidates must ensure they have legal authorization to work in the U.S. or Canada. All sponsorship determinations are made case by case based on business need.
Manage and optimize OpenShift deployments to support Artificial Intelligence (AI) and data-related solutions on Cloud Pak for Data.
Implement and maintain internal Watson OpenScale to monitor and interpret AI model performance in support of customers' AI and machine learning objectives.
Leverage internal Cloud Pak along with Studio and its components to manage data, perform analytics, and enhance AI capabilities.
Configure and use additional cartridges such as DataStage or Db2 to extend Cloud Pak for Data functionalities.
Develop and manage containerized applications and services with OpenShift on Cloud Paks to improve deployment efficiency, scalability, and application robustness.
Advise customers on applying AI Operations practices to ensure reliable and efficient AI system operations.
Optimize generative AI models and algorithms for better performance, accuracy, and confidence or ROUGE scores.
Design, develop, and implement AI solutions tailored to customer needs.
Engage with client executives to understand their requirements and provide suitable AI and data solutions and strategies.
Create and present tailored solutions that address client needs using the mentioned technologies.
Continuously monitor AI model performance and make necessary adjustments while ensuring compliance with security standards, particularly in the financial services sector.
Utilize: Python, Machine Learning, Pandas, NumPy, Scikit-learn, SQL.
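As a hedged illustration of the model evaluation and monitoring work described above (the dataset, label column, and threshold are hypothetical, and this sketch uses plain scikit-learn rather than Watson OpenScale), an offline evaluation pass might look like:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical labeled scoring sample exported for offline evaluation
data = pd.read_csv("scoring_sample.csv")
X = data.drop(columns=["label"])
y = data["label"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Score the held-out set and report the metrics you would track over time
proba = model.predict_proba(X_test)[:, 1]
preds = (proba > 0.5).astype(int)
print(f"accuracy: {accuracy_score(y_test, preds):.3f}")
print(f"ROC AUC:  {roc_auc_score(y_test, proba):.3f}")
```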
Required: Bachelor's degree or equivalent in Computer Science, Data Science, Engineering, Information Systems, Mathematics, or a related field (employer will accept an Associate's degree plus two (2) years of IT experience in lieu of a Bachelor's degree) and two (2) years of experience as an Analyst, Technical Specialist, or a related role.
Two (2) years of experience must include utilizing Python, Machine Learning, Pandas, NumPy, Scikit-learn, SQL.
$199,998 - $225,000 per year.
Please send resumes to
Applicants must reference H270 in the subject line.
Keywords: Client Services Manager. Location: Armonk, NY 10504.
You've done a ton of LeetCode. You've racked up certificates and you know your way around system design like the back of your hand. On paper, you're everything a tech company wants. However, tech stacks and requirements change every day.
Since 2010, we've helped thousands of candidates land full-time jobs at tech leaders like Google, Apple, PayPal, Visa, Western Union, Wells Fargo, Wayfair, and hundreds more, with job offers of $95k to $154k. SynergisticIT focuses on closing the gap between your tech skills and what employers want now.
Open roles we're hiring for our clients: Entry-Level Software Programmers (Java/Python), Java Full Stack Developers, Data Analysts & BI Engineers, Data Scientists & ML Engineers. All visa types and U.S. citizens are encouraged to apply.
Note: Internships, freelance, or personal projects will not be considered toward experience requirements.
If you submit your resume, please be advised it may be entered into a central database shared by our JOPP team (our placement program).
You may unsubscribe if you receive emails.
Check the links below: the SynergisticIT USA Today article and videos of SynergisticIT at OCW, JavaOne, and the Gartner Summit. We focus on Java/Full Stack/DevOps and Data Science/Data Engineer/Data Analyst/BI Analyst/Machine Learning/AI candidates.
Ideal candidates:
- Recent grads in CS, Engineering, Math, or Statistics with limited or no job experience
- Jobseekers laid off due to downsizing who want to move into an in-demand tech stack
- Professionals seeking a career switch to tech
- Candidates with career gaps or lacking real-world experience
- Individuals looking to boost their skill portfolio for better job prospects
- Computer Science grads with limited or no job experience
- Students who recently finished their Bachelor's or Master's programs
- Those struggling to land interviews despite having experience
- Candidates on F1/OPT needing a job for a STEM extension or H-1B filing
Currently, we are looking for entry-level software programmers, Java Full Stack developers, Python/Java developers, Data Analysts/Data Engineers/Data Scientists, and Machine Learning engineers for full-time positions with clients.
Top tech companies are flooded with smart grads.
What gets you in the door now is real-world application, confidence in delivery, and the soft skills to own a room—or a Zoom.
Please check the links below:
- Why do Tech Companies not Hire recent Computer Science Graduates | SynergisticIT
- Technical Skills or Experience? Which one is important to get a Job? | SynergisticIT
- Backend vs. Full Stack Development: Job Prospects | SynergisticIT
- What Recruiters Look for in Junior Developers | SynergisticIT
- Software Engineering or Data Science as a Career?
- How OPT Students Can Land Tech Jobs – SynergisticIT
- Is AI Going to Replace Software Programmers? | SynergisticIT
- The Market's Changed - Have You?
Please note: Resume databases are shared with clients, and interested clients will reach out directly if they find a qualified candidate for their req.
Resume submissions may also be shared with our JOPP team database. Please unsubscribe if contacted; if you don't want to be contacted, please don't submit your resume.