Colaboratory Python Download Jobs in USA
1,574 positions found — Page 5
L3Harris is dedicated to recruiting and developing high-performing talent who are passionate about what they do. Our employees are unified in a shared dedication to our customers' mission and quest for professional growth. L3Harris provides an inclusive, engaging environment designed to empower employees and promote work-life success. Fundamental to our culture is an unwavering focus on values, dedication to our communities, and commitment to excellence in everything we do.
L3Harris is the Trusted Disruptor in defense tech. With customers' mission-critical needs always in mind, our employees deliver end-to-end technology solutions connecting the space, air, land, sea and cyber domains in the interest of national security.
Job Title: Lead, Software Engineering
Job Code: 33519
Job Location: Palm Bay, FL
Job Schedule: 9/80: Employees work 9 out of every 14 days – totaling 80 hours worked, and have every other Friday off
Job Description:
L3Harris is seeking Embedded Software Engineers in Palm Bay, FL to work on a dynamic team supporting design, development, integration, and test of embedded computing systems in support of the F-15 Aviate Navigate Communicate (ANC) and Mission Processor (MP) development and EMD production programs. The L3Harris team is developing the most advanced and secure mission architecture.
You will work closely with cross-functional members of the engineering organization to develop and evaluate interfaces between hardware and software, as well as operational performance requirements and the design of the overall system.
Essential Functions:
- Software development
- C/C++ Software Development using existing tools
- Requirements, test, systems architecture, software architecture, and design
- Testing tools may include test frameworks in Python
- Software Coverage Tools
- Development will support L3Harris mission systems integration architecture
- Development will support customer deployments
- No immediate travel is expected, but those in a software leadership role may be required to travel to the customer site in St. Louis, MO, to support program meetings
Qualifications:
- Bachelor's Degree and minimum 9 years of prior relevant experience. Graduate Degree and a minimum of 7 years of prior related experience. In lieu of a degree, minimum of 13 years of prior related experience.
- Secret US Security Clearance
Preferred Additional Skills:
- Experience with embedded security and encryption
- Ability to work well across multiple engineering disciplines
- Ability to understand and work with network storage, as well as an understanding of Gigabit Ethernet or InfiniBand.
- Working knowledge of authentication schemes such as RADIUS and LDAP.
- Relevant integration and test experience on complex military systems.
- Working knowledge of interfaces and handoffs with digital engineering.
- Software leadership experience such as CSWE, IPTL, CAM experience
- Experience interacting with software development throughout the design cycle including major design reviews and architecture reviews.
- Experience with Hardware and Software integration and test
- DevSecOps and Test automation experience using Python or similar languages
- 8+ years of embedded development
- Experience with APIs/BSPs/Drivers, OpenGL and/or Vulkan, health and status, Precision Time Protocol version 2 (PTPv2), H.264, ARINC-818
- 8+ years' experience with C++ systems development, with an understanding of software/hardware co-design, distributed systems design, and software/computer architecture.
- 6+ years' experience with avionics lifecycle execution.
- 5+ years' experience with safety-critical software development to DO-178B or higher industry standards
- Working knowledge of Bash/Python, Intel x86/ARM/FPGA/SoC, GPP, GPU, Embedded System Design, Release Engineering, Understanding of Change and Configuration Management, Debug/Trace Analysis, Real-time Operating Systems like VxWorks/LynxOS/Integrity, SELinux, Software Integration, Test, and Verification/Validation
- Experience on complex systems with a focus on mission computing, memory systems, and/or displays
- Excellent verbal and written communication skills in a technical information environment
- Full software development lifecycle experience, including requirements flow-down and decomposition, allocation to design, translation to code, HW/SW integration, design verification/product qualification, transition to production, and fielded product support, including changes for component obsolescence.
- Experience utilizing tools/processes to support a structured workflow (requirements management, configuration management, workflow and change control, etc.).
- Familiarity with Linux development environment and tools.
#LI-CS2
L3Harris Technologies is proud to be an Equal Opportunity Employer. L3Harris is committed to treating all employees and applicants for employment with respect and dignity and maintaining a workplace that is free from unlawful discrimination. All applicants will be considered for employment without regard to race, color, religion, age, national origin, ancestry, ethnicity, gender (including pregnancy, childbirth, breastfeeding or other related medical conditions), gender identity, gender expression, sexual orientation, marital status, veteran status, disability, genetic information, citizenship status, characteristic or membership in any other group protected by federal, state or local laws. L3Harris maintains a drug-free workplace and performs pre-employment substance abuse testing and background checks, where permitted by law.
Please be aware many of our positions require the ability to obtain a security clearance. Security clearances may only be granted to U.S. citizens. In addition, applicants who accept a conditional offer of employment may be subject to government security investigation(s) and must meet eligibility requirements for access to classified information.
By submitting your resume for this position, you understand and agree that L3Harris Technologies may share your resume, as well as any other related personal information or documentation you provide, with its subsidiaries and affiliated companies for the purpose of considering you for other available positions.
L3Harris Technologies is an E-Verify Employer. Please click here for the E-Verify Poster in English or Spanish. For information regarding your Right To Work, please click here for English or Spanish.
OZ – Databricks Architect / Senior Data Engineer
Note: Only applications from U.S. citizens or lawful permanent residents (Green Card Holders) will be considered.
We believe work should be innately rewarding and a team-building venture. Working with our teammates and clients should be an enjoyable journey where we can learn, grow as professionals, and achieve amazing results. Our core values revolve around this philosophy. We are relentlessly committed to helping our clients achieve their business goals, leapfrog the competition, and become leaders in their industry. What drives us forward is the culture of creativity combined with a disciplined approach, passion for learning & innovation, and a ‘can-do’ attitude!
What We're Looking For:
We are seeking a highly experienced Databricks professional with deep expertise in data engineering, distributed computing, and cloud-based data platforms. The ideal candidate is both an architect and a hands-on engineer who can design scalable data solutions while actively contributing to development, optimization, and deployment.
This role requires strong technical leadership, a deep understanding of modern data architectures, and the ability to implement best practices in DataOps, performance optimization, and data governance.
Experience with modern AI/GenAI-enabled data platforms and real-time data processing environments is highly desirable.
Position Overview:
The Databricks Senior Data Engineer will play a critical role in designing, implementing, and optimizing enterprise-scale data platforms using the Databricks Lakehouse architecture. This role combines architecture leadership with hands-on engineering, focusing on building scalable, secure, and high-performance data pipelines and platforms. The ideal candidate will establish coding standards, define data architecture frameworks such as the Medallion Architecture, and guide the end-to-end development lifecycle of modern data solutions.
This individual will collaborate with cross-functional stakeholders, including data engineers, BI developers, analysts, and business leaders, to deliver robust data platforms that enable advanced analytics, reporting, and AI-driven decision-making.
Key Responsibilities:
- Architecture & Design: Architect and design scalable, reliable data platforms and complex ETL/ELT and streaming workflows for the Databricks Lakehouse Platform (Delta Lake, Spark).
- Hands-On Development: Write, test, and optimize code in Python, PySpark, and SQL for data ingestion, transformation, and processing.
- DataOps & Automation: Implement CI/CD, monitoring, and automation (e.g., with Azure DevOps, DBX) for data pipelines.
- Stakeholder Collaboration: Work with BI developers, analysts, and business users to define requirements and deliver data-driven solutions.
- Performance Optimization: Tune delta tables, Spark jobs, and SQL queries for maximum efficiency and scalability.
- GenAI Applications Development: Experience developing GenAI applications is a strong plus.
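The Medallion Architecture referenced in this posting organizes data into bronze (raw), silver (validated), and gold (business-level) layers. A minimal plain-Python sketch of the pattern is below; in an actual Databricks deployment these steps would be PySpark/Delta Lake transformations, and the field names ("region", "amount") are hypothetical illustrations:

```python
# Minimal illustration of the Medallion pattern: raw (bronze) records are
# cleaned into a silver layer, then aggregated into a gold layer.
# Field names are hypothetical examples, not a real schema.

bronze = [  # raw ingested records, possibly malformed
    {"region": "east", "amount": "100"},
    {"region": "east", "amount": "250"},
    {"region": "west", "amount": None},  # bad record, will be dropped
    {"region": "west", "amount": "75"},
]

# Silver: validate and normalize types, dropping bad rows
silver = [
    {"region": r["region"], "amount": float(r["amount"])}
    for r in bronze
    if r["amount"] is not None
]

# Gold: business-level aggregate (total amount per region)
gold = {}
for row in silver:
    gold[row["region"]] = gold.get(row["region"], 0.0) + row["amount"]

print(gold)  # {'east': 350.0, 'west': 75.0}
```

In PySpark the silver and gold steps would be DataFrame filters and `groupBy().agg()` calls writing to Delta tables, but the layering logic is the same.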
Requirements:
- 8+ years of experience in data engineering, with strong hands-on expertise in Databricks and Apache Spark.
- Proven experience designing and implementing scalable ETL/ELT pipelines in cloud environments.
- Strong programming skills in Python and SQL; experience with PySpark required.
- Hands-on experience with Databricks Lakehouse, Delta Lake, and distributed data processing.
- Experience working with cloud platforms such as Microsoft Azure, AWS, or GCP (Azure preferred).
- Experience with CI/CD pipelines, Git, and DevOps practices for data engineering.
- Strong understanding of data architecture, data modeling, and performance optimization.
- Experience working with cross-functional teams to deliver enterprise data solutions.
- Tackles complex data challenges, ensuring data quality and reliable delivery.
Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related field.
- Experience designing enterprise-scale data platforms and modern data architectures.
- Experience with data integration tools such as Azure Data Factory or similar platforms.
- Familiarity with cloud data warehouses such as Databricks, Snowflake, or Azure Fabric.
- Experience supporting analytics, reporting, or AI/ML workloads is highly desirable.
- Databricks, Azure, or cloud certifications are preferred.
- Strong problem-solving, communication, and technical leadership skills.
Technical Proficiency in:
- Databricks, Apache Spark, PySpark, Delta Lake
- Python, SQL, Scala (preferred)
- Cloud platforms: Azure (preferred), AWS, or GCP
- Azure Data Factory, Kafka, and modern data integration tools
- Data warehousing: Databricks, Snowflake, or Azure Fabric
- DevOps tools: Git, Azure DevOps, CI/CD pipelines
- Data architecture, ETL/ELT design, and performance optimization
What You’re Looking For:
Join a fast-growing organization that thrives on innovation and collaboration. You’ll work alongside talented, motivated colleagues in a global environment, helping clients solve their most critical business challenges. At OZ, your contributions matter – you’ll have the chance to be a key player in our growth and success. If you’re driven, bold, and eager to push boundaries, we invite you to join a company where you can truly make a difference.
About Us:
OZ is a 28-year-old global technology consulting, services, and solutions leader specializing in creating business-focused solutions for our clients by leveraging disruptive digital technologies and innovation.
OZ is committed to creating a continuum between work and life by allowing people to work remotely. We offer competitive compensation and a comprehensive benefits package. You’ll enjoy our work style within an incredible culture. We’ll give you the tools you need to succeed so you can grow and develop with us and become part of a team that lives by its core values.
Hi, Rameez here from BeaconFire. I hope you're doing well! We're currently hiring for an exciting MERN/MEAN Developer role, and I wanted to reach out to see if you or someone in your network might be interested. This is a fantastic opportunity to work on high-impact projects using modern technologies in a collaborative and growth-oriented environment.
About the Company
BeaconFire, based in Central NJ, specializes in Software Development, Web Development, and Business Intelligence. We are looking for candidates with a strong background in Software Engineering or Computer Science for a Python/Node Developer position.
About the Role
The role involves developing websites and writing scalable, secure, maintainable code while collaborating with team members to achieve project goals.
Responsibilities
- Develop websites using HTML, CSS, Node.js, React.js, and Angular2+, among other tools;
- Write scalable, secure, maintainable code that powers our clients’ platforms;
- Create, deploy, and maintain automated system tests;
- Work with Testers to understand opened defects and resolve them in a timely manner;
- Support continuous improvement by investigating alternatives and technologies and presenting these for architectural review;
- Collaborate effectively with other team members to accomplish shared user story and sprint goals;
- Invest time in continuous professional development to stay up to date with new technological developments and programming languages;
- Discover and fix programming bugs;
- Other duties as assigned.
Qualifications
- Proficient understanding of HTML and CSS;
- Experience in programming language JavaScript or similar (e.g. Java, Python, C, C++, C#, etc.) and understanding of the software development life cycle;
- Basic knowledge of code versioning (e.g. Git, SVN);
- A passion for coding pixel perfect web pages;
- Good verbal communication and interpersonal skills.
Preferred Skills
- Bachelor's degree or higher in Computer Science or related fields;
- 0-1 year of practical experience in JavaScript coding;
- Familiarity with at least one JavaScript framework (Angular2+, React.js, Express.js);
- Experience with unit and integration testing of code, with an understanding of JavaScript testing frameworks like Jasmine, Cucumber, Mocha, and Karma;
- Experience providing REST/SOAP APIs for user interface consumption;
- Experience working within an Agile development methodology such as Scrum.
BeaconFire is an E-verified company and provides equal employment opportunities (visa sponsorship provided).
Job Description
Role: QA Automation Engineer
Location: Mount Laurel, NJ (Onsite)
We are looking for a highly skilled SDET / QA Automation Engineer with strong experience in Python, JavaScript, and modern automation frameworks to support automation solutions and end-to-end network validation.
Key Skills Required:
Python Automation
JavaScript
SDET / QA Automation
Automation Frameworks (PyTest / Selenium / Playwright / Cypress)
Microservices Testing
API Testing
Networking / Cable Technologies Knowledge
End-to-End System Validation
Responsibilities:
• Develop automation solutions and test scripts for network platforms
• Build and maintain automation frameworks using Python & JavaScript
• Validate end-to-end network components and behavior
• Develop automation microservices for testing infrastructure
• Collaborate with cross-functional teams and clients to ensure quality delivery
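To illustrate the API-testing side of this role, here is a small, self-contained sketch of a response-payload validator of the kind used in automated microservice tests. The payload shape ("status", "latency_ms") and expected values are invented for illustration, not a real client API:

```python
# Sketch of an API response validator for automated microservice tests.
# The payload fields are hypothetical examples.

def validate_health_response(payload: dict) -> list[str]:
    """Return a list of validation errors (empty list means pass)."""
    errors = []
    if payload.get("status") != "ok":
        errors.append(f"unexpected status: {payload.get('status')!r}")
    latency = payload.get("latency_ms")
    if not isinstance(latency, (int, float)) or latency < 0:
        errors.append(f"invalid latency_ms: {latency!r}")
    return errors

# In a real PyTest suite this logic would live in a test function
# asserting validate_health_response(resp.json()) == []
good = {"status": "ok", "latency_ms": 12.5}
bad = {"status": "degraded", "latency_ms": -1}
print(validate_health_response(good))  # []
```

In practice such checks would be wrapped in PyTest test functions and driven by fixtures that call the service under test (e.g. via `requests` or Playwright's API client).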
About the Role
We are seeking a Backend Engineer to help build and maintain the backend services and API’s that power our proprietary AI SaaS CRM and LMS platforms.
You will work directly with our CTO, collaborate with the engineering team, and partner closely with our Product Manager to design, implement, and maintain scalable backend systems.
Our backend services are built primarily with:
- NestJS (TypeScript)
- Python
- Deployed across multiple AWS environments
This is a hands-on backend engineering role focused on API development, cloud deployment, distributed systems, and production-grade reliability. The role has meaningful ownership, not just ticket execution.
What You’ll Do
- Work directly with the CTO on backend design and implementation decisions
- Partner closely with a Product Manager on sprint planning, backlog grooming, translating product requirements into technical solutions, and prioritizing customer-impacting improvements
- Design, build, and maintain backend API services using NestJS (TypeScript)
- Build and support backend services in Python
- Develop and maintain production-grade RESTful APIs
- Contribute to multi-environment deployments across AWS
- Use Terraform to manage our infrastructure as code (IaC)
- Work with CI/CD workflows and structured deployment procedures
- Follow and contribute to engineering documentation including development guidelines, environment configuration standards, security practices, and versioning and changelog management
- Implement and support asynchronous and event-driven systems
- Write clean, maintainable, well-tested code
- Participate in code reviews and maintain high engineering standards
- Debug and resolve production issues across distributed cloud environments
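The asynchronous, event-driven work listed above can be sketched with Python's standard asyncio library. The queue-based worker below is a generic pattern under assumed event names ("user.created", "invoice.paid"), not the company's actual stack (their services are NestJS/TypeScript and Python):

```python
# Minimal event-driven worker using asyncio: a producer puts events on a
# queue and a consumer task processes them asynchronously.
import asyncio

async def consumer(queue: asyncio.Queue, processed: list):
    while True:
        event = await queue.get()
        if event is None:          # sentinel: shut down
            queue.task_done()
            break
        processed.append(f"handled:{event}")
        queue.task_done()

async def main() -> list:
    queue: asyncio.Queue = asyncio.Queue()
    processed: list = []
    worker = asyncio.create_task(consumer(queue, processed))
    for event in ["user.created", "invoice.paid"]:  # hypothetical events
        await queue.put(event)
    await queue.put(None)          # signal shutdown
    await worker
    return processed

result = asyncio.run(main())
print(result)  # ['handled:user.created', 'handled:invoice.paid']
```

In production the in-memory queue would typically be replaced by a broker (e.g. SQS or Redis streams), but the consume-acknowledge loop looks much the same.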
What We’re Looking For (Required)
- 5+ years of backend engineering experience
- Strong proficiency in TypeScript and experience with NestJS
- Strong proficiency in Python
- Experience designing and implementing RESTful APIs
- Experience deploying and maintaining applications in AWS
- Familiarity with multi-environment deployments (dev, staging, UAT, production)
- Experience working with CI/CD pipelines
- Experience with relational databases (PostgreSQL)
- Familiarity with Docker or containerized workflows
- Experience working in GitHub-based workflows in a collaborative environment (pull requests, code reviews, branching strategies, and issue tracking)
- Comfortable working in an agile environment with JIRA and Monday
- Strong communication and problem-solving skills
- Experience building SaaS or multi-tenant platforms
Nice to Have / Strong Plus
- Familiarity with C# & C++
- Experience with Dentrix, OpenDental, or other dental practice management system (PMS) integrations
- Experience building a greenfield SaaS or B2B software
- Experience with building on a Healthcare platform
- Familiarity with AI-enabled products or LLM integrations
- Experience with Redis or caching strategies
- Experience integrating third-party APIs
Why This Role Is Different
- Direct collaboration with the CTO on backend system design
- Close partnership with Product Management
- Opportunity to help shape a modern, AI SaaS platform for the healthcare industry
We're building safety-enhancing technology for aviation that will save lives. Automated aviation systems will enable a future where air transportation is safer, more convenient and fundamentally transformative to the way goods - and eventually people - move around the planet. We are a team of mission-driven engineers with experience across aerospace, robotics and self-driving cars working to make this future a reality.
As a Radar Systems Engineer, you will be a part of our Radar Engineering team. The Radar Engineering team is a small, cross-functional team of motivated and experienced engineers; we're responsible for designing, building, and testing cutting-edge phased array radar systems from concept to certified product. We enjoy a culture of sharing information and collaboration. Phased array radar systems have historically been reserved for specialized applications, but we're making this technology affordable to enable Detect and Avoid for widespread commercial applications. This role will focus on new advanced operational modes. The passion for revolutionary technology to make aviation safer motivates us to come in every day.
Responsibilities
In your role as a Radar Systems Engineer, you will be responsible for analyzing phased array radar systems and developing radar processing algorithms to enable short- and long-range object tracking and radar image generation. You will be involved in all phases of development from conception to production, designing your algorithms in MATLAB and Python and implementing them in C++ on the target hardware. You will drive the system-level requirements of the phased array radar system and drive trade studies in collaboration with cross-functional teams. You will support the integration of the radar on the aircraft and support data collection and analysis during flight testing. You will own the real-time processing of radar algorithms on an FPGA/DSP-based platform.
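As a worked example of the FMCW processing mentioned in this posting: a target's range follows from the measured beat frequency via R = c·f_b·T / (2·B), where T is the chirp duration and B the sweep bandwidth. A minimal sketch, with hypothetical radar parameters:

```python
# FMCW range estimation from beat frequency: R = c * f_b * T / (2 * B).
# The chirp parameters below are hypothetical illustrations, not any
# specific radar's values.
C = 3.0e8  # speed of light, m/s

def fmcw_range(beat_hz: float, chirp_s: float, bandwidth_hz: float) -> float:
    """Range (m) of a target producing the given beat frequency."""
    return C * beat_hz * chirp_s / (2.0 * bandwidth_hz)

# Example: 1 ms chirp sweeping 150 MHz, measured beat frequency 100 kHz
r = fmcw_range(beat_hz=100e3, chirp_s=1e-3, bandwidth_hz=150e6)
print(round(r, 1))  # 100.0 (metres)
```

In a real pipeline the beat frequency comes from an FFT over the dechirped samples, so range resolution is set by c / (2·B), about 1 m for the 150 MHz sweep assumed here.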
Basic Success Criteria
Electrical Engineering fundamentals, typically gained through a Bachelor's Degree of Science or Engineering in Mechanical, Electrical, Aerospace, or a related discipline
8+ years of professional hands-on experience with Radar algorithms, radar system design, radar signal processing, integration, and test of a radar system
Ability to use C++, MATLAB, and Python (Python preferred)
Ability to troubleshoot, find root cause, and resolve issues
Pulsed and FMCW processing algorithm development
Experience in airborne radar testing and development
Preferred Criteria
Advanced degree in Science or Electrical Engineering
Experience developing system architectures and managing requirements for certification of Avionics
Creative problem solver who can bring multiple disciplines together, with the ability to assess risk and make design and development decisions without all available data
Experience integrating and troubleshooting various electronic sensors and components
Weather radar experience, airborne or ground-based
Real-beam radar imaging and/or SAR processing
This role can be remote, or located at our facility in Mountain View, California.
Must be willing to travel 30% of the time.
The estimated salary range for this position is $180,000 to $260,000/annual salary + cash and stock option awards + benefits. At Reliable Robotics, we strive to provide competitive and rewarding compensation based on experience and expertise, as well as market conditions, location, and pay equity.
In addition to base compensation, Reliable Robotics offers stock options, employee medical, 401k contribution, great co-workers and a casual work environment.
This position requires access to information that is subject to U.S. export controls. An offer of employment will be contingent upon the applicant's capacity to perform in compliance with U.S. export control laws.
All applicants are asked to provide documentation that legally establishes status as a U.S. person or non-U.S. person (and nationalities in the case of a non-U.S. person). Where the applicant is not a U.S. person, meaning not a (i) U.S. citizen or national, (ii) U.S. lawful permanent resident, (iii) refugee under 8 U.S.C. § 1157, or (iv) asylee under 8 U.S.C. § 1158, or not otherwise permitted to access the export-controlled technology without U.S. government authorization, the Company reserves the right not to apply for an export license for such applicants whose access to export-controlled technology or software source code requires authorization and may decline to proceed with the application process and any offer of employment on that basis.
At Reliable Robotics, our goal is to be a diverse and inclusive workforce. As an Equal Opportunity Employer, we do not discriminate on the basis of race, religion, color, creed, ancestry, sex, gender (including pregnancy, childbirth, breastfeeding, or related medical conditions), gender identity, gender expression, sexual orientation, age, non-disqualifying physical or mental disability or medical conditions, national origin, military or veteran status, genetic information, marital status, or any other basis covered by applicable law. All employment and promotion is decided on the basis of qualifications, merit, and business need.
If you require reasonable accommodation in completing an application, interviewing, completing any pre-employment testing, or otherwise participating in the employee selection process, please direct your inquiries to
Compensation Range: $180K - $260K
Length of Contract: 6 months
Location: Remote (Eastern time zone)
What are the top 3-5 skills, experience or education required for this position:
a. Proficiency in databases (SQL) and coding in R/Python
b. Experience with API development
c. Familiarity with AI techniques and strong curiosity for new technologies
d. Experience managing and curating bioinformatics datasets (BulkRNAseq, Proteomics, scRNAseq, CRISPR)
e. Code management, documentation, and version control (e.g., GitHub)
Job Overview: As a Data Analyst, you'll drive data quality and consistency in our central hub for storing OMICS data, address impactful data loading and curation projects and help improve and automate processes using agentic AI. Working closely with researchers, you'll ensure their data needs are met and help accelerate scientific discovery.
Key Responsibilities:
- Contribute to important data loading and curation projects for the department's Omics data server
- Address data quality and consistency issues in the CRISPR database.
- Apply agentic AI approaches for data loading and querying OMICS data
- Database Interaction: Use PostgreSQL to build, manage, and query large genomic datasets.
- API Development: Design and implement APIs for improved data accessibility and integration across platforms.
- Automation: Use Python and R to automate and optimize data workflows, prioritizing data quality and integrity.
- ETL Process Management: Develop and execute ETL processes to integrate high-value datasets in line with organizational standards.
- Collaboration: Work with cross-functional teams and research scientists to gather requirements, align to common data model standards, and facilitate effective data management.
- Documentation: Maintain comprehensive documentation and version control for reproducibility and teamwork.
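The database-interaction and automation responsibilities above might look like the following minimal sketch. It uses SQLite from Python's standard library as a stand-in for PostgreSQL, and the table and column names ("expression", "gene", "tpm") are hypothetical examples of an omics-style dataset:

```python
# Minimal load-and-query sketch for a curated omics-style table.
# SQLite stands in for PostgreSQL; the schema is a hypothetical example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE expression (gene TEXT, sample TEXT, tpm REAL)")

# ETL-style load of validated records
records = [
    ("TP53", "s1", 12.4),
    ("TP53", "s2", 15.1),
    ("BRCA1", "s1", 3.2),
]
conn.executemany("INSERT INTO expression VALUES (?, ?, ?)", records)

# Curation-style query: mean expression per gene
rows = conn.execute(
    "SELECT gene, AVG(tpm) FROM expression GROUP BY gene ORDER BY gene"
).fetchall()
print(rows)  # mean tpm per gene, ordered by gene name
```

With PostgreSQL the same pattern would use a driver such as psycopg, and the automation described in the posting would wrap loads like this in validated, version-controlled scripts.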
Required qualifications:
- Master's degree in computer science, bioinformatics, or a related field, with 3+ years of relevant experience.
- Proven experience working with databases (PostgreSQL proficiency).
- Advanced skills in Python and R for automation and data manipulation.
- Experience handling and curating bioinformatics datasets (BulkRNAseq, Proteomics, scRNAseq, CRISPR).
- Code management, documentation, and usage of Github.
- Curiosity and basic knowledge of AI techniques applicable to data loading and querying.
- Excellent communication skills and a collaborative mindset.
- Demonstrated experience with AWS resources.
- Experience in API development
Akkodis is seeking a Platform Software Engineer for a full-time position with a client located in Allentown, Cincinnati, or Chicago (occasionally onsite). Ideally, applicants will have a solid background in embedded systems/payroll, Python/Django, AWS, and APIs; React experience would come as a big plus.
Salary Range: $165k-$190k/Annum + Benefits, The salary may be negotiable based on experience, education, geographic location, and other factors.
Key Responsibilities
Customer Collaboration & Developer Relations
- Serve as the primary technical interface for customers, partners, and indirect channel teams.
- Host technical discussions, requirement-gathering sessions, and architecture reviews directly with customers.
- Provide developer support, guidance, and best practices for integration and implementation.
- Represent the embedded team in developer-facing demos, product enablement, and partner technical workshops.
- Translate customer needs into actionable engineering tasks, ensuring platform capabilities align with business goals.
Platform Customization & Embedded Engineering
- Lead the design and implementation of custom features, enhancements, and integrations across our platform.
- Develop application-layer components and service extensions using Python, Django, and REST-based APIs.
- Build and integrate customer-facing portal/UI components in React.
- Leverage AWS services to build scalable, cloud-backed integrations.
Architecture, API & Systems Integration
- Own the documentation and development of APIs, SDKs, and interface design documents (IDDs).
- Ensure seamless integration between on-device components, microservices, and customer systems.
Technical Troubleshooting & Tier-3 Support
- Diagnose and resolve complex issues.
- Provide senior-level technical escalation support for customers and internal teams.
- Lead root-cause analysis and drive long-term corrective improvements across the platform.
Project Delivery & Leadership
- Lead customer-specific development initiatives from ideation through deployment.
- Own timelines, deliverables, and technical direction for customization projects.
- Mentor junior engineers and influence cross-team engineering best practices.
Required Skills & Qualifications
- Expertise in Python, with experience building backend services or integration layers (Django strongly preferred).
- Proficiency in React for developing customer-facing interfaces or partner tools.
- Experience with AWS (Lambda, API Gateway, S3, CloudWatch, IAM, or related services).
- Strong experience with embedded systems development.
- Experience with API design, interface documentation, and integration workflows.
- Excellent communication skills with the ability to convey complex technical concepts to diverse audiences, including customers and indirect channel partners.
- Proven success in customer-facing engineering, developer support, or solutions engineering roles.
- Bachelor's degree in Computer Engineering, Electrical Engineering, Computer Science, or related field.
Preferred Qualifications
- Experience in Developer Relations, technical evangelism, or technical account management.
- Experience supporting indirect channel partners, VARs, ISVs, or system integrators.
- Background in pre-sales engineering, partner enablement, or customer-led solution design.
If you are interested in this role, then please click APPLY NOW. For other opportunities available at Akkodis, or any questions, feel free to contact me at .
Equal Opportunity Employer/Veterans/Disabled
Benefits offerings include but are not limited to:
• 401(k) with match
• Medical insurance
• Dental Insurance
• Vision assistance
• Paid Holidays Off
To read our Candidate Privacy Information Statement, which explains how we will use your information, please visit .
The Company will consider qualified applicants with arrest and conviction records in accordance with federal, state, and local laws and/or security clearance requirements, including, as applicable:
· The California Fair Chance Act
· Los Angeles City Fair Chance Ordinance
· Los Angeles County Fair Chance Ordinance for Employers
· San Francisco Fair Chance Ordinance
Position: Lead JavaScript Engineer/Architect
Location: Charlotte, NC
Duration: 6 Months - CTH
Key skills: JavaScript, TypeScript, Node.js; Python (at least basic); ability to design a trading system, performance tuning, etc.
Job Description:
We are looking for an Architect/Lead Software Engineer to join our team. You will work with our product, design, and engineering teams to plan, design, and develop customer facing applications for credit cards. We offer an opportunity to work in a collaborative and inclusive environment with people who value their work and who welcome fresh ideas.
Key Responsibilities:
- Perform complex application programming activities with an emphasis on backend systems development: Node.js, TypeScript, JavaScript, Python, RESTful APIs, data pipelines, and more
- Lead the definition of system architecture and detailed solution design that are scalable and extensible
- Collaborate with Product Owners, Designers, and other engineers on different permutations to find the best solution possible
- Own the quality of your code and do your own testing; automate feature testing and contribute to the UI testing framework
- Become a subject matter expert for our mobile applications backend and middleware
- Deliver amazing solutions to production that knock everyone's socks off
- Mentor developers on the team
- Aid technical team as needed
- Assist in interviewing and building out technical team
- Suggest improvements to optimize delivery
Basic Qualifications:
- Minimum B.S./M.S. in Computer Science or a related discipline from an accredited college or university
- 8+ years of experience designing, developing, and delivering backend applications with Node.js, TypeScript, JavaScript, Python, RESTful APIs, data pipelines, and related backend frameworks
- At least 8 years of experience building internet-facing applications
- At least 8 years of experience with a major cloud platform and/or OpenShift, preferably AWS
- Proficient in following concepts: object-oriented programming, software engineering techniques, quality engineering, parallel programming, databases, etc.
- Proficient in building and consuming RESTful APIs
- Proficient in managing multiple tasks and consistently meeting established timelines
- Experience integrating APIs with front-end and/or mobile-specific frameworks
- Strong collaboration skills
- Excellent written and verbal communications skills
We Are Hiring: Databricks Lead Data Engineer – Director Equivalent Role
Location: Atlanta, USA
Work Model: Hybrid – 3 to 4 days in office per week (mandatory)
Eligibility: US Citizens and Green Card (GC) holders only
How to Apply
If you are interested in this position and have the required skills, please send across your resume at:
; ;
Paves Technologies is seeking a highly experienced Databricks Lead Data Engineer – Lead Level (Director Equivalent Role) to drive enterprise-scale data architecture, governance, and advanced analytics initiatives on Azure Cloud. This is a senior leadership role requiring deep Databricks expertise, strong data modeling capabilities, and hands-on architectural ownership across PySpark-based distributed systems.
Role Overview
The ideal candidate will bring 10–12+ years of overall data engineering experience, including strong hands-on expertise with Azure Databricks, PySpark, Python, and Azure Cloud data services. You will define architecture standards, lead modernization initiatives, and implement a scalable Medallion Architecture (Bronze, Silver, Gold layers) to support enterprise analytics and business intelligence.
Key Responsibilities
- Lead end-to-end architecture and implementation of enterprise-scale data platforms using Azure Databricks on Azure Cloud.
- Design and implement Medallion Architecture (Bronze, Silver, Gold layers) using Delta Lake best practices.
- Build scalable PySpark-based ETL/ELT pipelines across ingestion (Bronze), transformation (Silver), and curated analytics (Gold) layers.
- Develop advanced data transformations using Python, PySpark, Spark SQL, and advanced SQL constructs.
- Architect robust data models (dimensional, star schema, normalized models) aligned to analytics and reporting needs.
- Drive adoption of advanced Databricks capabilities including Unity Catalog, Declarative Pipelines, Delta Lake optimization, and governance frameworks.
- Establish best practices for partitioning strategies, file compaction, Z-ordering, caching, broadcast joins, and query optimization.
- Define and standardize reusable Azure Cloud data platform tools, templates, CI/CD frameworks, and infrastructure automation.
- Work across Azure ecosystem components such as Azure Data Factory (ADF), Azure Data Lake Storage (ADLS), Azure DevOps, networking, and security services.
- Ensure high standards for data quality, RBAC, lineage tracking, governance, and production stability.
- Provide architectural leadership and mentorship to data engineering teams.
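To illustrate the Bronze/Silver/Gold flow named in the responsibilities above, here is a minimal pure-Python sketch of the Medallion pattern. It is illustrative only: a real implementation would use PySpark DataFrames and Delta Lake tables, and all field names here are hypothetical.

```python
# Bronze layer: raw ingested records kept as-is, including malformed rows.
RAW_EVENTS = [
    {"id": "1", "amount": "19.99", "region": "east"},
    {"id": "2", "amount": "bad-value", "region": "east"},
    {"id": "3", "amount": "5.00", "region": "west"},
]

def to_silver(bronze):
    """Silver layer: validate and type-cast records, dropping malformed rows."""
    silver = []
    for row in bronze:
        try:
            silver.append({"id": row["id"],
                           "amount": float(row["amount"]),
                           "region": row["region"]})
        except ValueError:
            continue  # reject rows whose amount fails to parse
    return silver

def to_gold(silver):
    """Gold layer: curated aggregate ready for BI - revenue per region."""
    totals = {}
    for row in silver:
        totals[row["region"]] = totals.get(row["region"], 0.0) + row["amount"]
    return totals

gold = to_gold(to_silver(RAW_EVENTS))
# gold == {"east": 19.99, "west": 5.0}
```

In PySpark the same shape holds, with each layer persisted as a Delta table and the Silver step expressed as DataFrame filters and casts rather than a Python loop.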
Required Experience & Skills
- 10–12+ years of overall experience in Data Engineering.
- 3+ years of strong hands-on Databricks experience.
- Mandatory Certifications:
- Databricks Certified Data Engineer Associate
- Databricks Certified Data Engineer Professional
- Deep hands-on expertise in PySpark, Python programming, and distributed Spark processing.
- Strong experience designing and implementing Medallion Architecture (Bronze/Silver/Gold layers).
- Advanced knowledge of Data Modeling, Data Analysis, and complex SQL (window functions, CTEs, execution plan tuning).
- Strong understanding of Delta Lake architecture, schema evolution, partition strategies, performance optimization, and data governance.
- Well-versed in enterprise Azure Cloud data platforms, reusable accelerators, CI/CD templates, and governance standards.
- Proven experience architecting scalable, secure, cloud-native data solutions.
- Strong leadership, stakeholder management, and executive communication skills.
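The "complex SQL" skills listed above (window functions and CTEs) can be sketched with Python's built-in sqlite3 module; table and column names are hypothetical, and in the actual role the same constructs would run as Spark SQL on Databricks.

```python
import sqlite3

# In-memory database with a hypothetical sales table.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE sales (region TEXT, amount REAL);
    INSERT INTO sales VALUES ('east', 100), ('east', 50), ('west', 75);
""")

rows = con.execute("""
    WITH regional AS (                                  -- CTE
        SELECT region, SUM(amount) AS total
        FROM sales
        GROUP BY region
    )
    SELECT region, total,
           RANK() OVER (ORDER BY total DESC) AS rnk     -- window function
    FROM regional
    ORDER BY rnk
""").fetchall()
# rows == [('east', 150.0, 1), ('west', 75.0, 2)]
```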