Prometheus Query Syntax Jobs in USA
844 positions found (advanced search)
Doctor of Medicine | Hematology and Oncology
Location: Hartford, CT
Employer: GHR Healthcare
Pay: $14,460 to $15,500 per week
Shift Information: 5 days x 8 hours
Contract Duration: 13 Weeks
Start Date: ASAP
About the Position
LocumJobsOnline is working with GHR Healthcare to find a qualified Hematology and Oncology MD in Hartford, Connecticut, 06105!
Locum Tenens Hematology/Oncology Physician – Hartford, CT 06105 (On-Site). Board Certified; active CT license required. Apply now for this on-site oncology locum tenens role.
Join a collaborative team as a locum tenens hematology/oncology physician in Hartford, CT. This Hartford (06105) opportunity offers a balanced schedule, meaningful patient care, and the chance to work in a vibrant urban center known for its rich history, cultural attractions, and beautiful parks. Ideal for physicians seeking locum hematology or Connecticut oncology positions who want career growth and quality of life.
Locum Tenens Job Details – Hartford, CT
- Position Type: Locum tenens physician (travel assignment)
- Specialty: Hematology/Oncology (Hematologist/Oncologist)
- Location: On-site in Hartford, CT 06105
- Estimated Weekly Pay: $14,460–$15,500 (competitive locum tenens pay)
- Start Date: March 30, 2026 (or as soon as credentialed)
- Assignment Length: 6 months to 1 year, with possibility to extend
- Hours per Week: 40
- Shift Duration: 8-hour shifts
- Schedule: 3–4 clinic days per week (Mon–Fri, 8:00 am–4:30 pm); some weekends and infrequent night call
- Practice Setting: Inpatient consults and outpatient clinic (adult and geriatric patients)
- Patient Population: Adults and geriatrics (approximately 80% Hematology, 20% Oncology)
- Team: 10 physicians, 8 APPs — collaborative multidisciplinary team
- Average Patient Volume: ~10/day inpatient, ~15/day outpatient
- EMR: Epic (experience with Epic EMR preferred)
- Facility: Level I Trauma Center — high-quality clinical environment
- Estimated Credentialing Timeframe: ~130 days from offer acceptance
Hematology/Oncology Job Requirements (Locum Tenens)
- Board Certification: Board Certified in Hematology/Oncology (required) — must be a board certified hematologist/oncologist
- Licensure: Active Connecticut medical license in hand at time of submission (required)
- NPDB Self-Query: NPDB self-query report pulled within the same month as submission (required)
- Call: Ability to participate in infrequent night and weekend call as scheduled
- Preferred: Local candidates (no flights) highly preferred for Hartford locum tenens coverage
Hematology/Oncology Responsibilities (Locum Tenens)
- Provide comprehensive hematology and oncology care for adult and geriatric patients in both inpatient and outpatient settings
- Conduct new patient evaluations, develop individualized treatment plans, and manage follow-up care
- Prescribe and manage chemotherapy and supportive care treatments according to institutional protocols
- Collaborate with a multidisciplinary team of physicians, advanced practice providers, nursing, pharmacy, and support staff
- Participate in inpatient consults and outpatient clinic visits, including treatment planning and transitions of care
- Document all patient encounters, treatment decisions, and care plans accurately in Epic EMR
- Participate in scheduled night and weekend call coverage as needed
Why Join This Connecticut Oncology Team
- Balanced schedule with a focus on patient care and work-life balance — ideal for locum tenens hematology jobs
- Join a collaborative, multidisciplinary oncology team at a Level I Trauma Center
- Competitive weekly pay for locum tenens physicians and a clear credentialing timeline
- Work with Epic EMR in a high-volume clinical setting serving adult and geriatric patients
How to Apply: Apply now — submit your CV/resume, a copy of your active Connecticut medical license, a current NPDB self-query (pulled within the same month), and your availability. Local candidates encouraged. Join our dedicated locum tenens hematology/oncology team in Hartford, CT and make a meaningful impact!
Benefits
GHR Healthcare offers a 401(k) with matching, healthcare, dental, and vision coverage to employees. Company-paid malpractice insurance is available for 1099 contractors. Weekly direct deposit is a standard benefit for both employees and contractors.
Equal Opportunity
We are an equal opportunity employer and value diversity across our organization. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
About GHR Healthcare
At GHR Healthcare Locums, we do more than fill jobs: we create opportunities that fit your life. With over 30 years of experience, we connect physicians, advanced practice providers, and CRNAs with high-quality locum tenens assignments across the country. From top-tier pay to seamless support with licensing, travel, and credentialing, we make every step easy. Whether you're seeking flexibility, freedom, or a fresh start, we're here to get you where you want to go, on your terms.
We are seeking a talented Software Engineer 3 (Power BI Developer) to join a leading global financial institution on a long-term contract in Wilmington, DE. This role is ideal for someone with advanced Power BI skills, including DAX, Power Query/M, and complex data modeling, who has experience building executive dashboards and turning complex data into actionable insights. The position involves designing enterprise-level BI solutions, integrating data from multiple sources, and delivering analytics on toolchain adoption, productivity, and business impact. Candidates should have experience with platforms such as Jira, GitHub, Azure DevOps, and CI/CD tools, and be comfortable mentoring junior team members and collaborating with cross-functional teams. This is an exciting opportunity to influence decision-making and contribute to strategic initiatives at a senior level.
Job Title: Software Engineer 3 (Power BI Developer)
Job Location: Wilmington, Delaware 19803
Job Duration: 12 months (with possible extension)
Only W2 Candidates
Join a leading global financial institution and work with some of the brightest minds in the industry. This long-term contract opportunity offers a competitive benefits package and a chance to contribute to innovative solutions in the financial services space. If you’re passionate about leveraging data to drive business impact and enjoy creating insights that influence key decisions, this role is for you.
Required Skills & Experience
- 4+ years of software engineering experience, or equivalent through consulting, training, military service, or education.
- 6+ years of Power BI experience, with at least 3 years focused on advanced development in enterprise environments.
- Proven expertise in designing BI solutions for enterprise software development ecosystems, toolchain adoption, and DevOps maturity.
- Experience connecting Power BI to various toolchain platforms (e.g., Jira, GitHub, Azure DevOps, CI/CD tools) and designing KPIs for adoption, onboarding, and usage.
- Advanced proficiency in DAX, Power Query/M, and complex data modeling for management-level reporting.
- Experience building executive dashboards covering adoption, risk, compliance, automation, productivity, and cost savings.
- Strong data integration skills, including ETL, API extraction, direct query, and on-prem/cloud data source integration.
- Deep understanding of enterprise data governance, security, access controls, and reporting best practices.
- Excellent communication skills with experience collaborating with both technical and business stakeholders.
- Demonstrated leadership in project delivery, solution architecture, and mentoring junior team members.
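As a rough illustration of the kind of adoption KPI this role would model before surfacing it in a Power BI dashboard, the sketch below computes weekly active users per tool from raw usage events. The event shape and field names are invented for illustration; real data would come from platforms such as Jira, GitHub, or Azure DevOps.

```python
from collections import defaultdict
from datetime import date

# Hypothetical usage events as they might arrive from a toolchain API;
# the field names here are illustrative only.
events = [
    {"user": "alice", "tool": "GitHub", "day": date(2024, 5, 6)},
    {"user": "bob",   "tool": "GitHub", "day": date(2024, 5, 7)},
    {"user": "alice", "tool": "Jira",   "day": date(2024, 5, 7)},
    {"user": "alice", "tool": "GitHub", "day": date(2024, 5, 8)},
]

def weekly_active_users(events):
    """Count distinct users per (tool, ISO year, ISO week) -- a basic adoption KPI."""
    buckets = defaultdict(set)
    for e in events:
        iso = e["day"].isocalendar()  # (ISO year, ISO week, weekday)
        buckets[(e["tool"], iso[0], iso[1])].add(e["user"])
    return {k: len(v) for k, v in buckets.items()}

print(weekly_active_users(events))
# {('GitHub', 2024, 19): 2, ('Jira', 2024, 19): 1}
```

In a Power BI model the same aggregation would typically be expressed as a DAX measure over a usage fact table; the point here is only the shape of the metric (distinct users bucketed by tool and week).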
Desired Skills & Experience
- Expertise in enterprise DevOps, SDLC/ALM toolchains, engineering productivity tooling, or related reporting domains.
- Experience supporting executive or board-level reporting initiatives.
- Microsoft Power BI and/or Power Platform certification.
- Experience in highly regulated or financial services environments.
Key Responsibilities
- Participate in moderately complex software engineering initiatives and contribute to planning and delivery of enterprise solutions.
- Review, analyze, and resolve complex software engineering and BI challenges.
- Collaborate with engineering, operations, and transformation teams to gather requirements, define key metrics, and ensure data accuracy for management reporting.
- Architect, develop, and maintain advanced Power BI dashboards and reports focused on toolchain adoption, process maturity, and business impact.
- Serve as the enterprise subject matter expert in toolchain reporting, with knowledge of common platforms such as Jira, GitHub, Azure DevOps, and CI/CD tools.
- Develop frameworks, data models, and methodologies to assess adoption and maturity metrics (e.g., tool usage, process adherence, automation coverage, delivery impact).
- Integrate data from multiple sources—including APIs, data lakes, internal databases, and vendor platforms—into Power BI using advanced transformations and DAX.
- Deliver meaningful executive and operational insights with robust drill-down capabilities for decision-making.
- Partner with business and IT leadership to present findings, recommend actions, and evolve analytics in alignment with strategic objectives.
- Define, document, and enforce best practices for management reporting, including data governance, security, and lifecycle management.
- Mentor and coach junior engineers and analysts on Power BI and toolchain reporting best practices.
- Maintain, monitor, and continuously enhance reporting solutions as enterprise needs evolve.
- Provide occasional after-hours support for critical reporting or deployment issues.
Loloi Rugs is a leading textile brand that designs and crafts rugs, pillows, and throws for the thoughtfully layered home. Family-owned and led since 2004, Loloi is growing more quickly than ever. To date, we’ve expanded our diverse team to hundreds of employees, invested in multiple distribution facilities, introduced thousands of products, and earned the respect and business of retailers and designers worldwide. A testament to our products and our team, Loloi has earned the ARTS Award for “Best Rug Manufacturer” in 2010, 2011, 2015, 2016, 2018, 2023, and 2025.
Security Advisory: Beware of Fraud
Protect yourself from potential fraud and verify the authenticity of any job offer you receive from Loloi. Rest assured that we never request payment or demand any sensitive personal information, such as bank details or social security numbers, at any stage of the recruiting process. To ensure genuine communication, our recruiters will solely reach out to applicants using an @ email address. Your security is of paramount importance to us at Loloi, and we are committed to maintaining a safe and trustworthy hiring experience for all candidates.
We are building a Business Operations Center of Excellence, and we need a Product Data Analyst to serve as the "Guardian of the Golden Record." In this role, you are the absolute owner of product data integrity as it relates to the digital customer experience. You ensure that every item we sell is accurately represented across every touchpoint—from our ERP and PIM to our website storefront and marketing feeds. This is not a data entry role; it is a high-impact technical logic and investigation role. You will work directly with our Data Platform and Software Engineering teams to define business rules, audit data health via complex SQL, and troubleshoot data transmission errors before they impact the customer.
Responsibilities
- Storefront Governance: Serve as the absolute owner of product data integrity within the PIM. Ensure that all storefront-critical attributes (pricing, dimensions, weights, image links) are accurate and standardized for a seamless customer experience.
- Technical Data Auditing: Write and run complex SQL queries against our centralized database to identify anomalies, "orphan" records, and data hygiene issues that need resolution. You will be expected to query across multiple schemas to validate data consistency between systems.
- Feed Logic & Mapping: You will manage the logic of how data translates from our PIM to external endpoints. You will ensure that our products appear correctly on Google Shopping, Meta, Amazon, and other marketplaces by managing feed rules and mapping definitions.
- API Payload Analysis: You will act as the first line of defense for data transmission errors. If a product isn't showing up on the site, you will review the JSON/XML response bodies to determine if it is a data payload error or a software code bug.
- Cross-Functional Impact Analysis: You will act as the gatekeeper for data changes, predicting downstream impacts (e.g., "If Merchandising changes this Category Name, it will break the Finance reporting filter").
- Hygiene Logic Definition: You will partner with our IT/Database team to define automated health checks. You identify the "rot" (bad data patterns), and they implement the database constraints to stop it.
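The payload-analysis duty above can be sketched in a few lines: given one product payload as JSON, flag missing storefront-critical keys and wrong data types before handing off to Engineering. The required-field list and field names below are invented for illustration; the real list would come from the PIM/feed specification.

```python
import json

# Illustrative storefront-critical attributes (hypothetical names).
REQUIRED = {"sku": str, "price": (int, float), "weight_lbs": (int, float), "image_url": str}

def audit_payload(raw: str) -> list[str]:
    """Return a list of problems found in one product payload."""
    problems = []
    try:
        item = json.loads(raw)
    except json.JSONDecodeError as e:
        return [f"malformed JSON: {e}"]
    for field, types in REQUIRED.items():
        if field not in item:
            problems.append(f"missing key: {field}")
        elif not isinstance(item[field], types):
            problems.append(f"bad type for {field}: {type(item[field]).__name__}")
    return problems

good = '{"sku": "RUG-100", "price": 249.0, "weight_lbs": 18.5, "image_url": "https://example.com/r.jpg"}'
bad  = '{"sku": "RUG-101", "price": "249", "image_url": "https://example.com/r2.jpg"}'
print(audit_payload(good))  # []
print(audit_payload(bad))   # ['bad type for price: str', 'missing key: weight_lbs']
```

An empty result means the data is clean and the issue is likely a code or caching bug on the engineering side, which is exactly the handoff decision described above.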
What You Will NOT Do (The Boundaries)
- No Web Development: You are not a Front-End Developer. You do not write HTML, CSS, or React code. You ensure the data powering those components is 100% accurate.
- No Manual Data Entry: Your job is not to copy-paste descriptions. You build the systems, bulk processes, and logic that ensure data quality at scale.
- No Database Administration: You do not manage server uptime or schema changes (IT owns this). You own the quality of the records inside the database.
Intersection with Technical Teams
- With IT (Database Mgmt): IT owns the infrastructure and schema; you own the quality of the data within it. When you identify a systemic issue (e.g., "5,000 orphan records"), you partner with IT to implement the technical fix (scripts/constraints).
- With Software Engineering (Commerce): If a product is missing from the site, you check the data payload. If the data is correct, you hand off to Engineering, confirming it is a code/caching bug rather than a data error.
Experience, Skills, & Ability Requirements
- 5-8 years of experience in Data Management, PIM Administration, or technical eCommerce Operations.
- SQL Proficiency: You are comfortable writing queries beyond simple SELECT *. You should be proficient with CTEs (Common Table Expressions), Window Functions (e.g., Rank, Lead/Lag), Subqueries, and complex Joins to act as a forensic data investigator.
- API Fluency: You can read and understand JSON and XML. You know what a valid payload looks like and can spot formatting errors or missing keys.
- Data Manipulation: You are an expert at handling large datasets (CSVs, Excel) and understand data types, formatting standards, and normalization concepts.
- You love hunting down the root cause of an error. You don't just fix the wrong price; you find out why the price was wrong and build a rule to stop it from happening again.
- You have high standards for accuracy. You understand that a wrong weight in the system means a financial loss on shipping for the business.
Bonus Points (Nice-to-Haves)
- Familiarity with Visio/Lucidchart to visualize data flows.
- Ability to build simple dashboards in Tableau to track data health scores.
- Basic familiarity with Python or R for data manipulation.
What We Offer
- Health, dental, and vision benefits
- Paid parental leave
- 401(k) with employer match
- A culture of meritocracy that fosters ongoing growth opportunities
- A stable, growing family-owned company that looks after its employees
Loloi Rugs does not discriminate on the basis of race, sex, color, religion, age, national origin, marital status, disability, veteran status, genetic information, sexual orientation, gender identity or any other reason prohibited by law in provision of employment opportunities and benefits. We seek a diverse pool of applicants and consider all qualified candidates regardless of race, ancestry, color, gender identity or expression, sexual orientation, religion, national origin, citizenship, disability, Veteran status, marital status, or any other protected status. If you have a special need or disability that requires accommodation, please let us know.
AGE Solutions is a premier technology and professional services company, providing in-depth consulting, advanced technology solutions, and essential services throughout the U.S. government, defense, and intelligence sectors. Prioritizing innovation and client-focused solutions, we assist major agencies in addressing intricate issues and ensuring a more secure future.
AGE is seeking a Software Developer with strong C# and database expertise to join our development team. This role focuses on building and maintaining robust middle-tier services and data-driven applications. The candidate will work closely with cross-functional teams to design, develop, maintain and optimize scalable backend systems that power critical business functionality.
The ideal candidate combines solid middle-tier development experience with deep database development knowledge and a strong understanding of system design, performance, maintainability, and data integrity.
This role is hybrid in Philadelphia, PA, requiring onsite reporting at the customer's facility at least 1 day/week. Candidates must reside within a commutable distance of Philadelphia, PA in order to work onsite as required.
Responsibilities Include:
- Design, develop, and maintain middle-tier services and backend components using C# and .NET technologies.
- Apply SOLID principles and clean architecture practices in application design.
- Build robust APIs and business logic layers to support web and enterprise applications.
- Collaborate with front-end developers, architects, and DevOps teams to deliver integrated solutions.
- Design, develop, optimize, and maintain relational databases (Oracle preferred).
- Write efficient stored procedures, views, functions, and complex queries.
- Optimize database performance, indexing strategies, and query tuning.
- Participate in code reviews and enforce best practices for clean, maintainable code.
- Troubleshoot and resolve production issues related to application logic and data layers.
- Contribute to architectural decisions and technical design discussions.
- Document technical designs and implementation details.
Required Skills, Qualifications and Experience:
- BA/BS in technical discipline.
- 10 years of experience in middle-tier and database development.
- Experience in applying SOLID principles and object-oriented design patterns.
- Strong proficiency in C# and .NET (.NET Core/.NET 6+) and ASP.NET Web API.
- Experience designing and building RESTful APIs and middle-tier services.
- Experience with ORM frameworks (Entity Framework preferred, Dapper).
- Strong SQL skills and experience with relational databases (Oracle preferred, SQL Server, PostgreSQL).
- Experience writing and optimizing complex queries and stored procedures.
- Strong understanding of data modeling and database design principles.
- Experience with version control systems (TFVC and Git).
- Strong problem-solving skills and attention to detail.
- Must be a United States citizen with a DoD Secret clearance or ability to obtain a favorably adjudicated T3 investigation.
Preferred Qualifications:
- Experience with microservices architecture
- Experience with CI/CD pipelines and DevOps best practices
- Experience with cloud platforms (Azure preferred)
- Experience with caching strategies (Redis, in-memory caching)
- Experience with performance profiling and monitoring tools
- Experience with containerization (Docker, Kubernetes)
- Experience with automated testing frameworks
Work Environment and Physical Demand:
- Must be able to work for extended periods of time at a computer.
Compensation: $115,000+
At AGE Solutions, we reward performance, invest in growth, and share success. Our benefits support the whole person, professionally, financially, and personally.
- 26 Days Paid Leave: Includes vacation, sick, personal time, and holidays. You choose how to use it.
- Performance Bonuses: Awarded based on individual contributions and company-wide results, aligning recognition with impact.
- 401(k) with Match: We match 3% of your contributions with immediate vesting.
- Financial Protection: Company-paid life insurance up to $300K and options for additional coverage for you and your dependents.
- Health Benefits: Multiple medical plans, dental, vision, FSA and HSA options to fit your needs.
- Parental Leave: 15 days of fully paid leave for new parents, because family matters.
- Military Differential Pay: We bridge the gap for employees on active duty, so they don't take a financial hit while serving.
- Professional Growth: Paid training and certifications, tuition reimbursement, and the tools and tech to get the job done right.
- Shared Success: In the event of a company sale, our CEO has committed to returning 80% of net proceeds to employees, ensuring our team shares in the long-term value they help create.
At AGE, you'll do work that matters, supported by a company that delivers for its people.
Job Title – Tech Lead, SFDC
Location – San Francisco Bay Area, CA (Hybrid)
Duration – Long Term Contract (C2C, W2)
We are looking for a Salesforce Technical Lead with strong Healthcare Cloud experience who is highly hands-on and comfortable with Apex coding, complex SOQL queries, and custom development.
Key Responsibilities:
- Lead end-to-end Salesforce implementations with a focus on Salesforce Health Cloud
- Design and develop custom Apex classes, triggers, batch/queueable jobs, and complex SOQL/SOSL queries
- Build and optimize Lightning Web Components (LWC) and Aura components
- Perform hands-on development while mentoring junior developers and reviewing code
- Design and implement integrations using REST/SOAP APIs, Platform Events, and middleware
- Ensure data security, compliance, and HIPAA standards in healthcare solutions
- Own technical architecture, troubleshooting, and performance optimization
- Collaborate closely with business stakeholders, product owners, and cross-functional teams
Required Skills & Experience:
- 13+ years of Salesforce experience with 5+ years as a Technical Lead
- Strong expertise in Salesforce Health Cloud
- Expert-level Apex development, including triggers, async Apex, and governor limit optimization
- Advanced experience writing SOQL queries and handling large data volumes
- Hands-on experience with LWC, Visualforce (as needed), and Salesforce flows
- Experience integrating healthcare systems (FHIR, HL7, EHR/EMR preferred)
- Solid understanding of Salesforce security model, data sharing, and compliance (HIPAA)
- Experience with CI/CD, DevOps tools, and source control (Git)
Position Details
Lakeland Regional Health is a leading medical center located in Central Florida. With a legacy spanning over a century, we have been dedicated to serving our community with excellence in healthcare. As the only Level II Trauma Center for Polk, Highlands, and Hardee counties, and the second busiest Emergency Department in the US, we are committed to providing high-quality care to our diverse patient population. Our facility is licensed for 910 beds and handles over 200,000 emergency room visits annually, along with 49,000 inpatient admissions, 21,000 surgical cases, 4,000 births, and 101,000 outpatient visits.
Lakeland Regional Health is currently seeking motivated individuals to join our team in various entry-level positions. Whether you're starting your career in healthcare or seeking new opportunities to make a difference, we have roles available across our primary and specialty clinics, urgent care centers, and upcoming standalone Emergency Department. With over 7,000 employees, Lakeland Regional Health offers a supportive work environment where you can thrive and grow professionally.
Work Hours per Biweekly Pay Period: 80.00
Location: 1324 Lakeland Hills Blvd Lakeland, FL
Pay Rate: $161,200.00 (minimum) to $215,300.80 (midpoint)
Position Summary
The Physician Advisor serves as a liaison among the clinical documentation improvement (CDI) team (which includes hospital coders), members of the hospital's administration, the medical staff, and the hospital's Utilization Management department to facilitate the development and implementation of clinical documentation improvement initiatives. The Physician Advisor is pivotal in leveraging his or her clinical position to demonstrate the association of care delivery with specificity in documentation. The Physician Advisor is responsible for conducting clinical reviews referred by the Utilization Management, Coding, and Clinical Documentation Improvement departments, and will assist with reviews and appeals of DRG and medical necessity denials.
Position Responsibilities
People At The Heart Of All We Do
- Fosters an inclusive and engaged environment through teamwork and collaboration.
- Ensures patients and families have the best possible experiences across the continuum of care.
- Communicates appropriately with patients, families, team members, and our community in a manner that treasures all people as uniquely created.
Stewardship
- Demonstrates responsible use of LRH's resources including people, finances, equipment and facilities.
- Knows and adheres to organizational and department policies and procedures.
Safety And Performance Improvement
- Behaves in a mindful manner focused on self, patient, visitor, and team safety.
- Demonstrates accountability and commitment to quality work.
- Participates actively in process improvement and adoption of standard work.
Supervisor/Team Lead Capabilities
- Demonstrates accountability for shift/team operations and care/service delivery to support achievement of organizational priorities.
- Coaches front line team members to support ongoing professional development and hardwire technical and professional capabilities.
- Creates a high performing team by building strong relationships, delegating work and nurturing commitment and engagement.
- Manages team conflict/issues implementing appropriate corrective actions, improvement plans and regular performance evaluations.
- Applies change management best practices and standard work to support departmental changes and ensure effective team transition.
- Promotes a healthy and safe culture to advance system, team and service experience.
Standard Work: Physician Advisor
- Acts as a liaison between the CDI professionals, Health Information Management, and the hospital's medical staff to facilitate accurate and complete documentation for coding and abstracting of clinical data, capture of severity, acuity and risk of mortality, HCC/risk adjustment in addition to Diagnosis Related Group (DRG) assignment.
- Performs concurrent and retrospective reviews of selected health records as they pertain to CDI and coding validation, and participates in the development of clinically appropriate and compliant provider queries to further clarify documentation.
- Educates individual hospital staff physicians about International Classification of Diseases (ICD) coding guidelines and clinical terminology to improve their understanding of severity, acuity, risk of mortality, HCC/risk adjustment and DRG assignments on their individual patient records.
- Assists with the evaluation and appeal of concurrent and retrospective denials and retrospective DRG downgrades. May perform peer-to-peer meetings as required.
- Participates in the coding and CDI programs and identifies potential areas for improved documentation of services. Also participates in the Coding and CDI meetings and provides ongoing education to the team members.
- Provides peer-to-peer communication to effect the appropriate response in cases where the physician fails to respond to queries or questions their necessity.
- Responsible for writing and submitting appeals (multiple levels as needed) specifically around medical necessity, non-covered services, authorizations, and inpatient/observation stay related denials. May perform peer-to-peer meetings as required.
- The Physician Advisor is pivotal in leveraging his or her clinical position to demonstrate the association of care delivery with specificity in documentation through effective communication and education of the respective parties.
- Provides his or her expert opinion in relation to clinical validity assessments, and, furthermore, the development of clinically robust and appropriate queries.
- Serves as second level reviewer for UM, providing guidance on appropriate/alternate levels of care based on InterQual guidelines and other appropriate criteria.
Competencies & Skills
Essential:
- Broad knowledge base of clinical medicine across all specialties.
- Knowledge of basic coding guidelines regarding selection of the principal diagnosis and reporting of additional diagnoses and procedures; understanding of the DRG system and levels of comorbidities; and concepts of risk adjustment, severity of illness, risk of mortality, case mix index, prospective payment, hospital-acquired conditions, and patient safety indicators.
- Ability to organize tasks effectively and efficiently and to act independently through the application of critical thinking skills.
- Computer skills appropriate to the position.
- Excellent written and verbal communication skills.
Qualifications & Experience
Essential:
- Medical degree
- Licensed to practice medicine in the state of Florida; board certified in internal medicine; and meets any other reasonable professional criteria established by LRH or the hospital.
Other Information:
Experience (Essential):
- Minimum of two years of experience in conducting coding and CDI reviews.
- Knowledge of coding guidelines and how they translate from clinical documentation.
- Knowledge of DRGs, Risk of Mortality, Severity of Illness, Mortality Rate, HCC/risk adjustment, CMI and the impact of clinical documentation/coding in relation to these metrics.
- Excellent computer skills with prior exposure to the Microsoft Office suite.
Velocity Clinical Research is an owned and integrated research site organization, providing excellence in patient care, high quality data and fully integrated research sites. At Velocity, we align our values and behaviors to give our employees the best chance of delivering on our brand promise: to bring innovative medical treatments to patients. We are committed to making clinical trials succeed by generating high quality data from as many patients as possible, as quickly as possible while providing exemplary patient care at every step.
As an employee of Velocity, you are the most integral part of our mission. For talented candidates who perform at a high level, Velocity will invest to support career advancement and reward performance. Whether you are new to clinical research or are an industry veteran, we invite you to apply to Velocity.
Benefits include medical, dental and vision insurance, paid time off and company holidays, 401(k) retirement plan with company-match, and an annual incentive program.
Summary:
The Sub-Investigator is responsible for the clinical safety of patients participating in the clinical trial, collecting and recording accurate clinical data while ensuring that the well-being and interests of enrolled subjects are met. The Sub-I provides essential clinical support to clinical research coordinators, principal investigators, and other clinical trials staff.
Duties/Responsibilities
- Serve as leader of a study team to execute clinical trials
- Mentor and train staff in the conduct of clinical trials, protocol requirements, communication, and trial management skills
- Create training strategies and mitigation plans
- Conduct and manage clinical trials in accordance with the study protocol, GCP, ICH Guidelines and Velocity’s SOPs
- Implement and coordinate assigned clinical trials, including start-up, vendor management, subject recruitment, source development review, subject scheduling, protocol training, collection of regulatory documents, conducting visits, ensuring data is entered in a timely manner and all queries are resolved, managing and reporting adverse events, serious adverse events, and deviations, implementing new protocol amendments, and providing all close-out reports.
- Apply project management concepts to manage risk and improve quality in the conduct of a clinical research study
- Develop, coordinate, and implement research and administrative strategies to successfully manage assigned protocols.
- Communicate effectively and professionally with coworkers, leadership, study subjects, sponsors, CROs, and vendors.
- Ensure good documentation practices are applied by team members when collecting and correcting data, transferring data to sponsor/CRO data capture systems and resolving queries
- Ensure confidentiality of patient protected health information, sponsor confidential information and Velocity confidential information is maintained by all team members
- Develop communication and escalation strategies within teams that ensure patient safety is upheld and all adverse events, serious adverse events, and adverse events of special interest are followed and reported in accordance with the protocol and Velocity SOPs
- Ensure all data is entered into the sponsor's data portal and all queries are resolved in a timely manner
- Ensure staff are delegated and trained appropriately, and that delegation and training are documented
- Ensure the creation, collection and submission of regulatory documents to Sponsors and IRBs as required per protocol, GCP/ICH regulations and IRB requirements.
- Evaluate potential subjects for participation in clinical trials including phone and in person prescreens.
- Create and execute recruitment strategies in conjunction with patient recruitment staff
- Incorporate key timelines, endpoints, required vendors, and patient population when planning for each assigned protocol.
- Incorporate understanding of how decisions affect the bottom-line including links between operations and company’s financial performance and how it is essential to create value of all stakeholders of the organization when planning for each assigned protocol.
- Incorporate understanding of product development lifecycle and significance of protocol design including critical data points when planning for each assigned protocol
- Develop Quality Control strategies for team member projects
- Perform clinical duties (e.g., drug preparation and administration, FibroScan, phlebotomy, ECG, lab processing) within scope
- Promote respect for cultural diversity and conventions with all individuals.
- Understand the disease process or condition under study
- Other duties as assigned
Required Skills/Abilities:
- Must undertake all training and certification required by sponsors and CROs to carry out clinical trials within specified timelines.
- Safe handling of data and records regarding privacy and confidentiality, per HIPAA requirements.
- Practices professionalism and integrity in all actions; demonstrated ability to foster teamwork, cooperation, self-control, and flexibility to get the work done.
- Ability to communicate effectively in English (both verbal and written).
- Up to 10% travel, as needed, for project team meetings, client presentations, and other professional meetings/conferences.
- Other duties as assigned.
Education and Experience:
- Must be a licensed MD, DO, NP, or PA.
- 5+ years of clinical management experience or equivalent applicable experience in the clinical research industry
Physical Requirements:
- Prolonged periods of sitting at a desk and working on a computer.
- Must be able to lift up to 15 pounds at times.
NOTE: The above Job Description is intended to communicate the general function of the mentioned position and by no means should be considered an exhaustive or complete outline of the specific tasks and functions that will be required. Additionally, specific tasks and duties of the position are subject to change as the Company, the department and circumstances change. All employees are expected to perform their duties within their ability as required by the job and/or as requested by management.
Associate Director of Communications Systems
Arlington, VA (On-Site)
About Us
Ennoble Care is a mobile primary care, palliative care, and hospice service provider with patients in New York, New Jersey, Maryland, DC, Virginia, Oklahoma, Kansas, Pennsylvania, and Georgia. Ennoble Care's clinicians go to the home of the patient, providing continuum of care for those with chronic conditions and limited mobility. Ennoble Care offers a variety of programs including remote patient monitoring, behavioral health management, and chronic care management, to ensure that our patients receive the highest quality of care by a team they know and trust. We seek individuals who are driven to make a difference and embody our motto, "To Care is an Honor." Join Ennoble Care today!
Overview
Ennoble Care is seeking an Associate Director of Communications Systems to own our Dialpad and Zoho CRM platforms end to end—from day-to-day administration to the analytics that drive operational decisions for clinical leadership and the C-suite.
This is not just a systems administration role. You'll inherit active automation projects in Zoho (workflow rules, field permissions, validation logic, cross-module integrations) and a growing analytics practice around Dialpad call data (transfer acceptance rates, queue performance, agent productivity). You'll be expected to build on both—and you'll have AI tools at your disposal to do it. We're actively using AI to automate workflows, analyze call data, and eliminate manual processes across both platforms. You'll be expected to leverage these tools to move faster than a traditional admin ever could.
You'll report directly to the CIO and have regular visibility with the COO and executive leadership. This position is on-site at our Arlington, VA corporate headquarters.
Key Responsibilities
Dialpad Administration & Analytics (~610 users across 15+ offices and 11 states)
• Manage user provisioning/deprovisioning, license management (Connect vs Contact Center), number assignment, and extensions
• Configure and optimize call routing, IVR structures, queues, and office/department setup
• Build and maintain performance dashboards for clinical operations leadership—transfer acceptance rates (warm vs cold), queue performance, agent productivity, ring timeout analysis, voicemail detection
• Leverage AI tools to automate call data analysis, anomaly detection, and recurring reporting
• Track and report on KPIs weekly: call answer rate, abandon rate, average speed to answer, queue wait time
• Conduct root cause analysis when performance dips—whether it's a routing issue, a training gap, or a staffing constraint
• Serve as primary technical contact with Dialpad support and account team
• Troubleshoot call quality issues, agent status problems, and routing errors
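The weekly KPIs named above (call answer rate, abandon rate, average speed to answer) are simple aggregates over call records. As a minimal sketch — the `Call` record shape here is invented for illustration, not Dialpad's actual export schema:

```python
from dataclasses import dataclass

@dataclass
class Call:
    answered: bool        # was the call picked up by an agent?
    wait_seconds: float   # time in queue before answer or abandon

def weekly_kpis(calls: list[Call]) -> dict[str, float]:
    """Compute answer rate, abandon rate, and average speed to answer."""
    total = len(calls)
    answered = [c for c in calls if c.answered]
    answer_rate = len(answered) / total if total else 0.0
    abandon_rate = 1.0 - answer_rate if total else 0.0
    # Average speed to answer is measured over answered calls only.
    asa = sum(c.wait_seconds for c in answered) / len(answered) if answered else 0.0
    return {
        "answer_rate": answer_rate,
        "abandon_rate": abandon_rate,
        "avg_speed_to_answer_s": asa,
    }
```

A production version would pull the records from the Dialpad data warehouse rather than in-memory objects, but the aggregation logic is the same.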
Zoho CRM Administration & Automation (~50+ liaisons, scaling to 100+)
• Manage user creation, role/profile management, field-level permissions, module configuration, and layout customization
• Own and extend existing workflow automations—bonus point calculations, pathway expiration enforcement, focused pathway caps, cross-module lookups (house call / hospice), referral-to-liaison mapping
• Drive data integrity: account deduplication, referral source accuracy, sync monitoring between Zoho, OA (OperationsAccel), and MatrixCare
• Build liaison performance dashboards and automate the pulse report
• Reduce bonus reconciliation from ~16 hours/month of manual work to near-zero through automation
• Use AI-assisted development to build and iterate on Zoho workflow rules, validation logic, and cross-module integrations faster
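The bonus reconciliation task above boils down to diffing computed bonus amounts against what was actually paid and flagging every discrepancy. A minimal sketch, assuming the amounts have already been extracted from Zoho and the payroll system into plain dictionaries keyed by liaison ID:

```python
def reconcile_bonuses(expected: dict[str, float], paid: dict[str, float],
                      tolerance: float = 0.01) -> list[tuple[str, float]]:
    """Return (liaison_id, delta) for every liaison whose paid bonus
    differs from the computed amount by more than `tolerance`."""
    mismatches = []
    for liaison, amount in expected.items():
        delta = round(paid.get(liaison, 0.0) - amount, 2)
        if abs(delta) > tolerance:
            mismatches.append((liaison, delta))
    # Payments with no corresponding computed amount are also suspect.
    for liaison in paid.keys() - expected.keys():
        mismatches.append((liaison, round(paid[liaison], 2)))
    return sorted(mismatches)
```

Automating the extraction on both sides and running this comparison on a schedule is what turns ~16 hours/month of manual review into a short exception list.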
Integrations & Cross-Platform
• Coordinate user lifecycle (provisioning/deprovisioning) across Dialpad and Zoho as part of onboarding/offboarding workflows
• Maintain integrations between Dialpad, Zoho, CallRail, and other systems
• Monitor sync reliability between Zoho, OA, and the Dialpad data warehouse (Azure SQL)
• Support other no-code/low-code tools (Scribe, Keragon, Emitrr) as needed
Performance Monitoring & Reporting
• Track and report on Dialpad and Zoho KPIs weekly to leadership
• Identify trends and proactively address issues before they impact metrics
• Support Operations Analyst with data extraction for deeper analysis
Documentation & Training
• Create and maintain system documentation, runbooks, and SOPs
• Develop training resources to improve adoption and reduce errors
• Conduct end-user training for new hires and existing staff
What Success Looks Like
• You own Dialpad and Zoho administration completely—user provisioning, routing changes, and system configuration no longer route through the helpdesk or the CIO
• Leadership gets recurring, self-service visibility into call center performance and liaison productivity without asking for it
• Manual reconciliation work that currently takes 16+ hours/month is automated or eliminated
• When something breaks or trends in the wrong direction, you catch it before anyone else does
Qualifications
Required
• 3+ years of experience administering a cloud communications platform (Dialpad, RingCentral, 8x8, Five9, or similar)
• 2+ years of experience administering a CRM (Zoho CRM strongly preferred; Salesforce acceptable)
• Hands-on experience building CRM automations—workflow rules, validation rules, field-level security, cross-module lookups
• Comfortable writing SQL queries for analytics (you'll query an Azure SQL data warehouse—and use AI tools to accelerate query development and analysis)
• Experience building dashboards or reports in Power BI, Looker, or similar
• Strong analytical skills—able to interpret data and identify root causes
• Excellent communication skills with ability to present metrics to leadership
• Strong attention to detail—you'll reconcile bonus payments where errors directly impact employee compensation
• Ability to work on-site in Arlington, VA
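The "SQL queries for analytics" requirement above is the kind of grouped aggregate that feeds the transfer-acceptance dashboards. A sketch using an in-memory SQLite stand-in for the Azure SQL warehouse — the `transfers` table and its columns are invented for illustration:

```python
import sqlite3

# Toy stand-in for the call-data warehouse; schema is hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transfers (id INTEGER, transfer_type TEXT, accepted INTEGER)")
conn.executemany("INSERT INTO transfers VALUES (?, ?, ?)", [
    (1, "warm", 1), (2, "warm", 1), (3, "warm", 0),
    (4, "cold", 1), (5, "cold", 0), (6, "cold", 0),
])

# Acceptance rate per transfer type (warm vs cold), as on the dashboards.
rows = conn.execute("""
    SELECT transfer_type,
           ROUND(AVG(accepted), 2) AS acceptance_rate,
           COUNT(*)                AS attempts
    FROM transfers
    GROUP BY transfer_type
    ORDER BY transfer_type
""").fetchall()
```

Against Azure SQL the connection would go through an ODBC driver instead, but the T-SQL for this aggregate is essentially identical.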
Preferred
• Zoho CRM administration certification
• Experience with Zoho-to-external-system integrations (webhooks, APIs, middleware like Zoho Flow)
• Healthcare industry experience (home health, hospice, or multi-site provider groups)
• Experience with Dialpad specifically (API, webhooks, admin console, contact center configuration)
• Familiarity with data warehousing concepts and ETL pipelines
• Experience using AI/LLM tools (Claude, ChatGPT, Copilot) to accelerate technical work—writing automations, analyzing data, building integrations
• Background in contact center operations (not just IT administration)
What We Offer
• Ownership of two mission-critical platforms with direct impact on business performance
• Direct visibility with CIO, COO, and executive leadership
• AI-forward team—you'll have enterprise AI tools and an automation backlog with clear ROI from day one
• Growing organization—the systems you build now will scale with 2x liaison headcount and continued M&A expansion
• Competitive compensation and benefits package
• Career growth opportunities within IT and operations
Compensation
Salary Range: $90,000 - $110,000 with 10% Bonus based on Annual KPIs
Benefits
Full-time employees qualify for the following benefits:
• Medical, Dental, Vision and supplementary benefits such as Life Insurance, Short Term and Long Term Disability, Flexible Spending Accounts for Medical and Dependent Care, Accident, Critical Illness, and Hospital Indemnity
• Paid Time Off
• Paid Office Holidays
All employees qualify for these benefits:
• Paid Sick Time
• 401(k) with up to 3% company match
• Referral Program
• Payactiv: pay-on-demand — cash out earned money when and where you need it!
Candidates must disclose any current or future need for employment-based immigration sponsorship (including, but not limited to, OPT, STEM OPT, or visa sponsorship) before an offer of employment is extended.
Ennoble Care is an Equal Opportunity Employer, committed to hiring the best team possible, and does not discriminate against protected characteristics including but not limited to race, age, sexual orientation, gender identity and expression, national origin, religion, disability, and veteran status.
Company Description
At Titl, we simplify the real estate process by eliminating paperwork, legal obstacles, and delays associated with buying, owning, or selling a home. Our advanced technology ensures transparency and peace of mind throughout every transaction. We provide a modern and user-friendly way to handle property—designed for today and prepared for future needs.
Role Description
We're seeking an experienced Full-Stack Engineer to join our team working on a sophisticated property data research and report generation platform. This role involves building and maintaining enterprise-grade systems that automate property data extraction from government sources, generate comprehensive property reports, and manage complex business workflows including payments, authentication, and blockchain integration.
What You'll Work On
- Backend Services: Develop and maintain NestJS microservices handling property data scraping, PDF generation, report aggregation, and enterprise account management
- Frontend Applications: Build responsive Next.js applications with complex state management and real-time updates
- Data Pipeline: Work with automated scraping systems using Puppeteer and AI-powered document processing (Google Document AI, OpenAI)
- Integration Development: Implement OAuth flows, Stripe payment processing, webhook handling, and third-party API integrations
- Queue Management: Design and maintain Bull queue systems for background job processing and async workflows
- Blockchain Integration: Work with Polymesh blockchain for property ownership verification and asset tokenization
- Database Design: Create efficient Prisma schemas and optimize PostgreSQL queries for complex property data relationships
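Bull itself is a Redis-backed Node.js library, but the retry semantics that the queue work above depends on (attempts with exponential backoff) are easy to sketch language-agnostically. A minimal Python illustration of the pattern — not Bull's API:

```python
import time

def run_with_retries(job, max_attempts: int = 4, base_delay: float = 1.0,
                     sleep=time.sleep):
    """Run `job` until it succeeds, retrying on failure with exponential
    backoff; re-raise the last error once max_attempts is exhausted."""
    for attempt in range(max_attempts):
        try:
            return job()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            sleep(base_delay * (2 ** attempt))  # 1x, 2x, 4x, ...
```

In Bull the equivalent is configured declaratively per job (`attempts` plus a `backoff` option) rather than written by hand; the sketch just shows what that configuration does.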
Required Technical Skills
Core Stack (Must Have)
- Backend: Advanced proficiency in NestJS with deep understanding of dependency injection, decorators, guards, and service patterns
- Frontend: Expert-level Next.js 14 (App Router) and React with TypeScript
- Database: Strong Prisma ORM experience and PostgreSQL optimization skills
- TypeScript: Production-level TypeScript across full stack
- API Design: RESTful API design, DTOs, validation, and Swagger documentation
Infrastructure & DevOps
- Docker: Container orchestration and development environments
- Cloud Platforms: Google Cloud Platform (Cloud Storage, Cloud Run)
- Queue Systems: Bull or similar job queue systems (Redis-backed)
- Monorepo: Experience with pnpm workspaces or similar monorepo tooling
Authentication & Payments
- OAuth 2.0: Multi-provider authentication (Google, Facebook, LinkedIn)
- JWT: Token-based authentication and authorization patterns
- Stripe: Payment processing, webhooks, subscription management, and usage-based billing
Specialized Skills
- Web Scraping: Puppeteer or similar browser automation tools
- PDF Processing: PDF generation, manipulation, and data extraction
- AI/ML Integration: Experience with AI APIs (OpenAI, Google AI, etc.)
- Background Jobs: Async processing, retry logic, and error handling
Highly Desired Skills
- Blockchain: Polymesh or Ethereum blockchain integration experience
- Document Processing: OCR, document AI, or legal document processing
- Property/Real Estate Domain: Understanding of property records, deeds, liens, title commitments
- Legal Tech: Experience with legal document workflows or compliance systems
- Testing: Jest, testing-library, E2E testing frameworks
- Performance Optimization: Query optimization, caching strategies, lazy loading
- Security: OWASP best practices, rate limiting, encryption
Architecture & Design Requirements
You should be comfortable with:
- Design Patterns: Service-oriented architecture, repository pattern, factory pattern
- Dependency Injection: Understanding NestJS DI container and module system
- Database Relations: Complex multi-tenant data models with proper isolation
- State Management: React Context, server/client component patterns
- Error Handling: Comprehensive error handling, retry logic, fallback mechanisms
- API Security: Rate limiting, API key management, webhook signature verification
Experience Requirements
- 5+ years of full-stack development experience
- 3+ years with TypeScript in production environments
- 2+ years with NestJS or similar enterprise Node.js frameworks
- 2+ years with modern React and Next.js
- Experience building production SaaS applications with multi-tenant architecture
- Track record of shipping complex features end-to-end
- Experience with third-party integrations and webhook systems
Domain Knowledge (Preferred)
- Understanding of property data and real estate records
- Familiarity with government data systems and public records
- Knowledge of legal document structures (deeds, liens, mortgages, title commitments)
- Experience with regulated industries and compliance requirements
- Understanding of Miami-Dade County or similar municipal systems (bonus)
Development Practices
You should have experience with:
- Git workflows: Feature branches, pull requests, code review
- Documentation: Writing clear technical documentation and API specs
- Testing: Unit tests, integration tests, E2E tests
- CI/CD: Automated testing and deployment pipelines
- Agile: Working in iterative development cycles
- Code Quality: ESLint, Prettier, TypeScript strict mode
Problem-Solving Skills
We're looking for someone who can:
- Debug complex distributed systems across multiple services
- Optimize database queries and reduce API response times
- Design scalable architectures for high-volume data processing
- Handle edge cases in automated scraping and data extraction
- Troubleshoot integration issues with third-party services
- Implement robust error handling and monitoring
Communication & Collaboration
- Clear written communication for documentation and code reviews
- Ability to explain technical concepts to non-technical stakeholders
- Collaborative approach to problem-solving
- Proactive in identifying and addressing technical debt
- Experience mentoring junior developers (preferred)
Package Manager Note
This project uses pnpm exclusively for monorepo management. Experience with pnpm workspaces is preferred, but npm/yarn monorepo experience transfers well.
What Makes You Stand Out
- Contributions to open-source projects
- Experience with LangChain or LangGraph for AI orchestration
- FastAPI or Python experience (for AI service integration)
- Understanding of title insurance or property ownership verification
- Experience with Puppeteer clusters and browser farm optimization
- Background in fintech or regulated industries
- Experience with multi-environment deployments (local, staging, production)
Working Style
This role requires:
- Attention to detail when working with legal and financial data
- Systematic approach to debugging complex systems
- Ability to work independently on ambiguous problems
- Comfort with reading and understanding existing codebases
- Pragmatic decision-making balancing speed and quality
Tech Stack Summary: NestJS • Next.js • TypeScript • Prisma • PostgreSQL • Puppeteer • Bull • OAuth • Stripe • Google Document AI • OpenAI • Docker • GCP • Polymesh • pnpm
This role offers the opportunity to work on challenging technical problems at the intersection of PropTech, LegalTech, and AI, building systems that handle real-world property data at scale.
RESPONSIBILITIES:
* Architect, design, and maintain scalable CI/CD pipelines using Azure/AWS DevSecOps.
* Build and optimize Docker-based microservices, images, and deployment pipelines.
* Lead deployments across Docker Swarm, Kubernetes/EKS, and multi-location environments.
* Develop infrastructure automation using Ansible, bash scripting, Terraform and Git-based workflow.
* Manage release pipelines using container registries, artifact feeds, template pipelines, and multi-stage workflows.
* Design multi-environment strategies for dev, QA, staging, and production deployment.
* Implement cloud-native services with AWS & Azure cloud platforms.
* Implement basic security practices, including IAM roles, secrets management, and access controls.
* Develop secure, modular, reusable build and release systems.
* Work closely with full-stack engineering teams (Angular, Java, Python, backend APIs, database engineers).
* Mentor junior DevOps engineers and lead DevOps roadmap decisions.
KNOWLEDGE REQUIREMENTS:
DevOps Expertise :
Azure DevOps pipelines, YAML templating, CI/CD strategy, Git branching models.
Containerization & Orchestration :
Docker images, Docker Compose, Docker Swarm, multi-node/multi-location deployments.
Cloud Technologies :
Azure deployments & infrastructure, AWS (IAM, Lambda, S3, CloudWatch).
Programming / Scripting Languages :
Python, Bash, Linux/Unix administration, awk, shell automation, Groovy.
Infrastructure Automation :
Ansible playbooks, tasks/roles, inventory design, configuration management.
Distributed Deployment Architecture :
Multi-site replication, node selection by IP, dynamic service routing.
Database Stack Experience :
PostgreSQL, MySQL, MariaDB operations & migrations.
Observability & Logging :
CloudWatch monitoring, log collection, Prometheus, Grafana, reporting & metrics.
Version Control & Build Systems :
Azure DevOps, Git, Git submodules, artifact storage, registry solutions, secrets management.
Nice to have: AI knowledge/experience and a willingness to learn.
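The observability stack listed above pairs Prometheus with Grafana, where dashboard panels are ultimately PromQL queries sent to Prometheus's HTTP API (`GET /api/v1/query` for instant queries). A minimal sketch that builds such a request URL — the server address and metric name are invented for illustration:

```python
from urllib.parse import urlencode

def instant_query_url(base: str, promql: str) -> str:
    """Build a URL for Prometheus's instant-query endpoint (/api/v1/query)."""
    return f"{base.rstrip('/')}/api/v1/query?" + urlencode({"query": promql})

# Per-job 5-minute request rate: a typical Grafana panel query.
url = instant_query_url(
    "http://prometheus:9090",
    "sum by (job) (rate(http_requests_total[5m]))",
)
```

Issuing a GET against the resulting URL returns a JSON body with the current value per `job` label; Grafana builds the same kind of request under the hood from a panel's query field.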
EDUCATION & EXPERIENCE REQUIREMENTS
* BS degree in Electrical/Computer Engineering, Computer Science or related field. MS preferred.
* 7+ years of experience in a software DevOps/development/test capacity with enterprise server, storage, or networking products.
Job Description
At Boeing, we innovate and collaborate to make the world a better place. We’re committed to fostering an environment for every teammate that’s welcoming, respectful and inclusive, with great opportunity for professional growth. Find your future with us.
The Boeing Company is currently seeking a Software Engineer–DevSecOps (Senior or Lead) to support our E7A Software Development Environments team located in Tukwila, Washington.
This team is responsible for building and maintaining multiple secure development environments that enable continuous integration and secure software development for the U.S. Air Force E-7A Rapid Prototype Program. This position will focus on developing and maintaining RKE2- and EKS-backed software development tools supporting Maven, Gradle, and Python development pipelines in both AWS and onsite disconnected environments.
Position Responsibilities:
Lead the design, development, testing and verification of E-7A software engineering tools
Develop and enhance our DevSecOps operations across multiple environments
Collaborate with cross-functional teams to integrate and implement solutions for software development
Provide end user support and troubleshooting to software development teams using program defined pipeline configurations and development environments
Monitor and maintain software development environments and tools
Participate in peer reviews and provide technical leadership to junior engineers
Utilize analytical problem-solving skills to resolve issues and improve processes and tools
Our team is currently hiring for a broad range of experience levels including: Senior and Lead Engineers.
This position is expected to be 100% onsite. The selected candidate will be required to work onsite at one of the listed location options.
Security Clearance and Export Control Requirements:
This position requires an active Secret U.S. Security Clearance. (A U.S. Security Clearance that has been active in the past 24 months is considered active.)
Basic Qualifications (Required Skills/ Experience):
3+ years of experience administering Linux systems
3+ years of experience administering Kubernetes EKS or RKE2 deployments
3+ years of experience with AWS cloud development and containerization
Programming or scripting experience in Java or C++ and Python
Preferred Qualifications (Desired Skills/Experience):
Bachelor of Science degree from an accredited course of study in engineering, engineering technology (includes manufacturing engineering technology), chemistry, physics, mathematics, data science, or computer science
Ability to set up and manage a toolchain that uses GitLab, GitLab CI, Gradle, Maven, Nexus/Artifactory, SonarQube, Prometheus, Fluent Bit, CloudWatch, Helm, Terraform, Ansible, OpenShift, Docker, and more
AWS Certification is preferred
Containerization knowledge including Docker, Kubernetes, Helm, Rancher, Istio, Big Bang
Understanding of designing and implementing full stack/Microservice infrastructure
Understanding of secure software development methodologies and Security First mindset
Understanding of cloud architecture and design methodologies
Strong working knowledge of the CI/CD process, including debugging, test, and integration of software tools
Knowledge of general software development and testing tools, including compilers, linkers, debuggers, and requirements management tools
Experience with Confluence, Jira
Drug Free Workplace:
Boeing is a Drug Free Workplace (DFW) where post-offer applicants and employees are subject to testing for marijuana, cocaine, opioids, amphetamines, PCP, and alcohol when criteria are met as outlined in our policies.
Union:
This is a union-represented position.
CodeVue Coding Challenge:
To be considered for this position you will be required to complete a technical assessment as part of the selection process. Failure to complete the assessment will remove you from consideration.
Pay & Benefits:
At Boeing, we strive to deliver a Total Rewards package that will attract, engage and retain the top talent. Elements of the Total Rewards package include competitive base pay and variable compensation opportunities.
The Boeing Company also provides eligible employees with an opportunity to enroll in a variety of benefit programs, generally including health insurance, flexible spending accounts, health savings accounts, retirement savings plans, life and disability insurance programs, and a number of programs that provide for both paid and unpaid time away from work.
The specific programs and options available to any given employee may vary depending on eligibility factors such as geographic location, date of hire, and the applicability of collective bargaining agreements.
Pay is based upon candidate experience and qualifications, as well as market and business considerations.
Summary pay range for Senior Level: $130,900 - $177,100.
Summary pay range for Lead Level: $161,500 - $218,500.
Applications for this position will be accepted until Mar. 30, 2026
Export Control Requirements:
This position must meet U.S. export control compliance requirements. To meet U.S. export control compliance requirements, a “U.S. Person” as defined by 22 C.F.R. §120.62 is required. “U.S. Person” includes U.S. Citizen, U.S. National, lawful permanent resident, refugee, or asylee.
Export Control Details:
US based job, US Person required
Relocation
This position offers relocation based on candidate eligibility.
Security Clearance
This position requires an active U.S. Secret Security Clearance (U.S. Citizenship Required). (A U.S. Security Clearance that has been active in the past 24 months is considered active)
Visa Sponsorship
Employer will not sponsor applicants for employment visa status.
Shift
This position is for 1st shift
Equal Opportunity Employer:
Boeing is an Equal Opportunity Employer. Employment decisions are made without regard to race, color, religion, national origin, gender, sexual orientation, gender identity, age, physical or mental disability, genetic factors, military/veteran status or other characteristics protected by law.
RESPONSIBILITIES:
- Architect, design, and maintain scalable CI/CD pipelines using Azure/AWS DevSecOps.
- Build and optimize Docker-based microservices, images, and deployment pipelines.
- Lead deployments across Docker Swarm, Kubernetes/EKS, and multi-location environments.
- Develop infrastructure automation using Ansible, bash scripting, Terraform and Git-based workflow.
- Manage release pipelines using container registries, artifact feeds, template pipelines, and multi-stage workflows.
- Design multi-environment strategies for dev, QA, staging, and production deployment.
- Implement cloud-native services with AWS & Azure cloud platforms.
- Implement basic security practices, including IAM roles, secrets management, and access controls.
- Develop secure, modular, reusable build and release systems.
- Work closely with full-stack engineering teams (Angular, Java, Python , backend APIs, database engineers).
- Mentor junior DevOps engineers and lead DevOps roadmap decisions.
KNOWLEDGE REQUIREMENTS:
DevOps Expertise:
Azure DevOps pipelines, YAML templating, CI/CD strategy, Git branching models.
Containerization & Orchestration:
Docker images, Docker Compose, Docker Swarm, multi-node/multi-location deployments.
Cloud Technologies:
Azure deployments & infrastructure, AWS (IAM, Lambda, S3, CloudWatch).
Programming / Scripting Languages:
Python, Bash, Linux/Unix administration, awk, shell automation, groovy.
Infrastructure Automation:
Ansible playbooks, tasks/roles, inventory design, configuration management.
Distributed Deployment Architecture:
Multi-site replication, node selection by IP, dynamic service routing.
Database Stack Experience:
PostgreSQL, MySQL, MariaDB operations & migrations.
Observability & Logging:
CloudWatch monitoring, log collection, Prometheus, Grafana, reporting & metrics.
Version Control & Build Systems:
Azure Devops, Git, Git submodules, artifact storage, registry solutions, Secrets Management.
Nice to have AI knowledge/experience and willingness to learn.
EDUCATION & EXPERIENCE REQUIREMENTS
- BS degree in Electrical/Computer Engineering, Computer Science or related field. MS preferred.
- 7+ years experience in a software devops/development/test capacity with enterprise server, storage or networking products.
RESPONSIBILITIES:
- Architect, design, and maintain scalable CI/CD pipelines using Azure/AWS DevSecOps.
- Build and optimize Docker-based microservices, images, and deployment pipelines.
- Lead deployments across Docker Swarm, Kubernetes/EKS, and multi-location environments.
- Develop infrastructure automation using Ansible, bash scripting, Terraform and Git-based workflow.
- Manage release pipelines using container registries, artifact feeds, template pipelines, and multi-stage workflows.
- Design multi-environment strategies for dev, QA, staging, and production deployment.
- Implement cloud-native services with AWS & Azure cloud platforms.
- Implement basic security practices, including IAM roles, secrets management, and access controls.
- Develop secure, modular, reusable build and release systems.
- Work closely with full-stack engineering teams (Angular, Java, Python , backend APIs, database engineers).
- Mentor junior DevOps engineers and lead DevOps roadmap decisions.
KNOWLEDGE REQUIREMENTS:
DevOps Expertise:
Azure DevOps pipelines, YAML templating, CI/CD strategy, Git branching models.
Containerization & Orchestration:
Docker images, Docker Compose, Docker Swarm, multi-node/multi-location deployments.
Cloud Technologies:
Azure deployments & infrastructure, AWS (IAM, Lambda, S3, CloudWatch).
Programming / Scripting Languages:
Python, Bash, Linux/Unix administration, awk, shell automation, groovy.
Infrastructure Automation:
Ansible playbooks, tasks/roles, inventory design, configuration management.
Distributed Deployment Architecture:
Multi-site replication, node selection by IP, dynamic service routing.
Database Stack Experience:
PostgreSQL, MySQL, MariaDB operations & migrations.
Observability & Logging:
CloudWatch monitoring, log collection, Prometheus, Grafana, reporting & metrics.
Version Control & Build Systems:
Azure DevOps, Git, Git submodules, artifact storage, registry solutions, secrets management.
Nice to have: AI knowledge/experience and a willingness to learn.
EDUCATION & EXPERIENCE REQUIREMENTS
- BS degree in Electrical/Computer Engineering, Computer Science or related field. MS preferred.
- 7+ years of experience in a software DevOps/development/test capacity with enterprise server, storage, or networking products.
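One of the knowledge areas above, "node selection by IP" under Distributed Deployment Architecture, lends itself to a short sketch. The following Python uses only the standard library's `ipaddress` module; the site names and CIDR ranges are hypothetical, not taken from the listing:

```python
import ipaddress

# Hypothetical per-site subnets; in practice these would come from
# inventory or configuration management (e.g. an Ansible inventory).
SITE_SUBNETS = {
    "us-east": ipaddress.ip_network("10.1.0.0/16"),
    "us-west": ipaddress.ip_network("10.2.0.0/16"),
    "eu-central": ipaddress.ip_network("10.3.0.0/16"),
}

def select_site(client_ip: str, default: str = "us-east") -> str:
    """Route a client to the site whose subnet contains its IP address."""
    addr = ipaddress.ip_address(client_ip)
    for site, subnet in SITE_SUBNETS.items():
        if addr in subnet:
            return site
    return default  # fall back when no subnet matches

print(select_site("10.2.14.7"))  # -> us-west
```

A real deployment would layer health checks and dynamic service routing on top of this, but the subnet-membership test is the core of IP-based node selection.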
At Boeing, we innovate and collaborate to make the world a better place. We're committed to fostering an environment for every teammate that's welcoming, respectful and inclusive, with great opportunity for professional growth. Find your future with us.
The Boeing Company is currently seeking a Software Engineer - DevSecOps (Senior or Lead) to support our E-7A Software Development Environments team located in Tukwila, Washington.
This team is responsible for building and maintaining multiple secure development environments that enable continuous integration and secure software development for the U.S. Air Force E-7A Rapid Prototype Program. This position will focus on developing and maintaining RKE2- and EKS-backed software development tools supporting Maven, Gradle, and Python development pipelines in both AWS and on-site disconnected environments.
Position Responsibilities:
* Lead the design, development, testing and verification of E-7A software engineering tools
* Develop and enhance our DevSecOps operations across multiple environments
* Collaborate with cross-functional teams to integrate and implement solutions for software development
* Provide end user support and troubleshooting to software development teams using program defined pipeline configurations and development environments
* Monitor and maintain software development environments and tools
* Participate in peer reviews and provide technical leadership to junior engineers
* Utilize analytical problem-solving skills to resolve issues and improve processes and tools
Our team is currently hiring for a broad range of experience levels including: Senior and Lead Engineers.
This position is expected to be 100% onsite. The selected candidate will be required to work onsite at one of the listed location options.
Security Clearance and Export Control Requirements:
This position requires an active Secret U.S. Security Clearance. (A U.S. Security Clearance that has been active in the past 24 months is considered active.)
Basic Qualifications (Required Skills/ Experience):
* 3+ years of experience administering Linux systems
* 3+ years of experience administering Kubernetes EKS or RKE2 deployments
* 3+ years of experience with AWS cloud development and containerization
* Programming or scripting experience in Java or C++ and Python
Preferred Qualifications (Desired Skills/Experience):
* Bachelor of Science degree from an accredited course of study in engineering, engineering technology (includes manufacturing engineering technology), chemistry, physics, mathematics, data science, or computer science
* Ability to set up and manage a toolchain that uses GitLab, GitLab CI, Gradle, Maven, Nexus/Artifactory, SonarQube, Prometheus, Fluent Bit, CloudWatch, Helm, Terraform, Ansible, OpenShift, Docker, and more
* AWS Certification is preferred
* Containerization knowledge including Docker, Kubernetes, Helm, Rancher, Istio, Big Bang
* Understanding of designing and implementing full stack/Microservice infrastructure
* Understanding of secure software development methodologies and Security First mindset
* Understanding of cloud architecture and design methodologies
* Strong Working knowledge of the CI/CD process including debugging, test, and integration of software tools
* Knowledge of general software development and testing tools, including compilers, linkers, debuggers, and requirements management tools
* Experience with Confluence, Jira
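As a small illustration of one piece of the toolchain above, metrics scraped by Prometheus travel in its text exposition format, which a few lines of standard-library Python can parse. This is a sketch only; the metric sample below is made up, and production scraping is done by Prometheus itself or an official client library:

```python
import re

# A made-up scrape payload in the Prometheus text exposition format.
SAMPLE = """\
# HELP http_requests_total Total HTTP requests.
# TYPE http_requests_total counter
http_requests_total{method="get",code="200"} 1027
http_requests_total{method="post",code="200"} 3
"""

# metric_name{optional,labels} value
LINE_RE = re.compile(r'^(\w+)(?:\{([^}]*)\})?\s+([0-9.eE+-]+)$')

def parse_exposition(text):
    """Return (name, label_string, value) tuples from an exposition payload."""
    samples = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and HELP/TYPE metadata comments
        m = LINE_RE.match(line)
        if m:
            name, labels, value = m.groups()
            samples.append((name, labels or "", float(value)))
    return samples

print(parse_exposition(SAMPLE))
```

Running this yields the two counter samples with their label strings and float values, which is roughly what a monitoring pipeline sees before aggregation.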
Drug Free Workplace:
Boeing is a Drug Free Workplace (DFW) where post-offer applicants and employees are subject to testing for marijuana, cocaine, opioids, amphetamines, PCP, and alcohol when criteria are met as outlined in our policies.
Union:
This is a union-represented position.
CodeVue Coding Challenge:
To be considered for this position you will be required to complete a technical assessment as part of the selection process. Failure to complete the assessment will remove you from consideration.
Pay & Benefits:
At Boeing, we strive to deliver a Total Rewards package that will attract, engage and retain the top talent. Elements of the Total Rewards package include competitive base pay and variable compensation opportunities.
The Boeing Company also provides eligible employees with an opportunity to enroll in a variety of benefit programs, generally including health insurance, flexible spending accounts, health savings accounts, retirement savings plans, life and disability insurance programs, and a number of programs that provide for both paid and unpaid time away from work.
The specific programs and options available to any given employee may vary depending on eligibility factors such as geographic location, date of hire, and the applicability of collective bargaining agreements.
Pay is based upon candidate experience and qualifications, as well as market and business considerations.
Summary pay range for Senior Level: $130,900 - $177,100.
Summary pay range for Lead Level: $161,500 - $218,500.
Applications for this position will be accepted until Mar. 30, 2026
Export Control Requirements:
This position must meet U.S. export control compliance requirements. To meet U.S. export control compliance requirements, a "U.S. Person" as defined by 22 C.F.R. §120.62 is required. "U.S. Person" includes U.S. Citizen, U.S. National, lawful permanent resident, refugee, or asylee.
Export Control Details:
US based job, US Person required
Relocation
This position offers relocation based on candidate eligibility.
Security Clearance
This position requires an active U.S. Secret Security Clearance (U.S. Citizenship Required). (A U.S. Security Clearance that has been active in the past 24 months is considered active)
Visa Sponsorship
Employer will not sponsor applicants for employment visa status.
Shift
This position is for 1st shift
Equal Opportunity Employer:
Boeing is an Equal Opportunity Employer. Employment decisions are made without regard to race, color, religion, national origin, gender, sexual orientation, gender identity, age, physical or mental disability, genetic factors, military/veteran status or other characteristics protected by law.
Remote working/work at home options are available for this role.
RESPONSIBILITIES:
~ Architect, design, and maintain scalable CI/CD pipelines using Azure/AWS DevSecOps.
~ Build and optimize Docker-based microservices, images, and deployment pipelines.
~ Lead deployments across Docker Swarm, Kubernetes/EKS, and multi-location environments.
~ Develop infrastructure automation using Ansible, bash scripting, Terraform and Git-based workflow.
~ Manage release pipelines using container registries, artifact feeds, template pipelines, and multi-stage workflows.
~ Design multi-environment strategies for dev, QA, staging, and production deployment.
~ Implement cloud-native services with AWS & Azure cloud platforms.
~ Implement basic security practices, including IAM roles, secrets management, and access controls.
~ Develop secure, modular, reusable build and release systems.
~ Work closely with full-stack engineering teams (Angular, Java, Python , backend APIs, database engineers).
~ Mentor junior DevOps engineers and lead DevOps roadmap decisions.
KNOWLEDGE REQUIREMENTS:
DevOps Expertise :
Azure DevOps pipelines, YAML templating, CI/CD strategy, Git branching models.
Containerization & Orchestration :
Docker images, Docker Compose, Docker Swarm, multi-node/multi-location deployments.
Cloud Technologies :
Azure deployments & infrastructure, AWS (IAM, Lambda, S3, CloudWatch).
Programming / Scripting Languages :
Python, Bash, Linux/Unix administration, awk, shell automation, groovy.
Infrastructure Automation :
Ansible playbooks, tasks/roles, inventory design, configuration management.
Distributed Deployment Architecture :
Multi-site replication, node selection by IP, dynamic service routing.
Database Stack Experience :
PostgreSQL, MySQL, MariaDB operations & migrations.
Observability & Logging :
CloudWatch monitoring, log collection, Prometheus, Grafana, reporting & metrics.
Version Control & Build Systems :
Azure Devops, Git, Git submodules, artifact storage, registry solutions, Secrets Management.
Nice to have AI knowledge/experience and willingness to learn.
EDUCATION & EXPERIENCE REQUIREMENTS
~ BS degree in Electrical/Computer Engineering, Computer Science or related field. MS preferred.
~7+ years experience in a software devops/development/test capacity with enterprise server, storage or networking products.
Responsibilities
The DevOps / SRE will:
- Build and maintain the infrastructure that supports the firm's trading systems
- Collaborate with development teams to design and implement automated build and deployment pipelines
- Drive the rapid adoption of new processes/systems
- Provide hands-on support to the trading team
Qualifications
- BS/MS in Computer Science, Engineering, or related discipline
- 5+ years of experience in the Platform, SRE, Production, or Systems Engineering fields
- Excellent knowledge of all aspects of the software engineering process, including coding, testing, deployment, scalability, security, and maintainability
- Ability to set up and manage CI/CD activities and tools (e.g. GitLab, Bitbucket), as well as build your own solutions (e.g. Java/Gradle)
- Track record of working with distributed systems in a trading environment, e.g. Aeron, Kafka, and RabbitMQ
- Deep understanding of best practices, design patterns, and principles for highly decoupled and scalable systems
- Good knowledge of Unix systems / Bash / networks
- Experience with infrastructure and application observability tooling, e.g. Datadog, Prometheus, and Grafana
- Strong knowledge in coding/scripting (Java, Python, Go, or Bash)
- Experience with automation/configuration frameworks using Terraform, Kustomize, Ansible, Helm, or an equivalent
Desired skills
- Experience with cloud platforms (ideally AWS)
- Experience in API management (routing, gateways, versioning) with a profound understanding of API development aspects
- Ability to apply strategies for efficient communication, data consistency, and resilience across microservices, including experience with API design, message-based communication, and event-driven architectures
- Experience in defining and enforcing architectural patterns (SOA, CQRS, Event Sourcing, etc.)
- Experience in performance/stress testing and system tuning
Key Responsibilities:
- Design and deploy observability frameworks leveraging tools such as Grafana, Dynatrace, Prometheus, ELK, Splunk, etc. Define best practices for monitoring, alerting, and visualization across hybrid and multi-cloud environments.
- Develop strategies for monitoring KPIs tied to business outcomes (e.g., sales performance, supply chain efficiency, customer experience).
- Collaborate with business and IT teams to identify key metrics and integrate them into dashboards and alerting systems.
- Implement AIOps solutions using industry-leading platforms like OpenAI, AWS Bedrock, Google Gemini, Anthropic, and similar technologies.
- Develop predictive analytics and anomaly detection models to proactively identify and resolve operational issues.
- Integrate observability tools with ITSM platforms and automation workflows. Enable automated root cause analysis and remediation using AI/ML models.
- Provide observability strategies for infrastructure (servers, storage, cloud), applications (microservices, APIs), and networks (LAN/WAN, SD-WAN). Collaborate with DevOps, SRE, and IT operations teams to ensure end-to-end visibility and reliability.
- Establish observability standards, KPIs, and SLAs for performance and availability. Ensure compliance with security and regulatory requirements in monitoring solutions.
- Develop scalable architecture using LLMs, agentic frameworks, and multi-modal AI technologies.
- Build AI-powered analytics platforms for IT operations analysis, anomaly detection, and predictive insights.
- Architect and deploy intelligent chatbots for IT support and self-service capabilities.
- Integrate AI solutions with existing IT operations tools and workflows.
- Implement automated remediation and root cause analysis using AI/ML models.
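The anomaly-detection responsibility above can be illustrated with a deliberately simple sketch: a z-score filter over a metric series, using only Python's standard library. This is not the AI/ML modeling the role actually calls for; the threshold and latency values are illustrative:

```python
import statistics

def detect_anomalies(samples, threshold=2.0):
    """Flag points more than `threshold` sample standard deviations from the mean."""
    mean = statistics.fmean(samples)
    stdev = statistics.stdev(samples)
    return [x for x in samples if abs(x - mean) > threshold * stdev]

# Hypothetical request latencies (ms) with one obvious spike.
latencies_ms = [102, 98, 101, 99, 100, 97, 103, 100, 450]
print(detect_anomalies(latencies_ms))  # -> [450]
```

Production systems replace this with models that handle seasonality and drift, but the same idea of scoring deviations against a baseline underlies most alerting on metrics.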
Qualifications:
- 10-13 years of relevant experience
- Hands-on experience with Grafana, Dynatrace, and other monitoring platforms.
- Practical experience implementing AI-based solutions for anomaly detection, predictive maintenance, and automated remediation. Familiarity with OpenAI, Bedrock, Gemini, Anthropic, or similar AI platforms.
- Strong understanding of infrastructure, application architectures, and networking. Experience with cloud platforms (AWS, Azure, GCP) and container orchestration (Kubernetes).
- Proficiency in Python, Bash, or similar scripting languages for automation and integration.
- Strong experience with LLMs (OpenAI, Anthropic, Gemini, Bedrock) and agentic AI solutions.
- Hands-on experience in designing AI architectures for enterprise IT environments.
- Proficiency in Python or similar languages for AI model integration and automation.
Location: Alpharetta, GA (3 days a week onsite)
Duration: 6 months
Job Description:
We are seeking a skilled Site Reliability Engineer to join our team and help build, maintain, and scale our cloud-native infrastructure. You will work closely with development and operations teams to ensure our systems are reliable, scalable, and efficient. The ideal candidate is passionate about automation, observability, and infrastructure-as-code, and thrives in a collaborative, fast-paced environment.
Key Responsibilities
Design, implement, and manage cloud infrastructure on Azure using Terraform and Terragrunt.
Maintain and optimize Kubernetes clusters on Azure Kubernetes Service (AKS).
Build and manage CI/CD pipelines using GitHub Actions/Workflows and ArgoCD for GitOps deployments.
Enhance system reliability by implementing monitoring, alerting, and observability solutions with Grafana.
Automate operational tasks to reduce toil and improve team efficiency.
Participate in on-call rotations, incident response, and post-mortem analysis.
Collaborate with development teams to improve application performance, scalability, and resilience.
Implement and advocate for SRE best practices, including SLIs, SLOs, and error budgets.
Continuously improve system performance, cost efficiency, and security.
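The SLI/SLO and error-budget practice mentioned in the responsibilities above reduces to simple arithmetic, which a minimal Python sketch can show; the SLO target and request counts below are hypothetical:

```python
def error_budget_remaining(slo_target: float, total_requests: int,
                           failed_requests: int) -> float:
    """Fraction of the period's error budget still unspent.

    slo_target: e.g. 0.999 for a 99.9% availability SLO.
    """
    allowed_failures = (1.0 - slo_target) * total_requests
    if allowed_failures == 0:
        return 0.0  # a 100% SLO leaves no budget at all
    return max(0.0, 1.0 - failed_requests / allowed_failures)

# A 99.9% SLO over 1,000,000 requests allows 1,000 failures;
# 250 failures so far leaves roughly 75% of the budget.
print(error_budget_remaining(0.999, 1_000_000, 250))  # ~0.75
```

Teams typically gate risky releases on this number: with most of the budget left, ship; with it nearly exhausted, prioritize reliability work.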
Required Skills & Qualifications
3+ years of experience in an SRE, DevOps, or cloud infrastructure role.
Strong experience with Azure cloud services and infrastructure.
Hands-on experience with Java, and with Terraform and Terragrunt for infrastructure-as-code.
Proficiency with Kubernetes (preferably AKS) and container orchestration.
Experience with CI/CD tools, especially GitHub Workflows/Actions and ArgoCD.
Solid understanding of observability tools like Grafana (Prometheus, Loki, Tempo experience is a plus).
Education Requirements: Bachelor's degree required (Master's preferred).