Senior Data Engineer - Technical Implementation & Solutions Delivery

Gradient AI

United States

Remote

USD 100,000 - 130,000

Full time

Today

Job summary

A data solutions company is seeking a Senior Data Engineer to lead the design and deployment of data pipelines for customer implementations. This remote role requires expertise in ETL processes, particularly using Airflow, with responsibilities including technical leadership and collaboration with data teams. Ideal candidates have a BS degree and significant experience with data solutions and orchestration frameworks.

Benefits

Generous stock options
Flexible schedule
Full benefits package

Qualifications

  • 5+ years of experience with data solutions in a professional setting.
  • 3+ years using data orchestration frameworks such as Airflow or Dagster.
  • Strong proficiency in Python and SQL.

Responsibilities

  • Lead design and deployment of data pipelines for new customers.
  • Build infrastructure for ETL of data from various sources.
  • Collaborate with data scientists for health-related data processing.

Skills

Airflow
SQL
Python
AWS
Apache Spark

Education

BS in Computer Science, Bioinformatics, or a quantitative discipline

Tools

Databricks
Snowflake

Job description
Overview

Senior Data Engineer - Technical Implementation & Solutions Delivery • Remote, USA

Gradient AI is revolutionizing Group Health and P&C insurance with AI-powered solutions that help insurers predict risk more accurately, improve profitability, and automate underwriting and claims. Our SaaS platform taps into one of the industry’s largest data lakes—tens of millions of policies and claims—to deliver deep, actionable insights. Trusted by leading carriers, MGAs, TPAs, and self-insured employers, Gradient AI has grown rapidly since our founding in 2018. Backed by $56M in Series C funding, we’re scaling fast—and it’s an exciting time to join the team.

About the Role

We are looking for a Senior Data Engineer to join our Technical Implementation & Solutions Delivery team to lead the design, build, and deployment of data pipelines for new customer implementations and to support existing customer activity. This role requires expertise in using Airflow to orchestrate ETL pipelines, ensuring the efficient, reliable movement of healthcare data across systems. You’ll work closely with our engineering and client teams to ensure smooth data integration and top-notch customer support. You’ll also help shape our vision and play a key role in transforming the entire industry (really!). This is a fully remote opportunity.

What You Will Do
  • Own the technical implementation process for new customers, from ingestion to deployment, ensuring accuracy, consistency, and performance with an eye for scalable and repeatable processes.
  • Build and maintain infrastructure for the extraction, transformation, and loading (ETL) of data from a variety of sources using SQL, AWS, and healthcare-specific big data technologies and analytics platforms.
  • Develop new tools to quickly extract, process, and validate client data from different sources and platforms.
  • Collaborate with data scientists to transform large volumes of health-related and bioinformatics data into modeling-ready formats, prioritizing data quality, integrity, and reliability in healthcare applications.
  • Apply health and bioinformatics expertise to design data pipelines that translate complex medical concepts into actionable requirements.

Skills & Qualifications
  • BS in Computer Science, Bioinformatics, or another quantitative discipline
  • 5+ years of experience implementing, managing, or optimizing data solutions in a professional setting
  • 3+ years of experience using data orchestration frameworks such as Airflow, Dagster, or Prefect
  • Experience serving as a technical lead, setting coding standards, and mentoring other engineers is strongly preferred
  • Ability to work with and visualize health and/or medical data, with Insurtech industry exposure, is considered a plus
  • Knowledge of healthcare data standards and a solid understanding of healthcare data privacy and security regulations (such as HIPAA) are highly desirable
  • Strong proficiency in Python and SQL within a professional environment
  • Hands-on knowledge of big data tools like Apache Spark (PySpark), Databricks, Snowflake, or similar platforms
  • Comfortable working within cloud computing environments, preferably AWS, along with Linux systems

What We Offer
  • A fun, team-oriented startup culture
  • Generous stock options - we all get to own a piece of what we’re building
  • Flexible schedule that supports working from home
  • Full benefits package including medical, dental, vision, 401(k), paid parental leave, and more
  • Ample opportunities to learn and take on new responsibilities

We are an equal opportunity employer.

Other

#LI-REMOTE
