
"Python Software Engineer, Data"

Genzeon

United States

On-site

USD 90,000 - 140,000

Full time

30+ days ago


Job summary

An innovative firm is seeking a skilled Python Software Engineer to enhance healthcare outcomes through data. In this pivotal role, you will design and maintain data pipelines, ensuring the reliability and efficiency of our data infrastructure. Collaborating with data scientists and analysts, you'll integrate diverse healthcare datasets, optimize performance, and ensure compliance with regulations. If you have a passion for leveraging data to improve healthcare and possess strong technical and soft skills, this is an exciting opportunity to make a significant impact in a dynamic environment.

Qualifications

  • 5+ years in software engineering focused on data processing and ETL.
  • Proven experience with Apache Airflow in production environments.

Responsibilities

  • Design and develop scalable data pipelines using Apache Airflow.
  • Integrate various healthcare data sources for analytics.

Skills

Python
SQL
Apache Airflow
Data Processing
ETL Processes
Problem-Solving
Communication

Education

Bachelor’s degree in Computer Science
Master’s degree in Data Engineering

Tools

Apache Airflow
AWS
Azure
GCP

Job description

Job Title: Python Software Engineer, Data

Remote

Full time (10+ years exp)

This role requires strong core Python engineering experience.

Job Description:

We are seeking a skilled and motivated Python Software Engineer with expertise in Apache Airflow to join our healthcare team. The ideal candidate will have a strong background in software engineering, a passion for improving healthcare outcomes through data, and experience working in a healthcare environment. You will play a critical role in building, maintaining, and optimizing our data pipeline to ensure the availability, reliability, and efficiency of our data infrastructure.

Key Responsibilities:

  1. Design and Develop Data Pipelines: Build and maintain scalable, reliable, and efficient data pipelines using Apache Airflow to automate data processing tasks.
  2. Data Integration: Integrate various data sources, including electronic health records (EHR), claims data, and other healthcare-related datasets, to provide a unified view for analytics and reporting.
  3. Data Management: Ensure data accuracy, consistency, and completeness by implementing robust data quality checks and monitoring processes.
  4. Collaboration: Work closely with data scientists, analysts, and other stakeholders to understand data needs and translate them into technical requirements.
  5. Optimization: Continuously improve the performance of the data pipelines by identifying bottlenecks and implementing optimizations.
  6. Documentation: Create and maintain comprehensive documentation of the data pipelines, workflows, and architecture to ensure transparency and ease of maintenance.
  7. Compliance: Ensure all data handling processes comply with healthcare regulations such as HIPAA and data privacy standards.
  8. Support and Troubleshooting: Provide ongoing support and troubleshooting for data pipelines, including responding to issues and ensuring timely resolution.
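For context on the "Data Management" responsibility above, a minimal sketch of the kind of data-quality check a pipeline task might run before loading records (the field names `patient_id` and `claim_amount` are illustrative, not from this posting; in an Airflow deployment this logic would typically run inside a task):

```python
# Toy data-quality gate: validate incoming records and separate
# clean rows from rejects, as described under "Data Management".
# Field names here are hypothetical examples, not a real schema.

def check_record(record):
    """Return a list of data-quality problems found in one record."""
    problems = []
    if not record.get("patient_id"):
        problems.append("missing patient_id")
    amount = record.get("claim_amount")
    if not isinstance(amount, (int, float)) or amount < 0:
        problems.append("invalid claim_amount")
    return problems

def run_quality_checks(records):
    """Split records into clean rows and (record, problems) rejects."""
    clean, rejected = [], []
    for rec in records:
        issues = check_record(rec)
        if issues:
            rejected.append((rec, issues))
        else:
            clean.append(rec)
    return clean, rejected
```

Rejected rows would then be routed to monitoring or a quarantine table rather than silently dropped, which is what "robust data quality checks and monitoring processes" usually implies in practice.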

Qualifications:

Education: Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.

Experience:

  1. 5+ years of experience in software engineering, with a focus on data processing and on designing and implementing ETL processes.
  2. Proven experience with Apache Airflow in a production environment.
  3. Experience working with healthcare data (e.g., EHR, claims data) and understanding of healthcare regulations like HIPAA.

Technical Skills:

  1. Fluency in Python, including automated testing.
  2. Proficiency in SQL and experience with relational databases.
  3. Experience with one or more orchestration frameworks (Airflow, Prefect, Dagster, or similar).
  4. Experience with cloud platforms (e.g., AWS, Azure, GCP).

Soft Skills:

  1. Strong problem-solving skills and attention to detail.
  2. Excellent communication and collaboration skills.
  3. Ability to work independently and manage multiple tasks in a fast-paced environment.

Preferred Qualifications:

  1. Experience with additional data orchestration tools.
  2. Knowledge of big data technologies (e.g., Hadoop, Spark) is a plus.
  3. Familiarity with data governance and data security best practices.
  4. Knowledge of healthcare data standards like HL7 or FHIR.

Seniority level: Mid-Senior level

Employment type: Full-time

Job function: Information Technology

Industries: IT Services and IT Consulting
