
Data Engineer (hybrid)

Baxter Planning

Barcelona

Hybrid

EUR 50,000 - 70,000

Full-time

16 days ago


Vacancy description

A forward-thinking data solutions company located in Barcelona is seeking a Data Engineer to enhance their data lake platform. The successful candidate will work on building reliable data pipelines with AWS and Python, ensuring high data quality and supporting analytics across teams. This role offers a competitive salary, private health insurance, gym membership, and opportunities for professional development in a hybrid work environment.

Benefits

Competitive salary
Private health insurance
Gym membership
Learning opportunities
Flexible benefits
Team events

Requirements

  • 4+ years of experience in building and operating production data pipelines.
  • Strong proficiency in Python, with async/concurrency experience as a plus.
  • Experience with AWS services like S3, Glue, and Redshift.

Responsibilities

  • Build and maintain the data-serving layer with curated datasets.
  • Develop and support near-real-time ingestion pipelines.
  • Design and manage BI-friendly data models.

Skills

Production data pipelines
Python
AWS (S3, Glue, Athena)
Data modeling (star schema)
CI/CD practices

Tools

AWS DMS
Airflow
Redshift
Polars

Job description

As a Data Engineer at Baxter Planning you will play a key role in building and evolving our data lake platform, supporting analytics and data-driven products across the business. You will design and operate reliable, production-grade data pipelines and curated datasets using modern AWS services and Python. This role focuses on data modeling, data quality, and near-real-time ingestion to ensure trustworthy, BI-ready data. You will work closely with engineers and stakeholders while contributing to architecture, automation, and best practices.

What you’ll do
  • Build the data-serving layer: curated datasets, marts, and product-ready tables
  • Develop incremental / micro-batch pipelines and support CDC near-real-time ingestion (AWS DMS)
  • Design BI-friendly data models (star schema) and manage schemas
  • Build ETL/ELT in Python (Polars) and serve/query via Athena and/or Redshift (see the sketch after this list)
  • Implement data quality + observability (freshness, completeness, duplicates, schema drift, anomalies)
  • Orchestrate with Airflow and AWS-native tools (e.g., Step Functions)
  • Contribute to CI/CD, IaC, architecture discussions, and best practices
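
The ETL/ELT and data-quality items above typically combine into a single curation step. Below is a minimal, illustrative sketch in Python with Polars of a micro-batch job that deduplicates raw records, runs basic quality checks, and writes a curated table; the bucket paths, column names, and freshness threshold are hypothetical placeholders, not details from this posting.

```python
# Minimal micro-batch curation sketch with basic data-quality checks.
# All paths and column names below are hypothetical.
from datetime import datetime, timedelta, timezone

import polars as pl

RAW_PATH = "s3://example-raw-bucket/orders/*.parquet"        # hypothetical raw zone
CURATED_PATH = "s3://example-curated-bucket/orders.parquet"  # hypothetical serving layer


def run_batch() -> None:
    # Keep only the latest record per business key (newest updated_at wins).
    df = (
        pl.read_parquet(RAW_PATH)
        .sort("updated_at", descending=True)
        .unique(subset=["order_id"], keep="first")
    )

    # Data-quality checks: duplicates, completeness, and freshness.
    assert df["order_id"].n_unique() == df.height, "duplicate business keys"
    assert df["order_id"].null_count() == 0, "null business keys"
    # Assumes updated_at is stored as a timezone-aware UTC timestamp.
    assert df["updated_at"].max() >= datetime.now(timezone.utc) - timedelta(hours=1), "stale data"

    # Write the curated, BI-ready table back to the serving layer.
    df.write_parquet(CURATED_PATH)


if __name__ == "__main__":
    run_batch()
```

In production a step like this would typically run as an Airflow task on a short schedule, with the assertions replaced by checks that emit metrics and alerts.
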
What we’re looking for
  • 4+ years building and operating production data pipelines
  • Strong Python (async/concurrency is a plus)
  • Strong AWS experience across services such as S3, Glue, Athena, Redshift, Lake Formation, CloudWatch, DMS, Lambda, Step Functions, SQS/SNS, ECS, DynamoDB (plus CloudFormation)
  • Experience with lakehouse tables (Delta or similar), schema evolution, partitioning, compaction, upserts/merge (a merge sketch follows this list)
  • Solid data modeling skills (star schema) and commitment to testing & data quality
  • Experience running AWS DMS in production (monitoring/troubleshooting)
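
For the upsert/merge point above, the core pattern is "replace rows whose keys match, append the rest". A lakehouse format such as Delta handles this with a native MERGE, but the idea can be sketched with plain Polars over a single Parquet partition; the target path and key column below are hypothetical.

```python
# Illustrative upsert/merge pattern over one curated partition.
# The target path and key column are hypothetical placeholders.
import polars as pl

TARGET_PATH = "s3://example-curated-bucket/orders/date=2024-06-01/part.parquet"


def upsert(new_rows: pl.DataFrame, key: str = "order_id") -> None:
    current = pl.read_parquet(TARGET_PATH)

    # Drop current rows whose key is being updated (anti-join), then append the new rows.
    merged = pl.concat(
        [current.join(new_rows.select(key), on=key, how="anti"), new_rows],
        how="vertical",
    )

    merged.write_parquet(TARGET_PATH)
```

A transactional table format (Delta or similar) makes the same operation atomic and also covers compaction and schema evolution.
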
What we offer
  • A competitive salary
  • Work in a friendly and diverse team
  • Private health insurance
  • Gym membership
  • Learning opportunities
  • A hybrid work model
  • Flexible benefits
  • Team events