
Senior Data Engineer - Fintech, hybrid

Jordan Martorell S.L.

Madrid

Hybrid

COP 251.405.000 - 342.826.000

Full-time

Today

Vacancy description

A global fintech company is seeking a Senior Data Engineer to develop and maintain data pipelines in a hybrid work environment. The role involves collaborating with various teams to ensure accurate data for analysis. Candidates should have over three years of experience with Python, SQL, and Airflow. The position offers a competitive salary and benefits package.

Benefits

Competitive salary and benefits package
Discretionary bonus based on performance
Continued personal development through training

Requirements

  • 3+ years of experience building and optimizing data pipelines.
  • Proficiency in Python, SQL, and Airflow.
  • Knowledge of software engineering practices in data.

Responsibilities

  • Design, develop, and maintain ELT/ETL data pipelines.
  • Manage overall pipeline orchestration using Airflow.
  • Help implement data governance policies.

Skills

Python
SQL
Airflow
Data/analytics engineering
Collaboration

Job description

Senior Data Engineer - Fintech

Ebury is a leading global fintech company that empowers businesses to trade and grow internationally. It offers a comprehensive suite of products, including international payments and collections, FX risk management, trade finance, and API integrations. Founded in 2009 by Juan Lobato and Salvador García, Ebury is one of the fastest-growing global fintechs, with over 1,700 employees and 38 offices in more than 25 countries.

Our Madrid Office is seeking a Senior Data Engineer to join our Data Engineering team on a hybrid schedule: 4 days in the office, 1 day working from home.

Our data mission is to develop and maintain Ebury's Data Warehouse and serve it to the whole company; Data Scientists, Data Engineers, Analytics Engineers, and Data Analysts work collaboratively to:

  • Build ETLs and data pipelines to serve data in our platform
  • Provide clean, transformed data ready for analysis and used by our BI tool
  • Develop department and project-specific data models and serve these to teams across the company to drive decision making
  • Automate end solutions so we can all spend time on high-value analysis rather than running data extracts

What we offer:

  • Competitive salary and benefits package
  • Discretionary bonus based on performance
  • Continued personal development through training and certification
  • We are Open Source friendly, following Open Source principles in our internal projects and encouraging contributions to external projects

Responsibilities:

  • Be mentored by one of our outstanding team members through a 30/60/90-day plan designed just for you
  • Participate in data modelling reviews and discussions to validate the models' accuracy, completeness, and alignment with business objectives
  • Design, develop, deploy, and maintain ELT/ETL data pipelines from a variety of data sources (transactional databases, REST APIs, file-based endpoints)
  • Deliver data models hands-on, using solid software engineering practices (e.g., version control, testing, CI/CD)
  • Manage overall pipeline orchestration using Airflow (hosted in Cloud Composer), as well as execution using GCP hosted services such as Container Registry, Artifact Registry, Cloud Run, Cloud Functions, and GKE (a minimal orchestration sketch follows this list)
  • Work on reducing technical debt by addressing code that is outdated, inefficient, or no longer aligned with best practices or business needs
  • Collaborate with team members to reinforce best practices across the platform, encouraging a shared commitment to quality
  • Help implement data governance policies, including data quality standards, data access control, and data classification
  • Identify opportunities to optimize and refine existing processes
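
For context, the Airflow orchestration mentioned above generally takes the form of a DAG like the minimal sketch below. The DAG name, schedule, and extract/load callables are hypothetical placeholders assuming Airflow 2.4+ on Cloud Composer; this is illustrative only, not Ebury's actual pipeline code.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_transactions(**context):
    # Placeholder: pull records from a transactional database or REST API.
    return []


def load_to_warehouse(**context):
    # Placeholder: write the extracted records into the data warehouse.
    pass


with DAG(
    dag_id="elt_transactions_daily",   # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                 # Airflow 2.4+ style schedule argument
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_transactions)
    load = PythonOperator(task_id="load", python_callable=load_to_warehouse)

    extract >> load  # run the load step only after the extract step succeeds

The extract >> load line defines the dependency: Airflow schedules the load task only after the extract task has succeeded.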

About you:

  • 3+ years of data/analytics engineering experience building, maintaining, and optimizing data pipelines and ETL processes in big data environments
  • Proficiency in Python, SQL, and Airflow
  • Knowledge of software engineering practices in data (SDLC, RFC, etc.)
  • Awareness of the latest developments and industry standards in Data
  • Fluency in English

If you're excited about this job opportunity but your background doesn't exactly match the requirements in the job description, we strongly encourage you to apply anyway. You may be just the right candidate for this or other positions we have.

About Us

Ebury is a FinTech success story, positioned among the fastest-growing international companies in its sector. Founded in 2009, we are headquartered in London and have more than 1,700 staff with a presence in more than 25 countries worldwide. Cultural diversity is part of what makes Ebury a special place to be.
