
Data Engineer - Work from home

Nearsure

Madrid

Remote

EUR 50,000 - 70,000

Full-time

Today

Job description

A leading tech company is seeking a Senior Data Engineer to design ETL/ELT pipelines and optimize data architectures. This fully remote position offers competitive benefits, autonomy, and a focus on well-being. Ideal candidates have significant experience with Python, SQL, AWS, and Apache Spark. Join a supportive team that prioritizes employee wellness and productivity.

Benefits

Competitive salary
100% remote work
Paid time off
Team-building activities
Birthday day off

Requirements

  • 5+ years of experience designing and developing large-scale data solutions.
  • 5+ years of experience with Python for data manipulation and scripting.
  • 3+ years of experience with AWS data services like S3 and Glue.

Responsibilities

  • Design and maintain ETL/ELT pipelines and workflows.
  • Implement data lakehouse architectures using Apache Iceberg.
  • Ensure data quality and compliance through monitoring and validation.

Skills

Python
SQL
Apache Spark
Data Pipeline Development
Cloud Computing
Data Modeling
Apache Airflow

Education

Bachelor's Degree in Computer Science or related field

Tools

AWS
Git/GitHub
Apache Iceberg

Job description
Senior Data Engineer – Nearsure

Explore the Nearsure experience! Join our close-knit LATAM remote team. Connect through fun activities like coffee breaks, tech talks, and games with your teammates and management.

Say goodbye to micromanagement! We champion autonomy, open communication, and respect for diversity as our core values.

Your well-being matters
  • Our People Care team supports you from day one with time-off requests and wellness check-ins.
  • Our Accounts Management team ensures smooth, effective client relationships so you can focus on what you do best.
Benefits
  • Competitive USD salary
  • 100% remote work: work from anywhere, with the option to meet at coworking spaces across LATAM.
  • Paid time off at full salary, so you can rest and recharge.
  • National holidays celebrated.
  • Sick leave.
  • Refundable Annual Credit.
  • Team-building activities: coffee breaks, tech talks, and after-work gatherings.
  • Birthday day off: an extra day off during your birthday week.
About the project

Senior Data Engineer.

How you'll contribute
  • Design, develop, and maintain batch ETL/ELT pipelines and data workflows for large-scale datasets in AWS.
  • Implement and optimize data lakehouse architectures using Apache Iceberg on S3, ensuring schema evolution, partitioning strategies, and table maintenance.
  • Build and tune distributed data processing jobs with Apache Spark (PySpark preferred) for performance and cost efficiency.
  • Orchestrate workflows using Apache Airflow, including DAG design, scheduling, and SLA monitoring.
  • Apply best practices in code quality, version control (Git/GitHub), and CI/CD for data engineering projects.
  • Ensure data quality, security, and compliance through validation, monitoring, and governance frameworks (Glue Catalog, IAM, encryption).
  • Collaborate with cross-functional teams (data scientists, analysts, architects) to deliver scalable and reliable data solutions.
  • Contribute to the development and optimization of analytics applications, ensuring they are powered by well-structured, high-quality data pipelines.
  • Continuously evaluate and adopt emerging technologies and AI-powered tools to improve productivity and maintain technical excellence.
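To make the pipeline responsibilities above concrete, here is a minimal sketch of the kind of record-level validation and quarantine step a batch ETL job might include before loading data downstream. The schema, field names, and rules are hypothetical illustrations, not details from this posting.

```python
from datetime import datetime

# Hypothetical schema for illustration only; not taken from the posting.
REQUIRED_FIELDS = {"event_id": str, "amount": float, "event_date": str}

def validate_record(record: dict) -> list:
    """Return validation errors for one record; an empty list means valid."""
    errors = []
    for field, expected_type in REQUIRED_FIELDS.items():
        value = record.get(field)
        if value is None:
            errors.append(f"missing field: {field}")
        elif not isinstance(value, expected_type):
            errors.append(f"wrong type for {field}: {type(value).__name__}")
    # Example domain rule: event_date must be an ISO-8601 calendar date.
    if isinstance(record.get("event_date"), str):
        try:
            datetime.strptime(record["event_date"], "%Y-%m-%d")
        except ValueError:
            errors.append("unparseable event_date")
    return errors

def partition_batch(records: list) -> tuple:
    """Split a batch into valid rows and quarantined (row, errors) pairs."""
    valid, quarantined = [], []
    for rec in records:
        errs = validate_record(rec)
        if errs:
            quarantined.append((rec, errs))
        else:
            valid.append(rec)
    return valid, quarantined
```

In a real pipeline this step would typically run as a Spark transformation with quarantined rows written to a dead-letter location and surfaced through monitoring, rather than as plain Python.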
Ideal candidate
  • Bachelor's Degree in Computer Science, Engineering, or related field.
  • 5+ years of experience designing and developing large-scale data solutions.
  • 5+ years of experience with Python for data manipulation, scripting, and integration tasks.
  • 5+ years of experience with SQL & DBMS (PostgreSQL, MySQL, SQL Server) and data modeling (Star Schema, Snowflake Schema) and query tuning.
  • 3+ years of experience with Apache Spark (PySpark preferred).
  • 3+ years of experience building batch ETL/ELT pipelines for large-scale datasets.
  • 3+ years of experience with AWS data services (S3, Athena/Presto, Glue, Lambda, CloudWatch).
  • 2+ years of experience with Apache Iceberg (table design, partition strategies, schema evolution, maintenance, ingestion pipelines into Apache Iceberg on S3).
  • 2+ years of experience with AWS EMR as the execution platform for big data workloads.
  • 2+ years of experience orchestrating data workflows with Apache Airflow.
  • 2+ years of experience with Git/GitHub (branching strategies, pull request reviews, CI/CD).
  • Experience designing efficient ingestion pipelines into analytical systems.
  • Proficiency in logging, auditing, and monitoring for data pipelines.
  • Experience with data cleansing, validation, and transformation for analytical/reporting systems.
  • Familiarity with data security and privacy practices.
  • Solid understanding of cloud-native analytics architectures (data lake/lakehouse, ELT patterns).
  • Proven ability to leverage AI-powered assistants (e.g., GitHub Copilot).
  • Advanced English level; effective communication in English is essential.
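The data-modeling requirement above refers to dimensional designs such as the star schema. As an illustration, here is a toy star schema (one fact table joined to one dimension) with the typical aggregate query pattern; the table and column names are made up, and SQLite stands in for the production DBMS (PostgreSQL/MySQL/SQL Server) only to keep the sketch self-contained.

```python
import sqlite3

# Toy star schema: a sales fact table and a customer dimension.
# All names are illustrative, not from the posting.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_customer (
    customer_key INTEGER PRIMARY KEY,
    region       TEXT NOT NULL
);
CREATE TABLE fact_sales (
    sale_id      INTEGER PRIMARY KEY,
    customer_key INTEGER NOT NULL REFERENCES dim_customer(customer_key),
    amount       REAL NOT NULL
);
INSERT INTO dim_customer VALUES (1, 'EMEA'), (2, 'LATAM');
INSERT INTO fact_sales VALUES (10, 1, 100.0), (11, 2, 40.0), (12, 2, 60.0);
""")

# Typical star-schema query: join the fact to a dimension and aggregate.
rows = conn.execute("""
    SELECT d.region, SUM(f.amount) AS total
    FROM fact_sales f
    JOIN dim_customer d ON d.customer_key = f.customer_key
    GROUP BY d.region
    ORDER BY d.region
""").fetchall()
print(rows)  # [('EMEA', 100.0), ('LATAM', 100.0)]
```

The same pattern, with many more dimensions and surrogate keys, underlies the query-tuning work the role describes.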
What to expect from our hiring process
  1. Let’s chat about your experience!
  2. Impress our recruiters and move on to a technical interview with our top developers.
  3. Nail that, then meet our client: the final step to joining our amazing team!
Apply now!

By applying to this position, you authorize Nearsure to collect, store, transfer, and process your personal data in accordance with our Privacy Policy. For more information, please review our Privacy Policy.
