Senior Data Engineer (Databricks Specialist)

Región de Murcia

Remote

EUR 50,000 - 70,000

Full-time

Today

Job description

A leading tech company is seeking an experienced data engineer for a remote role focusing on large-scale data pipeline solutions. You will take ownership of Lakehouse architectures and optimize platform performance in a collaborative international environment. Candidates should have a strong background in Databricks and data engineering, along with Python, PySpark, and SQL. This position offers autonomy and opportunities for growth in a dynamic engineering culture.

Benefits

100% remote
Long-term commitment
Clear path to growth
Collaborative team

Requirements

  • 3+ years of experience in the Databricks ecosystem including Workflows and Cluster Configuration.
  • 4+ years in Data Engineering principles such as ETL processes and pipeline orchestration.
  • Strong background in Python, PySpark, and SQL with production-grade data solutions.
  • 3+ years working with Azure services.

Responsibilities

  • Architect and implement scalable Lakehouse solutions using Delta Tables.
  • Design complex data workflows using Databricks Workflows and Jobs.
  • Manage platform internals and optimize through the Databricks CLI.
  • Implement secure data governance using Unity Catalog.
  • Develop production-grade Python and PySpark code.

Skills

Databricks ecosystem
Data Engineering fundamentals
Python
PySpark
SQL
Azure services
GitHub Actions

Job description
About Coderio

Coderio designs and delivers scalable digital solutions for global businesses. With a strong technical foundation and a product mindset, our teams lead complex software projects from architecture to execution. We value autonomy and clear communication. We work closely with international teams and partners, building technology that makes a difference.

🌍 Learn more: http://coderio.com

In this role, you will take ownership of large-scale data pipeline solutions for a Fortune 500 client, working with extremely high-volume datasets in a fully Azure Databricks environment. You will lead the design and evolution of Lakehouse architectures, optimize platform performance end-to-end, and guide teams through all phases of the SDLC. You will operate as a technical leader, solving problems with high ambiguity and continuously improving simplicity, reliability, and performance across the platform.

What to Expect in This Role (Responsibilities)
  • Architect and implement scalable Lakehouse solutions using Delta Tables and Delta Live Tables to ensure performance, consistency, and reliability (a minimal sketch follows this list).
  • Design and orchestrate complex data workflows using Databricks Workflows and Jobs, leveraging Databricks Asset Bundles for deployment management.
  • Manage platform internals, including low-level Cluster Configuration, optimization through the Databricks CLI, and environment tuning.
  • Implement secure data governance and sharing using Unity Catalog and Delta Sharing.
  • Develop production-grade Python and PySpark code, including custom Python libraries to standardize logic across pipelines.
  • Collaborate with engineering and platform teams to drive improvements in scalability, maintainability, and operational efficiency.
  • Act as a technical leader who supports decision-making, provides clarity under ambiguity, and drives continuous improvement.
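
By way of illustration only, here is a minimal PySpark sketch of the kind of Delta-backed pipeline step described above. The catalog, table, and column names (bronze.raw_events, silver.events, event_id) are hypothetical, not taken from the posting.

    from pyspark.sql import SparkSession, functions as F

    # On Databricks a session already exists; getOrCreate() reuses it.
    spark = SparkSession.builder.getOrCreate()

    # Hypothetical bronze-to-silver step: read raw events, deduplicate,
    # stamp ingestion time, and persist to a managed Delta table.
    raw = spark.read.table("bronze.raw_events")
    curated = (
        raw.dropDuplicates(["event_id"])
           .withColumn("ingested_at", F.current_timestamp())
    )
    curated.write.format("delta").mode("overwrite").saveAsTable("silver.events")

In practice, steps like this are typically packaged in a shared Python library and scheduled through Databricks Workflows, as the responsibilities above suggest.
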
Requirements

3+ years of hands‑on experience across the full Databricks ecosystem, including Notebooks, Workflows, Jobs, Asset Bundles, Cluster Configuration, CLI usage, Unity Catalog, Delta Sharing, and Delta Live Tables (a brief illustrative sketch appears after these requirements).

4+ years of deep experience in Data Engineering fundamentals, including Data Modeling, Data Warehouse principles, ETL processes, and pipeline orchestration.

4+ years of proficiency in Python, PySpark, and SQL, building production‑grade data solutions.

3+ years working with Azure services such as Storage Accounts, Azure Active Directory, and Azure Data Factory.

2+ years of experience using GitHub SaaS and GitHub Actions for version control and deployment.

Strong communication, collaboration, and problem‑solving abilities, with a demonstrated capacity to operate effectively under ambiguity.
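
As a rough sketch of programmatic interaction with that ecosystem, here is a minimal example using the Databricks Python SDK (an assumption on our part, standing in for the CLI named above; authentication details depend on your workspace setup):

    from databricks.sdk import WorkspaceClient

    # Authenticates via the same profile/environment variables the Databricks CLI uses.
    w = WorkspaceClient()

    # Enumerate jobs and clusters, e.g. to audit orchestration and cluster configs.
    for job in w.jobs.list():
        print(job.job_id, job.settings.name)
    for cluster in w.clusters.list():
        print(cluster.cluster_id, cluster.cluster_name, cluster.state)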

Nice to Have
  • Experience developing internal Python libraries for reuse across pipelines.
  • Experience implementing automated testing strategies for data pipelines (see the sketch after this list).
  • Experience collaborating with cross‑functional teams to establish best practices and technical standards.
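
One hedged example of what automated pipeline testing can look like: a pytest sketch exercising a pure PySpark transformation locally. The dedupe_events helper is hypothetical, standing in for a function from an internal library.

    import pytest
    from pyspark.sql import SparkSession

    def dedupe_events(df):
        # Hypothetical shared-library transformation under test.
        return df.dropDuplicates(["event_id"])

    @pytest.fixture(scope="session")
    def spark():
        # Local single-threaded session; no cluster required for unit tests.
        return SparkSession.builder.master("local[1]").getOrCreate()

    def test_dedupe_events_removes_duplicates(spark):
        df = spark.createDataFrame(
            [(1, "click"), (1, "click"), (2, "view")],
            ["event_id", "action"],
        )
        assert dedupe_events(df).count() == 2
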
Benefits
  • 100% remote
  • Long‑term commitment, with autonomy and impact
  • Strategic and high‑visibility role in a modern engineering culture
  • Collaborative international team and strong technical leadership
  • Clear path to growth and leadership within Coderio
Why join Coderio?

At Coderio, we value talent regardless of location. We are a remote‑first company, passionate about technology, collaborative work, and fair compensation. We offer an inclusive, challenging environment with real opportunities for growth. If you are motivated to build solutions with impact, we are waiting for you.

Apply now.
