Senior Data Engineer (Databricks Specialist)

Coderio

Remote

EUR 60,000 - 80,000

Full-time

Today

Job Summary

A technology solutions company is seeking a Senior Data Engineer specializing in Databricks. This remote position involves architecting scalable data solutions and leading technical initiatives. Key responsibilities include optimizing platform performance and developing production-grade code in Python and PySpark. Candidates should have strong experience in data engineering fundamentals and Azure services, alongside excellent communication skills. This role offers significant autonomy and impact within an international team.

Benefits

100% remote work
Long-term commitment with autonomy
Strategic role in a modern engineering culture
Collaborative international team
Clear path to growth

Requirements

  • 3+ years of hands-on experience with the Databricks ecosystem.
  • 4+ years of experience in Data Engineering fundamentals.
  • 4+ years of proficiency in Python, PySpark, and SQL.
  • 3+ years working with Azure services.
  • 2+ years of experience using GitHub for version control.

Responsibilities

  • Architect scalable Lakehouse solutions using Delta Tables.
  • Design data workflows using Databricks Workflows.
  • Manage platform internals and optimize configurations.
  • Implement secure data governance.
  • Develop production-grade Python and PySpark code.
  • Collaborate with teams to improve operational efficiency.

Skills

Databricks ecosystem
Data Engineering fundamentals
Python
PySpark
SQL
Azure services
GitHub
Problem-solving

Tools

Azure Data Factory
Unity Catalog
Delta Sharing

Job Description
Senior Data Engineer (Databricks Specialist)

About Coderio

Coderio designs and delivers scalable digital solutions for global businesses. With a strong technical foundation and a product mindset, our teams lead complex software projects from architecture to execution. We value autonomy, clear communication, and technical excellence. We work closely with international teams and partners, building technology that makes a difference.

Learn more: http://coderio.com

In this role, you will take ownership of large-scale data pipeline solutions for a Fortune 500 client, working with extremely high-volume datasets in a fully Azure Databricks environment. You will lead the design and evolution of Lakehouse architectures, optimize platform performance end-to-end, and guide teams through all phases of the SDLC. You will operate as a technical leader, solving problems under high ambiguity and continuously improving simplicity, reliability, and performance across the platform.

What to Expect in This Role (Responsibilities)
  • Architect and implement scalable Lakehouse solutions using Delta Tables and Delta Live Tables to ensure performance, consistency, and reliability (see the sketch after this list).
  • Design and orchestrate complex data workflows using Databricks Workflows and Jobs, leveraging Databricks Asset Bundles for deployment management.
  • Manage platform internals, including low-level Cluster Configuration, optimization through the Databricks CLI, and environment tuning.
  • Implement secure data governance and sharing using Unity Catalog and Delta Sharing.
  • Develop production-grade Python and PySpark code, including custom Python libraries to standardize logic across pipelines.
  • Collaborate with engineering and platform teams to drive improvements in scalability, maintainability, and operational efficiency.
  • Act as a technical leader who supports decision‑making, provides clarity under ambiguity, and drives continuous improvement.
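
To give a concrete flavor of the Delta Table work above, here is a minimal PySpark sketch of an upsert into a Delta table. The landing path, table name, and event_id key are hypothetical illustrations, not details of the actual client environment.

```python
# Minimal Delta Lake upsert sketch (illustrative only).
# Assumes a Databricks notebook, where the `spark` session is provided
# and Delta Lake is enabled; paths, table, and key names are hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import functions as F

# Read a raw batch, stamp it, and deduplicate on a hypothetical business key.
raw = (
    spark.read.format("json")
    .load("/mnt/raw/events/")
    .withColumn("ingested_at", F.current_timestamp())
    .dropDuplicates(["event_id"])
)

# MERGE into a managed Delta table: update existing rows, insert new ones.
target = DeltaTable.forName(spark, "analytics.events")
(
    target.alias("t")
    .merge(raw.alias("s"), "t.event_id = s.event_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```

Using MERGE keeps the pipeline idempotent: re-running the same batch updates existing rows instead of duplicating them.
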
Requirements
  • 3+ years of hands‑on experience across the full Databricks ecosystem, including Notebooks, Workflows, Jobs, Asset Bundles, Cluster Configuration, CLI usage, Unity Catalog, Delta Sharing, and Delta Live Tables (see the governance sketch after this list).
  • 4+ years of deep experience in Data Engineering fundamentals, including Data Modeling, Data Warehouse principles, ETL processes, and pipeline orchestration.
  • 4+ years of proficiency in Python, PySpark, and SQL, building production‑grade data solutions.
  • 3+ years working with Azure services such as Storage Accounts, Azure Active Directory, and Azure Data Factory.
  • 2+ years of experience using GitHub SaaS and GitHub Actions for version control and deployment.
  • Strong communication, collaboration, and problem‑solving abilities, with a demonstrated capacity to operate effectively under ambiguity.
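
As an illustration of the Unity Catalog item above, here is a minimal governance sketch. The catalog, schema, and `data-readers` group are hypothetical; it assumes a Databricks workspace with Unity Catalog enabled, where grants are expressed as standard SQL.

```python
# Illustrative Unity Catalog governance sketch; all object and group
# names are hypothetical. Assumes a Databricks workspace with Unity
# Catalog enabled, where `spark` is provided and grants are plain SQL.
statements = [
    "CREATE CATALOG IF NOT EXISTS analytics",
    "CREATE SCHEMA IF NOT EXISTS analytics.sales",
    # Read-only access for a hypothetical account-level group.
    "GRANT USE CATALOG ON CATALOG analytics TO `data-readers`",
    "GRANT USE SCHEMA ON SCHEMA analytics.sales TO `data-readers`",
    "GRANT SELECT ON SCHEMA analytics.sales TO `data-readers`",
]

for stmt in statements:
    spark.sql(stmt)
```

Granting SELECT at the schema level rather than per table keeps access rules manageable as pipelines add new tables.
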
Nice to Have
  • Experience developing internal Python libraries for reuse across pipelines.
  • Experience implementing automated testing strategies for data pipelines (see the test sketch after this list).
  • Experience collaborating with cross‑functional teams to establish best practices and technical standards.
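
As a sketch of the automated-testing item above, here is one way a pipeline unit test might look with pytest and a local SparkSession. The add_ingestion_date helper is a hypothetical stand-in for an internal library function, not part of any actual codebase.

```python
# Hypothetical pipeline unit test using pytest and a local SparkSession.
import pytest
from pyspark.sql import SparkSession, functions as F


def add_ingestion_date(df):
    """Append a load-date column, the kind of logic an internal
    Python library might standardize across pipelines."""
    return df.withColumn("ingestion_date", F.current_date())


@pytest.fixture(scope="session")
def spark():
    return (
        SparkSession.builder.master("local[1]").appName("tests").getOrCreate()
    )


def test_add_ingestion_date_adds_column(spark):
    df = spark.createDataFrame([(1, "a")], ["id", "value"])
    result = add_ingestion_date(df)
    assert "ingestion_date" in result.columns
    assert result.count() == 1
```

Running on local[1] keeps such tests fast enough for CI, for example in the GitHub Actions workflows mentioned above.
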
Benefits
  • 100% remote
  • Long‑term commitment, with autonomy and impact
  • Strategic and high‑visibility role in a modern engineering culture
  • Collaborative international team and strong technical leadership
  • Clear path to growth and leadership within Coderio

Why join Coderio?

At Coderio, we value talent regardless of location. We are a remote‑first company, passionate about technology, collaborative work, and fair compensation. We offer an inclusive, challenging environment with real opportunities for growth. If you are motivated to build solutions with impact, we want to hear from you.

Apply now.
