Data Engineer (Databricks, Neo4j)

Datamatics Technologies

Madrid

Remote

EUR 55,000 - 75,000

Full-time

Today

Vacancy Description

A leading technology firm is looking for a Data Engineer experienced in Databricks, Teradata, and Neo4j. This remote role requires candidates based in Europe. The ideal candidate has 5–7 years of experience in data engineering, strong skills in Python, and knowledge of cloud platforms. Responsibilities include designing scalable data pipelines and integrating complex datasets. Join a global team focused on cutting-edge data technologies.


Skills

Databricks
Teradata
Neo4j
Python
ETL/ELT
Data modeling
Cloud platforms (Azure/AWS/GCP)

Job Description

Job Title: Data Engineer (Databricks, Teradata & Neo4j)

Location: Remote (Candidates must be based in Europe)

Experience: 5–7 Years

Employment Type: Full-Time

Client Location: Sweden

Position Overview

We are looking for an experienced Data Engineer with strong hands‑on expertise in Databricks, Teradata, and Neo4j to join a leading technology‑driven team in Sweden. This is a remote role, but candidates must currently reside in Europe due to project compliance and collaboration needs.

The ideal candidate will have a solid background in building scalable data pipelines, integrating complex data sources, and working with modern data platforms.

Key Responsibilities

Data Engineering & Development
  • Design, develop, and optimize scalable data pipelines using Databricks (PySpark/Spark); a minimal sketch follows this list.
  • Build, maintain, and enhance ETL/ELT processes across multiple data environments.
  • Integrate structured and unstructured datasets for downstream analytics and consumption.
  • Develop and optimize data models on Teradata for performance and reliability.
  • Implement graph‑based data solutions using Neo4j.
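To give a flavor of this pipeline work, here is a minimal PySpark sketch of a batch ETL step on Databricks. The source path, target table (curated.events), and column names (event_id, event_ts) are illustrative assumptions, not details from this posting.

    from pyspark.sql import SparkSession, functions as F

    # Databricks notebooks provide a ready-made `spark` session; building one
    # here keeps the sketch self-contained outside that environment.
    spark = SparkSession.builder.appName("events-etl").getOrCreate()

    # Read raw JSON from a hypothetical landing zone, deduplicate,
    # derive a partition column, and write the result as a Delta table.
    raw = spark.read.json("/mnt/raw/events/")
    curated = (
        raw.dropDuplicates(["event_id"])                  # hypothetical key column
           .withColumn("event_date", F.to_date("event_ts"))
           .filter(F.col("event_date").isNotNull())
    )
    (curated.write
            .format("delta")
            .mode("overwrite")
            .partitionBy("event_date")
            .saveAsTable("curated.events"))               # hypothetical target table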

Solution Design & Architecture
  • Collaborate with solution architects and business teams to understand data needs and design robust solutions.
  • Participate in system design sessions and contribute to architecture improvements.
  • Ensure data quality, validation, and governance throughout the data lifecycle.
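As one concrete shape such a data-quality gate could take, here is a small hedged PySpark check; the 1% null tolerance and the event_id column are assumptions for illustration.

    from pyspark.sql import functions as F

    # Fail the run when too many rows are missing a required key. Assumes the
    # Databricks-provided `spark` session and the table from the earlier sketch.
    df = spark.table("curated.events")
    total = df.count()
    missing = df.filter(F.col("event_id").isNull()).count()
    if total > 0 and missing / total > 0.01:              # illustrative tolerance
        raise ValueError(f"event_id null rate too high: {missing}/{total}")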

Performance & Optimization
  • Troubleshoot and optimize Spark jobs, Teradata SQL queries, and data workflows; a tuning sketch follows this list.
  • Ensure highly available and high‑performance data pipelines.
  • Monitor data operations and automate workflows where possible.
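The Spark side of this tuning might look like the sketch below; the settings and values are illustrative examples, not recommendations from the client.

    # Illustrative Spark tuning knobs (values are examples only).
    spark.conf.set("spark.sql.adaptive.enabled", "true")   # adaptive query execution
    spark.conf.set("spark.sql.shuffle.partitions", "200")  # tune to data volume

    # Inspect the physical plan before changing a slow job.
    df = spark.table("curated.events")
    df.explain(mode="formatted")

    # Databricks/Delta maintenance to reduce small-file overhead.
    spark.sql("OPTIMIZE curated.events ZORDER BY (event_id)")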

Collaboration & Communication
  • Work with cross‑functional teams including BI, Data Science, and Platform Engineering.
  • Document technical designs, pipelines, and solutions clearly and thoroughly.
  • Communicate effectively with remote stakeholders in a multicultural environment.

Required Skills & Qualifications
  • 5–7 years of experience as a Data Engineer.
  • Strong, hands‑on experience with Databricks (Spark, PySpark, Delta Lake).
  • Mandatory expertise in Neo4j (graph modeling, Cypher queries); a short example follows this list.
  • Solid experience with Teradata (SQL, performance tuning, data modeling).
  • Strong scripting and coding experience in Python.
  • Experience working with cloud platforms (Azure/AWS/GCP) is preferred; Azure in particular is a plus.
  • Strong understanding of ETL/ELT concepts, data modeling, and distributed data processing.
  • Excellent analytical, problem‑solving, and communication skills.
  • Ability to work independently in remote, cross‑cultural teams.
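For the mandatory Neo4j skills, a minimal sketch using the official neo4j Python driver is shown below; the connection details and the (:Customer)-[:PLACED]->(:Order) model are illustrative assumptions.

    from neo4j import GraphDatabase

    # Hypothetical connection details.
    driver = GraphDatabase.driver("bolt://localhost:7687",
                                  auth=("neo4j", "password"))

    def load_order(tx, customer_id, order_id):
        # MERGE keeps the load idempotent: nodes and the relationship
        # are created only if they do not already exist.
        tx.run(
            "MERGE (c:Customer {id: $customer_id}) "
            "MERGE (o:Order {id: $order_id}) "
            "MERGE (c)-[:PLACED]->(o)",
            customer_id=customer_id, order_id=order_id,
        )

    with driver.session() as session:
        session.execute_write(load_order, "c-42", "o-1001")
        # Cypher read: customers ranked by number of orders placed.
        for record in session.run(
            "MATCH (c:Customer)-[:PLACED]->(o:Order) "
            "RETURN c.id AS customer, count(o) AS orders ORDER BY orders DESC"
        ):
            print(record["customer"], record["orders"])
    driver.close()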

Preferred Qualifications
  • Experience with CI/CD pipelines for data workflows.
  • Knowledge of data governance, data quality frameworks, and metadata management.
  • Exposure to real‑time data processing technologies (Kafka, Event Hub, etc.) is an advantage.
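As an example of that real-time side, below is a hedged Spark Structured Streaming sketch reading from Kafka into Delta; the broker address, topic, checkpoint path, and target table are assumptions.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("events-stream").getOrCreate()

    # Consume a hypothetical Kafka topic.
    stream = (
        spark.readStream
             .format("kafka")
             .option("kafka.bootstrap.servers", "broker:9092")  # assumed broker
             .option("subscribe", "events")                     # assumed topic
             .load()
    )

    # Kafka values arrive as binary; cast to string and append to a Delta
    # table, with a checkpoint for fault-tolerant progress tracking.
    query = (
        stream.select(F.col("value").cast("string").alias("payload"))
              .writeStream
              .format("delta")
              .option("checkpointLocation", "/mnt/chk/events")  # assumed path
              .toTable("curated.events_stream")                 # hypothetical table
    )
    query.awaitTermination()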

Additional Information
  • Remote role – Europe-based candidates only due to project requirements.
  • Opportunity to work with a global team on cutting‑edge data technologies.