Data Engineer (Databricks, Neo4j)

Datamatics Technologies

Remote

EUR 60,000–80,000

Full-time

27 days ago

Summary

A leading data technology firm is seeking a skilled Data Engineer to design scalable data pipelines and work with cutting-edge technologies like Databricks and Neo4j. The ideal candidate will have 5–7 years of experience, strong coding skills in Python, and the ability to work independently in remote teams. This remote position is exclusively for candidates based in Europe, providing an exciting opportunity to collaborate on innovative projects.

Skills

Databricks
Neo4j
Teradata
Python
ETL/ELT concepts
Distributed data processing
Data modelling
Problem-solving
Communication skills

Tools

Azure
AWS
GCP
Kafka

Job Description

Job Title: Data Engineer (Databricks, Teradata & Neo4j)

Location: Remote (Candidates must be based in Europe)

Experience: 5–7 Years

Employment Type: Full-Time

Client Location: Sweden

Position Overview

We are looking for an experienced Data Engineer with strong hands-on expertise in Databricks, Teradata, and Neo4j to join a leading technology-driven team in Sweden. This is a remote role, but candidates must currently reside in Europe due to project compliance and collaboration requirements.

The ideal candidate will have a solid background in building scalable data pipelines, integrating complex data sources, and working with modern data platforms.

Key Responsibilities

Data Engineering & Development
  • Design, develop, and optimize scalable data pipelines using Databricks (PySpark/Spark); a minimal pipeline sketch follows this list.
  • Build, maintain, and enhance ETL/ELT processes across multiple data environments.
  • Integrate structured and unstructured datasets for downstream analytics and consumption.
  • Develop and optimize data models on Teradata for performance and reliability.
  • Implement graph‑based data solutions using Neo4j.
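
A minimal sketch of the kind of Databricks pipeline this covers, assuming a hypothetical JSON landing path and Delta output location (none of these names come from the posting), in Python/PySpark:

  # Illustrative sketch only: read raw JSON events, clean them, and write a
  # partitioned Delta table. Paths, columns, and names are hypothetical.
  from pyspark.sql import SparkSession, functions as F

  spark = SparkSession.builder.appName("events-pipeline").getOrCreate()
  # (On Databricks, a `spark` session is already provided.)

  raw = spark.read.json("/mnt/raw/events/")          # hypothetical landing path

  cleaned = (raw
             .dropDuplicates(["event_id"])           # basic de-duplication
             .filter(F.col("event_id").isNotNull())
             .withColumn("event_date", F.to_date("event_ts")))

  # Write a partitioned Delta table for downstream analytics.
  (cleaned.write
   .format("delta")
   .mode("overwrite")
   .partitionBy("event_date")
   .save("/mnt/curated/events/"))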

Solution Design & Architecture
  • Collaborate with solution architects and business teams to understand data needs and design robust solutions.
  • Participate in system design sessions and contribute to architecture improvements.
  • Ensure data quality, validation, and governance throughout the data lifecycle; a small validation sketch follows this list.
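
As a hedged illustration of the data-quality point, a lightweight validation step that fails the run when basic expectations break; the column names and table path are hypothetical:

  # Illustrative sketch only: check that the primary key is present and unique
  # before publishing a table. All names are placeholders.
  from pyspark.sql import SparkSession, functions as F

  spark = SparkSession.builder.appName("dq-check").getOrCreate()
  df = spark.read.format("delta").load("/mnt/curated/events/")

  null_keys = df.filter(F.col("event_id").isNull()).count()
  dupe_keys = df.count() - df.dropDuplicates(["event_id"]).count()

  if null_keys or dupe_keys:
      raise ValueError(f"quality check failed: {null_keys} null keys, "
                       f"{dupe_keys} duplicate keys")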

Performance & Optimization
  • Troubleshoot and optimize Spark jobs, Teradata SQL queries, and data workflows; a tuning sketch follows this list.
  • Ensure highly available and high‑performance data pipelines.
  • Monitor data operations and automate workflows where possible.
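
One concrete example of this kind of tuning, as a sketch: broadcasting a small dimension table turns a shuffle join into a local hash join in Spark, and explain() confirms it in the plan (on the Teradata side, the usual first steps are reading the EXPLAIN output and collecting statistics). The table paths here are hypothetical:

  # Illustrative sketch only: avoid a shuffle join by broadcasting the small side.
  from pyspark.sql import SparkSession
  from pyspark.sql.functions import broadcast

  spark = SparkSession.builder.appName("tuning-demo").getOrCreate()

  facts = spark.read.format("delta").load("/mnt/curated/events/")
  products = spark.read.format("delta").load("/mnt/curated/products/")  # small table

  # Each executor joins locally against the broadcast copy, so no shuffle.
  joined = facts.join(broadcast(products), "product_id")

  joined.explain()  # a BroadcastHashJoin should appear in the physical plan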

Collaboration & Communication
  • Work with cross‑functional teams including BI, Data Science, and Platform Engineering.
  • Document technical designs, pipelines, and solutions clearly and thoroughly.
  • Communicate effectively with remote stakeholders in a multicultural environment.

Required Skills & Qualifications
  • 5–7 years of experience as a Data Engineer.
  • Strong, hands‑on experience with Databricks (Spark, PySpark, Delta Lake).
  • Mandatory expertise in Neo4j (graph modeling, Cypher queries); a short Cypher sketch follows this list.
  • Solid experience with Teradata (SQL, performance tuning, data modelling).
  • Strong scripting and coding experience in Python.
  • Experience with cloud platforms (Azure, AWS, or GCP) is preferred; Azure experience is a particular plus.
  • Strong understanding of ETL/ELT concepts, data modelling, and distributed data processing.
  • Excellent analytical, problem‑solving, and communication skills.
  • Ability to work independently in remote, cross‑cultural teams.
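
Since Neo4j expertise is mandatory here, a short sketch of what Cypher from Python looks like, using the official neo4j driver; the connection details and the User/Product model are invented for illustration:

  # Illustrative sketch only: idempotent graph loading via MERGE.
  from neo4j import GraphDatabase

  driver = GraphDatabase.driver("bolt://localhost:7687",
                                auth=("neo4j", "secret"))  # placeholder auth

  def link_purchase(tx, user_id, product_id):
      # MERGE creates each node and relationship at most once,
      # so re-running the load changes nothing.
      tx.run(
          "MERGE (u:User {id: $user_id}) "
          "MERGE (p:Product {id: $product_id}) "
          "MERGE (u)-[:PURCHASED]->(p)",
          user_id=user_id, product_id=product_id,
      )

  with driver.session() as session:
      session.execute_write(link_purchase, "u-1", "p-42")
  driver.close()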

Preferred Qualifications
  • Experience with CI/CD pipelines for data workflows.
  • Knowledge of data governance, data quality frameworks, and metadata management.
  • Exposure to real-time data processing technologies (Kafka, Event Hub, etc.) is an advantage; a streaming sketch follows this list.
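
For the real-time item above, a sketch of how Kafka typically meets this stack: Spark Structured Streaming subscribing to a topic and appending to a Delta table. The broker address, topic name, and paths are placeholders:

  # Illustrative sketch only: Kafka -> Delta with Structured Streaming.
  from pyspark.sql import SparkSession
  from pyspark.sql.functions import col

  spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

  stream = (spark.readStream
            .format("kafka")
            .option("kafka.bootstrap.servers", "broker:9092")  # placeholder
            .option("subscribe", "events")                     # placeholder topic
            .load())

  # Kafka delivers key/value as binary; decode the payload to a string.
  payloads = stream.select(col("value").cast("string").alias("payload"))

  query = (payloads.writeStream
           .format("delta")
           .option("checkpointLocation", "/mnt/checkpoints/events/")  # restartable
           .start("/mnt/stream/events/"))
  query.awaitTermination()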

Additional Information
  • Remote role – Europe-based candidates only due to project requirements.
  • Opportunity to work with a global team on cutting‑edge data technologies.