Data Engineer (f / m / x) (EN)

NETCONOMY

Dortmund

On-site

EUR 60,000 - 80,000

Full-time

7 days ago

Summary

A leading data solutions company is seeking a Data Engineer to design and maintain robust data pipelines. The ideal candidate has extensive hands-on experience with Databricks and cloud platforms, ensures data quality, and collaborates with other teams to deliver actionable insights. The role offers the opportunity to work in an agile environment and contribute to innovative data solutions.

Qualifications

  • 3+ years of hands-on experience as a Data Engineer.
  • Strong programming skills in Python and SQL.
  • Experience with Databricks and cloud platforms.

Responsibilities

  • Designing and maintaining data pipelines using Databricks and Python.
  • Building scalable ETL processes for cloud-based data solutions.
  • Collaborating with data scientists to meet data needs.

Skills

Data Engineering
Python
SQL
Collaboration
Communication

Tools

Databricks
Apache Spark
Azure
AWS
GCP

Job Description

Minimum Requirements:

  1. 3+ years of hands-on experience as a Data Engineer working with Databricks and Apache Spark
  2. Strong programming skills in Python, with experience in data manipulation libraries (e.g., PySpark, Spark SQL); a short illustration follows this list
  3. Experience with core components of the Databricks ecosystem: Databricks Workflows, Unity Catalog, and Delta Live Tables
  4. Solid understanding of data warehousing principles, ETL/ELT processes, data modeling and techniques, and database systems
  5. Proven experience with at least one major cloud platform (Azure, AWS, or GCP)
  6. Excellent SQL skills for data querying, transformation, and analysis
  7. Excellent communication and collaboration skills in English and German (min. B2 level in each)
  8. Ability to work independently as well as part of a team in an agile environment
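
To give requirement 2 a concrete shape, here is a minimal sketch of the kind of PySpark and Spark SQL data manipulation the role describes. The table raw_orders and the columns customer_id, order_ts, and amount are hypothetical names invented for this example, not part of the posting.

    # Minimal PySpark / Spark SQL sketch: daily revenue per customer.
    # raw_orders, customer_id, order_ts, and amount are hypothetical names.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("order-aggregation").getOrCreate()

    # DataFrame API: derive a date column from the timestamp, then aggregate.
    orders = spark.read.table("raw_orders")
    daily_revenue = (
        orders
        .withColumn("order_date", F.to_date("order_ts"))
        .groupBy("customer_id", "order_date")
        .agg(F.sum("amount").alias("revenue"))
    )

    # The same aggregation expressed in Spark SQL.
    orders.createOrReplaceTempView("orders")
    daily_revenue_sql = spark.sql("""
        SELECT customer_id,
               to_date(order_ts) AS order_date,
               SUM(amount)       AS revenue
        FROM orders
        GROUP BY customer_id, to_date(order_ts)
    """)

Both forms express the same aggregation; choosing between the DataFrame API and Spark SQL is largely a readability preference.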

Responsibilities:

  1. Designing, developing, and maintaining robust data pipelines using Databricks, Spark, and Python
  2. Building efficient and scalable ETL processes to ingest, transform, and load data from various sources (databases, APIs, streaming platforms) into cloud-based data lakes and warehouses (a minimal sketch follows this list)
  3. Leveraging the Databricks ecosystem (SQL, Delta Lake, Workflows, Unity Catalog) to deliver reliable and performant data workflows
  4. Integrating with cloud services such as Azure, AWS, or GCP to enable secure, cost-effective data solutions
  5. Contributing to data modeling and architecture decisions to ensure consistency, accessibility, and long-term maintainability of the data landscape
  6. Ensuring data quality through validation processes and adherence to data governance policies
  7. Collaborating with data scientists and analysts to understand data needs and deliver actionable solutions
  8. Staying up to date with advancements in Databricks, data engineering, and cloud technologies to continuously improve tools and approaches
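
As a hedged illustration of responsibilities 1-3, the sketch below shows one way a simple batch ETL step might look on Databricks: ingest raw JSON from cloud object storage, apply a basic data-quality gate, and append the result into a partitioned Delta table. The storage path, column names, and target table are assumptions made up for this example, and Delta support is assumed as on a Databricks runtime.

    # Hypothetical batch ETL step: extract raw JSON, validate, load into Delta.
    # /mnt/landing/events/, event_id, and analytics.events_clean are assumed names.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("events-etl").getOrCreate()

    # Extract: semi-structured events from object storage.
    raw = spark.read.json("/mnt/landing/events/")

    # Transform: reject rows without a key, deduplicate, add a lineage column.
    clean = (
        raw
        .filter(F.col("event_id").isNotNull())
        .dropDuplicates(["event_id"])
        .withColumn("ingest_date", F.current_date())
    )

    # Load: append into a Delta table, partitioned for downstream queries.
    (clean.write
          .format("delta")
          .mode("append")
          .partitionBy("ingest_date")
          .saveAsTable("analytics.events_clean"))

Partitioning by ingest date is one common choice; in practice the partition column would follow the dominant query pattern, and a production pipeline would add schema enforcement and monitoring on top of this skeleton.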