Data Engineer (f/m/x) (EN)

NETCONOMY

Essen

On-site

EUR 60,000 - 80,000

Full-time

28 days ago

Summary

NETCONOMY is seeking a skilled Data Engineer with expertise in Databricks and cloud platforms. The successful candidate will be responsible for developing scalable data solutions and collaborating across teams to deliver high-quality data services. This role requires strong programming skills in Python and experience with data engineering principles.

Qualifications

  • 3+ years of hands-on experience as a Data Engineer.
  • Strong programming skills in Python with data manipulation libraries.
  • Proven experience with at least one major cloud platform (Azure, AWS, GCP).

Responsibilities

  • Design, develop, and maintain robust data pipelines using Databricks, Spark, and Python.
  • Build scalable ETL processes to ingest, transform, and load data.
  • Collaborate with data scientists and analysts to meet data needs.

Skills

Data Engineering
Python
SQL
Databricks
Apache Spark
Cloud Platforms
Data Warehousing
ETL/ELT Processes
Data Modeling
Communication

Job Description

Job Description: Data Engineer (Databricks & Cloud)

We are seeking a skilled Data Engineer with expertise in Databricks and cloud platforms to join our dynamic team. You will be responsible for developing scalable data solutions, building data pipelines, and collaborating across teams to deliver high-quality data services.

Minimum Requirements:
  1. 3+ years of hands-on experience as a Data Engineer working with Databricks and Apache Spark
  2. Strong programming skills in Python, including experience with data manipulation libraries (e.g., PySpark, Spark SQL); a short sketch follows this list
  3. Experience with core components of the Databricks ecosystem: Databricks Workflows, Unity Catalog, Delta Live Tables
  4. Solid understanding of data warehousing principles, ETL/ELT processes, data modeling, and database systems
  5. Proven experience with at least one major cloud platform (Azure, AWS, or GCP)
  6. Excellent SQL skills for data querying, transformation, and analysis
  7. Excellent communication and collaboration skills in English and German (minimum B2 level)
  8. Ability to work independently and in an agile team environment
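
As a rough illustration of the PySpark and Spark SQL skills named in point 2, here is a minimal, self-contained sketch; the dataset, column names, and aggregation are hypothetical and stand in for real project data:

    # Minimal PySpark / Spark SQL sketch; all data and names are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders-demo").getOrCreate()

    # Hypothetical raw orders, one row per order.
    orders = spark.createDataFrame(
        [("o1", "c1", 19.99, "2024-05-01"),
         ("o2", "c1", 5.00, "2024-05-02"),
         ("o3", "c2", 42.50, "2024-05-02")],
        ["order_id", "customer_id", "amount", "order_date"],
    )

    # DataFrame API: revenue and order count per customer.
    revenue = (orders
               .groupBy("customer_id")
               .agg(F.sum("amount").alias("total_amount"),
                    F.count("*").alias("order_count")))

    # The same query expressed in Spark SQL over a temporary view.
    orders.createOrReplaceTempView("orders")
    revenue_sql = spark.sql("""
        SELECT customer_id,
               SUM(amount) AS total_amount,
               COUNT(*)    AS order_count
        FROM orders
        GROUP BY customer_id
    """)

    revenue.show()
    revenue_sql.show()
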
Responsibilities:
  1. Design, develop, and maintain robust data pipelines using Databricks, Spark, and Python
  2. Build scalable ETL processes to ingest, transform, and load data from diverse sources into cloud-based data lakes and warehouses (a sketch of one such step follows this list)
  3. Leverage Databricks ecosystem components to create reliable and high-performance data workflows
  4. Integrate with cloud services such as Azure, AWS, or GCP to ensure secure and cost-effective data solutions
  5. Participate in data modeling and architecture decisions for long-term maintainability
  6. Ensure data quality and compliance with governance policies
  7. Collaborate with data scientists and analysts to meet data needs and deliver insights
  8. Stay updated on advancements in Databricks, data engineering, and cloud technologies
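
To make the second responsibility concrete, here is a minimal sketch of one batch ETL step: read raw JSON from cloud storage, clean and deduplicate it, and write a partitioned Delta table. All paths, table names, and columns are hypothetical, and the Delta write assumes a Databricks runtime (or any Spark setup with Delta Lake configured):

    # Minimal batch ETL sketch; paths, tables, and columns are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("events-etl").getOrCreate()

    RAW_PATH = "s3://example-bucket/raw/events/"   # hypothetical source location
    TARGET_TABLE = "analytics.events_clean"        # hypothetical target table

    raw = spark.read.json(RAW_PATH)

    clean = (raw
             # Drop records that are missing the primary key.
             .where(F.col("event_id").isNotNull())
             # Normalize the timestamp and derive a partition column.
             .withColumn("event_ts", F.to_timestamp("event_ts"))
             .withColumn("event_date", F.to_date("event_ts"))
             .dropDuplicates(["event_id"]))

    # Overwrite the target as a Delta table, partitioned by date.
    (clean.write
          .format("delta")
          .mode("overwrite")
          .partitionBy("event_date")
          .saveAsTable(TARGET_TABLE))

Partitioning by date keeps incremental reprocessing cheap; in a production setting, a step like this would typically be orchestrated as a Databricks Workflow or a Delta Live Tables pipeline, as referenced in the requirements above.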