
Data Engineer (f/m/x) (EN)

NETCONOMY

Duisburg

Hybrid

EUR 55.000 - 75.000

Full-time

Posted 25 days ago


Summary

A leading company in data engineering is seeking a skilled Data Engineer with expertise in Databricks and Apache Spark. The successful candidate will design scalable data pipelines and build efficient ETL processes, contributing to data architecture and quality. This role requires strong technical skills, particularly in Python and SQL, and offers flexible working models and opportunities for professional growth.

Benefits

Flexible working models and hybrid options
Structured onboarding and mentoring
Annual company summit and social events
Meal allowances
Wellbeing discounts
Mobility support

Qualifications

  • 3+ years of experience in data engineering with Databricks and Spark.
  • Proficiency in Python and data manipulation libraries like PySpark.
  • Strong SQL skills for data querying and transformation.

Responsibilities

  • Design and maintain scalable data pipelines using Databricks and Spark.
  • Build ETL processes to load data into cloud data lakes.
  • Collaborate with data scientists and analysts to meet data needs.

Skills

Python
SQL
Data Engineering
Databricks
Apache Spark
Data Manipulation
ETL/ELT Processes
Cloud Platforms
Communication

Tools

Databricks
Azure
AWS
GCP
Terraform
Power BI

Job Description

Job Description: Data Engineer (Databricks)

We are seeking a skilled Data Engineer with expertise in Databricks and Apache Spark to join our team. The ideal candidate will have:

  • 3+ years of hands-on experience in data engineering with Databricks and Spark
  • Proficiency in Python and data manipulation libraries such as PySpark and Spark SQL
  • Knowledge of Databricks ecosystem components: Workflows, Unity Catalog, Delta Live Tables
  • Understanding of data warehousing, ETL/ELT processes, data modeling, and database systems
  • Experience with at least one cloud platform: Azure, AWS, or GCP
  • Strong SQL skills for data querying and transformation
  • Excellent communication skills in English and German (min. B2 level)
  • Ability to work independently and in an agile team environment
Responsibilities
  1. Design, develop, and maintain scalable data pipelines using Databricks, Spark, and Python
  2. Build efficient ETL processes to load data from various sources into cloud data lakes and warehouses
  3. Utilize Databricks tools (SQL, Delta Lake, Workflows, Unity Catalog) for reliable data workflows
  4. Integrate cloud services (Azure, AWS, GCP) for secure and cost-effective data solutions
  5. Contribute to data modeling and architecture decisions
  6. Ensure data quality and compliance with governance policies
  7. Collaborate with data scientists and analysts to meet data needs
  8. Stay updated with advancements in data engineering and cloud technologies
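As a miniature illustration of the extract-transform-load work described in points 1–3, the sketch below validates raw records, loads the clean rows into a SQL table, and aggregates them. It is a hypothetical example, not part of the posting: `sqlite3` stands in for a cloud data warehouse, and in the actual role this would be Databricks, PySpark, and Delta Lake.

```python
import sqlite3

# Extract: raw records as they might arrive from a source system
# (hypothetical sample data, not taken from the posting).
raw_events = [
    {"user_id": "1", "amount": "19.99", "country": "AT"},
    {"user_id": "2", "amount": "n/a",   "country": "DE"},  # invalid amount
    {"user_id": "3", "amount": "5.50",  "country": "AT"},
]

def transform(rows):
    """Cast fields to their target types and drop rows that fail validation."""
    clean = []
    for row in rows:
        try:
            clean.append((int(row["user_id"]), float(row["amount"]), row["country"]))
        except ValueError:
            continue  # in production this row would go to a quarantine table
    return clean

# Load: write the cleaned rows into a table, then aggregate with SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, amount REAL, country TEXT)")
conn.executemany("INSERT INTO events VALUES (?, ?, ?)", transform(raw_events))

revenue_by_country = dict(
    conn.execute("SELECT country, SUM(amount) FROM events GROUP BY country")
)
print(revenue_by_country)  # only the validated AT rows survive
```

The same validate-then-aggregate shape scales up directly: the `transform` step becomes a PySpark DataFrame transformation, and the `load` step a write to a Delta table.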
Technologies
  • Azure, CI/CD, Cloud, Databricks, DevOps, Machine Learning, Power BI, Python, PySpark, Spark, Terraform, Unity Catalog, Looker
Additional Information

NETCONOMY has grown from a startup to a 500-employee company across Europe, emphasizing agile and diverse collaboration.

Our Offer
  • Flexible working models and hybrid options
  • Structured onboarding, mentoring, and training
  • Annual company summit and social events
  • Meal allowances, wellbeing discounts, mobility support
Contact

Brauquartier 2, 8055 Graz, Austria

Phone: +43 316 81 55 44

Email: [emailprotected]
