Senior Data Engineer (f/m/d)

Würzburg
EUR 60,000 - 100,000
Posted 4 days ago
Job description

The chapter Technology & Engineering - Data & AI Engineering represents all experts who deliver data and AI engineering capabilities to our product teams. These experts are organized together to strengthen their functional skill sets, improve E.ON's MLOps & AIOps tech stack, and deliver a high level of data delivery and model automation.

The products you will be working on belong to our team Digital Solutions | Future Energy Home. This team develops and operates cloud-based software to manage home energy devices such as photovoltaic systems, inverters, batteries, EV charging wall boxes, heat pumps, and meters for our central and local units, enabling rollout to end customers. We integrate devices from numerous vendors and apply centralized insights, analytics, and control mechanisms to them.

Your tasks

  • Integrate cross-cloud Data Platform Pipelines (AWS & Azure), using Data Mesh and Data Fabric architecture concepts.
  • Implement data sharing interfaces or connectors to share business data (e.g., solar telemetry, electric vehicle charging data) with regional business data consumers.
  • Build robust data pipeline applications with AWS and Azure data services, following software principles such as Clean Code and SOLID.
  • Work closely with Data Solutions Architects to understand and shape overarching Data Architecture for data sharing interfaces and connectors.
  • Mentor and coach others, conduct pair programming sessions, review merge requests, and actively contribute to the 'Community of Practice'.

Your profile

  • At least 5 years of experience building enterprise-grade Python data pipeline (ETL) applications using software best practices such as Clean Code/SOLID principles in AWS/Azure.
  • At least 3 years of experience with relevant AWS Services for Data Engineering (e.g., Athena, Lambda, Glue, IAM, CloudWatch) and a background in Azure.
  • Profound knowledge of Databricks (e.g., PySpark), Snowflake, and the Python ecosystem (pandas, pytest, behave).
  • Experience building DataOps pipelines with GitLab, CloudFormation, Terraform or CDK, and orchestration tools (e.g., AWS Step Functions).
  • Preferable experience in Data Modeling Concepts such as Data Vault 2.0 and Dimensional Data Modeling.
  • Excellent communication skills and the ability to mentor and coach other developers.