Job Description: Data Engineer (Databricks & Cloud)
We are seeking a skilled Data Engineer with expertise in Databricks and cloud platforms to join our dynamic team. You will be responsible for developing scalable data solutions, building data pipelines, and collaborating across teams to deliver high-quality data services.
Minimum Requirements:
- 3+ years of hands-on experience as a Data Engineer working with Databricks and Apache Spark
- Strong Python programming skills, including experience with PySpark and Spark SQL for data manipulation
- Experience with core components of the Databricks ecosystem: Databricks Workflows, Unity Catalog, and Delta Live Tables
- Solid understanding of data warehousing principles, ETL/ELT processes, data modeling, and database systems
- Proven experience with at least one major cloud platform (Azure, AWS, or GCP)
- Excellent SQL skills for data querying, transformation, and analysis
- Excellent communication and collaboration skills in English and German (minimum B2 level in both)
- Ability to work independently and in an agile team environment
Responsibilities:
- Design, develop, and maintain robust data pipelines using Databricks, Spark, and Python
- Build scalable ETL processes to ingest, transform, and load data from diverse sources into cloud-based data lakes and warehouses
- Leverage Databricks ecosystem components to create reliable and high-performance data workflows
- Integrate with cloud platforms such as Azure, AWS, or GCP to deliver secure and cost-effective data solutions
- Participate in data modeling and architecture decisions for long-term maintainability
- Ensure data quality and compliance with governance policies
- Collaborate with data scientists and analysts to meet data needs and deliver insights
- Stay current with advancements in Databricks, data engineering, and cloud technologies