Summary
A leading company in data solutions is seeking a Data Engineer to design and maintain robust data pipelines. The ideal candidate will have extensive experience with Databricks and cloud platforms, ensuring data quality and collaborating with teams to deliver actionable insights. This role offers the opportunity to work in an agile environment and contribute to innovative data solutions.
Qualifications
3+ years of hands-on experience as a Data Engineer.
Strong programming skills in Python and SQL.
Experience with Databricks and cloud platforms.
Responsibilities
Designing and maintaining data pipelines using Databricks and Python.
Building scalable ETL processes for cloud-based data solutions.
Collaborating with data scientists to meet data needs.
Skills
Data Engineering
Python
SQL
Collaboration
Communication
Tools
Databricks
Apache Spark
Azure
AWS
GCP
Job Description
Minimum Requirements:
3+ years of hands-on experience as a Data Engineer working with Databricks and Apache Spark
Strong programming skills in Python, with experience in data manipulation libraries (e.g., PySpark, Spark SQL)
Experience with core components of the Databricks ecosystem: Databricks Workflows, Unity Catalog, and Delta Live Tables
Solid understanding of data warehousing principles, ETL/ELT processes, data modeling techniques, and database systems
Proven experience with at least one major cloud platform (Azure, AWS, or GCP)
Excellent SQL skills for data querying, transformation, and analysis
Excellent communication and collaboration skills in English and German (min. B2 level)
Ability to work independently as well as part of a team in an agile environment
Responsibilities:
Designing, developing, and maintaining robust data pipelines using Databricks, Spark, and Python
Building efficient and scalable ETL processes to ingest, transform, and load data from various sources (databases, APIs, streaming platforms) into cloud-based data lakes and warehouses
Leveraging the Databricks ecosystem (SQL, Delta Lake, Workflows, Unity Catalog) to deliver reliable and performant data workflows
Integrating with cloud services such as Azure, AWS, or GCP to enable secure, cost-effective data solutions
Contributing to data modeling and architecture decisions to ensure consistency, accessibility, and long-term maintainability of the data landscape
Ensuring data quality through validation processes and adherence to data governance policies
Collaborating with data scientists and analysts to understand data needs and deliver actionable solutions
Staying up to date with advancements in Databricks, data engineering, and cloud technologies to continuously improve tools and approaches