Job Description: Data Engineer (Databricks)
We are seeking a skilled Data Engineer with expertise in Databricks and Apache Spark to join our team. The ideal candidate will have:
- 3+ years of hands-on experience in data engineering with Databricks and Spark
- Proficiency in Python and in data manipulation with PySpark and Spark SQL (see the sketch after this list)
- Knowledge of Databricks ecosystem components: Workflows, Unity Catalog, Delta Live Tables
- Understanding of data warehousing, ETL/ELT processes, data modeling, and database systems
- Experience with at least one cloud platform: Azure, AWS, or GCP
- Strong SQL skills for data querying and transformation
- Excellent communication skills in English and German (min. B2 level)
- Ability to work independently and in an agile team environment
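To give a flavor of the PySpark and Spark SQL proficiency listed above, here is a minimal sketch of the same aggregation written both ways; the data, table, and column names are invented for illustration:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# On Databricks a SparkSession is already provided as `spark`;
# getOrCreate() keeps this sketch runnable locally as well.
spark = SparkSession.builder.appName("orders-example").getOrCreate()

# Hypothetical source data: one row per order event.
orders = spark.createDataFrame(
    [("o1", "c1", 120.0), ("o2", "c1", 80.0), ("o3", "c2", 45.5)],
    ["order_id", "customer_id", "amount"],
)

# Same aggregation expressed via the DataFrame API...
totals_df = orders.groupBy("customer_id").agg(F.sum("amount").alias("total"))

# ...and via Spark SQL on a temporary view.
orders.createOrReplaceTempView("orders")
totals_sql = spark.sql(
    "SELECT customer_id, SUM(amount) AS total FROM orders GROUP BY customer_id"
)

totals_df.show()
totals_sql.show()
```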
Responsibilities
- Design, develop, and maintain scalable data pipelines using Databricks, Spark, and Python
- Build efficient ETL processes to load data from various sources into cloud data lakes and warehouses
- Utilize Databricks tooling (Databricks SQL, Delta Lake, Workflows, Unity Catalog) to build reliable data workflows (a minimal example follows this list)
- Integrate cloud services (Azure, AWS, GCP) for secure and cost-effective data solutions
- Contribute to data modeling and architecture decisions
- Ensure data quality and compliance with governance policies
- Collaborate with data scientists and analysts to meet data needs
- Stay updated with advancements in data engineering and cloud technologies
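As a flavor of the pipeline work described above, here is a minimal sketch of an incremental load into a Delta table using the open-source Delta Lake Python API; the source path and table name are hypothetical:

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F

# On Databricks `spark` already exists; getOrCreate() also works locally
# when the delta-spark package is configured.
spark = SparkSession.builder.getOrCreate()

# Bronze: land raw files as-is (hypothetical source path, schema inferred
# here for brevity).
raw = spark.read.format("json").load("/mnt/landing/orders/")

# Silver: cast and deduplicate before upserting.
clean = (raw
         .withColumn("amount", F.col("amount").cast("double"))
         .dropDuplicates(["order_id"]))

# Upsert into the silver Delta table (hypothetical Unity Catalog name).
target = DeltaTable.forName(spark, "main.sales.orders_silver")
(target.alias("t")
 .merge(clean.alias("s"), "t.order_id = s.order_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())
```

In practice a step like this would be scheduled as a Databricks Workflows task or expressed as a Delta Live Tables pipeline rather than run ad hoc.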
Technologies
- Azure, CI/CD, Cloud, Databricks, DevOps, Machine Learning, Power BI, Python, PySpark, Spark, Terraform, Unity Catalog, Looker
Additional Information
NETCONOMY has grown from a startup into a company of 500 employees across Europe, with a strong emphasis on agile and diverse collaboration.
Our Offer
- Flexible working models and hybrid options
- Structured onboarding, mentoring, and training
- Annual company summit and social events
- Meal allowances, wellbeing discounts, mobility support
Contact
Brauquartier 2, 8055 Graz, Austria
Phone: +43 316 81 55 44