Data Engineer Azure Databricks

Morgan McKinley

Dublin

Hybrid

EUR 70,000 - 100,000

Full time

Job summary

A leading provider of innovative data solutions in Ireland is seeking a skilled Data Engineer with expertise in Microsoft Azure and Databricks. The role offers the chance to join a dynamic team and focuses on designing secure, efficient data pipelines that support analytics and machine learning initiatives.

Qualifications

  • 5+ years of experience in data engineering, 3+ in Azure.
  • Strong hands-on expertise in Databricks.
  • Knowledge of ETL/ELT processes and data modelling.

Responsibilities

  • Design and implement scalable data pipelines using Azure and Databricks.
  • Collaborate with teams to understand data requirements.
  • Mentor junior data engineers.

Skills

Data Engineering
Microsoft Azure
Databricks
Python
SQL
Data Governance
Performance Optimisation
CI/CD
Agile Methodologies

Tools

Terraform
Azure DevOps

Job description

Location: Dublin North, Ireland
Job Category: Information Technology
EU work permit required: Yes

Job Reference: JN -062025-1983934_1750695838

Posted: 23.06.2025
Expiry Date: 07.08.2025

Position: Data Engineer (Azure & Databricks)
Location: Dublin North, Hybrid (3 days on site, 2 from home)
Contract Type: 12‑month Fixed‑Term Contract

About the Company
A leading provider of innovative data solutions is seeking a highly skilled and motivated Data Engineer with expertise in Microsoft Azure and Databricks. This role is an excellent opportunity for a talented professional to join a growing data analytics team and contribute to the design and delivery of robust, secure and efficient data pipelines.

The Role
As a Data Engineer, you will design and implement robust, secure and efficient data pipelines using Azure and Databricks. You will play a pivotal role in ensuring that data platforms support both real‑time and batch processing, advanced analytics and machine learning. You will work closely with cross‑functional teams - including data scientists, analysts and other engineers - to understand requirements and deliver end‑to‑end data solutions that drive business value.

Key Responsibilities

  • Design, develop and implement scalable and resilient data pipelines for both real‑time and batch workloads within Azure and Databricks (see the sketch after this list).
  • Implement ELT processes to integrate data from various sources into the central data platform.
  • Develop and maintain data quality checks, monitoring and alerting to ensure pipeline health.
  • Optimise data workloads within the data platform, focusing on performance, cost efficiency, resilience and security.
  • Collaborate with stakeholders to understand data needs and deliver optimal solutions from ingestion to visualisation.
  • Ensure data solutions adhere to data governance, privacy and security best practices and regulations.
  • Utilise Infrastructure‑as‑Code (IaC) tools such as Terraform for cloud infrastructure provisioning.
  • Maintain data integrity, security, governance and compliance across all data solutions.
  • Mentor junior data engineers and champion best practices and technical excellence within the team.
  • Troubleshoot and resolve data‑related issues to maintain data accuracy and performant pipelines.
  • Stay current with the latest trends and technologies in cloud computing and data engineering, especially within Azure and Databricks.
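
For illustration, a minimal sketch of a batch ELT step with a data quality gate of the kind these responsibilities describe, assuming a Databricks runtime with Delta Lake; the path, table name and columns (order_id, order_ts, amount) are hypothetical, not details from the posting.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-batch-elt").getOrCreate()

# Extract: read a raw landing zone (hypothetical path).
raw = spark.read.json("/mnt/landing/orders/")

# Quality gate: fail fast (letting job alerting fire) rather than load bad data.
missing_keys = raw.filter(F.col("order_id").isNull()).count()
if missing_keys > 0:
    raise ValueError(f"{missing_keys} rows arrived without an order_id")

# Transform: conform types for downstream consumers.
clean = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
)

# Load: append into a Delta table on the central platform (hypothetical name).
clean.write.format("delta").mode("append").saveAsTable("analytics.orders")
```

The same transformation logic could serve the real‑time path by swapping the batch reader for Structured Streaming (spark.readStream) and writing with writeStream, one common way to cover both workloads named in the first bullet.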

Required Qualifications

  • 5+ years of experience in data engineering, including 3+ years working within Microsoft Azure.
  • Strong experience with Azure services (Data Factory, SQL Database, Storage, Key Vault, Function and Logic Apps, Cost Analysis).
  • Hands‑on experience with Databricks, including data engineering, processing and analytics development.
  • Strong understanding of performance optimisation, data governance frameworks (e.g., Unity Catalog) and best practices.
  • Strong knowledge of ETL/ELT processes, data modelling, data warehousing concepts and the Medallion Architecture (see the sketch after this list).
  • Experience with real‑time data processing frameworks (e.g., Apache Kafka).
  • Proficiency in Python, PySpark and SQL, and experience working with large‑scale datasets.
  • Knowledge of CI/CD and DevOps practices and tools (e.g., Git, Azure DevOps).
  • Experience working with Agile delivery methods.
  • Understanding of Infrastructure‑as‑Code (IaC) using Terraform (desirable).
  • Knowledge of machine learning integration and MLOps (desirable).
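
For reference, a minimal sketch of the Medallion Architecture named above (bronze → silver → gold layers on Delta tables); the schema names, table names and columns are illustrative assumptions, not details from the posting.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: land source data as-is so the raw history is preserved.
bronze = spark.read.json("/mnt/landing/events/")
bronze.write.format("delta").mode("append").saveAsTable("bronze.events")

# Silver: deduplicate and conform types for downstream consumers.
silver = (
    spark.table("bronze.events")
         .dropDuplicates(["event_id"])
         .withColumn("event_ts", F.to_timestamp("event_ts"))
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver.events")

# Gold: business-level aggregates ready for analytics and ML.
gold = (
    spark.table("silver.events")
         .groupBy(F.to_date("event_ts").alias("event_date"))
         .agg(F.count("*").alias("event_count"))
)
gold.write.format("delta").mode("overwrite").saveAsTable("gold.daily_event_counts")
```

Under Unity Catalog (also named above), the bronze/silver/gold schemas would typically be governed centrally, with access granted per layer.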

Soft Skills

  • Excellent problem‑solving and analytical capabilities.
  • Strong communication and collaboration abilities across both technical and business stakeholders.
  • Skilled at managing multiple priorities in a fast‑paced environment.
  • Strong mentoring and team‑building abilities.
  • Solid understanding of data governance, security protocols and compliance standards.
  • Commitment to team values and collaborative culture.
