

Data Engineer

Zunzun Solutions

Parnamirim

On-site

BRL 80,000 - 120,000

Full-time

Today

Job Summary

A leading data solutions company is seeking an experienced Data Engineer specializing in Azure Databricks to design and optimize data pipelines. The ideal candidate has over 5 years of hands-on experience with Azure technologies and strong skills in Python, SQL Server, and data governance. You will collaborate with cross-functional teams to ensure efficient data solutions and contribute to the modernization of the data platform on Azure Cloud. This role offers an opportunity to work with cutting-edge technology in a dynamic environment.

Qualifications

  • 5+ years of hands-on expertise with Azure Databricks, Python, PySpark, and Delta Lake.
  • 5+ years of proven experience with Azure Data Factory for orchestrating and monitoring pipelines.
  • Strong SQL Server/T-SQL experience with a focus on query optimization, indexing strategies, and coding best practices.

Responsibilities

  • Design and optimize ETL/ELT pipelines using Azure Databricks and Azure Data Factory.
  • Develop pipelines and transformations with ADF, PySpark, and T-SQL.
  • Optimize database performance through SQL query tuning and code improvements.
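The query-optimization bullet above can be sketched in miniature. The snippet below uses Python's built-in sqlite3 module as a lightweight stand-in for SQL Server; the table and index names are illustrative, but the core idea (an index turning a full table scan into an index seek) carries over directly to T-SQL indexing strategies.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
cur.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(1000)],
)

def plan(sql: str) -> str:
    """Return the query plan as a single string for inspection."""
    return " ".join(row[-1] for row in cur.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT SUM(total) FROM orders WHERE customer_id = 42"
before = plan(query)  # without an index: full table scan
cur.execute("CREATE INDEX ix_orders_customer ON orders(customer_id)")
after = plan(query)   # with the index: indexed search on customer_id
```

Here `before` reports a scan of the whole table, while `after` shows the optimizer using `ix_orders_customer`; on SQL Server you would inspect the same shift in an execution plan after adding a nonclustered index.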

Skills

Azure Databricks
Python
SQL Server
PySpark
Azure Data Factory
T-SQL

Tools

Git (Azure Repos)
Azure DevOps
SSIS

Job Description
Summary

We are seeking a highly skilled Data Engineer (Azure Databricks) to design, implement, and optimize enterprise-grade data pipelines.

In this role, you will leverage Azure Databricks, Azure Data Factory, SQL Server, and Python to enable scalable, governed, and performant data solutions.

You will play a key role in modernizing our data platform on the Azure Cloud, ensuring reliability, efficiency, and compliance across the full data lifecycle.

Key Responsibilities
  • Data Pipeline Development: Design, build, and optimize ETL/ELT pipelines using Azure Databricks (PySpark, Delta Lake) and Azure Data Factory (ADF).
  • Data Flows & Transformations: Develop pipelines, data flows, and complex transformations with ADF, PySpark, and T-SQL for seamless data extraction, transformation, and loading.
  • Data Processing: Develop Databricks Python notebooks for tasks such as joining, filtering, and pre-aggregation.
  • Database & Query Optimization: Optimize database performance through SQL query tuning, index optimization, and code improvements to ensure efficient data retrieval and manipulation.
  • SSIS & Migration Support: Maintain and enhance SSIS package design and deployment for legacy workloads; contribute to migration and modernization into cloud-native pipelines.
  • Collaboration & DevOps: Work with cross-functional teams using Git (Azure Repos) for version control and Azure DevOps pipelines (CI/CD) for deployment.
  • Data Governance & Security: Partner with governance teams to integrate Microsoft Purview and Unity Catalog for cataloging, lineage tracking, and role-based security.
  • API & External Integration: Implement REST APIs to retrieve analytics data from diverse external data feeds, enhancing accessibility and interoperability.
  • Automation: Automate ETL processes and database maintenance tasks using SQL Agent Jobs, ensuring data integrity and operational reliability.
  • Advanced SQL Expertise: Craft and optimize complex T-SQL queries to support efficient data processing and analytical workloads.
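As one illustration of the API-integration bullet above, the sketch below pulls JSON rows from an HTTP feed using only the Python standard library. The local stub server, the `/metrics` endpoint, and the payload shape are hypothetical stand-ins for a real external analytics feed, not part of this role's actual stack.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class FeedHandler(BaseHTTPRequestHandler):
    """Tiny local server standing in for an external analytics API."""
    def do_GET(self):
        body = json.dumps({"rows": [{"metric": "visits", "value": 1200}]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), FeedHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

def fetch_analytics(base_url: str) -> list:
    """Fetch rows from a JSON analytics feed and return them as dicts."""
    with urllib.request.urlopen(f"{base_url}/metrics") as resp:
        return json.load(resp)["rows"]

rows = fetch_analytics(f"http://127.0.0.1:{server.server_port}")
server.shutdown()
```

In a production pipeline the same fetch step would typically land the raw JSON in a bronze Delta table before transformation, with authentication and retry handling added around the HTTP call.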
Required Qualifications
  • 5+ years of hands-on expertise with Azure Databricks, Python, PySpark, and Delta Lake.
  • 5+ years of proven experience with Azure Data Factory for orchestrating and monitoring pipelines.
  • Strong SQL Server/T-SQL experience with a focus on query optimization, indexing strategies, and coding best practices.
  • Demonstrated experience in SSIS package design, deployment, and performance tuning.
  • Hands-on knowledge of Unity Catalog for governance.
  • Experience with Git (Azure DevOps Repos) and CI/CD practices in data engineering projects.
Nice to Have
  • Exposure to Change Data Capture (CDC), Change Data Feed (CDF), and Temporal Tables.
  • Experience with Microsoft Purview, Power BI, and Azure-native integrations.
  • Familiarity with Profisee Master Data Management (MDM).
  • Working in Agile / Scrum environments.
Preferred Qualifications
  • Microsoft Certified: Azure Data Engineer Associate (DP-203)
  • Microsoft Certified: Azure Solutions Architect Expert or equivalent advanced Azure certification
  • Databricks Certified Data Engineer Associate or Professional
  • Additional Microsoft SQL Server or Azure certifications demonstrating advanced database and cloud expertise