
Data Engineer

Zunzun Solutions

Belo Horizonte

On-site

BRL 30.000 - 40.000

Full-time

Today

Job summary

A technology solutions provider in Brazil is seeking a skilled Data Engineer (Azure Databricks) to design and optimize data pipelines. The ideal candidate will have over 5 years of experience with Azure Databricks, SQL Server, and Python, enabling the modernization of data platforms on Azure Cloud. Responsibilities include developing ETL processes, optimizing database queries, and collaborating with cross-functional teams to ensure security and compliance across data operations. This is a full-time role based in Belo Horizonte.

Qualifications

  • 5+ years of experience with Azure Databricks, focusing on data pipeline development.
  • Strong SQL Server experience with an emphasis on query optimization.
  • Experience with Azure Data Factory for pipeline orchestration.

Responsibilities

  • Design and build ETL/ELT pipelines using Azure Databricks and Azure Data Factory.
  • Develop Python notebooks for data processing tasks.
  • Optimize database performance through SQL query tuning.

Skills

Azure Databricks
Python
PySpark
SQL Server
Azure Data Factory
T-SQL
SSIS
Git
DevOps
Data Governance

Certifications

Microsoft Certified: Azure Data Engineer Associate (DP-203)
Microsoft Certified: Azure Solutions Architect Expert

Tools

Azure DevOps
Power BI

Job description
Summary

We are seeking a highly skilled Data Engineer (Azure Databricks) to design, implement, and optimize enterprise‑grade data pipelines. In this role, you will leverage Azure Databricks, Azure Data Factory, SQL Server, and Python to enable scalable, governed, and performant data solutions. You will play a key role in modernizing our data platform on the Azure Cloud, ensuring reliability, efficiency, and compliance across the full data lifecycle.

Key Responsibilities
  • Data Pipeline Development: Design, build, and optimize ETL/ELT pipelines using Azure Databricks (PySpark, Delta Lake) and Azure Data Factory (ADF).
  • Data Flows & Transformations: Develop pipelines, data flows, and complex transformations with ADF, PySpark, and T‑SQL for seamless data extraction, transformation, and loading.
  • Data Processing: Develop Databricks Python notebooks for tasks such as joining, filtering, and pre‑aggregation.
  • Database & Query Optimization: Optimize database performance through SQL query tuning, index optimization, and code improvements to ensure efficient data retrieval and manipulation.
  • SSIS & Migration Support: Maintain and enhance SSIS package design and deployment for legacy workloads; contribute to migration and modernization into cloud‑native pipelines.
  • Collaboration & DevOps: Work with cross‑functional teams using Git (Azure Repos) for version control and Azure DevOps pipelines (CI/CD) for deployment.
  • Data Governance & Security: Partner with governance teams to integrate Microsoft Purview and Unity Catalog for cataloging, lineage tracking, and role‑based security.
  • API & External Integration: Implement REST APIs to retrieve analytics data from diverse external data feeds, enhancing accessibility and interoperability.
  • Automation: Automate ETL processes and database maintenance tasks using SQL Agent Jobs, ensuring data integrity and operational reliability.
  • Advanced SQL Expertise: Craft and optimize complex T‑SQL queries to support efficient data processing and analytical workloads.
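The join, filter, and pre-aggregation work described above would normally be expressed as PySpark DataFrame operations over Delta tables inside a Databricks notebook. As a minimal plain-Python sketch of that pattern (sample data and column names are illustrative, not from the posting):

```python
from collections import defaultdict

# Hypothetical rows standing in for two source tables.
orders = [
    {"order_id": 1, "customer_id": "A", "amount": 120.0},
    {"order_id": 2, "customer_id": "B", "amount": 80.0},
    {"order_id": 3, "customer_id": "A", "amount": 50.0},
]
customers = [
    {"customer_id": "A", "region": "MG"},
    {"customer_id": "B", "region": "SP"},
]

# Join: attach each order's region via a lookup keyed on customer_id.
region_by_customer = {c["customer_id"]: c["region"] for c in customers}
joined = [{**o, "region": region_by_customer[o["customer_id"]]} for o in orders]

# Filter: keep only orders at or above a threshold.
filtered = [row for row in joined if row["amount"] >= 60.0]

# Pre-aggregate: total amount per region.
totals = defaultdict(float)
for row in filtered:
    totals[row["region"]] += row["amount"]

print(dict(totals))  # {'MG': 120.0, 'SP': 80.0}
```

In PySpark the same steps would be a `join`, a `filter`, and a `groupBy(...).agg(...)` on DataFrames, with the result written back to a Delta table.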
Required Qualifications
  • 5+ years of hands‑on expertise with Azure Databricks, Python, PySpark, and Delta Lake.
  • 5+ years of proven experience with Azure Data Factory for orchestrating and monitoring pipelines.
  • Strong SQL Server / T‑SQL experience with a focus on query optimization, indexing strategies, and coding best practices.
  • Demonstrated experience in SSIS package design, deployment, and performance tuning.
  • Hands‑on knowledge of Unity Catalog for governance.
  • Experience with Git (Azure DevOps Repos) and CI/CD practices in data engineering projects.
Nice to Have
  • Exposure to Change Data Capture (CDC), Change Data Feed (CDF), and Temporal Tables.
  • Experience with Microsoft Purview, Power BI, and Azure‑native integrations.
  • Familiarity with Profisee Master Data Management (MDM).
  • Working in Agile / Scrum environments.
Preferred Qualifications
  • Microsoft Certified: Azure Data Engineer Associate (DP-203)
  • Microsoft Certified: Azure Solutions Architect Expert or equivalent advanced Azure certification
  • Databricks Certified Data Engineer Associate or Professional
  • Additional Microsoft SQL Server or Azure certifications demonstrating advanced database and cloud expertise