Data Engineer

Zunzun Solutions

Aracaju

Hybrid

BRL 80,000 - 120,000

Full-time

Posted yesterday

Job summary

A technology company in Brazil is seeking a highly skilled Data Engineer (Azure Databricks) to design, implement, and optimize enterprise-grade data pipelines. The ideal candidate will have over 5 years of experience with Azure Databricks, Python, and SQL Server, focusing on creating scalable and governed data solutions on the Azure Cloud. This role requires advanced skills in data processing and governance, collaboration with teams using Azure DevOps, and a commitment to data integrity and operational reliability.

Benefits

Professional development opportunities
Remote work options
Flexible hours

Qualifications

  • 5+ years of hands-on experience with Azure Databricks, Python, and PySpark.
  • Proven track record with Azure Data Factory for orchestrating and monitoring pipelines.
  • Strong experience in SQL Server with a focus on query optimization.

Responsibilities

  • Design and optimize ETL/ELT pipelines using Azure Databricks.
  • Develop data flows and complex transformations for seamless data management.
  • Optimize database performance through SQL query tuning.

Skills

Azure Databricks
Python
SQL Server
T-SQL
PySpark
Data Factory
SSIS
Git
CI/CD

Certifications

Microsoft Certified: Azure Data Engineer Associate (DP-203)
Azure Solutions Architect Expert
Databricks Certified Data Engineer Associate

Tools

Azure DevOps
Microsoft Purview
Power BI

Job description

Summary:

We are seeking a highly skilled Data Engineer (Azure Databricks) to design, implement, and optimize enterprise‑grade data pipelines. In this role, you will leverage Azure Databricks, Azure Data Factory, SQL Server, and Python to enable scalable, governed, and performant data solutions. You will play a key role in modernizing our data platform on the Azure Cloud, ensuring reliability, efficiency, and compliance across the full data lifecycle.

Key Responsibilities:
  • Data Pipeline Development: Design, build, and optimize ETL/ELT pipelines using Azure Databricks (PySpark, Delta Lake) and Azure Data Factory (ADF).
  • Data Flows & Transformations: Develop pipelines, data flows, and complex transformations with ADF, PySpark, and T‑SQL for seamless data extraction, transformation, and loading.
  • Data Processing: Develop Databricks Python notebooks for tasks such as joining, filtering, and pre‑aggregation (see the sketch after this list).
  • Database & Query Optimization: Optimize database performance through SQL query tuning, index optimization, and code improvements to ensure efficient data retrieval and manipulation.
  • SSIS & Migration Support: Maintain and enhance SSIS package design and deployment for legacy workloads; contribute to migration and modernization into cloud‑native pipelines.
  • Collaboration & DevOps: Work with cross‑functional teams using Git (Azure Repos) for version control and Azure DevOps pipelines (CI/CD) for deployment.
  • Data Governance & Security: Partner with governance teams to integrate Microsoft Purview and Unity Catalog for cataloging, lineage tracking, and role‑based security.
  • API & External Integration: Implement REST API integrations to retrieve analytics data from diverse external data feeds, enhancing accessibility and interoperability.
  • Automation: Automate ETL processes and database maintenance tasks using SQL Agent Jobs, ensuring data integrity and operational reliability.
  • Advanced SQL Expertise: Craft and optimize complex T‑SQL queries to support efficient data processing and analytical workloads.
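
A minimal PySpark sketch of the kind of notebook work the Data Processing item above describes: joining, filtering, and pre‑aggregating data into a Delta table. All table and column names (bronze.sales_raw, bronze.customers, gold.sales_daily_agg, order_status, and so on) are illustrative assumptions, not details from this posting.

```python
# Illustrative Databricks notebook cell: filter, join, and pre-aggregate, then write to Delta.
# Table and column names below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # in a Databricks notebook, `spark` is already provided

# Read source Delta tables (assumed names)
sales = spark.table("bronze.sales_raw")
customers = spark.table("bronze.customers")

# Keep completed orders, enrich with customer attributes, and pre-aggregate per customer and day
daily_agg = (
    sales
    .filter(F.col("order_status") == "COMPLETED")
    .join(customers, on="customer_id", how="inner")
    .groupBy("customer_id", F.to_date("order_ts").alias("order_date"))
    .agg(
        F.sum("order_amount").alias("total_amount"),
        F.count("order_id").alias("order_count"),
    )
)

# Persist the result as a managed Delta table for downstream consumption
daily_agg.write.format("delta").mode("overwrite").saveAsTable("gold.sales_daily_agg")
```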
Required Qualifications:
  • 5+ years of hands‑on expertise with Azure Databricks, Python, PySpark, and Delta Lake.
  • 5+ years of proven experience with Azure Data Factory for orchestrating and monitoring pipelines.
  • Strong SQL Server / T‑SQL experience with a focus on query optimization, indexing strategies, and coding best practices.
  • Demonstrated experience in SSIS package design, deployment, and performance tuning.
  • Hands‑on knowledge of Unity Catalog for governance.
  • Experience with Git (Azure DevOps Repos) and CI/CD practices in data engineering projects.
Nice to Have:
  • Exposure to Change Data Capture (CDC), Change Data Feed (CDF), and Temporal Tables.
  • Experience with Microsoft Purview, Power BI, and Azure‑native integrations.
  • Familiarity with Profisee Master Data Management (MDM).
  • Experience working in Agile/Scrum environments.
Preferred Qualifications:

Microsoft Certified: Azure Data Engineer Associate (DP-203)

Microsoft Certified: Azure Solutions Architect Expert or equivalent advanced Azure certification

Databricks Certified Data Engineer Associate or Professional

Additional Microsoft SQL Server or Azure certifications demonstrating advanced database and cloud expertise
