Job Search and Career Advice Platform

Data Engineer

Zunzun Solutions

Manaus

On-site

BRL 80,000 - 120,000

Full-time

Posted yesterday

Job summary

A data solutions company in Recife seeks a highly skilled Data Engineer to design and optimize data pipelines using Azure Databricks and Azure Data Factory. Responsibilities include developing ETL/ELT processes, optimizing SQL queries, and collaborating with cross-functional teams. Ideal candidates will have over 5 years of experience with data engineering tools and practices, including Python, PySpark, and SQL Server, and possess relevant Azure certifications.

Qualifications

  • 5+ years of hands-on experience with Azure Databricks, Python, and PySpark.
  • Strong SQL skills focused on query optimization and coding best practices.
  • Demonstrated experience in SSIS package design and deployment.

Responsibilities

  • Design and optimize ETL/ELT pipelines using Azure Databricks.
  • Develop data flows and complex transformations with Azure Data Factory.
  • Collaborate with cross-functional teams for deployments using Azure DevOps.

Skills

Azure Databricks
Python
PySpark
SQL Server
Data Factory
T-SQL
DevOps
Git

Certifications

Microsoft Certified: Azure Data Engineer Associate (DP-203)
Microsoft Certified: Azure Solutions Architect Expert

Tools

Azure Data Factory
Microsoft Purview

Job description

Summary

We are seeking a highly skilled Data Engineer (Azure Databricks) to design, implement, and optimize enterprise-grade data pipelines. In this role, you will leverage Azure Databricks, Azure Data Factory, SQL Server, and Python to enable scalable, governed, and performant data solutions. You will play a key role in modernizing our data platform on the Azure Cloud, ensuring reliability, efficiency, and compliance across the full data lifecycle.

Key Responsibilities
  • Data Pipeline Development: Design, build, and optimize ETL/ELT pipelines using Azure Databricks (PySpark, Delta Lake) and Azure Data Factory (ADF).
  • Data Flows & Transformations: Develop pipelines, data flows, and complex transformations with ADF, PySpark, and T-SQL for seamless data extraction, transformation, and loading.
  • Data Processing: Develop Databricks Python notebooks for tasks such as joining, filtering, and pre-aggregation.
  • Database & Query Optimization: Optimize database performance through SQL query tuning, index optimization, and code improvements to ensure efficient data retrieval and manipulation.
  • SSIS & Migration Support: Maintain and enhance SSIS package design and deployment for legacy workloads; contribute to migration and modernization into cloud‑native pipelines.
  • Collaboration & DevOps: Work with cross‑functional teams using Git (Azure Repos) for version control and Azure DevOps pipelines (CI/CD) for deployment.
  • Data Governance & Security: Partner with governance teams to integrate Microsoft Purview and Unity Catalog for cataloging, lineage tracking, and role‑based security.
  • API & External Integration: Implement REST APIs to retrieve analytics data from diverse external data feeds, enhancing accessibility and interoperability.
  • Automation: Automate ETL processes and database maintenance tasks using SQL Agent Jobs, ensuring data integrity and operational reliability.
  • Advanced SQL Expertise: Craft and optimize complex T‑SQL queries to support efficient data processing and analytical workloads.
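The join/filter/pre-aggregation pattern described in the Data Processing responsibility can be sketched as follows. This is an illustrative example in plain Python rather than PySpark (which would run in a Databricks notebook), and the table and column names are hypothetical:

```python
from collections import defaultdict

# Hypothetical source records; in Databricks these would be PySpark DataFrames.
orders = [
    {"order_id": 1, "customer_id": 10, "amount": 250.0},
    {"order_id": 2, "customer_id": 10, "amount": 75.0},
    {"order_id": 3, "customer_id": 11, "amount": 40.0},
]
customers = [
    {"customer_id": 10, "region": "North"},
    {"customer_id": 11, "region": "South"},
]

def pre_aggregate(orders, customers, min_amount=50.0):
    """Join orders to customers, filter small orders, sum amount per region."""
    region_by_customer = {c["customer_id"]: c["region"] for c in customers}
    totals = defaultdict(float)
    for o in orders:
        if o["amount"] >= min_amount:                      # filter
            region = region_by_customer[o["customer_id"]]  # join
            totals[region] += o["amount"]                  # pre-aggregate
    return dict(totals)

print(pre_aggregate(orders, customers))  # {'North': 325.0}
```

In PySpark the same logic would be a `join` followed by `filter` and `groupBy(...).agg(...)` on DataFrames, with the result typically written out as a Delta table.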
Required Qualifications
  • 5+ years of hands‑on expertise with Azure Databricks, Python, PySpark, and Delta Lake.
  • 5+ years of proven experience with Azure Data Factory for orchestrating and monitoring pipelines.
  • Strong SQL Server / T‑SQL experience with a focus on query optimization, indexing strategies, and coding best practices.
  • Demonstrated experience in SSIS package design, deployment, and performance tuning.
  • Hands‑on knowledge of Unity Catalog for governance.
  • Experience with Git (Azure DevOps Repos) and CI/CD practices in data engineering projects.
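As an illustration of the query-tuning and indexing skills listed above, the following sketch uses SQLite (Python standard library) to show how adding an index changes a query plan from a full scan to an index search. In the role this would be T-SQL on SQL Server; the table, data, and index name here are hypothetical:

```python
import sqlite3

# Hypothetical sales table; on SQL Server this would be T-SQL with
# execution plans inspected via SHOWPLAN rather than EXPLAIN QUERY PLAN.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE sales (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL)"
)
conn.executemany(
    "INSERT INTO sales (customer_id, amount) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(1000)],
)

def plan(sql):
    """Return the plan detail text (SCAN vs SEARCH ... USING INDEX)."""
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT SUM(amount) FROM sales WHERE customer_id = 42"
before = plan(query)  # full table scan: no usable index yet
conn.execute("CREATE INDEX ix_sales_customer ON sales (customer_id)")
after = plan(query)   # index search via ix_sales_customer
print(before)
print(after)
```

The same principle underlies the "index optimization" work in the responsibilities: a selective predicate plus a matching index turns a scan of every row into a targeted lookup.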
Nice to Have
  • Exposure to Change Data Capture (CDC), Change Data Feed (CDF), and Temporal Tables.
  • Experience with Microsoft Purview, Power BI, and Azure‑native integrations.
  • Familiarity with Profisee Master Data Management (MDM).
  • Working in Agile / Scrum environments.
Preferred Qualifications
  • Microsoft Certified: Azure Data Engineer Associate (DP‑203)
  • Microsoft Certified: Azure Solutions Architect Expert or equivalent advanced Azure certification
  • Databricks Certified Data Engineer Associate or Professional
  • Additional Microsoft SQL Server or Azure certifications demonstrating advanced database and cloud expertise