Data Engineer

Zunzun Solutions

Cotia

On-site

BRL 80,000 - 120,000

Full-time

Today

Job summary

A data management firm in Cotia, Brazil, is seeking an experienced Data Engineer (Azure Databricks) to design and implement data pipelines. The role leverages Azure Databricks, Azure Data Factory, SQL Server, and Python to build efficient data solutions while ensuring data integrity and compliance. Candidates should have strong expertise in data processing and cloud technologies, with more than 5 years of relevant experience. Multiple Azure certifications are preferred.

Qualifications

  • 5+ years of hands-on expertise with Azure Databricks, Python, PySpark, and Delta Lake.
  • 5+ years of proven experience with Azure Data Factory for orchestrating and monitoring pipelines.
  • Strong SQL Server / T-SQL experience with a focus on query optimization.

Responsibilities

  • Design, build, and optimize ETL / ELT pipelines using Azure Databricks.
  • Develop pipelines and data flows with ADF and PySpark.
  • Optimize database performance through SQL query tuning.

Skills

Azure Databricks
Python
PySpark
SQL Server
Data Factory
Git

Certifications

Microsoft Certified: Azure Data Engineer Associate
Microsoft Certified: Azure Solutions Architect Expert
Databricks Certified Data Engineer Associate

Tools

Azure Data Factory
SQL Server Management Studio

Job description

Summary:

We are seeking a highly skilled Data Engineer (Azure Databricks) to design, implement, and optimize enterprise‑grade data pipelines. In this role, you will leverage Azure Databricks, Azure Data Factory, SQL Server, and Python to enable scalable, governed, and performant data solutions. You will play a key role in modernizing our data platform on the Azure Cloud, ensuring reliability, efficiency, and compliance across the full data lifecycle.

Key Responsibilities:
  • Data Pipeline Development: Design, build, and optimize ETL / ELT pipelines using Azure Databricks (PySpark, Delta Lake) and Azure Data Factory (ADF).
  • Data Flows & Transformations: Develop pipelines, data flows, and complex transformations with ADF, PySpark, and T-SQL for seamless data extraction, transformation, and loading.
  • Data Processing: Develop Databricks Python notebooks for tasks such as joining, filtering, and pre-aggregation (see the illustrative sketch after this list).
  • Database & Query Optimization: Optimize database performance through SQL query tuning, index optimization, and code improvements to ensure efficient data retrieval and manipulation.
  • SSIS & Migration Support: Maintain and enhance SSIS package design and deployment for legacy workloads; contribute to migration and modernization into cloud-native pipelines.
  • Collaboration & DevOps: Work with cross-functional teams using Git (Azure Repos) for version control and Azure DevOps pipelines (CI / CD) for deployment.
  • Data Governance & Security: Partner with governance teams to integrate Microsoft Purview and Unity Catalog for cataloging, lineage tracking, and role-based security.
  • API & External Integration: Implement REST APIs to retrieve analytics data from diverse external data feeds, enhancing accessibility and interoperability.
  • Automation: Automate ETL processes and database maintenance tasks using SQL Agent Jobs, ensuring data integrity and operational reliability.
  • Advanced SQL Expertise: Craft and optimize complex T-SQL queries to support efficient data processing and analytical workloads.
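
For illustration only, the minimal PySpark sketch below shows the kind of Databricks notebook work the Data Processing bullet refers to: filtering and joining source tables, pre-aggregating, and writing the result to a Delta table. It is not part of the job description, and the table and column names (sales_raw, customers, sales_daily_agg, customer_id, region, order_date, amount) are hypothetical.

    # Illustrative sketch only: hypothetical tables and columns, not the employer's actual data model.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()  # provided automatically in a Databricks notebook

    # Read two source tables registered in the metastore (e.g. via Unity Catalog)
    sales = spark.table("sales_raw")
    customers = spark.table("customers")

    # Filter to recent, valid rows before joining to reduce shuffle volume
    recent_sales = sales.filter(
        (F.col("order_date") >= "2024-01-01") & (F.col("amount") > 0)
    )

    # Join on the customer key and pre-aggregate to daily totals per region
    daily_agg = (
        recent_sales.join(customers, on="customer_id", how="inner")
        .groupBy("region", "order_date")
        .agg(
            F.sum("amount").alias("total_amount"),
            F.countDistinct("customer_id").alias("active_customers"),
        )
    )

    # Persist the result as a Delta table for downstream consumption
    daily_agg.write.format("delta").mode("overwrite").saveAsTable("sales_daily_agg")
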
Required Qualifications:
  • 5+ years of hands‑on expertise with Azure Databricks, Python, PySpark, and Delta Lake.
  • 5+ years of proven experience with Azure Data Factory for orchestrating and monitoring pipelines.
  • Strong SQL Server / T‑SQL experience with a focus on query optimization, indexing strategies, and coding best practices.
  • Demonstrated experience in SSIS package design, deployment, and performance tuning.
  • Hands‑on knowledge of Unity Catalog for governance.
  • Experience with Git (Azure DevOps Repos) and CI / CD practices in data engineering projects.
Nice to Have:
  • Exposure to Change Data Capture (CDC), Change Data Feed (CDF), and Temporal Tables.
  • Experience with Microsoft Purview, Power BI, and Azure‑native integrations.
  • Familiarity with Profisee Master Data Management (MDM).
  • Working in Agile / Scrum environments.
Preferred Qualifications:

Microsoft Certified: Azure Data Engineer Associate (DP-203)

Microsoft Certified: Azure Solutions Architect Expert or equivalent advanced Azure certification

Databricks Certified Data Engineer Associate or Professional

Additional Microsoft SQL Server or Azure certifications demonstrating advanced database and cloud expertise
