Databricks Data Engineering Specialist

Bebeedatascientist

Porto Alegre

On-site

BRL 80,000 - 120,000

Full-time

Yesterday

Job summary

A leading data science company in Porto Alegre is searching for a Senior Data Scientist to lead the development of scalable data pipelines using Databricks. The role involves designing high-quality datasets and visualizations, with a strong focus on ingesting and transforming data from diverse sources. The ideal candidate will develop reusable pipeline frameworks, ensure data accuracy, and deliver insights via Power BI dashboards. A good fit for experienced professionals looking to apply their skills in a dynamic environment.

Qualifications

  • Experience with scalable data pipelines and data lakehouse architecture.
  • Strong knowledge of data ingestion from diverse sources like APIs and SQL databases.
  • Ability to design and implement effective data transformations.

Responsibilities

  • Lead development of scalable data pipelines using Databricks.
  • Ensure data accuracy, reliability, and quality through validation and insights.
  • Design reusable pipeline frameworks and deliver insights through dashboards.

Skills

Data pipeline development
Databricks expertise
ETL / ELT optimization
SQL proficiency
Data validation

Tools

PySpark
Spark SQL
Power BI
Delta Lake

Job description

Senior Data Scientist - Databricks Expert

We’re seeking an experienced Senior Data Scientist to lead the development of scalable data pipelines using Spark, PySpark, SQL, and Delta Lake.

This role focuses on designing and implementing high-quality datasets and visualizations, leveraging expertise in data ingestion, transformation, and publication. You will design and optimize ETL / ELT pipelines in Databricks using PySpark, Spark SQL, and Delta Lake, and ingest, clean, and transform data from diverse sources (APIs, SQL databases, cloud storage, SAP / legacy systems, streaming).
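
For context, here is a minimal sketch of the kind of pipeline step described above, assuming a Databricks-style environment; the JDBC connection details, table names, and schema are illustrative placeholders, not part of this posting.

```python
# Minimal ETL sketch: ingest from a SQL source, clean/transform with PySpark,
# and publish to a Delta table. All names and connection details are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

# Ingest: read a source table over JDBC (host, table, and credentials are placeholders).
raw_orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://db.example.com:5432/sales")
    .option("dbtable", "public.orders")
    .option("user", "etl_reader")
    .option("password", "<secret>")
    .load()
)

# Transform: drop rows missing keys, derive a partition column, stamp load time.
clean_orders = (
    raw_orders
    .dropna(subset=["order_id", "order_ts"])
    .withColumn("order_date", F.to_date("order_ts"))
    .withColumn("_ingested_at", F.current_timestamp())
)

# Publish: write to a Delta table partitioned for downstream consumption.
(
    clean_orders.write.format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .saveAsTable("analytics.orders_clean")
)
```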

You will also develop reusable pipeline frameworks, data validation logic, and performance-tuned transformations; deliver insights through Power BI dashboards, ensuring data accuracy, reliability, and quality; and implement best practices for lakehouse development, orchestration, and version control.
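
As a rough illustration of the reusable validation logic mentioned above, the sketch below shows simple PySpark helpers; the table and column names are assumptions made for the example, not requirements from the posting.

```python
# Sketch of reusable data-validation helpers for PySpark pipelines.
# The table and column names below are illustrative assumptions.
from pyspark.sql import DataFrame, SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

def count_nulls(df: DataFrame, columns: list) -> dict:
    """Return the number of null values in each required column."""
    return {c: df.filter(F.col(c).isNull()).count() for c in columns}

def count_duplicate_keys(df: DataFrame, key: str) -> int:
    """Return how many key values appear more than once."""
    return df.groupBy(key).count().filter(F.col("count") > 1).count()

# Example usage against the hypothetical table from the previous sketch.
orders = spark.table("analytics.orders_clean")
null_report = count_nulls(orders, ["order_id", "order_date"])
dup_keys = count_duplicate_keys(orders, "order_id")

assert all(v == 0 for v in null_report.values()), f"null values found: {null_report}"
assert dup_keys == 0, f"{dup_keys} duplicated order_id values found"
```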
