Senior Data Engineer

Netvagas

Porto Alegre

On-site

BRL 120,000 - 160,000

Full-time

Yesterday

Job summary

A technology firm in Porto Alegre is seeking an experienced Data Engineer to lead data migration efforts. This role involves designing and optimizing data extraction and ingestion pipelines, ensuring data quality, and collaborating with stakeholders. Ideal candidates will have 4+ years of experience, proficiency in Python and Databricks, and strong communication skills in English. The position offers opportunities for professional growth and the chance to work on innovative data projects.

Qualifications

  • 4+ years of professional experience in data engineering or related roles.
  • Strong proficiency in Python for data engineering.
  • Experience with CI/CD for data pipelines.

Responsibilities

  • Design, build, and maintain scalable data pipelines using Python and SQL.
  • Collaborate with stakeholders to understand data requirements.
  • Mentor junior engineers and contribute to best practices.

Skills

Python
Databricks
SQL
Data ingestion

Tools

Delta Lake
Spark
Pandas

Job description

Overview

We are seeking an experienced Data Engineer to join our team and lead data migration efforts through the development and optimization of our data platform. This role is critical to designing and executing dynamic, scalable data extraction and ingestion pipelines, implementing robust ingestion frameworks, and enabling analytics and data science initiatives across the organization.

The ideal candidate will have deep expertise in Python development, hands-on experience with modern big data platforms (preferably Databricks), and a proven track record of designing and implementing enterprise-grade data ingestion solutions. Strong, proactive communication skills in English are essential for demonstrating technical functionality to both technical and non-technical audiences.

Responsibilities
  • Design, build, and maintain scalable data pipelines and ETL/ELT workflows using Python and SQL;
  • Architect and implement data ingestion patterns for batch, streaming, and hybrid workloads;
  • Develop and optimize solutions on Databricks big data platforms, leveraging Delta Lake, Spark, and related technologies;
  • Establish and enforce data quality standards, monitoring, and alerting across the data platform;
  • Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver reliable data products;
  • Define and implement medallion architecture (Bronze/Silver/Gold) patterns for data transformation and governance (see the sketch after this list);
  • Mentor junior engineers and contribute to engineering best practices and documentation.
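
As a reference for the medallion item above, here is a minimal PySpark sketch of Bronze/Silver/Gold layering on Delta Lake. The table names, columns, and paths are illustrative assumptions rather than details from this posting, and the snippet assumes a Delta-enabled runtime such as Databricks.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    # Bronze: land raw data as-is (source path is a placeholder)
    bronze = spark.read.json("/mnt/raw/orders/")
    bronze.write.format("delta").mode("append").saveAsTable("bronze.orders")

    # Silver: typed, deduplicated, quality-filtered records
    silver = (
        spark.table("bronze.orders")
        .dropDuplicates(["order_id"])
        .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
        .filter(F.col("order_id").isNotNull())
    )
    silver.write.format("delta").mode("overwrite").saveAsTable("silver.orders")

    # Gold: business-level aggregate ready for analytics
    gold = silver.groupBy("customer_id").agg(F.sum("amount").alias("lifetime_value"))
    gold.write.format("delta").mode("overwrite").saveAsTable("gold.customer_ltv")
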
Qualifications
  • Big Data Platforms: Hands-on experience with enterprise big data platforms. Strong preference for Databricks experience including Unity Catalog, Delta Lake, DLT, Workflows, and Spark optimization;
  • Python: Strong to Expert level proficiency required. Deep understanding of Python for data engineering including PySpark, pandas, data validation libraries (Pydantic, Great Expectations), and production-grade coding practices (a validation sketch follows this list);
  • Data Ingestion: Expert-level experience designing and implementing ingestion patterns including Auto Loader, Change Data Capture (CDC), streaming ingestion, API integrations, and file-based batch processing (see the ingestion sketch after this list);
  • SQL: Advanced SQL skills including complex joins, window functions, CTEs, and query optimization for both OLTP and analytical workloads (SparkSQL, T-SQL, or similar; a windowing example follows this list);
  • Platform Architecture: Understanding of lakehouse architecture patterns, data modeling principles (Kimball, Data Vault), and data governance frameworks;
  • 4+ years of professional experience in data engineering or related roles;
  • Proficiency in English;
  • Track record of delivering production data pipelines at scale;
  • Experience with CI/CD for data pipelines and infrastructure-as-code practices.
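
To ground the Auto Loader requirement above, here is a minimal sketch of incremental file ingestion with Databricks Auto Loader (the cloudFiles source). The paths and target table are hypothetical, and it assumes the spark session a Databricks notebook provides.

    # Incrementally pick up new files from a landing zone (paths are placeholders).
    raw = (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .option("cloudFiles.schemaLocation", "/mnt/checkpoints/orders_schema")
        .load("/mnt/landing/orders/")
    )

    # Write to a Bronze Delta table; availableNow drains the backlog, then stops.
    (
        raw.writeStream.format("delta")
        .option("checkpointLocation", "/mnt/checkpoints/orders")
        .trigger(availableNow=True)
        .toTable("bronze.orders")
    )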
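
For the window-function and CDC items, one common pattern is keeping only the latest version of each key with a CTE and ROW_NUMBER(), shown here through SparkSQL. Table and column names are again illustrative.

    # Keep the most recent record per order_id (a typical CDC dedup step).
    latest = spark.sql("""
        WITH ranked AS (
            SELECT *,
                   ROW_NUMBER() OVER (
                       PARTITION BY order_id
                       ORDER BY updated_at DESC
                   ) AS rn
            FROM bronze.orders
        )
        SELECT * FROM ranked WHERE rn = 1
    """)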
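
Finally, for the validation libraries named above, a minimal Pydantic sketch; the Order model is a hypothetical example, not a schema from this employer.

    from pydantic import BaseModel, ValidationError

    class Order(BaseModel):
        order_id: int
        amount: float
        currency: str = "BRL"

    # Pydantic coerces compatible strings and rejects malformed rows.
    ok = Order(order_id="42", amount="19.90")   # coerced to int and float
    try:
        Order(order_id="not-a-number", amount=10.0)
    except ValidationError as err:
        print(err)  # reports the offending field
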
Nice to have
  • Databricks certifications (Data Engineer Associate/Professional);
  • Experience with cloud platforms (AWS or Azure) and their native data services;
  • Familiarity with orchestration tools (Airflow, Prefect, Databricks Workflows);
  • Experience with data migration projects or multi-source integration scenarios;
  • Knowledge of real-time streaming technologies (Kafka, Event Hubs, Kinesis);
  • Background in SaaS, ERP, or property management software domains.