

Senior Data Engineer

TownSq

Porto Alegre

On-site

BRL 80,000 - 120,000

Full-time

Posted yesterday

Job summary

A leading property management company in Porto Alegre is seeking an experienced Data Engineer to lead data migration efforts and optimize its data platform. The ideal candidate will have deep expertise in Python, modern big data platforms like Databricks, and a proven track record of implementing data ingestion solutions. Key responsibilities include designing scalable data pipelines and collaborating with stakeholders to ensure data quality. Strong communication skills in English are essential for this role.

Qualifications

  • Hands-on experience with enterprise big data platforms, particularly Databricks.
  • Strong proficiency in Python for data engineering applications.
  • Expertise in designing ingestion patterns and ETL/ELT workflows.

Responsibilities

  • Design, build, and maintain scalable data pipelines and ETL/ELT workflows.
  • Collaborate with data scientists, analysts, and business stakeholders.
  • Define and implement medallion architecture patterns for data transformation.

Skills

Python
Databricks
SQL
Data Ingestion Patterns
Data Quality Standards
Collaboration with Stakeholders

Experience

4+ years of professional experience in data engineering

Tools

Airflow
Kafka
AWS
Azure

Job description
Overview

TownSq is hiring!

We are seeking an experienced Data Engineer to join our team and lead data migration efforts through the development and optimization of our data platform. This role is critical to designing and executing dynamic, scalable data extraction and ingestion pipelines, implementing robust ingestion frameworks, and enabling analytics and data science initiatives across the organization.

The ideal candidate will have deep expertise in Python development, hands-on experience with modern big data platforms (preferably Databricks), and a proven track record of designing and implementing enterprise-grade data ingestion solutions. Strong, proactive communication skills in English are important for presenting technical functionality to both technical and non-technical audiences.

What your day-to-day will look like:

  • Design, build, and maintain scalable data pipelines and ETL/ELT workflows using Python and SQL;
  • Architect and implement data ingestion patterns for batch, streaming, and hybrid workloads;
  • Develop and optimize solutions on Databricks big data platforms, leveraging Delta Lake, Spark, and related technologies;
  • Establish and enforce data quality standards, monitoring, and alerting across the data platform;
  • Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver reliable data products;
  • Define and implement medallion architecture (Bronze/Silver/Gold) patterns for data transformation and governance (a minimal sketch follows this list);
  • Mentor junior engineers and contribute to engineering best practices and documentation.
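
For context on the medallion item above: Bronze holds raw ingested data, Silver holds cleaned and validated records, and Gold holds curated, analytics-ready tables. Below is a minimal PySpark sketch of a Bronze-to-Silver promotion step; the table names and cleaning rules are hypothetical, not TownSq's actual pipeline.

    # Bronze -> Silver promotion sketch; table names and rules are illustrative.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

    # Bronze: raw events landed as-is by the ingestion layer.
    bronze = spark.read.table("bronze.property_events")  # hypothetical table

    # Silver: deduplicated, typed, and filtered records.
    silver = (
        bronze
        .dropDuplicates(["event_id"])
        .withColumn("event_ts", F.to_timestamp("event_ts"))
        .filter(F.col("event_id").isNotNull())
    )

    silver.write.format("delta").mode("overwrite").saveAsTable("silver.property_events")
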
What we expect from you:
  • Big Data Platforms: Hands-on experience with enterprise big data platforms. Strong preference for Databricks experience including Unity Catalog, Delta Lake, DLT, Workflows, and Spark optimization;
  • Python: Strong to Expert level proficiency required. Deep understanding of Python for data engineering including PySpark, pandas, data validation libraries (Pydantic, Great Expectations), and production-grade coding practices;
  • Data Ingestion: Expert-level experience designing and implementing ingestion patterns including Auto Loader, Change Data Capture (CDC), streaming ingestion, API integrations, and file-based batch processing (see the Auto Loader sketch after this list);
  • SQL: Advanced SQL skills including complex joins, window functions, CTEs, and query optimization for both OLTP and analytical workloads (SparkSQL, T-SQL, or similar);
  • Platform Architecture: Understanding of lakehouse architecture patterns, data modeling principles (Kimball, Data Vault), and data governance frameworks;
  • 4+ years of professional experience in data engineering or related roles;
  • Proficiency in English;
  • Track record of delivering production data pipelines at scale;
  • Experience with CI/CD for data pipelines and infrastructure-as-code practices.
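
To make the Auto Loader requirement above concrete, here is a minimal sketch using Databricks' cloudFiles streaming source; it runs only on Databricks, and the paths, file format, and target table are illustrative assumptions.

    # Auto Loader ingestion sketch; paths and table names are hypothetical.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    stream = (
        spark.readStream.format("cloudFiles")  # Databricks Auto Loader source
        .option("cloudFiles.format", "json")
        .option("cloudFiles.schemaLocation", "/mnt/schemas/events")  # schema tracking
        .load("/mnt/landing/events")  # hypothetical landing zone
    )

    (
        stream.writeStream
        .option("checkpointLocation", "/mnt/checkpoints/events")
        .trigger(availableNow=True)  # process all pending files, then stop
        .toTable("bronze.events")
    )
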
You will stand out if you have:
  • Databricks certifications (Data Engineer Associate/Professional);
  • Experience with cloud platforms (AWS or Azure) and their native data services;
  • Familiarity with orchestration tools (Airflow, Prefect, Databricks Workflows; a minimal DAG sketch follows this list);
  • Experience with data migration projects or multi-source integration scenarios;
  • Knowledge of real-time streaming technologies (Kafka, Event Hubs, Kinesis);
  • Background in SaaS, ERP, or property management software domains.
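
For the orchestration point above, here is a minimal Airflow DAG sketch (Airflow 2.4+ assumed); the DAG id, schedule, and task body are hypothetical, and in practice the task might trigger a Databricks job instead.

    # Minimal Airflow DAG sketch; all names and the schedule are illustrative.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def run_ingestion():
        # Placeholder step; in practice this might trigger a Databricks job.
        print("ingesting daily batch")

    with DAG(
        dag_id="daily_ingestion",  # hypothetical DAG id
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ):
        PythonOperator(task_id="ingest", python_callable=run_ingestion)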