Senior Data Engineer

Tractian

São Paulo

On-site

BRL 80,000 - 120,000

Full-time

Posted yesterday

Job summary

A leading technology company in São Paulo, Brazil, is seeking a Data Engineer to build and maintain scalable data pipelines and ETL processes. The role involves ensuring data quality and collaborating with backend and analytics engineers. Candidates should have a Bachelor's degree in Data Science or a related field, 2+ years of experience in Data Engineering, and strong skills in SQL and cloud-based platforms. This is an opportunity to work on large datasets and advanced data solutions in a dynamic environment.

Qualifications

  • Bachelor's degree in Data Science, Statistics, Computer Science, or a related field.
  • 2+ years of experience in Data Engineering or Analytics.
  • Highly experienced in SQL and database management systems.

Responsibilities

  • Develop and maintain scalable data pipelines and ETL processes.
  • Design, implement, and optimize data extraction and loading processes.
  • Collaborate with backend and analytics engineers.

Skills

Data Engineering
SQL
Python
Data Warehousing
Advanced English
ETL Tools

Education

Bachelor's degree in Data Science or a related field

Tools

PostgreSQL
ClickHouse
AWS Redshift
Airflow
Kafka

Job description

Analytics at TRACTIAN

The Data Engineering team is responsible for building and maintaining the infrastructure that handles massive datasets flowing through TRACTIAN’s systems. This department ensures the availability, scalability, and performance of data pipelines, enabling seamless access and processing of real‑time and historical data. The team’s core objective is to architect robust, fault‑tolerant data systems that support everything from analytics to machine learning, ensuring that the right data is in the right place, at the right time.

What you'll do

As a Data Engineer, you will build data pipelines that enable data extraction, loading, and transformation for several contexts. The goal is a reliable, available, and trustworthy system that serves as the backbone of the entire analytics pipeline. The challenges range from large datasets to high-throughput systems, so the work is not limited to a small set of data-handling techniques. You will also lead initiatives on data pipeline reliability and observability.
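
To make that concrete, here is a minimal, purely illustrative sketch of such an extract-load-transform pipeline, assuming Airflow's TaskFlow API (Airflow appears in the tools list above); every task, table, and value is a hypothetical placeholder, not anything from TRACTIAN's stack.

```python
# Minimal extract-load-transform sketch with Airflow's TaskFlow API.
# All names and values are hypothetical placeholders.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@hourly", start_date=datetime(2024, 1, 1), catchup=False)
def sensor_readings_etl():
    @task
    def extract() -> list[dict]:
        # A real task would pull rows from a source system (e.g. PostgreSQL);
        # stubbed here with a single in-memory row.
        return [{"sensor_id": 1, "reading": 42.0}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Basic cleansing: drop rows with a missing reading.
        return [row for row in rows if row.get("reading") is not None]

    @task
    def load(rows: list[dict]) -> None:
        # A real task would write to a warehouse (e.g. Redshift or ClickHouse).
        print(f"would load {len(rows)} row(s)")

    load(transform(extract()))


sensor_readings_etl()
```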


Responsibilities
  • Develop and maintain scalable data pipelines and ETL processes.
  • Design, implement, and optimize existing data extraction and loading processes using appropriate data engineering design patterns.
  • Lead data engineering reliability and observability, raising the analytics team's awareness of data flow problems before they become incidents.
  • Collaborate with backend and analytics engineers in a holistic data engineering process, loading data in accordance with the technical requirements.
  • Ensure data quality and consistency across sources by implementing data validation and cleansing techniques (see the sketch after this list).
  • Work with cloud‑based data warehouses and analytics platforms to manage and store large datasets.
  • Monitor and troubleshoot data pipelines to ensure reliable and timely delivery of data.
  • Document data processes, workflows, and best practices to enhance team knowledge and efficiency.
  • Create dashboards as internal data products.
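
The validation bullet above is open-ended by design; as one purely illustrative reading of it, the sketch below filters obviously bad rows with Polars (named later in the requirements). The schema, bounds, and de-duplication keys are assumptions for the example, not TRACTIAN's.

```python
# Hypothetical validation/cleansing step for a sensor-readings batch.
# Column names and bounds are illustrative assumptions.
import polars as pl


def validate_readings(df: pl.DataFrame) -> pl.DataFrame:
    # Fail fast if required columns are missing from the batch.
    required = {"sensor_id", "reading", "recorded_at"}
    missing = required - set(df.columns)
    if missing:
        raise ValueError(f"missing columns: {missing}")

    # Cleanse: drop null readings, keep only a plausible physical range,
    # and de-duplicate repeated (sensor, timestamp) rows.
    return (
        df.drop_nulls(subset=["sensor_id", "reading"])
        .filter(pl.col("reading").is_between(-100.0, 1000.0))
        .unique(subset=["sensor_id", "recorded_at"])
    )
```

In practice a step like this would run inside the pipeline's transform stage, with rejected batches routed to alerting rather than dropped silently.
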
Requirements
  • Bachelor's degree in Data Science, Statistics, Computer Science, or a related field.
  • Advanced English.
  • 2+ years of experience in Data Engineering or Analytics.
  • Highly experienced in SQL and database management systems such as PostgreSQL and ClickHouse.
  • Strong understanding of data warehousing concepts and experience with ETL tools (e.g., Airflow, dbt).
  • Strong experience with programming languages such as Python and the modern data engineering stack (e.g., DuckDB, Polars, …); see the sketch after this list.
  • Experience with streaming tools (e.g., Kafka).
  • Experience with cloud-based data platforms like AWS Redshift.
  • Experience with GoLang/Rust is a plus.
  • Experience with observability tools (e.g., Datadog, Grafana) is a plus.
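
For a sense of what the "modern data stack" requirement can look like in practice, here is a small, hypothetical example of DuckDB querying a Parquet extract in place and handing the result to Polars; the file name and columns are invented for illustration.

```python
# Illustrative only: ad-hoc analytics over a Parquet extract with DuckDB,
# fetched as a Polars DataFrame. File and column names are hypothetical.
import duckdb

con = duckdb.connect()  # in-memory database

df = con.execute(
    """
    SELECT sensor_id, avg(reading) AS avg_reading
    FROM read_parquet('readings.parquet')
    GROUP BY sensor_id
    ORDER BY avg_reading DESC
    """
).pl()  # DuckDB results convert directly to a Polars DataFrame
print(df)
```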
