
Intermediate Data Platform Engineer

Elios Talent

Brasília

On-site

BRL 120.000 - 160.000

Full-time

Posted today

Job summary

A leading tech recruitment firm is seeking an Intermediate Data Platform Engineer to enhance scalable data solutions and support innovative analytics. In this pivotal role, you'll build and maintain data infrastructure using technologies such as Spark and Kafka while ensuring data reliability and security. Applying your 3+ years of relevant experience, you'll contribute to a modern data ecosystem that promotes collaboration and experimentation across teams. The role is based on-site in Brasília.

Qualifications

  • 3+ years in data platform, data engineering, or platform engineering.
  • Experience with CI/CD workflows.
  • Experience with data governance principles.

Responsibilities

  • Build and maintain scalable data storage and processing systems.
  • Support Iceberg-based data lakehouse architecture.
  • Automate components of A/B testing and experimentation frameworks.

Skills

  • Production-level coding experience in Python
  • Experience supporting Iceberg-based data warehouses
  • Hands-on experience with Spark
  • Hands-on experience with Kafka
  • Familiarity with RBAC systems
  • Experience with Kubernetes
  • Strong troubleshooting skills
  • Strong communication skills

Education

Degree in Computer Science, Engineering, or related field

Tools

  • Spark
  • Kafka
  • Trino
  • Kubernetes
  • Helm

Job description
Data Platform Engineer – Intermediate
Key Highlights

Build and expand core data platform components powering analytics, experimentation, and algorithm development

Develop scalable ETL, streaming, and metadata systems using Spark, Kafka, and modern lakehouse technologies

Support high-volume data transformation, federated querying, and performance optimization

Strengthen RBAC, data governance, and secure access patterns for multi-team data environments

Enable experimentation and data development with reliable tooling and automated data workflows

Position Overview

We are seeking an Intermediate Data Platform Engineer to help build and scale a modern data ecosystem. In this role, you'll contribute to distributed data systems, ETL/streaming pipelines, metadata platforms, and frameworks that support experimentation across the organization.

You will develop platform features, enhance observability, ensure data reliability, and collaborate with engineering and data science teams to support new initiatives.

Key Responsibilities
Data Platform Engineering

Build and maintain scalable data storage, transformation, and processing systems

Support Iceberg-based data lakehouse architecture and metadata catalogs

Develop ETL and streaming pipelines using Spark, Kafka, Parquet, and Iceberg

Build backend services in Python, Go, Scala, or Java to enhance platform capabilities

Contribute to Trino query performance and metadata optimization

Experimentation & Analytics Enablement

Develop tooling that improves experiment design, tracking, and analysis

Collaborate with data science and analytics teams to support experimentation workflows

Automate components of A/B testing and experimentation frameworks

Data Reliability, Security & Governance

Maintain and improve platform-level RBAC and AWS IAM configuration

Use Datadog for logging, alerting, and monitoring platform performance

Support schema evolution, data quality, and reproducibility across pipelines

Infrastructure & Deployment

Deploy and support services using Kubernetes and Helm

Improve automation, performance, and cost efficiency of data workloads

Contribute to capacity planning for data-intensive services

Collaboration & Documentation

Participate in cross-team engineering discussions

Document platform components to support internal stakeholders

Qualifications

Degree in Computer Science, Engineering, or related field

3+ years in data platform, data engineering, or platform engineering

Production-level coding experience in Python, Go, Scala, or Java

Experience supporting Iceberg-based data warehouses

Hands-on experience with Spark, Kafka, Trino, and distributed systems

Familiarity with RBAC systems, IAM, and data governance principles

Experience with Kubernetes, Helm, and CI/CD workflows

Strong troubleshooting and communication skills
Why Join Us

Join a team focused on building scalable, modern data infrastructure that powers experimentation and analytics. You'll gain exposure to leading-edge tools and architectures while contributing to systems used across the organization.
