
A leading tech recruitment firm is seeking an Intermediate Data Platform Engineer to enhance scalable data solutions and support innovative analytics. In this pivotal role, you'll build and maintain data infrastructure using technologies such as Spark and Kafka while ensuring data reliability and security. Drawing on your 3+ years of relevant experience, you'll contribute to a modern data ecosystem that promotes collaboration and experimentation across teams. Join a forward-thinking, Brasília-based team and put your skills to work.
Build and expand core data platform components powering analytics, experimentation, and algorithm development
Develop scalable ETL, streaming, and metadata systems using Spark, Kafka, and modern lakehouse technologies
Support high-volume data transformation, federated querying, and performance optimization
Strengthen RBAC, data governance, and secure access patterns for multi-team data environments
Enable experimentation and data development with reliable tooling and automated data workflows
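As a brief illustration of the secure access patterns mentioned above (this sketch is not part of the posting; all role and permission names are hypothetical), a role-based access check typically resolves a user's roles to a set of permissions before granting a dataset action:

```python
# Minimal RBAC sketch: roles map to permission sets, and a check
# grants access only if one of the user's roles carries the permission.
# Role and permission names are illustrative, not from the posting.

ROLE_PERMISSIONS = {
    "analyst": {"dataset:read"},
    "data_engineer": {"dataset:read", "dataset:write"},
    "platform_admin": {"dataset:read", "dataset:write", "dataset:grant"},
}

def is_allowed(user_roles, permission):
    """Return True if any of the user's roles carries the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in user_roles)
```

For example, `is_allowed(["analyst"], "dataset:write")` returns `False`, while a `data_engineer` role would pass the same check.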
We are seeking an Intermediate Data Platform Engineer to help build and scale a modern data ecosystem. In this role, you'll contribute to distributed data systems, ETL/streaming pipelines, metadata platforms, and frameworks that support experimentation across the organization.
You will develop platform features, enhance observability, ensure data reliability, and collaborate with engineering and data science teams to support new initiatives.
Build and maintain scalable data storage, transformation, and processing systems
Support Iceberg-based data lakehouse architecture and metadata catalogs
Develop ETL and streaming pipelines using Spark, Kafka, Parquet, and Iceberg
Build backend services in Python, Go, Scala, or Java to enhance platform capabilities
Contribute to Trino query performance and metadata optimization
Develop tooling that improves experiment design, tracking, and analysis
Collaborate with data science and analytics teams to support experimentation workflows
Automate components of A/B testing and experimentation frameworks
Maintain and improve platform-level RBAC and AWS IAM configuration
Use Datadog for logging, alerting, and monitoring platform performance
Support schema evolution, data quality, and reproducibility across pipelines
Deploy and support services using Kubernetes and Helm
Improve automation, performance, and cost efficiency of data workloads
Contribute to capacity planning for data-intensive services
Participate in cross-team engineering discussions
Document platform components to support internal stakeholders
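To illustrate the experimentation automation listed above (a sketch for context, not the team's actual implementation; experiment names and bucket counts are hypothetical), a common approach assigns users to A/B buckets deterministically by hashing the user ID together with the experiment name:

```python
import hashlib

def assign_bucket(user_id: str, experiment: str, n_buckets: int = 2) -> int:
    """Deterministically map a user to an experiment bucket.

    Hashing user_id together with the experiment name keeps each user's
    assignment stable across sessions while decorrelating bucket
    membership between different experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % n_buckets
```

Because assignment is a pure function of `(user_id, experiment)`, the same user always lands in the same bucket for a given experiment, with no assignment state to store.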
Degree in Computer Science, Engineering, or related field
3+ years in data platform, data engineering, or platform engineering
Production-level coding experience in Python, Go, Scala, or Java
Experience supporting Iceberg-based data warehouses
Hands‑on experience with Spark, Kafka, Trino, and distributed systems
Familiarity with RBAC systems, IAM, and data governance principles
Experience with Kubernetes, Helm, and CI/CD workflows
Join a team focused on building scalable, modern data infrastructure that powers experimentation and analytics. You’ll gain exposure to leading‑edge tools and architectures while contributing to systems used across the organization.