
Data Engineer - Platform & Pipelines


Rio de Janeiro

Hybrid

BRL 60.000 - 80.000

Full-time

Posted today

Job Summary

A leading global energy services company in Rio de Janeiro is seeking a Data Engineer to implement a strict Medallion Architecture for organizing industrial data. The ideal candidate will have 3+ years of experience and proficiency in Apache Airflow and Databricks. Responsibilities include developing ELT pipelines, optimizing processing workflows, and ensuring data quality. This position offers full-time employment with competitive compensation based on experience.

Qualifications

  • 3+ years of experience in Data Engineering.
  • Strong proficiency in Apache Airflow and Databricks.
  • Experience implementing Medallion/Delta Lake architectures.
  • Strong SQL and Python skills.
  • Advanced English communication skills.

Responsibilities

  • Develop and maintain robust Airflow DAGs to orchestrate complex data transformations.
  • Use Spark and Polars for data cleaning, enrichment, and aggregation.
  • Implement the Medallion Architecture ensuring clear separation of data layers.
  • Optimize Polars/Spark jobs and SQL queries for performance.

Skills

Apache Airflow
Databricks
SQL
Python

Tools

Delta Lake
Polars
PySpark
Job Description

Introduction

We are looking for the right people - people who want to innovate, achieve, grow and lead. We attract and retain the best talent by investing in our employees and empowering them to develop themselves and their careers. Experience the challenges, rewards and opportunity of working for one of the world's largest providers of products and services to the global energy industry.

Job Duties

We are implementing a strict Medallion Architecture to organize petabytes of industrial data. This role is for a Data Engineer who excels at transforming raw chaos into structured, queryable assets.

You will build and maintain the ELT pipelines that move data from "Bronze" (Raw) to "Silver" (Cleaned) and "Gold" (Aggregated). You will work with Delta Lake (On-prem/Databricks), Polars and Airflow to ensure data quality and availability for Data Scientists and the Knowledge Graph.

What You'll Do
  • Pipeline Development: Develop and maintain robust Airflow DAGs to orchestrate complex data transformations.
  • Data Transformation: Use Spark (when scale requires) and Polars to clean, enrich, and aggregate data according to business logic.
  • Architecture Implementation: Enforce the Medallion Architecture patterns, ensuring clear separation of concerns between data layers.
  • Performance Tuning: Optimize processing workflows (Polars/Spark jobs) and SQL queries to reduce costs and execution time; make intelligent decisions on when to use Polars vs. Spark.
  • Deployment & Operations: Manage code deployment to on-prem and cloud infrastructure, including containerization and environment configuration.
  • Data Quality: Implement comprehensive data validation checks and quality gates between medallion layers.
  • Data Cataloging: Maintain the metadata and catalog entries to ensure all data assets are discoverable and documented.
The Technology Stack
  • Orchestration: Apache Airflow
  • Data Processing: Polars (primary for ETL), PySpark/SQL (for massive scale)
  • Compute: Single-node workers (Polars), Databricks/Spark clusters (when scale requires)
  • Storage: Delta Lake, Parquet, S3/Blob Storage, MinIO
  • Language: Python 3.12+ (w/ Polars), SQL
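One way to make the Polars-vs-Spark routing decision mentioned above explicit is a simple size-based rule; the 100 GB threshold below is a made-up example, not a value from this posting:

```python
# Illustrative engine-selection heuristic: route small/medium datasets to a
# single-node Polars worker and only fall back to a Spark cluster at large
# scale. The cutoff is hypothetical and would be tuned per workload.
SPARK_THRESHOLD_BYTES = 100 * 1024**3  # 100 GB (example value)


def choose_engine(input_size_bytes: int) -> str:
    """Return the processing engine for a dataset of the given size."""
    if input_size_bytes < SPARK_THRESHOLD_BYTES:
        return "polars"  # fits comfortably on a single-node worker
    return "spark"       # needs a distributed Databricks/Spark cluster


print(choose_engine(5 * 1024**3))    # small extract -> polars
print(choose_engine(500 * 1024**3))  # large slice  -> spark
```

In practice the decision also weighs join complexity and memory headroom, but a size gate like this is a common first cut.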
Qualifications
  • The Structured Thinker: You love organizing data. You understand the importance of schemas, data typing, and normalization.
  • Quality Obsessive: You don't just move data; you test it. You implement checks to ensure no bad data reaches the Gold layer.
  • Pipeline Builder: You view data engineering as software engineering. You write modular, reusable code for your transformations.
Knowledge, Skills, and Abilities
Must Haves:
  • 3+ years of experience in Data Engineering.
  • Strong proficiency in Apache Airflow and Databricks.
  • Experience implementing Medallion/Delta Lake architectures.
  • Strong SQL and Python skills.
  • Advanced English communication skills.
Good to Have:
  • Experience with Unity Catalog or other governance tools.
  • Familiarity with dbt (data build tool).
  • Background in processing telemetry or sensor data.

Halliburton is an Equal Opportunity Employer. Employment decisions are made without regard to race, color, religion, disability, genetic information, pregnancy, citizenship, marital status, sex/gender, sexual preference/orientation, gender identity, age, veteran status, national origin, or any other status protected by law or regulation.

Location

Rua Paulo Emidio Barbosa 485 Q, Rio de Janeiro, Rio de Janeiro, 291941, Brazil

Job Details

Requisition Number: 205556

Experience Level: Entry-Level

Job Family: Engineering/Science/Technology

Product Service Line: Landmark Software & Services

Full Time / Part Time: Full Time

Additional Locations for this position:

Compensation Information

Compensation is competitive and commensurate with experience.

Job Segment: Cloud, Testing, Database, SQL, Technology
