Brazil - Remote: Data Engineer - Platform & Pipelines

Halliburton

Remote

BRL 60,000 - 80,000

Full-time

Yesterday

Job Summary

A global energy services provider is seeking a Data Engineer to work remotely. The role involves developing robust ELT pipelines and ensuring data quality using technologies such as Apache Airflow, Polars, and Databricks. Candidates must have a Bachelor's degree in a related field and at least 3 years of experience in Data Engineering, with strong skills in SQL and Python. This position offers the opportunity to innovate and grow in a competitive environment.

Qualifications

  • 3+ years of experience in Data Engineering.
  • Strong proficiency in Apache Airflow and Databricks.
  • Experience implementing Medallion/Delta Lake architectures.

Responsibilities

  • Develop and maintain robust Airflow DAGs for data transformations.
  • Optimize processing workflows with Polars/Spark.
  • Implement comprehensive data validation checks.

Skills

Apache Airflow
Databricks
SQL
Python
Data Engineering

Education

Bachelor's degree in Computer Science, Engineering, or a related field

Tools

Polars
PySpark
Delta Lake

Job Description

We are looking for the right people — people who want to innovate, achieve, grow and lead. We attract and retain the best talent by investing in our employees and empowering them to develop themselves and their careers. Experience the challenges, rewards and opportunity of working for one of the world’s largest providers of products and services to the global energy industry.

Job Duties

We are implementing a strict Medallion Architecture to organize petabytes of industrial data. This role is for a Data Engineer who excels at transforming raw chaos into structured, queryable assets.

You will build and maintain the ELT pipelines that move data from 'Bronze' (Raw) to 'Silver' (Cleaned) and 'Gold' (Aggregated). You will work with Delta Lake (On-prem/Databricks), Polars and Airflow to ensure data quality and availability for Data Scientists and the Knowledge Graph.
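
To make the layer promotion concrete, here is a minimal sketch of a Bronze-to-Silver step in Polars. The paths, column names, timestamp format, and cleaning rules are illustrative assumptions, not details from this posting; a production job would likely target a Delta table (e.g. via DataFrame.write_delta), while plain Parquet keeps the sketch dependency-free.

    import polars as pl

    # Hypothetical lake locations; the real layout is not specified in the posting.
    BRONZE_PATH = "s3://lake/bronze/sensor_readings/*.parquet"
    SILVER_PATH = "s3://lake/silver/sensor_readings.parquet"

    def bronze_to_silver() -> None:
        # Lazily scan the raw Bronze files so filters can be pushed down.
        raw = pl.scan_parquet(BRONZE_PATH)

        cleaned = (
            raw
            # Enforce a schema: raw strings become typed columns.
            .with_columns(
                pl.col("reading_ts").str.to_datetime("%Y-%m-%dT%H:%M:%S"),
                pl.col("value").cast(pl.Float64),
            )
            # Basic quality gate: rows missing a key or value never reach Silver.
            .filter(pl.col("sensor_id").is_not_null() & pl.col("value").is_not_null())
            # De-duplicate replayed records.
            .unique(subset=["sensor_id", "reading_ts"])
        )

        cleaned.collect().write_parquet(SILVER_PATH)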

What You’ll Do

  • Pipeline Development: Develop and maintain robust Airflow DAGs to orchestrate complex data transformations (a minimal DAG sketch follows this list).
  • Data Transformation: Use Spark (when scale requires) and Polars to clean, enrich, and aggregate data according to business logic.
  • Architecture Implementation: Enforce the Medallion Architecture patterns, ensuring clear separation of concerns between data layers.
  • Performance Tuning: Optimize Polars/Spark processing jobs and SQL queries to reduce costs and execution time; make intelligent decisions on when to use Polars vs. Spark.
  • Deployment & Operations: Manage code deployment to on-prem and cloud infrastructure, including containerization and environment configuration.
  • Data Quality: Implement comprehensive data validation checks and quality gates between medallion layers (a minimal gate example follows the technology stack list below).
  • Data Cataloging: Maintain the metadata and catalog entries to ensure all data assets are discoverable and documented.
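
As a sketch of the orchestration duty above, here is a minimal Airflow 2.x DAG that chains the medallion promotions. The DAG id, schedule, and placeholder callable are assumptions made for the example, not part of the actual codebase.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def promote(source_layer: str, target_layer: str) -> None:
        # Placeholder for the real Polars/Spark transformation between layers.
        print(f"promoting {source_layer} -> {target_layer}")

    with DAG(
        dag_id="medallion_sensor_pipeline",  # hypothetical name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ):
        bronze_to_silver = PythonOperator(
            task_id="bronze_to_silver",
            python_callable=promote,
            op_kwargs={"source_layer": "bronze", "target_layer": "silver"},
        )
        silver_to_gold = PythonOperator(
            task_id="silver_to_gold",
            python_callable=promote,
            op_kwargs={"source_layer": "silver", "target_layer": "gold"},
        )
        # Gold is only built after Silver succeeds; validation hooks sit between them.
        bronze_to_silver >> silver_to_gold
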
The Technology Stack
  • Orchestration: Apache Airflow
  • Data Processing: Polars (primary for ETL), PySpark/SQL (for massive scale)
  • Compute: Single-node workers (Polars), Databricks/Spark clusters (when scale requires)
  • Storage: Delta Lake, Parquet, S3/Blob Storage, MinIO
  • Languages: Python 3.12+ (with Polars), SQL
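
Below is a minimal sketch of the kind of quality gate mentioned under Data Quality, written against the Polars stack listed above. The specific rules (non-null keys, value range, non-empty frame) are illustrative assumptions about what "bad data" means in this context.

    import polars as pl

    def passes_quality_gate(df: pl.DataFrame) -> bool:
        # Each rule is an assumed example; real gates would encode business logic.
        checks = {
            "no_null_keys": df["sensor_id"].null_count() == 0,
            "values_in_range": bool(df["value"].is_between(-1e6, 1e6).all()),
            "rows_present": df.height > 0,
        }
        failed = [name for name, ok in checks.items() if not ok]
        if failed:
            # Failing loudly lets the orchestrator block promotion to Gold.
            raise ValueError(f"quality gate failed: {failed}")
        return True
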
Qualifications

Must Haves:

  • Completed Bachelor's degree in Computer Science, Engineering, or a related field.
  • 3+ years of experience in Data Engineering.
  • Strong proficiency in Apache Airflow and Databricks.
  • Experience implementing Medallion/Delta Lake architectures.
  • Strong SQL and Python skills.
  • Advanced English communication skills.

Good to Have:

  • Experience with Unity Catalog or other governance tools.
  • Familiarity with dbt (data build tool).
  • Background in processing telemetry or sensor data.

Knowledge, Skills, and Abilities

  • The Structured Thinker: You love organizing data. You understand the importance of schemas, data typing, and normalization.
  • Quality Obsessive: You don’t just move data; you test it. You implement checks to ensure no bad data reaches the Gold layer.
  • Pipeline Builder: You view data engineering as software engineering. You write modular, reusable code for your transformations.

Halliburton is an Equal Opportunity Employer. Employment decisions are made without regard to race, color, religion, disability, genetic information, pregnancy, citizenship, marital status, sex/gender, sexual preference/orientation, gender identity, age, veteran status, national origin, or any other status protected by law or regulation.

Location

Fully Remote position.

Job Details

Requisition Number: 205556

Experience Level: Entry-Level

Job Family: Engineering/Science/Technology

Product Service Line: Landmark Software & Services

Full Time / Part Time: Full-time

Employee Group: Temporary

Compensation Information

Compensation is competitive and commensurate with experience.
