
Intermediate Data Engineer - OP01793

Dev.Pro

São Paulo

Remote

USD 60,000 - 80,000

Full-time

4 days ago

Job summary

A global software company is seeking an Intermediate Data Engineer to contribute to large-scale data modernization projects. You will be responsible for migrating and transforming legacy data pipelines to a cloud environment. Candidates should have 3+ years in data engineering with strong SQL and GCP skills, and a degree in a related field. The role is fully remote and offers generous benefits including 30 paid days off annually and health insurance.

Benefits

30 paid days off per year
5 paid sick days
Partially covered health insurance
Wellness bonus for gym memberships
English lessons and training programs

Qualifications

  • 3+ years in data engineering.
  • Strong proficiency in ETL for large data volumes.
  • Expert-level SQL skills.
  • Hands-on experience with GCP services.
  • Proficiency in Python for ETL scripting.
  • Upper-Intermediate+ English level.

Responsibilities

  • Review and analyze existing ETL solutions.
  • Design and migrate data pipelines to GCP.
  • Build and manage data transformations with dbt.
  • Ensure new infrastructure meets performance standards.
  • Develop migration scripts for historical data.

Skills

Data engineering
ETL design
SQL
GCP
Python
dbt
Collaboration

Education

Degree in Computer Science, Data Engineering, Information Systems, or a related field

Tools

Snowflake
Apache Iceberg
Airflow
Docker

Job description

Are you in Brazil, Argentina, or Colombia? Join us as we actively recruit in these locations, offering a comfortable remote environment. Submit your CV in English, and we'll get back to you!

We invite an Intermediate Data Engineer to contribute to a large-scale data modernization effort for a major enterprise client. You’ll help migrate and transform complex legacy data pipelines to a modern, custom-built cloud environment for improved scalability, maintainability, and compliance. You’ll work closely with architects, DevOps, QA, and product stakeholders to deliver scalable, reliable data solutions that meet unique business needs.

What's in it for you:

  • Join a fully integrated delivery team built on collaboration, transparency, and mutual respect
  • Contribute to high-impact data platform transformation and gain experience with Google Landing Zones
  • Work hands-on with modern, in-demand technologies like GCP, Snowflake, Apache Iceberg, dbt, Airflow, Dataflow, and BigQuery

Is that you?

  • 3+ years in data engineering and data warehouse modeling
  • Strong proficiency in designing and building ETL for large data volumes and streaming solutions
  • Expert-level SQL skills and experience in Snowflake and Apache Iceberg tables
  • Hands-on experience with GCP services (BigQuery, GCS, Airflow, Dataflow, Dataproc, Pub/Sub)
  • Proficiency in Python for ETL scripting and DAG development (see the sketch after this list)
  • Experience using dbt for data transformation and orchestration
  • Familiarity with CI/CD processes and tools (Git, Terraform, Serverless)
  • Degree in Computer Science, Data Engineering, Information Systems, or related fields
  • Strong communication and collaboration abilities
  • Upper-Intermediate+ English level
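To give a rough flavor of the Python and DAG requirement above, here is a minimal sketch of an Airflow DAG that loads a daily CSV extract from GCS into a BigQuery staging table. It assumes Airflow 2.4+ and the google-cloud-bigquery client; the project, bucket, and table names are hypothetical placeholders, not part of this role's actual codebase.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from google.cloud import bigquery


def load_daily_extract(ds, **_):
    """Load one day's CSV extract from GCS into a BigQuery staging table."""
    client = bigquery.Client()
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,
        write_disposition="WRITE_TRUNCATE",
    )
    # Hypothetical bucket layout and table name -- placeholders only.
    uri = f"gs://example-extract-bucket/orders/{ds}/*.csv"
    client.load_table_from_uri(
        uri, "example-project.staging.orders", job_config=job_config
    ).result()  # block until the load job finishes


with DAG(
    dag_id="orders_daily_load",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="load_to_bigquery", python_callable=load_daily_extract)
```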

Desirable:

  • Experience building and managing streaming data pipelines and event-driven architectures (see the sketch after this list)
  • Experience writing Bash scripts
  • Experience with Java for Dataflow jobs
  • Familiarity with data lakehouse architectures using Iceberg tables
  • Proficiency with Docker for containerizing data pipelines and supporting orchestration
  • Familiarity with AI-assisted tools like GitHub Copilot
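As a small illustration of the event-driven item above, the sketch below consumes messages from a Pub/Sub subscription with the google-cloud-pubsub client. The project and subscription names are made up, and a production streaming pipeline on this stack would more likely run in Dataflow.

```python
from concurrent.futures import TimeoutError

from google.cloud import pubsub_v1


def handle_event(message: pubsub_v1.subscriber.message.Message) -> None:
    # In a real pipeline this would validate the event and land it downstream.
    print(f"received {message.data!r}")
    message.ack()


subscriber = pubsub_v1.SubscriberClient()
# Hypothetical project and subscription -- placeholders only.
subscription = subscriber.subscription_path("example-project", "orders-events-sub")
future = subscriber.subscribe(subscription, callback=handle_event)

with subscriber:
    try:
        future.result(timeout=60)  # listen for one minute, then shut down
    except TimeoutError:
        future.cancel()
        future.result()
```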

Key responsibilities and your contribution

In this role, you'll be actively involved in key data engineering activities, helping ensure the project’s success and timely delivery.

  • Review and analyze existing ETL solutions for migration to the new architecture
  • Design, optimize, and migrate batch and streaming data pipelines to the GCP Landing Zone
  • Build and manage data transformations with dbt, supporting ELT pipelines in Snowflake
  • Ensure the new data infrastructure meets performance and quality SLAs/SLOs
  • Implement monitoring and alerting for pipelines to ensure system fault tolerance
  • Develop migration scripts to transfer historical data to Iceberg tables (see the sketch after this list)
  • Collaborate closely with the team and other stakeholders to align on data requirements and solutions
  • Participate in code reviews, design discussions, and technical planning
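As a hedged sketch of the historical-data migration item above, one way to rewrite a legacy Parquet export as an Iceberg table with PySpark is shown below. The catalog, bucket, table, and `event_date` column names are hypothetical, and it assumes the Iceberg Spark runtime package is available on the cluster (for example as a Dataproc job dependency).

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

# Hypothetical GCS warehouse and catalog name -- placeholders only.
spark = (
    SparkSession.builder
    .appName("historical-migration-to-iceberg")
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.type", "hadoop")
    .config("spark.sql.catalog.lake.warehouse", "gs://example-warehouse/iceberg")
    .getOrCreate()
)

# Read the legacy Parquet export and rewrite it as an Iceberg table,
# partitioned by event date for efficient pruning in downstream queries.
legacy = spark.read.parquet("gs://example-legacy-bucket/orders/")

(
    legacy.writeTo("lake.analytics.orders_history")
    .using("iceberg")
    .partitionedBy(col("event_date"))
    .createOrReplace()
)
```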

What's working at Dev.Pro like?

Dev.Pro is a global company that's been building great software since 2011. Our team values fairness, high standards, openness, and inclusivity for everyone — no matter your background.

We are 99.9% remote — you can work from anywhere in the world
Get 30 paid days off per year to use however you like — vacations, holidays, or personal time
5 paid sick days, up to 60 days of medical leave, and up to 6 paid days off per year for major family events like weddings, funerals, or the birth of a child
Partially covered health insurance after the probation period, plus a wellness bonus for gym memberships, sports nutrition, and similar needs after 6 months
We pay in U.S. dollars and cover all approved overtime
Join English lessons and Dev.Pro University programs, and take part in fun online activities and team-building events

Our next steps:

Submit a CV in English — Intro call with a Recruiter — Internal interview — Client interview — Offer

Interested? Find out more:

How we work

LinkedIn Page

Our website

IG Page
