Job Search and Career Advice Platform

Senior Data Engineer

AM53

São Paulo

On-site

BRL 120,000 - 150,000

Full-time

Posted 30+ days ago

Job Summary

A leading technology company in São Paulo is seeking a highly skilled Senior Data Engineer to lead a significant data migration project to Snowflake. The ideal candidate will have strong expertise in data pipelines and transformations using SQL and Python, and a proven ability to optimize data solutions. This is a full-time position in a collaborative team environment.

Qualifications

  • 3+ years of experience in data engineering and building scalable data pipelines.
  • Solid experience with data migration projects and large datasets.
  • Strong hands-on experience with Snowflake: data loading, querying, and performance optimization.
  • Proficient in dbt for data transformation and modeling.
  • Proven experience with Apache Airflow for scheduling data workflows.
  • Expert-level SQL skills with complex joins and performance tuning.
  • Proficient in Python for data manipulation and automation.
  • Understanding of data warehousing concepts and ELT principles.
  • Experience with CI/CD pipelines and version control systems like GitHub.

Responsibilities

  • Design and implement data pipelines to migrate data to Snowflake.
  • Translate data requirements into dbt models and transformations.
  • Build and maintain Airflow DAGs for data ingestion and processing.
  • Optimize data pipelines for performance and cost efficiency.
  • Develop robust data models in Snowflake using best practices.
  • Monitor data pipelines for reliability and availability.

Skills

Apache Hive
S3
Hadoop
Redshift
Spark
AWS
Apache Pig
NoSQL
Big Data
Data Warehouse
Kafka
Scala

Education

Bachelor's or Master's degree in Computer Science, Engineering, or related field

Tools

Snowflake
dbt
Apache Airflow
PySpark
AWS Athena
Google BigQuery

Job Description

We are seeking a highly skilled and motivated Senior Data Engineer to play a key role in our significant data platform migration project. You will be responsible for designing, developing, and optimizing data pipelines and transformations as we transition from our current Airflow, PySpark, Athena, and BigQuery environment to a modern stack built on Airflow, dbt, and Snowflake. This role requires a strong understanding of data warehousing principles, excellent SQL and Python skills, and a proven ability to deliver robust and scalable data solutions.

Your Responsibilities

Data Migration & Pipeline Development:

Design, develop, and implement efficient and reliable data pipelines to migrate data from PySpark / Athena / BigQuery to dbt / Snowflake.

Translate complex data requirements into actionable dbt models and transformations within Snowflake.

Build and maintain Airflow DAGs for orchestrating data ingestion, transformation, and loading processes.

Optimize existing data pipelines for performance, scalability, and cost efficiency in the new Snowflake environment.
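
As a rough illustration of what DAG orchestration involves, the dependency ordering at the heart of an Airflow DAG can be sketched with nothing but the Python standard library. The task names below are hypothetical, and a real pipeline would use Airflow operators rather than `graphlib`; this only shows the ordering guarantee a DAG provides:

```python
from graphlib import TopologicalSorter

# Hypothetical migration tasks mapped to their upstream dependencies,
# mirroring an extract -> load -> transform -> validate ELT flow.
deps = {
    "extract_from_athena": set(),
    "load_raw_to_snowflake": {"extract_from_athena"},
    "run_dbt_models": {"load_raw_to_snowflake"},
    "validate_row_counts": {"run_dbt_models"},
}

# static_order() yields each task only after all of its dependencies,
# which is exactly the scheduling contract an Airflow DAG enforces.
order = list(TopologicalSorter(deps).static_order())
print(order)
```

Because the dependencies here form a single chain, the order is fully determined; in a wider DAG, independent branches could run in parallel.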

Data Modeling & Transformation:

Develop and maintain robust data models in Snowflake using dbt, adhering to best practices for data warehousing and analytics.

Write complex SQL queries for data extraction, transformation, and loading.
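
To make the SQL expectation concrete, here is a small self-contained sketch of the kind of window-function query this role calls for, using the standard library's `sqlite3` as a stand-in for Snowflake (the `orders` table and its columns are invented). It keeps only the most recently loaded copy of each record, a common deduplication step when reconciling migrated data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (order_id INTEGER, customer_id INTEGER, loaded_at TEXT);
INSERT INTO orders VALUES
  (1, 100, '2024-01-01'),
  (1, 100, '2024-02-01'),  -- later reload of the same order
  (2, 200, '2024-01-15');
""")

# ROW_NUMBER() per order_id, newest first; keep only the latest copy.
latest = conn.execute("""
SELECT order_id, loaded_at FROM (
  SELECT order_id, loaded_at,
         ROW_NUMBER() OVER (PARTITION BY order_id
                            ORDER BY loaded_at DESC) AS rn
  FROM orders
)
WHERE rn = 1
ORDER BY order_id
""").fetchall()

print(latest)  # [(1, '2024-02-01'), (2, '2024-01-15')]
```

The same `ROW_NUMBER() OVER (PARTITION BY ... ORDER BY ...)` pattern works in Snowflake SQL and inside dbt models.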

Ensure data quality, accuracy, and consistency throughout the migration and ongoing data operations.
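
One hedged sketch of the consistency responsibility: compare a row count plus an order-insensitive checksum of source rows against migrated rows. This is pure Python for illustration (the helper name and sample rows are invented); in practice the equivalent checks would run as SQL against both warehouses:

```python
import hashlib

def table_fingerprint(rows):
    """Row count plus an order-insensitive checksum: hash each row,
    XOR the digests so row order does not matter.
    Caveat: a duplicated pair of identical rows cancels out under XOR,
    which is why the count is carried alongside the checksum."""
    acc = 0
    for row in rows:
        digest = hashlib.sha256(repr(row).encode()).digest()
        acc ^= int.from_bytes(digest, "big")
    return len(rows), acc

source = [(1, "alice"), (2, "bob")]
target = [(2, "bob"), (1, "alice")]  # same rows, different load order

print(table_fingerprint(source) == table_fingerprint(target))  # True
```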

Troubleshooting & Optimization:

Identify, diagnose, and resolve data-related issues, performance bottlenecks, and data discrepancies.

Proactively monitor data pipelines and systems to ensure smooth operation and data availability.

Implement performance tuning strategies within Snowflake and dbt to optimize query execution and resource utilization.

Collaboration & Documentation:

Collaborate closely with Lead Data Engineers, Data Analysts, and other stakeholders to understand data needs and deliver effective solutions.

Contribute to the development and maintenance of comprehensive technical documentation for data pipelines, models, and processes.

Participate in code reviews and contribute to the team's adherence to coding standards and best practices.

Requirements
Qualifications

Bachelor's or Master's degree in Computer Science, Engineering, or a related quantitative field.

3+ years of experience in data engineering with a focus on building and maintaining scalable data pipelines.

Solid experience with data migration projects and working with large datasets.

Strong hands-on experience with Snowflake, including data loading, querying, and performance optimization.

Proficiency in dbt (data build tool) for data transformation and modeling.

Proven experience with Apache Airflow for scheduling and orchestrating data workflows.

Expert-level SQL skills, including complex joins, window functions, and performance tuning.

Proficiency in Python for data manipulation, scripting, and automation for edge cases.

Familiarity with PySpark, AWS Athena, and Google BigQuery (source systems).

Understanding of data warehousing concepts, dimensional modeling, and ELT principles.

Knowledge of building CI/CD pipelines for code deployment.

Experience with version control systems (e.g., GitHub).

Excellent problem-solving, analytical, and communication skills.

Ability to work independently and as part of a collaborative team in an agile environment.

Must speak and write English fluently; effective communicator.

Required Experience:

Senior IC

Employment Type: Full Time

Experience: years

Vacancy: 1
