Big Data Engineer

Sigma Software

São Paulo

Remote

BRL 80,000 - 120,000

Full-time

Posted today

Job Summary

A leading software company in Brazil is seeking a Data Engineer to develop and maintain ETL pipelines and data integration services. The ideal candidate will have 2-4 years of experience, strong skills in Python and SQL, and familiarity with AWS data services. This role offers a full-time position with remote work options available.

Skills

Python
SQL
AWS services
NoSQL databases
Analytical mindset
Problem-solving skills
English (Upper-Intermediate)

Tools

Apache Hive
Hadoop
Redshift
Spark
Kafka
Scala

Job Description
Responsibilities
  • Develop and maintain ETL pipelines and data integration services using Python and SQL
  • Work with AWS services (S3, DynamoDB, Lambda, Glue) and NoSQL databases (MongoDB, DynamoDB)
  • Design, optimize and validate data flows ensuring data quality across systems
  • Collaborate with Senior engineers on architecture and performance improvements
  • Troubleshoot production data issues and perform root cause analyses
  • Contribute to the continuous improvement of development practices and performance monitoring
Qualifications
  • 2-4 years of experience as a Data Engineer or Back-end Developer
  • Strong hands-on experience with Python and SQL
  • Experience with AWS data services (S3, DynamoDB, Lambda, Glue)
  • Familiarity with NoSQL databases and API integrations
  • Basic understanding of PySpark or similar distributed frameworks
  • Analytical mindset and proactive problem-solving skills
  • English (Upper-Intermediate level or higher)
WILL BE A PLUS
  • Experience with Airflow or other orchestration tools
  • Interest in big data performance tuning and cloud optimization
Personal Profile
  • Strong communication and collaboration skills
  • Self-motivated, responsible and able to work independently

Remote Work: Yes

Employment Type: Full-time

Key Skills

Apache Hive, S3, Hadoop, Redshift, Spark, AWS, Apache Pig, NoSQL, Big Data, Data Warehouse, Kafka, Scala

Experience: 2-4 years

Vacancy: 1
