
Senior Big Data Engineer

Sigma Software

Remote

BRL 120,000 - 150,000

Full-time

Posted 2 days ago

Job Summary

A leading software company in Brazil is seeking a skilled Data Engineer to design and maintain data pipelines using Python, SQL, and PySpark. The role involves working with AWS for data storage and ensuring the reliability and scalability of data solutions. Ideal candidates should have a minimum of 5 years of experience in data engineering or backend development, strong knowledge of ETL principles, and fluency in English. This full-time position also offers remote work options.

Qualifications

  • Minimum 5 years of experience in data engineering or backend development.
  • Strong knowledge of Python and SQL.
  • Hands-on experience with AWS services like S3, Glue, and Lambda.
  • Practical knowledge of distributed processing frameworks, especially PySpark.
  • Good understanding of ETL principles and data security.

Responsibilities

  • Design and maintain data pipelines using Python, SQL, and PySpark.
  • Work with AWS for large-scale data storage solutions.
  • Ensure reliability and performance of data flows.
  • Collaborate with developers for integration and deployment.
  • Implement monitoring for data pipelines.

Skills

Python
SQL
AWS
ETL principles
PySpark
NoSQL
Data modeling
Apache Hive
Spark
Kafka

Tools

MongoDB
DynamoDB
Hadoop
Redshift
Apache Pig
Scala

Job Description
Responsibilities

  • Design, develop, and maintain robust data pipelines and ETL processes using Python, SQL, and PySpark
  • Work with large-scale data storage on AWS (S3, DynamoDB, MongoDB)
  • Ensure high-quality, consistent, and reliable data flows between systems
  • Optimize performance, scalability, and cost efficiency of data solutions
  • Collaborate with backend developers and DevOps engineers to integrate and deploy data components
  • Implement monitoring, logging, and alerting for production data pipelines
  • Participate in architecture design, propose improvements, and mentor mid-level engineers
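As an illustration only (this sketch is not part of the posting, and every name in it is hypothetical), the extract-transform-load pattern the responsibilities describe can be shown in plain Python; at the scale this role targets, the same steps would run on PySpark with S3 as the storage layer.

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

def extract(raw_lines):
    """Parse raw JSON lines; skip and log malformed records."""
    records = []
    for line in raw_lines:
        try:
            records.append(json.loads(line))
        except json.JSONDecodeError:
            log.warning("skipping malformed record: %r", line)
    return records

def transform(records):
    """Keep only complete records and normalize field types."""
    return [
        {"user_id": int(r["user_id"]), "amount": float(r["amount"])}
        for r in records
        if "user_id" in r and "amount" in r
    ]

def load(records, sink):
    """Append transformed records to a sink (a list here; S3/Redshift in practice)."""
    sink.extend(records)
    log.info("loaded %d records", len(records))
    return len(records)

# Toy input: one valid record, one incomplete record, one malformed line.
raw = ['{"user_id": "1", "amount": "9.5"}', '{"amount": "2"}', 'not json']
sink = []
loaded = load(transform(extract(raw)), sink)
# Only the complete, well-formed record reaches the sink.
```

The separation into small, independently testable stages, with logging at the boundaries, mirrors the monitoring and reliability responsibilities listed above.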
Qualifications

  • 5 years of experience in data engineering or backend development
  • Strong knowledge of Python and SQL
  • Hands‑on experience with AWS (S3, Glue, Lambda, DynamoDB)
  • Practical knowledge of PySpark or other distributed processing frameworks
  • Experience with NoSQL databases (MongoDB or DynamoDB)
  • Good understanding of ETL principles, data modeling, and performance optimization
  • Understanding of data security and compliance in cloud environments
  • Fluent in English (Upper-Intermediate level or higher)
Personal Profile

  • Strong communication and collaboration skills in cross‑functional environments
  • Proactive, accountable, and driven to deliver high-quality results
Remote Work

Yes

Employment Type

Full‑time

Key Skills

  • Apache Hive
  • S3
  • Hadoop
  • Redshift
  • Spark
  • AWS
  • Apache Pig
  • NoSQL
  • Big Data
  • Data Warehouse
  • Kafka
  • Scala