
Data Engineer

Luxoft

São Paulo

On-site

BRL 80,000 - 120,000

Full-time

Posted 12 days ago

Job summary

A leading technology firm in São Paulo is looking for a skilled Data Engineer to design and implement scalable data pipelines using Databricks and Apache Kafka. The ideal candidate will have over 5 years of experience in data engineering, proven expertise in building real-time streaming solutions, and strong proficiency in Python or Scala. This role involves collaborating with cross-functional teams and ensuring data quality and compliance across pipelines. A good understanding of cloud platforms is also required.
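
For illustration only, here is a minimal sketch of the kind of pipeline the posting describes: Spark Structured Streaming on Databricks reading JSON events from Kafka and appending them to a Delta table. The broker address, topic name, event schema, and storage paths are assumptions, not details from the posting.

  # Sketch: Kafka -> Spark Structured Streaming -> Delta Lake.
  # All names below (broker, topic, schema, paths) are illustrative assumptions.
  from pyspark.sql import SparkSession
  from pyspark.sql.functions import col, from_json
  from pyspark.sql.types import StringType, StructField, StructType, TimestampType

  spark = SparkSession.builder.appName("kafka-to-delta").getOrCreate()

  # Hypothetical event schema; in practice this would come from the schema registry.
  schema = StructType([
      StructField("event_id", StringType()),
      StructField("payload", StringType()),
      StructField("ts", TimestampType()),
  ])

  # Subscribe to the Kafka topic as a streaming source.
  raw = (spark.readStream
         .format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")  # assumed broker
         .option("subscribe", "events")                      # assumed topic
         .load())

  # Kafka delivers values as bytes: cast to string, then parse the JSON payload.
  events = (raw.selectExpr("CAST(value AS STRING) AS json")
            .select(from_json(col("json"), schema).alias("e"))
            .select("e.*"))

  # Append to a Delta table; the checkpoint lets the stream recover after failure.
  query = (events.writeStream
           .format("delta")
           .option("checkpointLocation", "/tmp/checkpoints/events")  # assumed path
           .outputMode("append")
           .start("/tmp/delta/events"))                              # assumed path

  query.awaitTermination()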

Key skills

Data engineering
Databricks
Apache Kafka
Real-time data streaming
Python
Scala
Cloud platforms (AWS, Azure, GCP)
CI/CD pipelines
Agile tools (Jira)

Job description
Project description

We are seeking a skilled and hands-on Data Engineer with proven experience in Databricks, Apache Kafka, and real-time data streaming solutions.

Responsibilities
  • Design and implement scalable data pipelines using Databricks and Kafka
  • Build and maintain real-time streaming solutions for high-volume data
  • Collaborate with cross-functional teams to integrate data flows into broader systems
  • Optimize performance and reliability of data processing workflows
  • Ensure data quality, lineage, and compliance across streaming and batch pipelines
  • Participate in agile development processes and contribute to technical documentation
SKILLS
Must have
  • 5+ years of experience in data engineering roles
  • Proven expertise with Databricks (Spark, Delta Lake, notebooks, performance tuning)
  • Strong hands-on experience with Apache Kafka (topics, producers/consumers, schema registry; a short producer/consumer sketch follows this list)
  • Solid understanding of streaming frameworks (e.g., Spark Structured Streaming, Flink, or similar)
  • Experience with cloud platforms (AWS, Azure, or GCP)
  • Proficiency in Python or Scala for data pipeline development
  • Familiarity with CI/CD pipelines (GitLab, Jenkins) and agile tools (Jira)
  • Exposure to data lakehouse architectures and best practices
  • Knowledge of data governance, security, and observability
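
As a small aside on the producer/consumer requirement above, the following is a minimal sketch using the confluent-kafka Python client; the broker address, topic, group id, and message contents are assumptions for illustration.

  # Sketch: produce one message and read it back with confluent-kafka.
  # Broker, topic, and group id below are illustrative assumptions.
  from confluent_kafka import Consumer, Producer

  BROKER = "broker:9092"  # assumed broker address
  TOPIC = "events"        # assumed topic name

  # Produce a single keyed JSON message and wait for delivery.
  producer = Producer({"bootstrap.servers": BROKER})
  producer.produce(TOPIC, key=b"evt-1", value=b'{"payload": "hello"}')
  producer.flush()

  # Consume from the beginning of the topic as a new consumer group.
  consumer = Consumer({
      "bootstrap.servers": BROKER,
      "group.id": "demo-group",          # assumed group id
      "auto.offset.reset": "earliest",
  })
  consumer.subscribe([TOPIC])

  msg = consumer.poll(timeout=10.0)
  if msg is not None and msg.error() is None:
      print(msg.key(), msg.value())
  consumer.close()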
Nice to have

Scrum / Agile
