Senior Data Engineer

Sinch

São Paulo

On-site

BRL 80,000 - 120,000

Full-time

5 days ago

Job summary

A leading global customer engagement platform in São Paulo is seeking an experienced Senior Data Engineer. In this role, you will design and implement robust data pipelines that ensure data accuracy and availability. Responsibilities include optimizing existing data workflows and collaborating with teams to meet data needs. Ideal candidates should have strong software development experience, particularly in Python and SQL, and knowledge of cloud environments. This position offers a dynamic environment with opportunities for project leadership and innovation.

Qualifications

  • Proven experience as a Data Engineer or in a related role.
  • Strong knowledge of software development, including Python, Spark, Git, CI/CD, and Docker.
  • Expertise in SQL to query data and build ETL/ELT processes.
  • Proficiency in designing modern data pipelines and architectures.
  • Experience working with Data Lake and Data Warehouse concepts.

Responsibilities

  • Design and implement scalable data pipelines ensuring data accuracy.
  • Work on the data integration of Sinch platforms worldwide.
  • Contribute to development, performance, and maintenance of data pipelines.
  • Optimize workflows for performance using best practices.
  • Collaborate with data scientists and analysts to deliver high-quality solutions.

Skills

Data Engineering
Software Development
SQL
Data Pipeline Design
Cloud Environments
Collaboration
Fluent English

Tools

Python
Spark
Git
Docker
Airflow
Databricks
Snowflake
Delta Lake

Job description

Sinch is looking for a talented and experienced Senior Data Engineer to join our Data Engineering team. In this crucial role, you will be responsible for building, maintaining, and supporting data pipelines that connect our various products globally. You will use innovative approaches and technologies, designing data architectures that empower the data domain and provide actionable insights that support key decisions across the organization.

Sinch is a global customer engagement platform that provides communication services for businesses across messaging, voice, and email. Sinch's services are used by more than 150,000 businesses, including many of the world's largest tech companies.

Responsibilities
  • Design and implement robust, scalable data pipelines that ensure data accuracy and availability across multiple platforms.
  • Work on the data integration of various Sinch platforms and products worldwide.
  • Contribute to the development, performance, quality, monitoring, and maintenance of data pipelines.
  • Optimize existing data workflows and databases for performance and scalability, using best practices and cutting-edge tools.
  • Identify opportunities for improvement in our products, business, and architecture through the strategic use of data.
  • Lead data engineering projects, serving as a technical reference and providing guidance to team members (e.g., through code reviews).
  • Collaborate actively with data scientists, analysts, and product teams to understand data needs and deliver high-quality solutions.
  • Advocate for Data Engineering best practices both inside and outside the team.
  • Stay abreast of emerging technologies and industry trends to contribute innovative ideas to our data strategy.
Must Have
  • Proven experience as a Data Engineer or in a related role.
  • Strong knowledge of software development (e.g., Python, Spark, Git, CI/CD, Docker).
  • Expertise in SQL to query data and build ETL/ELT processes.
  • Proficiency in designing modern data pipelines and architectures.
  • Experience working with Data Lake and Data Warehouse concepts, using best practices to structure and store large volumes of data.
  • Ability to troubleshoot and optimize data pipelines for performance and reliability.
  • Experience with data pipeline creation/orchestration tools (e.g., Airflow, Dagster).
  • Hands-on knowledge of cloud environments (e.g., AWS or GCP).
  • Experience using Databricks or database technologies such as Snowflake or Delta Lake.
  • Knowledge of different data architectures (e.g., Data Lake, Data Mesh, Data Fabric).
  • Fluent English for effective communication with technical and business stakeholders.
  • Demonstrated ability to collaborate effectively and communicate complex ideas clearly.
Nice to Have
  • Experience with real-time data processing and related tools/frameworks.
  • Experience designing and implementing new data architectures.
  • Experience using any data governance/management tool.
  • Experience using a tool (e.g., Dremio) to create a virtualization layer.
  • Knowledge of data security best practices (encryption, access controls).
  • Ability to organize and break down complex projects/initiatives into manageable tasks.