Job Search and Career Advice Platform


Sr. Data Engineer (Remote)

Jobgether

Remote

BRL 80.000 - 120.000

Full-time

Posted 2 days ago

Job summary

A recruitment firm is seeking a Data Engineer for a remote role. The ideal candidate will design and implement scalable data solutions, focusing on data management and delivery of impactful results. Responsibilities include developing data pipelines and utilizing cloud technologies such as Azure and AWS. Expected qualifications include a degree in Computer Science and experience with tools like Databricks, Python, and SQL. The position offers flexible working conditions and opportunities for professional growth.

Benefits

Flexible remote working conditions
Opportunities for professional growth
Collaborative company culture
Access to modern technologies
Health and wellness benefits
Work-life balance

Qualifications

  • 3–6 years of experience in Data Engineering or related roles.
  • Hands-on experience with big data processing frameworks and data lakes.
  • Familiarity with CI/CD pipelines.

Responsibilities

  • Design, build, and optimize ETL/ELT workflows.
  • Develop and maintain robust data pipelines for processing large datasets.
  • Work on cloud platforms to build and manage data lakes.
  • Utilize cloud services for data processing.
  • Leverage CI/CD pipelines to streamline development.
  • Maintain documentation for data workflows.

Skills

ETL/ELT workflows
Data pipelines
Python
SQL
PySpark
Databricks
Apache Spark
Azure
AWS
DevOps principles

Education

Bachelor’s or Master’s degree in Computer Science or related field

Tools

Databricks
ETL tools (Alteryx is a plus)
Git
Jenkins
Azure DevOps

Job description

This position is posted by Jobgether on behalf of a partner company. We are currently looking for a Data Engineer - REMOTE. In this role, you will design and implement scalable data solutions that empower the business to make data-driven decisions. You will play a crucial role in developing data pipelines and working with cutting-edge technologies on cloud platforms. This position offers an exciting opportunity to contribute to our client's operational improvement by leveraging comprehensive data insights that enhance customer experiences. Collaborating with various teams, you will ensure high-quality data management practices are upheld, ultimately driving impactful results for the organization. Join us to be part of a dynamic team focused on innovation and customer satisfaction.

Accountabilities
  • Design, build, and optimize ETL/ELT workflows using Databricks, SQL, and Python/PySpark.
  • Develop and maintain robust, scalable, and efficient data pipelines for processing large datasets.
  • Work on cloud platforms (Azure, AWS) to build and manage data lakes and scalable architectures.
  • Utilize cloud services like Azure Data Factory and AWS Glue for data processing.
  • Use Databricks for big data processing and analytics.
  • Leverage Apache Spark for distributed computing and data transformations.
  • Create and manage SQL-based data solutions ensuring scalability and performance.
  • Develop and enforce data quality checks and validations.
  • Collaborate with cross-functional teams to deliver impactful data solutions.
  • Leverage CI/CD pipelines to streamline development and deployment of workflows.
  • Maintain clear documentation for data workflows and optimize data systems.
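To make the "data quality checks and validations" duty concrete, here is a minimal sketch of the kind of row-level validation a pipeline stage might enforce before loading data downstream. This is an illustrative example only, not code from the employer: in practice these checks would typically run on PySpark DataFrames in Databricks, and the function and field names (`validate_rows`, `customer_id`) are hypothetical.

```python
# Illustrative sketch of a row-level data quality check, as a Data Engineer
# might enforce before loading records into a data lake. Field names and the
# helper name are hypothetical; a real pipeline would express this over
# PySpark DataFrames rather than plain Python dicts.

def validate_rows(rows, required_fields):
    """Split rows into (valid, rejected); a row is rejected if any
    required field is missing, None, or an empty string."""
    valid, rejected = [], []
    for row in rows:
        missing = [f for f in required_fields if row.get(f) in (None, "")]
        if missing:
            # Keep the offending row alongside the reason for auditability.
            rejected.append({"row": row, "missing": missing})
        else:
            valid.append(row)
    return valid, rejected

# Example: one well-formed record and one missing its customer_id.
records = [
    {"customer_id": "c1", "amount": 40.0},
    {"customer_id": None, "amount": 12.5},
]
valid, rejected = validate_rows(records, ["customer_id", "amount"])
```

Routing rejected rows to a quarantine table with the failure reason, rather than silently dropping them, is a common design choice because it keeps the pipeline observable and the bad data debuggable.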
Requirements
  • Bachelor’s or Master’s degree in Computer Science, Information Technology, or related field.
  • 3–6 years of experience in Data Engineering or related roles.
  • Hands‑on experience with big data processing frameworks and data lakes.
  • Proficiency in Python, SQL, and PySpark for data manipulation.
  • Experience with Databricks and Apache Spark.
  • Knowledge of cloud platforms like Azure and AWS.
  • Familiarity with ETL tools (Alteryx is a plus).
  • Strong understanding of distributed systems and big data technologies.
  • Basic understanding of DevOps principles and CI/CD pipelines.
  • Hands‑on experience with Git, Jenkins, or Azure DevOps.
Benefits
  • Flexible remote working conditions.
  • Opportunities for professional growth and training.
  • Collaborative and inclusive company culture.
  • Access to modern technologies and tools.
  • Health and wellness benefits.
  • Work‑life balance.
  • Participation in innovative projects.
  • Dynamic and fast‑paced working environment.
Why Apply Through Jobgether?

We use an AI‑powered matching process to ensure your application is reviewed quickly, objectively, and fairly against the role's core requirements. Our system identifies the top‑fitting candidates, and this shortlist is then shared directly with the hiring company. The final decision and next steps (interviews, assessments) are managed by their internal team.

We appreciate your interest and wish you the best!

Data Privacy Notice: By submitting your application, you acknowledge that Jobgether will process your personal data to evaluate your candidacy and share relevant information with the hiring employer. This processing is based on legitimate interest and pre‑contractual measures under applicable data protection laws (including GDPR). You may exercise your rights (access, rectification, erasure, objection) at any time.
