
Data Engineer (Relocation To Portugal)

buscojobs Brasil

Parnaíba

Remote

BRL 80.000 - 120.000

Full-time

Today

Job summary

A technology services company in Brazil is seeking an experienced Data Engineer to develop and maintain scalable data pipelines. Ideal candidates will have 5+ years of experience with data products in the cloud, strong Python and SQL skills, and familiarity with Microsoft Azure. This remote position offers career growth programs, training options, and a multicultural work environment.

Benefits

100% remote work
WFH allowance for remote setup
Career growth programs
Access to online courses
Mentoring programs

Qualifications

  • 3+ years in Data Engineering or similar roles.
  • 5+ years building scalable data pipelines and data products.
  • Experience with APIs and databases.

Responsibilities

  • Design, develop, and maintain data pipelines for ingestion.
  • Implement automated QA checks and monitoring.
  • Collaborate with teams to ingest, extract, and process data.

Skills

Strong Python skills for ETL
English: Professional working proficiency
Strong experience with Microsoft Azure
Advanced SQL

Education

Bachelor’s degree in Computer Science or related field

Tools

Docker
Power BI
Terraform

Job description

Overview

We are a technology services company seeking experienced Data Engineers to develop and deliver robust, cost-efficient data products that power analytics, reporting and decision-making across brands. This role involves building and maintaining data pipelines, improving data infrastructure, and collaborating across teams to deliver trusted data assets.

We value professional development, collaborative environments, and opportunities for international experience and growth.

What we’re looking for
  • 3+ years in Data Engineering or similar roles
  • Strong Python skills for ETL (CI/CD, API deployment, data workflows)
  • Familiarity with Big Data tools (e.g. Spark, Hadoop, BigQuery)
  • Hands-on experience in cloud or hybrid environments
  • Deep understanding of data lifecycle, governance, and operational challenges
  • Clear communication and ownership mindset; comfortable with cross-functional projects
  • English: Professional working proficiency required (B2/C1)
Nice to have
  • Coding: Bash / YAML / SQL
  • Experience with security teams or IAM and data protection policies
  • Experience with message brokers (e.g. Kafka, RabbitMQ, ActiveMQ); building and maintaining real-time data pipelines (a minimal consumer sketch follows this list)
  • French B2
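
A minimal sketch of the real-time ingestion mentioned above, using the kafka-python client; the topic name, broker address, and the sink function are assumptions for illustration, not details from the posting:

```python
# Real-time ingestion sketch with kafka-python (illustrative only).
# Topic name, broker address, and the load step are assumptions, not part of the posting.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "events",                            # hypothetical topic
    bootstrap_servers="localhost:9092",  # assumed local broker
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
    enable_auto_commit=True,
)

def load(record: dict) -> None:
    """Placeholder sink: a real pipeline would write to a warehouse or data lake."""
    print(record)

for message in consumer:
    load(message.value)
```
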
Responsibilities
  • Design, develop, and maintain data pipelines for ingestion and transformation of complex datasets into usable data products (see the pipeline sketch after this list)
  • Build scalable infrastructure to support hourly, daily, and weekly update cycles
  • Implement automated QA checks and monitoring to ensure data quality
  • Collaborate with teams to ingest, extract, and process data using Python, SQL, REST, and GraphQL APIs
  • Design and implement automated ELT processes and data models; ensure data integrity and performance
  • Build and maintain Looker/BI data models and dashboards where applicable
  • Define and maintain CI/CD and deployment pipelines for data infrastructure
  • Containerize and deploy solutions using Docker and cloud services
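
Concretely, the batch side of these responsibilities might look like the sketch below: REST ingestion with requests, a pandas transform, an automated QA check, and a file load. The endpoint, column names, and output path are illustrative assumptions, not details from the posting.

```python
# Minimal batch ETL sketch: ingest from a REST API, transform, run a QA check, load.
# The endpoint, schema, and output path are illustrative assumptions.
import requests
import pandas as pd

API_URL = "https://api.example.com/v1/orders"  # hypothetical endpoint

def extract() -> pd.DataFrame:
    response = requests.get(API_URL, timeout=30)
    response.raise_for_status()
    return pd.DataFrame(response.json())

def transform(df: pd.DataFrame) -> pd.DataFrame:
    # Normalise column names and derive a daily partition key.
    df = df.rename(columns=str.lower)
    df["order_date"] = pd.to_datetime(df["created_at"]).dt.date
    return df

def quality_check(df: pd.DataFrame) -> None:
    # Automated QA: fail fast on empty loads or duplicate primary keys.
    if df.empty:
        raise ValueError("QA failed: extracted dataset is empty")
    if df["order_id"].duplicated().any():
        raise ValueError("QA failed: duplicate order_id values found")

def load(df: pd.DataFrame) -> None:
    # Placeholder sink; a real pipeline would write to Blob/Data Lake or a warehouse.
    df.to_parquet("orders.parquet", index=False)

if __name__ == "__main__":
    data = transform(extract())
    quality_check(data)
    load(data)
```
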
Required Skills
  • Strong experience with Microsoft Azure (Data Factory, SQL Database, Blob/Data Lake) or equivalent cloud platforms (see the Blob Storage sketch after this list)
  • Advanced SQL (stored procedures, indexing, views)
  • Proven ETL development experience (batch + scheduled pipelines)
  • Understanding of data governance, lineage, and quality frameworks
  • Familiarity with Power BI data models (not visualization) or similar BI tooling
  • Excellent communication and mentoring abilities
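
For the Azure storage piece, a minimal sketch of landing a processed file in Blob Storage or Data Lake with the azure-storage-blob SDK; the connection-string environment variable, container, and blob path are assumptions:

```python
# Sketch: upload a processed file to Azure Blob Storage with azure-storage-blob.
# The connection string, container, and blob path are illustrative assumptions.
import os
from azure.storage.blob import BlobServiceClient

connection_string = os.environ["AZURE_STORAGE_CONNECTION_STRING"]  # assumed env var
service = BlobServiceClient.from_connection_string(connection_string)
blob = service.get_blob_client(container="datalake", blob="raw/orders/orders.parquet")

with open("orders.parquet", "rb") as handle:
    blob.upload_blob(handle, overwrite=True)
```
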
Preferred Qualifications
  • NoSQL design (MongoDB, DynamoDB, Cosmos DB)
  • Cloud platforms experience (AWS, Azure, GCP)
  • Terraform or IaC tooling familiarity
  • Azure DevOps for source and pipeline management
What you’ll do
  • Design, develop, and maintain data pipelines for threat intelligence ingestion, validation, and export automation
  • Implement data validation processes to ensure data accuracy and completeness
  • Collaborate with analysts to understand data requirements and design solutions
  • Develop automation scripts for data export to external systems (see the export sketch after this list)
  • Optimize existing pipelines for performance and scalability
  • Monitor pipelines and troubleshoot issues to ensure data availability
  • Document specifications, data flows, and procedures
  • Stay updated on trends in data engineering and apply where relevant
  • Provide technical guidance to team members
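
The export-automation task could be as simple as the sketch below: POSTing validated records to an external system over REST. The endpoint, authentication scheme, and payload shape are assumptions, not details from the posting.

```python
# Export-automation sketch: push validated records to an external system over REST.
# The endpoint, token handling, and payload shape are illustrative assumptions.
import json
import requests

EXPORT_URL = "https://partner.example.com/api/indicators"  # hypothetical external system

def export_records(records: list[dict], token: str) -> None:
    response = requests.post(
        EXPORT_URL,
        headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
        data=json.dumps(records),
        timeout=30,
    )
    response.raise_for_status()

export_records([{"indicator": "198.51.100.7", "type": "ipv4"}], token="dummy-token")
```
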
What you will bring
  • Bachelor’s degree in Computer Science or related field
  • 5+ years building scalable data pipelines and data products in the cloud (AWS preferred)
  • Deep understanding of ELT, data modeling, and data warehousing concepts
  • Strong Python and SQL skills; experience with large datasets
  • Experience with data from multiple sources (APIs, databases, etc.)
  • Excellent problem-solving and collaboration abilities
Nice to have
  • Experience with dbt and data transformation tooling
  • Containerization and orchestration experience (Docker, Kubernetes)
  • API design/integration experience for data pipelines
  • Linux/macOS development environment familiarity
  • Observability or QA tooling exposure (e.g., Great Expectations)
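
As a lightweight stand-in for QA tooling such as Great Expectations, a hand-rolled pandas validation illustrates the same idea; the column names and rules are illustrative assumptions:

```python
# Hand-rolled data validation sketch in pandas, standing in for tools like Great Expectations.
# Column names and rules are illustrative assumptions.
import pandas as pd

def validate(df: pd.DataFrame) -> list[str]:
    failures = []
    if df["id"].isna().any():
        failures.append("id contains nulls")
    if df["id"].duplicated().any():
        failures.append("id contains duplicates")
    if (df["amount"] < 0).any():
        failures.append("amount contains negative values")
    return failures

df = pd.DataFrame({"id": [1, 2, 3], "amount": [10.0, 5.5, 2.25]})
problems = validate(df)
if problems:
    raise ValueError("Data quality checks failed: " + "; ".join(problems))
```
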
What we offer
  • 100% remote work
  • WFH allowance for remote setup
  • Career growth programs and 360° feedback
  • Training options and access to online courses
  • Mentoring programs and wellbeing resources
  • Multicultural working environment with team events

We are committed to an inclusive culture and providing opportunities based on merit and potential.
