Data Engineer

Heartcentrix Solutions

Natal

On-site

BRL 80,000 - 120,000

Full-time

19 days ago

Job summary

A leading data and analytics firm is seeking a highly skilled Python Data Engineer to join their team in Brazil. This fully remote role involves building scalable data pipelines and operationalizing machine learning workflows. The ideal candidate will have 3-7 years of experience, strong proficiency in Python, and expertise with Snowflake. Key responsibilities include architecting data workflows, collaborating with data scientists, and implementing best practices for MLOps. Competitive salary and remote work flexibility are offered.

Qualifications

  • 3-7+ years of experience as a Data Engineer or in a similar role.
  • Strong proficiency in building production-grade data pipelines using Python.
  • Hands-on experience with at least one cloud platform (AWS, GCP, Azure).

Responsibilities

  • Build and maintain ETL / ELT pipelines using Python and Snowflake.
  • Architect and manage data workflows ensuring accuracy and reliability.
  • Collaborate with data scientists to deploy and monitor machine learning models.

Skills

Python proficiency
ETL / ELT pipeline development
Snowflake expertise
AI / ML workflow experience
SQL proficiency
Cloud platform experience (AWS, GCP, Azure)
Data orchestration tools (Airflow, Prefect, Dagster)
MLOps tools familiarity
Data modeling and warehousing

Tools

Snowflake
Airflow
PostgreSQL
MySQL
SQL Server
TensorFlow
PyTorch

Job description

We are seeking a highly skilled Python Data Engineer with an AI / ML focus to join our client's growing data & analytics team in Brazil. This role is ideal for someone who loves building scalable data pipelines, operationalizing machine learning workflows, and partnering closely with data scientists to bring models into production. You will design, develop, and maintain data infrastructure that powers AI-driven insights across the organization, including data models and pipelines that run through Snowflake. This is a fully remote position working with cross-functional product, engineering, and analytics teams.

Key Responsibilities
  • Build, optimize, and maintain ETL / ELT pipelines using Python, modern data engineering frameworks, and Snowflake as a central data warehouse (a minimal pipeline sketch follows this list).
  • Architect and manage data workflows, ensuring accuracy, scalability, and reliability.
  • Work closely with data scientists to deploy, monitor, and tune machine learning models.
  • Develop feature engineering pipelines, preprocessing workflows, and model-serving APIs.
  • Integrate data from various sources (APIs, databases, cloud storage, streaming platforms).
  • Implement MLOps best practices including versioning, CI / CD for ML, and automated retraining workflows.
  • Optimize data storage, compute usage, and performance within Snowflake and cloud-native tools (AWS, GCP, or Azure).
  • Create and maintain documentation, data catalogs, and operational guides.
  • Monitor data system performance and recommend improvements.
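
As a rough illustration of the first responsibility only: the sketch below is a minimal Python ETL pipeline that pulls records from a REST API, normalizes them with pandas, and appends them to Snowflake via snowflake-connector-python. The source URL, credentials, warehouse/database/schema, and table name are all hypothetical placeholders, not details from this posting.

import os

import pandas as pd
import requests
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas


def extract(url: str) -> list[dict]:
    # Fetch raw records from a source API (hypothetical endpoint).
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    return resp.json()


def transform(records: list[dict]) -> pd.DataFrame:
    # Flatten nested JSON and uppercase column names to match
    # Snowflake's default identifier casing.
    df = pd.json_normalize(records)
    df.columns = [c.upper().replace(".", "_") for c in df.columns]
    # Assumes records carry an ID field; drop rows without one.
    return df.dropna(subset=["ID"])


def load(df: pd.DataFrame, table: str) -> None:
    # Append the frame to a Snowflake table; every connection
    # parameter below is a placeholder.
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="ANALYTICS_WH",
        database="RAW",
        schema="EVENTS",
    )
    try:
        write_pandas(conn, df, table_name=table, auto_create_table=True)
    finally:
        conn.close()


if __name__ == "__main__":
    load(transform(extract("https://api.example.com/events")), "RAW_EVENTS")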

Required Skills & Experience
  • 3–7+ years of experience as a Data Engineer, Python Engineer, or in a similar backend / data role
  • Strong proficiency in Python, including building production-grade data pipelines
  • Experience with Snowflake: data modeling, Snowpipe, tasks, streams, stored procedures, and performance optimization
  • Experience with AI / ML workflows: feature engineering, inference pipelines, or deploying models
  • Proficiency in SQL and relational databases (PostgreSQL, MySQL, SQL Server)
  • Hands-on experience with at least one cloud platform (AWS, GCP, or Azure)
  • Experience using data orchestration tools like Airflow, Prefect, or Dagster (a scheduling sketch follows this list)
  • Familiarity with MLOps tools such as MLflow, Kubeflow, SageMaker, Vertex AI, or similar (an MLflow sketch follows the Preferred Qualifications)
  • Strong understanding of data modeling, data warehousing, and distributed systems
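
The orchestration requirement above maps naturally onto a scheduled DAG. As a rough sketch only, assuming Airflow 2.4+ and its TaskFlow API, with a placeholder DAG id, schedule, and task bodies:

from datetime import datetime

from airflow.decorators import dag, task


@dag(
    dag_id="events_etl",  # hypothetical DAG id
    schedule="@daily",
    start_date=datetime(2024, 1, 1),
    catchup=False,
    tags=["etl", "snowflake"],
)
def events_etl():
    @task
    def extract() -> list[dict]:
        # In practice: call the source API or read from cloud storage.
        return [{"id": 1, "value": 42}]

    @task
    def transform(records: list[dict]) -> list[dict]:
        # In practice: cleaning, typing, deduplication.
        return [r for r in records if r.get("id") is not None]

    @task
    def load(records: list[dict]) -> None:
        # In practice: write_pandas or COPY INTO against Snowflake.
        print(f"loading {len(records)} records")

    # Results pass between tasks via XCom, so returned values
    # must be JSON-serializable.
    load(transform(extract()))


events_etl()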

Preferred Qualifications
  • Experience with Spark, Databricks, or other big-data processing tools
  • Experience ingesting and transforming data at scale on Snowflake, including optimization of virtual warehouses
  • Familiarity with Kafka, Kinesis, or other streaming platforms
  • Understanding of CI / CD pipelines (GitHub Actions, GitLab CI, Jenkins, etc.)
  • Exposure to deep learning frameworks (TensorFlow, PyTorch)
  • Experience working with Brazilian clients or LATAM distributed engineering teams
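
For the MLOps tooling named in the required skills, here is a minimal MLflow sketch of experiment tracking and model versioning, assuming scikit-learn for the model itself; the experiment name and registered model name are hypothetical:

import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

mlflow.set_experiment("churn-model")  # hypothetical experiment name

# Synthetic stand-in data; a real pipeline would read features
# from the warehouse.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params, random_state=0)
    model.fit(X_train, y_train)

    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))

    # Registration gives each retraining run a new model version;
    # it assumes a registry-capable tracking backend.
    mlflow.sklearn.log_model(model, "model", registered_model_name="churn_rf")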