
Senior Data Pipeline Developer

Bebeedata

Porto Alegre

On-site

BRL 120,000 - 160,000

Full-time

Posted 10 days ago


Job Summary

A growing data and analytics organization in Brazil is seeking a Python Data Engineer with a focus on AI and ML. The ideal candidate will build scalable data pipelines, operationalize machine learning workflows, and work closely with data scientists. Responsibilities include maintaining ETL/ELT pipelines using Python and Snowflake, optimizing data workflows, and implementing MLOps best practices. The role provides a dynamic work environment, professional growth opportunities, and competitive compensation.

Benefits

Competitive compensation and benefits
Professional growth opportunities
Recognition and rewards for performance

Qualifications

  • Proficiency in Python and relevant data engineering frameworks.
  • Expertise in designing and implementing efficient data pipelines.
  • Strong understanding of machine learning concepts and workflows.
  • Experience with data warehousing solutions like Snowflake.
  • Ability to work collaboratively with cross-functional teams.

Responsibilities

  • Build, optimize, and maintain ETL/ELT pipelines using Python.
  • Architect and manage data workflows, ensuring accuracy and scalability.
  • Deploy, monitor, and tune machine learning models with data scientists.
  • Integrate data from various sources including APIs and databases.
  • Implement MLOps best practices including versioning and automated workflows.

Skills

Python
Data Engineering Frameworks
MLOps
Collaboration
Problem-solving
Communication

Tools

Snowflake
AWS
GCP
Azure
Job Description

We are seeking a highly skilled Python Data Engineer with an AI/ML focus to join our organization's growing data & analytics team in Brazil. This role is ideal for someone who loves building scalable data pipelines, operationalizing machine learning workflows, and partnering closely with data scientists to bring models into production.

Key Responsibilities

  • Build, optimize, and maintain ETL/ELT pipelines using Python, modern data engineering frameworks, and Snowflake as a central data warehouse.
  • Architect and manage data workflows, ensuring accuracy, scalability, and reliability.
  • Work closely with data scientists to deploy, monitor, and tune machine learning models.
  • Develop feature engineering pipelines, preprocessing workflows, and model-serving APIs.
  • Integrate data from various sources (APIs, databases, cloud storage, streaming platforms).
  • Implement MLOps best practices, including versioning, CI/CD for ML, and automated retraining workflows.
  • Optimize data storage, compute usage, and performance within Snowflake and cloud-native tools (AWS, GCP, or Azure).
  • Create and maintain documentation, data catalogs, and operational guides.
  • Monitor data system performance and recommend improvements.

Required Skills & Qualifications

  • Proficiency in Python and relevant data engineering frameworks.
  • Expertise in designing and implementing efficient data pipelines.
  • Strong understanding of machine learning concepts and workflows.
  • Experience with data warehousing solutions like Snowflake.
  • Ability to work collaboratively with cross-functional teams.
  • Excellent problem-solving and analytical skills.
  • Strong communication and interpersonal skills.

What We Offer

  • A dynamic work environment that fosters innovation and collaboration.
  • Opportunities for professional growth and development.
  • Competitive compensation and benefits package.
  • Recognition and rewards for outstanding performance.

How to Apply

Interested candidates should submit their resume and cover letter to us.

We look forward to hearing from you!
