
Senior Data Engineer (Integrations)

Medium

Remote

EUR 40,000 - 60,000

Full-time

Today

Job Summary

A leading technology company is seeking a Senior Data Engineer (Integrations) in Tremblay-en-France. This role involves designing and operating scalable data pipelines and core datasets for blockchain analytics. Ideal candidates will have strong experience with Python, SQL, ClickHouse, and dbt, with a focus on both streaming and batch data processing. Join a remote-first, autonomous team that values impact and quality, and work on significant data challenges in a fast-evolving industry.

Benefits

Competitive compensation package
Flexible working model
Opportunity to work on large-scale data challenges
Collaborative culture

Qualifications

  • Proven experience building production-grade data pipelines.
  • Strong fundamentals in data and software engineering.
  • Hands-on experience with ClickHouse and dbt.
  • Solid understanding of ingestion patterns.
  • Clear communication skills for remote collaboration.

Responsibilities

  • Design and scale data pipelines using ClickHouse, Python, and dbt.
  • Own the full lifecycle of data pipelines.
  • Ensure observability and data quality checks.
  • Collaborate with downstream data consumers.
  • Evolve tooling and practices with modern data engineering approaches.

Skills

Python
SQL
ClickHouse
dbt
Data Engineering
Streaming Ingestion
Batch Processing
AI-assisted Development Tools

Tools

ClickHouse
dbt

Job Description

This position is posted by Jobgether on behalf of a partner company. We are currently looking for a Senior Data Engineer (Integrations) in France.

In this role, you will help design and operate the data backbone that powers advanced blockchain analytics products used by teams and customers worldwide. You will focus on building reliable, scalable ingestion pipelines and well-modeled core datasets that others can confidently build on. Working at the intersection of data and software engineering, you will own systems that handle large volumes of streaming and batch data with a strong emphasis on correctness and stability. You will collaborate closely with engineers, analysts, and product teams in a remote-first, high-autonomy environment. This position offers meaningful ownership, technical depth, and the opportunity to solve foundational data challenges in a fast-evolving industry. It is ideal for engineers who value impact, quality, and pragmatic execution.


Accountabilities:
  • Design, build, and scale high-performance data pipelines and infrastructure using technologies such as ClickHouse, Python, and dbt
  • Own the full lifecycle of data pipelines, from raw ingestion through transformation to clean, well-defined datasets
  • Build and operate systems that support large-scale streaming and batch processing with strong guarantees around correctness and reliability
  • Improve observability, data quality checks, and failure handling to ensure predictable performance at scale (a minimal sketch follows this list)
  • Collaborate with downstream data consumers to define clear dataset contracts, schemas, and usage patterns
  • Contribute to core datasets that serve as long-lived foundations for analytics, product features, and research
  • Leverage AI-powered development tools and agents to accelerate delivery, automate repetitive tasks, and improve code quality
  • Continuously evolve tooling and practices by staying current with modern data engineering approaches
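
As a concrete illustration of the quality-gated load step described above, here is a minimal Python sketch using the clickhouse-connect client. Everything specific in it (the raw_transactions table, its columns, the connection details) is a hypothetical assumption for illustration, not part of the actual role or the employer's stack.

    import clickhouse_connect

    # Hypothetical connection details; a real pipeline would read these from config.
    client = clickhouse_connect.get_client(host="localhost", port=8123)

    def load_batch(rows: list[tuple]) -> None:
        """Validate a batch of (block_number, tx_hash, value) rows, then insert it."""
        # Basic data-quality gates: fail fast before bad data reaches the warehouse.
        if not rows:
            raise ValueError("refusing to load an empty batch")
        if any(r[1] is None for r in rows):
            raise ValueError("batch contains rows with a null tx_hash")

        client.insert(
            "raw_transactions",  # hypothetical target table
            rows,
            column_names=["block_number", "tx_hash", "value"],
        )

    # Example usage (commented out; requires a running ClickHouse instance):
    # load_batch([(19_000_000, "0xabc123", 1.5)])

In a production pipeline the same gates would typically live in a shared validation layer rather than in each loader, but the shape of the step (validate, then insert) is the same.
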
Requirements:
  • Proven experience building and operating production-grade data pipelines that run continuously and reliably
  • Strong fundamentals in data and software engineering, with deep expertise in Python and SQL
  • Hands-on production experience with ClickHouse and dbt
  • Solid understanding of streaming and batch ingestion patterns, including backfills, reprocessing, and schema evolution (see the backfill sketch after this list)
  • Comfort working across the full data stack, including ingestion, transformation, storage, and exposure to downstream systems
  • Experience designing clean, reusable datasets intended for broad internal consumption
  • Familiarity with using AI-assisted development tools in daily workflows, or strong curiosity to adopt them
  • Clear written and verbal communication skills suited to remote-first and asynchronous collaboration
  • Pragmatic, ownership-driven mindset with the ability to execute in complex environments
  • Experience with, or strong interest in, blockchain data, crypto ecosystems, and Web3 technologies
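
To show what an idempotent backfill can look like in practice, here is a hedged Python sketch of the drop-and-reload partition pattern on ClickHouse. It assumes a hypothetical events table partitioned by event_date; the table name, columns, and connection details are illustrative assumptions only.

    import clickhouse_connect
    from datetime import date

    client = clickhouse_connect.get_client(host="localhost")

    def backfill_day(day: date, rows: list[tuple]) -> None:
        """Reprocess one day idempotently: drop its partition, then reload it."""
        # date.isoformat() yields a safe YYYY-MM-DD literal for the DDL below.
        # Assumes the hypothetical table declares PARTITION BY event_date.
        client.command(f"ALTER TABLE events DROP PARTITION '{day.isoformat()}'")
        # Reinsert the recomputed rows; rerunning after a failure is safe,
        # because any partial partition is dropped before every load.
        client.insert("events", rows, column_names=["event_date", "user_id", "amount"])

Because the delete-and-reload happens at partition granularity, reruns never double-count rows, which is what makes reprocessing and schema-evolution backfills tractable at scale.
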
Benefits:
  • Competitive compensation package aligned with a remote-first setup
  • Flexible working model with autonomy over schedule and execution
  • Opportunity to work on massive-scale data challenges in a rapidly growing industry
  • High-impact role with influence on technical and product direction
  • Collaborative culture that values ownership, execution, and continuous improvement

Why Apply Through Jobgether?

We use an AI-powered matching process to ensure your application is reviewed quickly, objectively, and fairly against the role's core requirements. Our system identifies the top-fitting candidates, and this shortlist is then shared directly with the hiring company. The final decision and next steps (interviews, assessments) are managed by their internal team.

We appreciate your interest and wish you the best!

Data Privacy Notice: By submitting your application, you acknowledge that Jobgether will process your personal data to evaluate your candidacy and share relevant information with the hiring employer. This processing is based on legitimate interest and pre-contractual measures under applicable data protection laws (including GDPR). You may exercise your rights (access, rectification, erasure, objection) at any time.

#LI-CL1
