
Mid & Senior Data Engineer - Insurtech - Remote (Italy Based)

Buscojobs

Friuli-Venezia Giulia

Remote

EUR 50.000 - 80.000

Full-time

2 days ago

Job description

A cutting-edge InsurTech company is seeking a Mid & Senior Data Engineer to join their growing Data team. This remote role focuses on transforming the insurance industry through innovative data solutions. The ideal candidate will have expertise in data processing and engineering, working closely with data scientists to shape the future of their technology.

Skills

  • Deep expertise in batch and distributed data processing.
  • Proven experience building Data Lake and Big Data analytics platforms.
  • Solid understanding of DevOps and CI/CD pipeline management.

Responsibilities

  • Bridge the gap between data science and engineering.
  • Collaborate with data scientists to develop practical solutions.
  • Drive impactful innovation in products and technology.

Knowledge

Python
Data Processing
Data Modeling
DevOps

Tools

Kafka
Spark
Databricks
AWS
PostgreSQL

Job description

Mid & Senior Data Engineer - InsurTech - Remote (Italy based)

We have partnered with an exciting business that is in the process of growing its Data team, with an engineering department of over 350 employees!

They’re a cutting-edge insurance company transforming the industry with a customer-first approach. Their focus is on simplicity, transparency, and innovation: seamless digital claims, personalized coverage, and fair, data-driven pricing. As they grow rapidly, they are seeking passionate individuals to join their mission of making insurance smarter and more accessible. If you’re looking for a dynamic, forward-thinking team, this is the place for you!

Tech stack:
  • Python
  • Kafka / Spark
  • Databricks

In this role, you'll help bridge the gap between data science and engineering. You’ll work on complex data challenges, collaborating closely with data scientists and machine learning engineers to develop practical, technical solutions that meet real business needs. Your contributions will drive impactful innovation and help shape the future of our products and technology.

Key requirements:

  • Deep expertise in batch and distributed data processing, as well as near real-time streaming pipelines using technologies like Kafka, Flink, and Spark.
  • Proven experience building Data Lake and Big Data analytics platforms on cloud infrastructure.
  • Proficient in Python with a strong adherence to software engineering best practices.
  • Skilled in relational databases and data modeling, with hands-on experience in RDBMS (e.g., Redshift, PostgreSQL) and NoSQL systems.
  • Solid understanding of DevOps, CI/CD pipeline management, and Infrastructure as Code (IaC) using industry-standard practices.
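To illustrate the Python and data-modeling expectations above, here is a minimal, purely illustrative sketch of the kind of typed, testable batch transformation such a role involves. The `Claim` record and its fields are invented for the example and do not come from the posting.

```python
from dataclasses import dataclass
from collections import defaultdict

# Hypothetical claim record -- the schema is invented for illustration only.
@dataclass(frozen=True)
class Claim:
    policy_id: str
    amount_eur: float

def total_claims_by_policy(claims: list[Claim]) -> dict[str, float]:
    """Aggregate claim amounts per policy: a typical batch transformation step."""
    totals: dict[str, float] = defaultdict(float)
    for claim in claims:
        totals[claim.policy_id] += claim.amount_eur
    return dict(totals)
```

In a production pipeline this aggregation would typically run in Spark or Databricks over partitioned data rather than in-memory Python, but the same emphasis on typed records and small, unit-testable functions applies.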

Nice to haves include:

  • Experience with cloud platforms (they use AWS but are open to GCP and Azure).
  • Experience with Databricks is a strong plus.
  • Experience with streaming technologies (Kafka is their go-to).