
Mid & Senior Data Engineer - InsurTech - Remote (Italy based)

JR Italy

Vercelli

Remote

EUR 50.000 - 75.000

Full-time

4 days ago

Job Description

A leading InsurTech company is seeking a Mid & Senior Data Engineer to join their expanding Data team. This remote role focuses on bridging data science and engineering, tackling complex data challenges while working with cutting-edge technologies like Python, Kafka, and AWS. Ideal candidates will have deep expertise in data processing and experience building analytics platforms. Join a dynamic team dedicated to transforming the insurance industry with innovative solutions.

Skills

  • Deep expertise in batch and distributed data processing.
  • Proven experience building Data Lake and Big Data analytics platforms.

Responsibilities

  • Bridge data science and engineering to solve complex data challenges.
  • Collaborate with data scientists and machine learning engineers.

Knowledge

Python
Data Processing
Data Modeling
DevOps

Tools

Kafka
Spark
Databricks
AWS

Job Description

Mid & Senior Data Engineer - InsurTech - Remote (Italy based)

Client: Futureheads Recruitment | B Corp

Location: Italy (Remote)

Job Category: Other

EU work permit required: Yes

Job Reference: 6837761301586903040337169

Job Views: 2

Posted: 12.05.2025

Expiry Date: 26.06.2025

Job Description:

We have partnered with an exciting business, currently over 350 people strong, that is expanding its Data team. They are a cutting-edge insurance company transforming the industry with a customer-first approach built on simplicity, transparency, and innovation. They focus on seamless digital claims, personalized coverage, and data-driven pricing, aiming to make insurance smarter and more accessible. This is a great opportunity for passionate individuals to join a dynamic, forward-thinking team.

The tech stack includes:

  • Python
  • Kafka/Spark
  • Databricks
  • AWS

Your role will involve bridging data science and engineering, focusing on complex data challenges. You will collaborate with data scientists and machine learning engineers to develop practical solutions that meet real business needs, driving innovation and shaping product and technology futures.

Key requirements:
  • Deep expertise in batch and distributed data processing, with experience in real-time streaming pipelines using Kafka, Flink, and Spark.
  • Proven experience building Data Lake and Big Data analytics platforms on cloud infrastructure.
  • Proficiency in Python, following best software engineering practices.
  • Experience with relational databases and data modeling, including RDBMS (e.g., Redshift, PostgreSQL) and NoSQL systems.
  • Understanding of DevOps, CI/CD pipelines, and Infrastructure as Code (IaC).
Nice to have:
  • Experience with cloud platforms, ideally AWS; GCP and Azure are also considered.
  • Experience with Databricks is a strong plus.
  • Familiarity with streaming technologies, particularly Kafka.