
Mid & Senior Data Engineer - InsurTech - Remote (Italy based)

JR Italy

Catania

Remote

EUR 50,000 - 75,000

Full time

12 days ago


Job description

A leading InsurTech company is seeking a Mid & Senior Data Engineer to join their growing Data team. This remote position based in Catania offers the chance to work with innovative technologies and contribute to transforming the insurance industry. The ideal candidate will have expertise in data processing and cloud infrastructure, working closely with data scientists to develop impactful solutions.

Skills

  • Expertise in batch and distributed data processing.
  • Experience building Data Lake and Big Data platforms.

Responsibilities

  • Bridge data science and engineering, focusing on complex data challenges.
  • Collaborate with data scientists and ML engineers.

Knowledge

Python
Data Processing
Data Modeling
DevOps

Tools

Kafka
Spark
Databricks
AWS

Client: Futureheads Recruitment | B Corp
Location: Italy (Remote, based in Catania)
Job Category: Other
EU work permit required: Yes
Job Reference: 6837761301586903040337122


Posted: 12.05.2025
Expiry Date: 26.06.2025

Job Description:


We have partnered with an exciting business that is growing its Data team; the company currently has over 350 members.

They are a cutting-edge insurance company transforming the industry with a customer-first approach built on simplicity, transparency, and innovation: digital claims, personalized coverage, and data-driven pricing. They are looking for passionate people to join their mission of making insurance smarter and more accessible. If you want a dynamic, forward-thinking team, this is the place to be.

The tech stack:

  • Python
  • Kafka/Spark
  • Databricks
  • AWS

You will bridge data science and engineering, focusing on complex data challenges, and collaborate with data scientists and ML engineers to develop solutions that meet business needs, shaping the evolution of both product and technology.

Key requirements:

  • Expertise in batch and distributed data processing, including near-real-time streaming with tools such as Kafka, Flink, or Spark
  • Experience building Data Lake and Big Data platforms on cloud infrastructure
  • Proficiency in Python, following software engineering best practices
  • Knowledge of relational databases and data modeling, with hands-on experience in RDBMS (e.g., Redshift, PostgreSQL) and NoSQL systems
  • Understanding of DevOps, CI/CD, and Infrastructure as Code (IaC)

Nice to haves:

  • Experience with AWS (they use it), GCP, or Azure
  • Experience with Databricks is a strong plus
  • Familiarity with streaming technologies, especially Kafka