Mid & Senior Data Engineer - InsurTech - Remote (Italy based)

JR Italy

Palermo

Remote

EUR 50.000 - 80.000

Full-time

4 days ago

Job Description

A leading InsurTech company is seeking a Mid & Senior Data Engineer to join their expanding Data team in a remote role based in Italy. This is an opportunity to work in a dynamic environment focused on innovation and customer-centric solutions. The ideal candidate will have extensive experience in data processing, cloud infrastructure, and software engineering best practices. You will play a key role in developing solutions that enhance the insurance experience through data-driven insights.

Skills

  • Extensive experience in batch and distributed data processing.
  • Proven ability to build Data Lake and Big Data analytics platforms.

Responsibilities

  • Bridge data science and engineering, tackling complex data challenges.
  • Collaborate with data scientists and machine learning engineers.

Knowledge

Python
Data Processing
DevOps

Tools

Kafka
Spark
AWS
Databricks

Client: Futureheads Recruitment | B Corp
Location: Palermo
Job Category: Other
EU work permit required: Yes
Job Reference: 6837761301586903040337141
Posted: 12.05.2025
Expiry Date: 26.06.2025

Job Description:

We have partnered with an exciting business that is growing its Data team, currently comprising over 350 members!

They are a cutting-edge insurance company transforming the industry with a customer-first approach. Their focus is on simplicity, transparency, and innovation: seamless digital claims, personalized coverage, and fair, data-driven pricing. As they expand rapidly, they are looking for passionate individuals to join their mission of making insurance smarter and more accessible. If you're after a dynamic, forward-thinking environment, this is the place for you!

The tech stack includes:

  • Python
  • Kafka/Spark
  • Databricks
  • AWS

Your role involves bridging data science and engineering, tackling complex data challenges, and collaborating with data scientists and machine learning engineers to develop practical solutions that meet real business needs. Your contributions will foster impactful innovation and influence the future of their products and technology.

Key requirements:

  • Extensive experience in batch and distributed data processing, including near real-time streaming pipelines with Kafka, Flink, and Spark.
  • Proven ability to build Data Lake and Big Data analytics platforms on cloud infrastructure.
  • Proficiency in Python, following best practices in software engineering.
  • Experience with relational databases and data modeling, including RDBMS (e.g., Redshift, PostgreSQL) and NoSQL systems.
  • Strong understanding of DevOps, CI/CD pipelines, and Infrastructure as Code (IaC) practices.

Nice to have:

  • Experience with cloud platforms (AWS preferred, GCP and Azure are also acceptable).
  • Experience with Databricks is highly advantageous.
  • Familiarity with streaming technologies, especially Kafka.