
Mid & Senior Data Engineer - InsurTech - Remote (Italy based)

JR Italy

Padova

Remote

EUR 50,000 - 80,000

Full-time

4 days ago

Job description

A leading InsurTech company is seeking a Mid & Senior Data Engineer to join their expanding Data team. This role focuses on bridging data science and engineering, tackling complex data challenges. You will work with cutting-edge technologies to drive innovation in the insurance sector, making it smarter and more accessible.

Skills

  • Deep expertise in batch and distributed data processing.
  • Proven experience building Data Lake and Big Data analytics platforms.

Responsibilities

  • Bridging the gap between data science and engineering.
  • Collaborating with data scientists to develop technical solutions.

Knowledge

Python
Data Processing
DevOps

Tools

Kafka
Spark
AWS
Databricks


We have partnered with an exciting business that is expanding its Data team, currently comprising over 350 members!

They are a cutting-edge insurance company transforming the industry with a customer-first approach. Their focus is on simplicity, transparency, and innovation—offering seamless digital claims, personalized coverage, and fair, data-driven pricing. As they grow rapidly, they seek passionate individuals to join their mission of making insurance smarter and more accessible. If you desire a dynamic, forward-thinking team, this is the place for you!

The tech stack:
  • Python
  • Kafka/Spark
  • Databricks
  • AWS

Your role will involve bridging the gap between data science and engineering, focusing on complex data challenges. You will collaborate closely with data scientists and machine learning engineers to develop practical, technical solutions that meet real business needs. Your contributions will drive impactful innovation and shape the future of their products and technology.

Key requirements:
  • Deep expertise in batch and distributed data processing, including near real-time streaming pipelines using technologies like Kafka, Flink, and Spark (a minimal sketch of such a pipeline follows this list).
  • Proven experience building Data Lake and Big Data analytics platforms on cloud infrastructure.
  • Proficiency in Python, with strong adherence to software engineering best practices.
  • Experience with relational databases and data modeling, including RDBMS (e.g., Redshift, PostgreSQL) and NoSQL systems.
  • Solid understanding of DevOps, CI/CD pipelines, and Infrastructure as Code (IaC) practices.
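
To give a concrete picture of the streaming requirement above, here is a minimal sketch of the kind of pipeline it describes: a PySpark Structured Streaming job that reads JSON claim events from Kafka and appends them to Parquet. The broker address, topic name, schema, and paths are hypothetical placeholders, not details from this posting, and running it requires Spark's Kafka connector package (spark-sql-kafka).

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, from_json
    from pyspark.sql.types import StructType, StructField, StringType, DoubleType

    spark = SparkSession.builder.appName("claims-stream").getOrCreate()

    # Hypothetical schema for incoming claim events (placeholder fields).
    schema = StructType([
        StructField("claim_id", StringType()),
        StructField("policy_id", StringType()),
        StructField("amount", DoubleType()),
    ])

    # Read a stream of records from a Kafka topic (placeholder broker/topic).
    raw = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "claims")
        .load()
    )

    # Kafka delivers the payload as bytes; decode it and parse the JSON body.
    events = (
        raw.select(from_json(col("value").cast("string"), schema).alias("e"))
        .select("e.*")
    )

    # Append parsed events to storage, with checkpointing for fault tolerance
    # (placeholder output and checkpoint paths).
    query = (
        events.writeStream.format("parquet")
        .option("path", "/data/claims")
        .option("checkpointLocation", "/chk/claims")
        .outputMode("append")
        .start()
    )
    query.awaitTermination()
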
Nice to have:
  • Experience with cloud platforms such as AWS, GCP, or Azure.
  • Experience with Databricks is a strong plus.
  • Familiarity with streaming technologies, especially Kafka.