Senior Data Engineer

Arduino

Plasencia

On-site

EUR 45,000 - 70,000

Full-time

6 days ago

Job description

A leading open-source electronics company in Emilia-Romagna, Italy, is seeking a Data Engineer to scale its data stack. You will build ETL pipelines, process data, and create insightful dashboards. The ideal candidate has 5+ years of experience in data engineering, strong skills in Python and SQL, and the ability to work in a dynamic environment. This role offers a competitive salary and a flexible work culture.

Benefits

Competitive salary
Flexible working hours
Budget for training
Ping pong and foosball tournaments
Healthy snacks in the office

Skills

  • 5+ years of experience in data engineering or analytics engineering.
  • Experience with Python, Go, or Scala for data processing.
  • Fluent in English, both written and spoken.

Responsibilities

  • Build and maintain Python ETL pipelines and serverless data processing.
  • Create dashboards and reports using Looker Studio.
  • Ensure data quality, consistency, and reliability.

Knowledge

Python
SQL
Communication
Data modeling
Teamwork

Education

BS/MS in Computer Science or related fields

Tools

GitHub
Looker Studio
Apache Airflow
GCP
AWS

Overview

Arduino’s mission is to enable people to enhance their lives through accessible open-source electronics and digital technologies. Since 2005, millions of people, from kids and students to engineers and professionals around the world, have been using Arduino to innovate in the fields of music, games and toys, smart homes, farming, autonomous vehicles, and many more.

We’re looking for a Data Engineer / Analytics Engineer to help us scale our modern data stack — someone who can work across ingestion, modeling, and visualization, and partner with teams across the company to unlock value from data. The ideal candidate is autonomous, has strong communication skills and the ability to work effectively in a dynamic, multidisciplinary and international environment.

What we offer
  • A challenging career path in a rapidly growing company with a modern vision and talented teams.
  • A competitive salary (and benefits) that values people skills and experience.
  • A young and inspiring work environment that encourages diversity and cultural exchange.
  • Individual growth objectives with a dedicated budget for learning / training.
  • Flexible working hours and locations; we value work-life balance!
  • A work opportunity in a mission-driven company committed to empowering people around the world.
And if you live near one of our offices…
  • Ping pong and foosball tournaments (a sport or gym benefit is also included for everyone!).
  • Seasonal celebrations, happy hours, and everyday drinks and snacks at the office.
  • Sunny rooftop lunch breaks and hammocks for relaxation and concentration.
What you'll work on
  • Build and maintain Python ETL pipelines and serverless data processing (a minimal sketch follows this list).
  • Integrate external data sources via REST APIs, cloud exports, and internal systems.
  • Model and transform data to support analysis, reporting, and monitoring.
  • Create dashboards and reports using Looker Studio.
  • Collaborate with stakeholders across product, marketing, cloud, and leadership.
  • Ensure data quality, consistency, and reliability through testing and monitoring.
  • Document and manage code through GitHub and version-controlled workflows.
  • Contribute to architectural decisions across GCP and AWS environments.
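To give a concrete flavor of the first two bullets, here is a minimal sketch of a Python ETL step that pulls records from a REST API and appends them to BigQuery, in line with the stack described below. The endpoint URL, table ID, response shape, and field names are all hypothetical, chosen only to illustrate the pattern, not Arduino's actual pipeline code.

```python
# Minimal ETL sketch: REST API -> BigQuery.
# The endpoint URL, table ID, and field names below are hypothetical.
import requests
from google.cloud import bigquery

API_URL = "https://api.example.com/v1/orders"   # hypothetical source API
TABLE_ID = "my-project.analytics.orders_raw"    # hypothetical destination table


def extract(session: requests.Session) -> list[dict]:
    """Fetch one page of records from the source API."""
    resp = session.get(API_URL, params={"page_size": 500}, timeout=30)
    resp.raise_for_status()
    return resp.json()["results"]  # assumes a {"results": [...]} response shape


def transform(rows: list[dict]) -> list[dict]:
    """Keep only the fields the warehouse table expects."""
    return [
        {"id": r["id"], "amount": r["amount"], "created_at": r["created_at"]}
        for r in rows
    ]


def load(rows: list[dict]) -> None:
    """Append the cleaned rows to BigQuery."""
    client = bigquery.Client()
    job = client.load_table_from_json(rows, TABLE_ID)
    job.result()  # wait for the load job to finish, raising on error


if __name__ == "__main__":
    with requests.Session() as session:
        load(transform(extract(session)))
```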

Our Stack
  • Warehouse: BigQuery
  • Orchestration: Apache Airflow (see the DAG sketch after this list)
  • Visualization: Looker Studio (Power BI is a plus)
  • Tracking: Segment
  • Integrations: REST APIs, internal systems
  • Version Control: GitHub
  • Cloud: GCP & AWS
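To show how the orchestration layer ties these pieces together, here is a minimal Apache Airflow (2.4+) DAG that could schedule the extract-and-load step sketched above, followed by a quality check. The dag_id, schedule, and task callables are again hypothetical placeholders, not an actual Arduino DAG.

```python
# Minimal Airflow 2.4+ DAG sketch; dag_id, schedule, and callables are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_and_load() -> None:
    """Placeholder for the REST-API-to-BigQuery step sketched earlier."""


def check_quality() -> None:
    """Placeholder for row-count / freshness checks on the loaded table."""


with DAG(
    dag_id="orders_daily",          # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)
    quality = PythonOperator(task_id="check_quality", python_callable=check_quality)

    ingest >> quality  # run quality checks only after the load succeeds
```
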
What you bring
  • 5+ years of experience in data engineering, analytics engineering, or a similar role.
  • BS/MS in Computer Science or other related technical fields.
  • Programming skills in Python, Go, or Scala for data processing.
  • Fluency in SQL and a strong foundation in data modeling.
  • Experience ingesting and processing data from third-party APIs.
  • Comfort working with cloud infrastructure (especially GCP or AWS).
  • Proficiency with Git for code collaboration and version control.
  • A preference for simplicity, maintainability, and clarity in your work.
  • The ability to communicate effectively with stakeholders through well-structured dashboards.
  • Fluency in English, written and spoken.
  • Great teamwork and a positive attitude.
Bonus Points
  • Experience with Apache Airflow.
  • Knowledge of data privacy, security, or GDPR-compliant data practices.
  • AWS Glue and Spark experience for large-scale ETL.
  • Familiarity with Power BI or other enterprise BI platforms.
  • Exposure to customer data platforms like Segment or event-based tracking.
  • Understanding of experimentation, funnel analysis, or retention metrics.
  • Use of CI/CD workflows in data engineering or analytics contexts.

If you're excited about this role or about our company, but your experience doesn't align perfectly with the points outlined above, we strongly encourage you to apply anyway. If we feel you aren't the right fit for this job, we may have something else for you!
