

Data Engineer - Data Platforms

IBM

Naples

On-site

EUR 40,000 - 60,000

Full-time

Today


Job Description

A leading technology firm is seeking a Data Engineer to join their team in Naples, Italy. In this role, you will collaborate with technical teams to build and optimize data ingestion pipelines and workflows. The ideal candidate will have at least two years of experience in data engineering, strong programming skills in Python and SQL, and familiarity with cloud environments. This full-time position offers opportunities for continuous learning and innovation within a dynamic environment.

Data Engineer - Data Platforms

Join to apply for the Data Engineer - Data Platforms role at IBM.

Introduction

In this role, you will join one of our IBM Consulting Client Innovation Centers (Delivery Centers), helping deliver deep technical and industry expertise to a wide range of public and private sector clients. You will be part of a team that accelerates innovation and adoption of emerging technologies across hybrid cloud and AI ecosystems.

You will work with experts across multiple industries to shape and improve data platforms for some of the most impactful and innovative organizations in the world. Our partnerships with leading technology providers, combined with IBM’s own software and Red Hat capabilities, will empower you to deliver high-quality, scalable, and secure data solutions.

A mindset driven by curiosity, engineering excellence, and continuous learning will be essential. You will be encouraged to explore ideas beyond your direct responsibilities, embrace modern engineering practices, and contribute to transformative client outcomes.

Your Role And Responsibilities
  • Collaborate with technical and business teams to understand data requirements, integration needs, and platform constraints.
  • Build, maintain, and optimize data ingestion pipelines and ETL/ELT workflows across heterogeneous systems.
  • Support the orchestration of data workflows, ensuring reliability, monitoring, and scalability.
  • Contribute to the deployment and operational management of data and AI solutions in cloud-based environments.
  • Assist in troubleshooting data quality issues, ingestion failures, and infrastructure-related incidents.
  • Work closely with architects and senior engineers to enable MLOps practices, ensuring smooth integration between data pipelines, model deployment, and cloud services.
  • Help document technical designs, configuration details, and operational procedures.
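The pipeline responsibilities above can be sketched in miniature. The snippet below is a hypothetical, self-contained Python example (not IBM's actual stack): it extracts raw records, applies a type-casting transform with a basic data-quality guard, and loads the validated rows into an in-memory target.

```python
def extract():
    """Simulate pulling raw records from a source system."""
    return [
        {"id": 1, "amount": "10.5"},
        {"id": 2, "amount": "3.0"},
        {"id": 3, "amount": "n/a"},  # malformed on purpose
    ]

def transform(rows):
    """Cast fields to proper types; skip rows that fail validation."""
    clean = []
    for row in rows:
        try:
            clean.append({"id": int(row["id"]), "amount": float(row["amount"])})
        except (KeyError, ValueError):
            continue  # data-quality guard: drop malformed records
    return clean

def load(rows, target):
    """Append validated rows to an in-memory 'warehouse' table."""
    target.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(extract()), warehouse)
print(loaded)  # 2 of 3 rows survive validation
```

In production, steps like these would typically be expressed as tasks in an orchestration tool such as Airflow, with scheduling, retries, and monitoring handled by the platform rather than by the script itself.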
Preferred Education

Bachelor's Degree

Required Technical And Professional Expertise
  • Minimum 2 years of experience working in data engineering, data platform development, or data integration.
  • Strong programming skills in Python, SQL, and optionally Java for distributed processing or backend components.
  • Experience building and maintaining data pipelines, ETL/ELT processes, and ingestion frameworks.
  • Knowledge of workflow orchestration tools (e.g., Airflow, IBM DataStage Flow Designer, Prefect, Luigi).
  • Understanding of cloud environments, ideally IBM Cloud, including object storage, compute, IAM, and networking fundamentals.
  • Familiarity with containerization technologies such as Docker and Kubernetes.
  • Exposure to MLOps concepts, CI/CD, or automation pipelines supporting ML lifecycle operations.
  • Solid understanding of database principles, data modeling concepts, and best practices for data quality and reliability.
Preferred Technical And Professional Experience
  • Agile mindset: ability to adapt quickly, learn continuously, and use critical thinking to solve engineering challenges.
  • Experience with data governance, metadata management, or cataloging tools.
  • Exposure to logging, monitoring, and observability stacks (Prometheus, Grafana, ELK, OpenTelemetry).
  • Familiarity with cloud-native architectures, serverless functions, or distributed data processing (Spark, Flink, etc.).
  • Interest in pursuing cloud, DevOps, or MLOps certifications (IBM Cloud, Red Hat, AWS, Azure, GCP).
  • Availability to travel when required.
Seniority level

Mid‑Senior level

Employment type

Full‑time

Job function

Information Technology

Industries

IT Services and IT Consulting

