Senior Data Engineer

Hitachi Solutions

Paris

Remote

EUR 60,000 - 80,000

Full-time

8 days ago

Job Summary

A technology consulting firm is seeking a Senior Data Engineer to design and build data pipelines and platforms in the cloud, utilizing technologies like Apache Spark and Azure. The candidate will collaborate with various teams to ensure robust solutions. Applicants must have strong communication skills in both French and English. This is a permanent, home-based role with benefits such as lunch vouchers and an annual bonus.

Benefits

Lunch vouchers
Annual bonus

Qualifications

  • Strong expertise in Apache Spark development and performance tuning.
  • Hands-on experience with Microsoft Fabric, especially Lakehouses.
  • Experience with Azure Data Factory for orchestration.

Responsibilities

  • Design and implement data platform solutions using Azure and Databricks.
  • Build scalable data pipelines using Spark.
  • Collaborate with architects and clients.

Knowledge

Apache Spark
Microsoft Fabric
Databricks
Azure Data Factory
Data Lakehouse architecture
Fluency in French
Fluency in English

Tools

Azure Synapse Analytics
Power BI
Bicep
Terraform

Job Description

About the Role

We are seeking a highly skilled and client-oriented Senior Data Engineer with expertise in Apache Spark, Azure Synapse Analytics, Databricks, and Microsoft Fabric. The successful candidate will design, build, and optimize end-to-end data pipelines and platforms in the cloud, collaborating with cross-functional teams including architects, data scientists, and business stakeholders to ensure solutions are robust, scalable, and aligned with business needs.

Key Responsibilities

  • Design and implement modern data platform solutions using Azure, Databricks, Synapse, and Microsoft Fabric.
  • Build scalable, high-performance data pipelines using Spark (see the sketch after this list).
  • Implement data ingestion from various sources including structured, semi-structured, and unstructured data.
  • Define and implement data models and transformation logic to support analytics and reporting.
  • Develop and manage data integration and orchestration using Azure Data Factory or Microsoft Fabric Data Pipelines.
  • Ensure data quality, integrity, lineage, and governance using best practices and Azure-native services.
  • Collaborate with architects and clients to define solution architecture and implementation roadmaps.
  • Mentor junior team members and contribute to internal knowledge sharing.
  • Participate in pre-sales proposals and client workshops to shape future engagements.
  • Continuously explore new tools and technologies to stay at the forefront of data engineering.
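
As a rough illustration of the pipeline work described above, a minimal PySpark batch job might look like the sketch below. The storage path, column names, and table name are placeholders, and Delta Lake is assumed to be available (as it is on Databricks and in Fabric Lakehouses).

  # Minimal illustrative batch pipeline: ingest raw CSV files, clean a few
  # columns, and persist the result as a Delta table. All names are placeholders.
  from pyspark.sql import SparkSession
  from pyspark.sql import functions as F

  spark = SparkSession.builder.appName("orders-ingest").getOrCreate()

  raw = (
      spark.read
      .option("header", "true")
      .csv("abfss://raw@examplelake.dfs.core.windows.net/orders/")  # hypothetical ADLS path
  )

  cleaned = (
      raw.withColumn("order_ts", F.to_timestamp("order_ts"))
         .withColumn("amount", F.col("amount").cast("double"))
         .dropDuplicates(["order_id"])
  )

  # Delta is the default table format on Databricks and in Fabric Lakehouses.
  cleaned.write.format("delta").mode("overwrite").saveAsTable("silver.orders")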

Qualifications

Required Technical Skills

  • Strong expertise in Apache Spark development, performance tuning, and optimization (PySpark preferred; Scala or SQL relevant).
  • Hands-on experience with Microsoft Fabric, especially Lakehouses, Data Pipelines, and Notebooks.
  • Deep knowledge of Databricks, including workflows, Delta Lake, and Unity Catalog.
  • Experience with Azure Data Factory or Fabric Data Factory for orchestration and data movement.
  • Solid understanding of Data Lakehouse architecture, Data Modeling (Dimensional/Star Schema), and ETL/ELT best practices (an incremental-load sketch follows this list).
  • Familiarity with CI/CD practices for data solutions using Azure DevOps.
  • Understanding of data governance, security, and RBAC within Azure.
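
By way of example, an incremental (upsert) load into a Delta table is a common ETL/ELT pattern in Lakehouse architectures; a minimal sketch follows. Table and column names are hypothetical, and the delta-spark package (standard on Databricks) is assumed.

  # Illustrative incremental load: merge a batch of updates into a target
  # Delta table, updating existing rows and inserting new ones.
  from pyspark.sql import SparkSession
  from delta.tables import DeltaTable

  spark = SparkSession.builder.appName("orders-upsert").getOrCreate()

  updates = spark.read.table("bronze.orders_updates")  # hypothetical staging table
  target = DeltaTable.forName(spark, "silver.orders")  # hypothetical target table

  (
      target.alias("t")
      .merge(updates.alias("u"), "t.order_id = u.order_id")
      .whenMatchedUpdateAll()
      .whenNotMatchedInsertAll()
      .execute()
  )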

Preferred Skills

  • Experience in consulting or professional services environments.
  • Knowledge of Power BI, especially for working with Microsoft Fabric datasets.
  • Familiarity with Infrastructure-as-Code (IaC) tools like Bicep or Terraform.
  • Understanding of real-time data processing with Azure Event Hub, Stream Analytics, or similar tools (a streaming sketch follows this list).
  • Exposure to Machine Learning pipelines or supporting Data Science teams is a plus.
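
As a sketch of the real-time side, Spark Structured Streaming can read from Azure Event Hubs through its Kafka-compatible endpoint and land events in a Delta table. The namespace, topic, table, and checkpoint path below are placeholders, and the SASL connection string is deliberately omitted.

  # Illustrative streaming ingestion from Event Hubs (Kafka endpoint) into Delta.
  from pyspark.sql import SparkSession
  from pyspark.sql import functions as F

  spark = SparkSession.builder.appName("telemetry-stream").getOrCreate()

  events = (
      spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "<namespace>.servicebus.windows.net:9093")
      .option("subscribe", "telemetry")
      .option("kafka.security.protocol", "SASL_SSL")
      .option("kafka.sasl.mechanism", "PLAIN")
      # kafka.sasl.jaas.config (Event Hubs connection string) omitted on purpose
      .load()
  )

  parsed = events.select(F.col("value").cast("string").alias("payload"))

  query = (
      parsed.writeStream
      .format("delta")
      .option("checkpointLocation", "/checkpoints/telemetry")  # hypothetical path
      .toTable("bronze.telemetry")
  )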

Non-Technical Skills

  • Strong communication and client-facing skills; ability to articulate complex ideas clearly to non-technical stakeholders.
  • Consultative mindset with the ability to assess client needs and propose tailored data solutions.
  • Experience working in agile and delivery-oriented teams.
  • Strong problem-solving and analytical skills.
  • Ability to work independently and collaboratively.
  • Fluency in French and English (both written and verbal) is required.

Additional Information

Recruitment Process:

  • Telephone interview with Recruiter or HR
  • Technical interview #1 in English with a Data/AI Architect
  • Technical case study interview #2 in French or English with Architect & Director
  • Face-to-face interview with the General Manager, France

Contract:

  • Permanent, home-based contract
  • Immediate availability

Benefits:

  • Lunch vouchers
  • Annual bonus

By applying, you consent to Hitachi Solutions Europe Limited collecting and storing your personal data in accordance with our Privacy Policy (Politique de confidentialité Hitachi Solutions).

Beware of scams:

Our recruiting team communicates via our official @ domain email addresses and SmartRecruiters. All legitimate offers originate from our @ domain emails. Be cautious of offers from other domains.

Remote Work: Yes (full-time, home-based)


Vacancy: 1
