Job Description
About the Role
We are looking for a highly skilled and client-oriented Senior Data Engineer with deep expertise in Apache Spark, Azure Synapse Analytics, Databricks, and Microsoft Fabric. You will be responsible for designing, building, and optimizing end-to-end data pipelines and platforms in the cloud. You will work with cross-functional teams including architects, data scientists, and business stakeholders to ensure our solutions are robust, scalable, and business-aligned.
Key Responsibilities
- Design and implement modern data platform solutions using Azure, Databricks, Synapse, and Microsoft Fabric.
- Build scalable, high-performance data pipelines using Spark.
- Implement data ingestion from various sources, including structured, semi-structured, and unstructured data.
- Define and implement data models and transformation logic to support analytics and reporting use cases.
- Develop and manage data integration and orchestration using Azure Data Factory or Microsoft Fabric Data Pipelines.
- Ensure data quality, integrity, lineage, and governance using best practices and Azure-native services.
- Collaborate with architects and clients to define solution architecture and implementation roadmaps.
- Mentor junior team members and contribute to internal knowledge sharing.
- Participate in pre-sales, proposal writing, and client workshops to shape future engagements.
- Continuously explore new tools and technologies to stay at the forefront of the data engineering domain.
Qualifications
Required Technical Skills
- Strong expertise in Apache Spark development, performance tuning, and optimisation (PySpark preferred; Scala or SQL also relevant).
- Hands-on experience in Microsoft Fabric, particularly with Lakehouses, Data Pipelines, and Notebooks.
- Deep knowledge of Databricks, including development workflows, Delta Lake, and Unity Catalog.
- Experience with Azure Data Factory and/or Fabric Data Factory for orchestration and data movement.
- Solid understanding of Data Lakehouse architecture, data modeling (dimensional/star schema), and ETL/ELT best practices.
- Familiarity with CI/CD for data solutions using Azure DevOps.
- Understanding of data governance, security, and RBAC within the Azure ecosystem.
Preferred Skills
- Experience working in a consulting or professional services environment.
- Knowledge of Power BI, especially for working with Microsoft Fabric datasets.
- Familiarity with Infrastructure-as-Code (IaC) for Azure (e.g., Bicep, Terraform).
- Understanding of real-time data processing with Azure Event Hubs, Stream Analytics, or similar.
- Exposure to Machine Learning pipelines or supporting Data Science teams is a plus.
Non-Technical Skills
- Strong communication and client-facing skills; able to articulate complex ideas clearly to non-technical stakeholders.
- Consultative mindset with the ability to assess client needs and propose tailored data solutions.
- Experience working in agile, delivery-oriented teams.
- Strong problem-solving ability and analytical thinking.
- Comfortable working both independently and collaboratively.
- Fluency in French and English (both written and verbal) is required.
Recruitment Process
- Telephone interview with a Recruiter or HR
- Technical interview #1 - in English with one of our Data / AI Architects
- Technical case study interview #2 - in French or English with our Architect & Director
- Face-to-face interview with the General Manager France
Contract
- Permanent contract
- Home-based contract
- Available to start immediately
Benefits
- Lunch vouchers
- Annual bonus
In applying for a role with Hitachi Solutions Europe Limited and/or its affiliates ("Hitachi") you consent to Hitachi collecting and storing your personal information (including your name, job title and email address) in relation to this role and any others that may be suitable in the future.