
A leading recruitment agency is seeking a Data Engineer to provision and configure Microsoft Fabric capacities and manage data integrations. Responsibilities include establishing connectivity to external data sources, designing ingestion pipelines, and supporting production deployments. The ideal candidate will have strong proficiency in Azure, Databricks, and Python, along with experience managing data quality and transformation processes. This role is essential to the reliability and scalability of the client's data operations.
Responsibilities:
Platform Provisioning & Setup
Provision and configure Microsoft Fabric capacities, workspaces, semantic models, and supporting Azure services (e.g., Storage, Key Vault).
Deploy the Agile Insights Fabric Admin and Governance Pack and the ETL framework to support robust data platform operations.
Data Integration & Manipulation
Establish connectivity between the data platform and external data sources, ensuring secure and reliable access.
Design and implement ingestion pipelines for structured/unstructured data.
Develop data transformations across layers (bronze → silver → gold) to ensure scalable, high-quality, and business-ready datasets.
Manage reference data integration and ensure alignment across domains.
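To illustrate the bronze → silver → gold layering mentioned above, the sketch below shows the idea in plain Python. The field names and rules are hypothetical examples only; in a Fabric or Databricks environment this would typically be done with Spark DataFrames and Delta tables rather than Python lists.

```python
# Illustrative medallion-style transform (bronze -> silver -> gold).
# Schema and cleansing rules are hypothetical, for illustration only.

def bronze_to_silver(raw_rows):
    """Clean raw records: drop rows missing an id, normalise types."""
    silver = []
    for row in raw_rows:
        if row.get("order_id") is None:
            continue  # a real pipeline would quarantine rejected rows
        silver.append({
            "order_id": str(row["order_id"]).strip(),
            "region": (row.get("region") or "UNKNOWN").upper(),
            "amount": float(row.get("amount", 0)),
        })
    return silver

def silver_to_gold(silver_rows):
    """Aggregate cleaned rows into a business-ready summary per region."""
    totals = {}
    for row in silver_rows:
        totals[row["region"]] = totals.get(row["region"], 0.0) + row["amount"]
    return totals

raw = [
    {"order_id": 1, "region": "emea", "amount": "10.5"},
    {"order_id": None, "region": "emea", "amount": "3.0"},  # rejected in silver
    {"order_id": 2, "region": "apac", "amount": 7},
]
print(silver_to_gold(bronze_to_silver(raw)))  # {'EMEA': 10.5, 'APAC': 7.0}
```

The key point of the layering is that each hop has one job: silver standardises and filters, gold aggregates for consumption, so quality issues can be traced to a single layer.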
Validation & Governance
Support UAT cycles, identifying and addressing data quality and integration issues.
Facilitate Change Advisory Board (CAB) endorsement, ensuring architecture and deployment readiness.
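As a sketch of the kind of data-quality check run during UAT cycles like those above, the snippet below profiles missing values per required field. Field names and the sample data are hypothetical; real UAT suites would run against the actual datasets and thresholds agreed with the client.

```python
# Hypothetical UAT-style data-quality check: count missing values
# for each required field so a dataset can be signed off or rejected.

def quality_report(rows, required_fields):
    """Return the number of missing (None or empty) values per required field."""
    report = {field: 0 for field in required_fields}
    for row in rows:
        for field in required_fields:
            if row.get(field) in (None, ""):
                report[field] += 1
    return report

sample = [
    {"customer_id": "C1", "email": "a@example.com"},
    {"customer_id": "", "email": None},  # both fields fail the check
]
print(quality_report(sample, ["customer_id", "email"]))
# {'customer_id': 1, 'email': 1}
```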
Production Deployment & Knowledge Transfer
Deploy solutions into production with proactive monitoring and operational continuity.
Produce clear as-built documentation of data architecture, pipelines, and processes.
Conduct knowledge transfer sessions with client teams to ensure sustainable operations.
Requirements:
Must have: Azure, Databricks & Python
Proven experience as a Data Engineer working with Microsoft Fabric, Azure Data Services, and modern data platforms.
Strong knowledge of data ingestion, transformation, and orchestration frameworks.
Experience building data pipelines across bronze, silver, and gold layers.
Proficiency in SQL, data modelling, and handling structured/unstructured data.
Familiarity with security, governance, and deployment best practices in enterprise data environments.
Hands-on experience supporting UAT, CAB processes, and production deployments.