Senior Data Engineer (Informatica and MS Fabric)

Helius Technologies Pte Ltd

Singapore

On-site

SGD 60,000 - 80,000

Full time

21 days ago

Job summary

An innovative company is seeking skilled Data Engineers to design and manage data pipelines and architectures. This role focuses on building ETL workflows using Informatica and Azure Data Factory, optimizing data pipelines in Databricks, and leveraging Microsoft Fabric for data integration. Join a dynamic team where your expertise in cloud-based and hybrid environments will drive impactful data solutions. If you have a passion for transforming data into insights and enjoy working in a collaborative atmosphere, this opportunity is perfect for you.

Qualifications

  • Experience developing and managing ETL workflows with Informatica.
  • Proficiency in building and optimizing data pipelines in Databricks.

Responsibilities

  • Design and implement ETL workflows to support data integration and transformation.
  • Manage data pipelines in Azure Data Factory and Microsoft Fabric.

Skills

Data Engineering
ETL Development
Data Pipeline Management
Data Integration
Cloud Computing
Analytics

Tools

Informatica PowerCenter
Informatica Cloud
Informatica Intelligent Cloud Services (IICS)
Azure Data Factory (ADF)
Databricks
Microsoft Fabric

Job description

Job Requirements:

  1. Data Engineers with experience in Informatica and Microsoft Fabric are responsible for designing, implementing, and managing data pipelines and architectures that support data integration, transformation, and analytics in cloud-based or hybrid environments.
  2. Design and implement ETL (Extract, Transform, Load) workflows using Informatica PowerCenter, Informatica Cloud, or Informatica Intelligent Cloud Services (IICS).
  3. Build and manage ETL and ELT data pipelines using Azure Data Factory (ADF) to orchestrate data movement and transformation across on-premises and cloud environments.
  4. Create and optimize Spark-based data pipelines in Databricks, using notebooks or jobs to ingest, clean, and transform large datasets for analytics and reporting.
  5. Work with Microsoft Fabric to integrate and manage dataflows, datasets, and transformation processes across its unified data platform, helping build scalable data pipelines.
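The ETL (Extract, Transform, Load) pattern named throughout the responsibilities above can be illustrated with a minimal, tool-agnostic sketch. This is a generic Python example of the pattern only; the names (`extract`, `transform`, `load`, `RAW_CSV`) are illustrative and do not correspond to any Informatica, Azure Data Factory, or Databricks API.

```python
# Minimal sketch of the ETL pattern: extract raw records, transform
# (clean and cast types), then load into a target aggregate.
# All names are illustrative, not tied to any specific ETL tool.
import csv
import io

RAW_CSV = """order_id,amount,region
1001, 250.00 ,SG
1002,,SG
1003, 99.50 ,MY
"""

def extract(source: str) -> list[dict]:
    """Extract: read raw CSV text into dictionaries."""
    return list(csv.DictReader(io.StringIO(source)))

def transform(rows: list[dict]) -> list[dict]:
    """Transform: drop incomplete rows and cast amounts to float."""
    cleaned = []
    for row in rows:
        amount = row["amount"].strip()
        if not amount:
            continue  # skip records with missing amounts
        cleaned.append({
            "order_id": int(row["order_id"]),
            "amount": float(amount),
            "region": row["region"],
        })
    return cleaned

def load(rows: list[dict]) -> dict:
    """Load: aggregate into a target 'table' keyed by region."""
    totals: dict = {}
    for row in rows:
        totals[row["region"]] = totals.get(row["region"], 0.0) + row["amount"]
    return totals

print(load(transform(extract(RAW_CSV))))  # {'SG': 250.0, 'MY': 99.5}
```

In production tools like Informatica or ADF, each of these three stages maps to a configured mapping, activity, or notebook task rather than a hand-written function, but the data flow is the same.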