Senior Data Engineer

TALENTVIS SINGAPORE PTE. LTD.

Singapore

On-site

SGD 60,000 - 80,000

Part time

9 days ago

Job summary

A global IT consulting client in Singapore is looking for a Senior Data Engineer for a 3-month contract. This role emphasizes acquiring, storing, and processing large volumes of data, leveraging Azure Fabric for building scalable data infrastructure. Candidates should have 6+ years of experience and deep knowledge in big data tools and cloud platforms.

Qualifications

  • 6+ years of experience in data engineering roles.
  • Hands-on experience with Azure Fabric.
  • Experience with data lakes and data warehouses.

Responsibilities

  • Design and maintain scalable data pipelines.
  • Build data lakes and warehouses for optimized storage.
  • Implement data governance and quality checks.

Skills

Advanced knowledge of Azure Fabric
Big data ecosystem understanding
Power BI expertise
ETL pipeline creation
Proficiency in NoSQL databases
Experience with cloud platforms

Education

Bachelor’s, Master’s, or Ph.D. in Computer Science or related field

Tools

Azure
Hadoop
Spark
MongoDB
Kafka

Job description

Talentvis is looking for a Senior Data Engineer experienced in Azure Fabric for a 3-month contract position with our esteemed global IT consulting client.

YOUR ROLE:

As a Senior Data Engineer, you'll be responsible for acquiring, storing, governing, and processing large volumes of structured and unstructured data. You’ll bring strategic insight into big data technologies and help define enterprise-scale data foundations like data lakes. A strong command of Azure Fabric is mandatory, as you'll be leveraging it extensively to build scalable and resilient data infrastructure. You'll work closely with cross-functional teams—including Data Intelligence, Research, UX, Digital Tech, and Agile—to deliver intelligent and high-impact solutions for our clients.

KEY RESPONSIBILITIES:
  • Design, develop, and maintain robust, scalable data pipelines to ingest, transform, and process structured and unstructured data from various sources.
  • Build and manage data lakes and warehouses with optimized storage and retrieval mechanisms. Create data models tailored to business and analytics needs.
  • Use Azure Fabric (mandatory) to design and deploy efficient, scalable, and secure data architectures in the cloud.
  • Leverage Azure and other cloud platforms to build high-performing, cost-effective, and reliable data systems.
  • Establish data governance frameworks, validation processes, and quality checks to ensure data integrity, accuracy, and compliance.
  • Monitor and continuously improve the performance and efficiency of data infrastructure and pipelines.
  • Collaborate with data scientists, analysts, and other stakeholders to deliver data solutions aligned with business goals.
  • Build the data presentation layer and create visualizations using Power BI, Tableau, or similar tools.
  • Stay updated with the latest trends and tools in big data and data engineering to evaluate and adopt innovative technologies.
  • Support ongoing development and optimization of the organization’s data infrastructure.

ABOUT YOU:
  • Bachelor’s, Master’s, or Ph.D. in Computer Science, Information Management, or related discipline, with 6+ years of experience.
  • Deep understanding of the big data ecosystem and distributed computing principles.
  • Proven hands-on experience with Azure Fabric is mandatory.
  • Experience with big data tools such as Hadoop, Spark, and distributions like Cloudera, Hortonworks, or MapR.
  • Skilled in creating ETL and batch processing pipelines across multiple data sources.
  • Proficient in NoSQL databases such as MongoDB, Cassandra, Neo4J, or ElasticSearch.
  • Familiar with query engines like Hive, Spark SQL, or Impala.
  • Strong experience with Power BI for dashboards and visualizations.
  • Open to or experienced in real-time streaming platforms like Kafka, AWS Kinesis, Flume, or Spark Streaming.
  • Interested in or experienced with DevOps/DataOps practices (e.g., Infrastructure as Code, pipeline automation).
  • Understands data science workflows and model development at a conceptual level.
  • Technologically curious and self-motivated, with a passion for continuous learning.