Data Engineering Technologist – Large-Scale Distributed

UNISON CONSULTING PTE. LTD.

Singapore

On-site

SGD 80,000 - 120,000

Full time


Job summary

A leading consulting firm in Singapore seeks a Data Engineer to design and maintain ETL/ELT pipelines using Spark. The ideal candidate will have experience in managing Hadoop clusters and expertise in Hive data modeling. You will work closely with data scientists and analysts to ensure data quality and support advanced analytics. Competitive compensation and flexible working arrangements are offered.
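The ETL/ELT pipelines described above follow an extract → transform → load shape. As a minimal illustration (plain Python lists stand in for Spark's distributed DataFrames, and the record fields `user`/`amount` are hypothetical; a real pipeline would read from a source like Parquet or Kafka and write to HDFS/Hive):

```python
# Sketch of the extract -> transform -> load stages a batch pipeline follows.
# Plain lists stand in for distributed datasets; field names are illustrative.

def extract():
    # Stand-in for a Spark source, e.g. reading raw records from storage.
    return [{"user": "a", "amount": "10"}, {"user": "b", "amount": "25"}]

def transform(rows):
    # Cast types and drop malformed records, as a cleansing stage would.
    out = []
    for r in rows:
        try:
            out.append({"user": r["user"], "amount": int(r["amount"])})
        except (KeyError, ValueError):
            continue  # drop bad rows; production jobs often quarantine them
    return out

def load(rows):
    # Stand-in for writing to a sink (HDFS, Hive); returns the "written" rows.
    return rows

def run_pipeline():
    return load(transform(extract()))
```

The same three-stage structure applies whether the data arrives in batch or as a stream; only the source and trigger change.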

Qualifications

  • Experience in data engineering or big data development.
  • Strong hands-on experience in Spark (Core, SQL, Streaming).
  • Good understanding of Hadoop (HDFS, YARN) architecture.
  • Expertise in Hive data modeling, query optimization, and performance tuning.
  • Experience with cluster troubleshooting, monitoring, and scaling.
  • Knowledge of data governance and security frameworks is a plus.
  • Familiarity with cloud big data platforms (AWS EMR, Azure HDInsight, or GCP Dataproc) preferred.

Responsibilities

  • Design and maintain ETL/ELT pipelines using Spark for batch and streaming data.
  • Manage and optimize Hadoop clusters (HDFS, YARN) for scalability and reliability.
  • Build and maintain Hive data models, partitions, and queries for analytics and reporting.
  • Improve query and pipeline performance through tuning, partitioning, bucketing, and caching.
  • Ensure data quality, governance, and security across the big data ecosystem.
  • Collaborate with data scientists, analysts, and architects to support advanced analytics and BI.
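The bucketing mentioned above spreads rows across a fixed number of files by hashing a key column, so that joins and sampling on that key touch fewer files. A minimal sketch of the idea (Python's built-in `hash` stands in for Hive's hash function, purely for illustration):

```python
# Sketch of hash bucketing: a row lands in bucket hash(key) % num_buckets.
# Python's built-in hash is a stand-in for Hive's hash; output buckets
# will differ from Hive's, but the distribution mechanism is the same.
from collections import Counter

def bucket_for(key: str, num_buckets: int) -> int:
    return hash(key) % num_buckets

def distribute(keys, num_buckets=4):
    # Count how many keys land in each bucket.
    return dict(Counter(bucket_for(k, num_buckets) for k in keys))

if __name__ == "__main__":
    user_ids = [f"user-{i}" for i in range(1000)]
    print(distribute(user_ids))  # roughly even counts across buckets 0-3
```

Because the bucket count is fixed at table creation, two tables bucketed the same way on the join key can be joined bucket-by-bucket without a full shuffle.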

Skills

Data engineering
Spark (Core, SQL, Streaming)
Hadoop (HDFS, YARN)
Hive data modeling
Performance tuning
Data governance

Tools

AWS EMR
Azure HDInsight
GCP Dataproc