Data Engineer

Unison Consulting Pte Ltd

Singapore

On-site

SGD 60,000 - 90,000

Full time

Today

Job summary

A consulting firm in Singapore is seeking a highly motivated Data Engineer to build and maintain robust ETL pipelines and implement data transformations using Spark. Responsibilities include collaborating with cross-functional teams and supporting CI/CD pipelines. Candidates should have strong programming skills and hands-on experience with key data tools. Cloud knowledge is a plus.

Responsibilities

  • Build and maintain ETL pipelines across batch and real-time data sources.
  • Design and implement data transformations using Spark on Hadoop/Hive.
  • Stream data from Kafka into data lakes using Spark Streaming.
  • Collaborate with teams on data modeling and optimization.
  • Implement CI/CD pipelines using Git, Jenkins, Docker/Kubernetes.
  • Contribute to automation and monitoring of data workflows.

Skills

Python
Scala
Java
Apache Spark
Hadoop
Hive
Kafka
HBase
SQL
Agile delivery

Tools

Git
Jenkins
Docker
Kubernetes
Airflow
Terraform
Ansible

Job description

We are looking for a highly motivated and skilled Data Engineer to build and maintain robust ETL pipelines and implement data transformations using Spark.

Key Responsibilities
  • Build and maintain robust, scalable ETL pipelines across batch and real-time data sources.
  • Design and implement data transformations using Spark (PySpark/Scala/Java) on Hadoop/Hive.
  • Stream data from Kafka topics into data lakes or analytics layers using Spark Streaming (a brief sketch follows this list).
  • Collaborate with cross-functional teams on data modeling, ingestion strategies, and performance optimization.
  • Implement and support CI/CD pipelines using Git, Jenkins, and container technologies like Docker/Kubernetes.
  • Work within cloud and on-prem hybrid data platforms, contributing to automation, deployment, and monitoring of data workflows.
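For flavor, the Kafka-to-data-lake responsibility above might look like the following minimal PySpark Structured Streaming sketch; the broker address, topic name, event schema, and output paths are illustrative placeholders, not details of this role.

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

# Hypothetical event schema; real schemas come from the upstream producers.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("payload", StringType()),
    StructField("event_time", TimestampType()),
])

spark = SparkSession.builder.appName("kafka-to-lake").getOrCreate()

# Read a stream from a Kafka topic (placeholder broker and topic names).
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "events")
       .load())

# Kafka delivers bytes; parse the value column as JSON into typed columns.
events = (raw.select(from_json(col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Append to a Parquet data lake path; checkpointing lets the stream
# recover after a restart without dropping or duplicating batches.
query = (events.writeStream
         .format("parquet")
         .option("path", "/lake/events")
         .option("checkpointLocation", "/lake/_checkpoints/events")
         .outputMode("append")
         .start())

query.awaitTermination()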

Skills

  • Strong programming skills in Python, Scala, or Java.
  • Hands-on experience with Apache Spark, Hadoop, Hive, Kafka, HBase, or related tools.
  • Sound understanding of data warehousing, dimensional modeling, and SQL.
  • Familiarity with Airflow, Git, Jenkins, and containerization tools (Docker/Kubernetes); an orchestration sketch follows this list.
  • Exposure to cloud platforms such as AWS or GCP is a plus.
  • Experience with Agile delivery models and collaborative tools like Jira and Confluence.
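As a sketch of the orchestration side, here is a minimal Airflow DAG (assuming Airflow 2.4+) that submits a nightly Spark job and then runs a validation step; the DAG id, schedule, and commands are hypothetical.

from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical nightly batch DAG; ids, paths, and schedule are placeholders.
with DAG(
    dag_id="nightly_etl",
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",  # run at 02:00 daily
    catchup=False,
) as dag:
    # Submit the Spark transformation job to the cluster.
    run_spark_job = BashOperator(
        task_id="run_spark_job",
        bash_command="spark-submit --master yarn /jobs/transform.py",
    )

    # Basic data-quality gate after the load (placeholder command).
    validate_output = BashOperator(
        task_id="validate_output",
        bash_command="python /jobs/validate.py",
    )

    run_spark_job >> validate_output
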
Nice to Have
  • Experience with streaming data pipelines, machine learning workflows, or feature engineering (see the sketch after this list).
  • Familiarity with Terraform, Ansible, or other infrastructure-as-code tools.
  • Exposure to Snowflake, Databricks, or modern data lakehouse architecture is a bonus.
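
On the feature-engineering point, a short PySpark sketch of a typical batch feature computation; the table and column names are invented for illustration.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("feature-engineering").getOrCreate()

# Hypothetical transactions table; in practice this might live in Hive.
tx = spark.table("analytics.transactions")

# Rolling 30-day spend per customer, a typical engineered feature.
w = (Window.partitionBy("customer_id")
     .orderBy(F.col("tx_ts").cast("long"))
     .rangeBetween(-30 * 86400, 0))

features = tx.select(
    "customer_id",
    "tx_ts",
    F.sum("amount").over(w).alias("spend_30d"),
    F.count("*").over(w).alias("tx_count_30d"),
)

features.write.mode("overwrite").saveAsTable("analytics.customer_features")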