Junior Data Engineer (Hadoop & Spark) - Contract - Changi

NTT SINGAPORE PTE. LTD.

Singapore

On-site

SGD 4,000 - 5,500 per month

Full time

15 days ago

Job summary

A leading data solutions firm in Singapore is seeking a Junior Data Engineer to optimize Spark applications and support ETL pipelines within a banking environment. The ideal candidate will have 1-3 years of experience and strong skills in PySpark, Hadoop ecosystem tools, and SQL. This contract role offers a monthly salary ranging from SGD 4,000 to 5,500.

Qualifications

  • 1 to 3 years of experience as a Data Engineer or Big Data Developer.
  • Strong programming knowledge in PySpark or Scala (Spark).
  • Hands-on experience in Hadoop ecosystem tools.
  • SQL proficiency for data extraction and validation.

Responsibilities

  • Develop and maintain Spark applications in PySpark/Scala.
  • Optimize Spark jobs for performance and scalability.
  • Support ETL pipelines and troubleshoot production issues.

Skills

Data Engineering
PySpark
Hadoop
SQL
Problem-solving

Tools

Hadoop Ecosystem (HDFS, Hive, Sqoop)
Linux/Unix

Job description

Junior Data Engineer (Hadoop & Spark) - Contract - Changi


Employer: NTT DATA Singapore


Work Location: Changi, Singapore (on-site, within a leading bank environment)


Contract Duration: 12 months (renewable)


Monthly Salary Range (SGD): $4,000 – $5,500


Interested candidates are invited to email their CV, detailing their relevant experience, to:

sandeep.sringeripai@global.ntt


Job Description:
We are seeking a junior to mid-level Data Engineer to join our project team at a leading Singapore bank.


The role requires hands-on experience in building and optimizing Spark-based applications within a Hadoop ecosystem. The successful candidate will gain valuable exposure to banking data systems while working on large-scale enterprise data projects.


Responsibilities:

  • Develop and maintain Spark applications in PySpark/Scala for data ingestion, transformation, and processing on Hadoop clusters (see the sketch after this list).
  • Work with HDFS, Hive, Sqoop, and other Hadoop ecosystem tools.
  • Optimize Spark jobs for performance and scalability (memory, executors, partitions).
  • Support ETL pipelines, troubleshoot production issues, and ensure data quality.
  • Collaborate with business and technology teams to deliver data solutions in a banking environment.
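
By way of illustration, the following is a minimal PySpark sketch of the kind of ingest-transform-load job described above, including the shuffle-partition and output-partition tuning mentioned in the optimization bullet. All application, table, and column names (txn-etl, staging.txn_raw, curated.txn_daily, amount, txn_ts) are hypothetical and not part of this posting.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Build a Hive-enabled session; shuffle partitions are one of the main
    # tuning levers (alongside executor count and memory) named above.
    spark = (
        SparkSession.builder
        .appName("txn-etl")                             # hypothetical job name
        .config("spark.sql.shuffle.partitions", "200")  # partition tuning
        .enableHiveSupport()                            # Hive tables on HDFS
        .getOrCreate()
    )

    # Ingest: read a raw Hive table (hypothetical name).
    raw = spark.table("staging.txn_raw")

    # Transform: simple cleansing and a daily aggregate as stand-in logic.
    clean = (
        raw.filter(F.col("amount") > 0)
           .withColumn("txn_date", F.to_date("txn_ts"))
    )
    daily = clean.groupBy("txn_date").agg(F.sum("amount").alias("total_amount"))

    # Load: repartition before the write to control output file count.
    daily.repartition(8).write.mode("overwrite").saveAsTable("curated.txn_daily")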

Requirements:

  • 1 to 3 years of experience as a Data Engineer or Big Data Developer.
  • Strong programming knowledge in PySpark or Scala (Spark).
  • Hands-on experience with the Hadoop ecosystem (HDFS, Hive, YARN, Sqoop, Oozie, etc.).
  • SQL proficiency for data extraction and validation (see the example after this list).
  • Familiarity with Linux/Unix environments.
  • Good communication and problem-solving skills.
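
As a similarly hedged sketch of the SQL-based validation mentioned above, a reconciliation query can be run through PySpark's SQL interface; the table and column names (staging.txn_raw, curated.txn_daily, txn_id) are again hypothetical.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("dq-checks").enableHiveSupport().getOrCreate()

    # Reconcile row counts between the raw and curated tables, and count
    # null business keys on the raw side; a non-zero null_keys flags a problem.
    checks = spark.sql("""
        SELECT
            (SELECT COUNT(*) FROM staging.txn_raw)   AS raw_rows,
            (SELECT COUNT(*) FROM curated.txn_daily) AS curated_rows,
            (SELECT COUNT(*) FROM staging.txn_raw
              WHERE txn_id IS NULL)                  AS null_keys
    """)
    checks.show()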

Good to Have:

  • Prior experience in banking or financial services projects.
  • Basic understanding of cloud platforms (AWS, Azure, GCP); strong on-premise Hadoop skills are preferred.

We look forward to your application!
