Big Data & Hadoop Developer

ALPHA KOGNITA PTE. LTD.

Singapore

On-site

SGD 70,000 - 90,000

Full time

Job summary

A leading tech company in Singapore is seeking a skilled Big Data Developer with 4 to 6 years of experience. This hands-on role involves developing Spark-based data processing applications and enhancing ETL pipelines. Candidates should have strong programming skills in Python, Java, or Scala, along with expertise in the Hadoop ecosystem. The position offers a competitive salary and growth opportunities in a supportive engineering environment.

Skills

Apache Spark
Data pipeline development
SQL
Python
Java
Scala
Hadoop ecosystem
Linux

Tools

HDFS
Hive
Impala
YARN
Sqoop
Docker
Kubernetes

Job description

We are looking for a skilled Big Data Developer / Spark Developer with 4 to 6 years of experience to develop and maintain scalable data processing applications in a Hadoop-based ecosystem. The role focuses on hands-on development of ETL pipelines and distributed data processing solutions using Spark.

This is a hands‑on engineering role where you will work closely with senior developers, architects, and data teams to deliver high‑quality data solutions.

Key Responsibilities

  • Develop and maintain Spark-based data processing applications using PySpark / Scala / Java (a brief illustrative sketch follows this list).
  • Build and enhance ETL pipelines for structured and semi-structured data.
  • Implement data transformation logic and processing workflows.
  • Support batch and real‑time data ingestion pipelines into data lake and data mart environments.
  • Optimize Spark jobs for performance and scalability.
  • Write SQL queries for data validation and reconciliation.
  • Debug and fix production data pipeline issues.
  • Work with senior engineers to understand design and implementation requirements.
  • Follow coding standards, testing practices, and deployment processes.
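
To give candidates a flavor of the day-to-day work, here is a minimal, purely illustrative PySpark sketch of this kind of ETL job. All paths, table names, and column names are hypothetical, not a description of our actual platform.

    # Illustrative only: a minimal PySpark ETL job of the kind this role involves.
    # All paths, tables, and columns below are hypothetical examples.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = (
        SparkSession.builder
        .appName("orders-daily-etl")   # hypothetical job name
        .enableHiveSupport()           # read/write Hive tables in the cluster
        .getOrCreate()
    )

    # Ingest semi-structured JSON landed in HDFS (hypothetical path).
    raw = spark.read.json("hdfs:///data/landing/orders/2024-01-01/")

    # Transformation logic: drop bad records, fix types, derive a partition key.
    orders = (
        raw.filter(F.col("order_id").isNotNull())
           .withColumn("order_ts", F.to_timestamp("order_ts"))
           .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
           .withColumn("order_date", F.to_date("order_ts"))
    )

    # Load into a partitioned Hive table in the data lake (hypothetical table).
    (
        orders.write
              .mode("overwrite")
              .partitionBy("order_date")
              .saveAsTable("datalake.orders_clean")
    )

    spark.stop()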

Required Skills & Experience

  • 4 to 6 years of experience in Big Data application development.
  • Hands‑on experience with Apache Spark (PySpark / Scala / Java).
  • Strong programming skills in Python, Java, or Scala.
  • Experience with Hadoop ecosystem: HDFS, Hive, Impala, YARN, Sqoop.
  • Strong SQL skills for data processing (see the example after this list).
  • Experience building batch data pipelines.
  • Basic understanding of data lake and data warehouse concepts.
  • Experience working in Linux environments.
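
As an example of the SQL-based validation and reconciliation work mentioned above, a daily check might look like the following Spark SQL sketch; the table names are again hypothetical.

    # Illustrative only: a typical reconciliation check run through Spark SQL.
    # Table names are hypothetical examples.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("orders-recon")
        .enableHiveSupport()
        .getOrCreate()
    )

    # Compare daily row counts and totals between the source extract and the
    # curated target table; any non-matching day needs investigation.
    mismatches = spark.sql("""
        SELECT s.order_date,
               s.row_cnt   AS src_rows, t.row_cnt   AS tgt_rows,
               s.total_amt AS src_amt,  t.total_amt AS tgt_amt
        FROM (SELECT order_date, COUNT(*) AS row_cnt, SUM(amount) AS total_amt
              FROM staging.orders_raw GROUP BY order_date) s
        FULL OUTER JOIN
             (SELECT order_date, COUNT(*) AS row_cnt, SUM(amount) AS total_amt
              FROM datalake.orders_clean GROUP BY order_date) t
          ON s.order_date = t.order_date
        WHERE NOT (s.row_cnt   <=> t.row_cnt)
           OR NOT (s.total_amt <=> t.total_amt)
    """)

    mismatches.show(truncate=False)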

Good to Have

  • Exposure to Apache Airflow, Oozie, or similar orchestration tools.
  • Basic knowledge of Kafka or real‑time streaming frameworks.
  • Exposure to cloud big data platforms (AWS EMR, Azure HDInsight, GCP Dataproc).
  • Familiarity with Docker and Kubernetes (basic level).
  • Understanding of CI/CD pipelines for data jobs.

Role Type

  • Developer / Individual Contributor
  • Hands‑on coding role
  • Design contribution at module and pipeline level
  • No people management

What We Offer

  • Opportunity to work on enterprise data platforms
  • Strong learning and growth opportunities
  • Competitive salary and benefits
  • Supportive engineering environment