Senior Big Data/Hadoop Developer

ALPHA KOGNITA PTE. LTD.

Singapore

On-site

SGD 80,000 - 100,000

Full time

Posted yesterday

Job summary

A leading data solutions provider in Singapore is seeking a Senior Big Data Developer to design and develop scalable data processing applications using Apache Spark. The role requires strong hands-on development experience with Big Data applications and involves collaborating with architects and data teams to deliver effective data solutions. Candidates with 6–10 years of experience and proficiency in Hadoop ecosystem tools are highly desirable. The position offers a competitive salary and the opportunity to work with cutting-edge technologies.

Benefits

Competitive salary and benefits
Opportunity to work on large-scale enterprise data platforms
Exposure to cutting-edge big data and cloud technologies
Strong engineering culture and learning environment

Qualifications

  • 6–10 years of experience in Big Data application development.
  • Strong hands-on development experience with Apache Spark (PySpark / Scala / Java).
  • Solid experience with Hadoop ecosystem tools such as HDFS, Hive, Impala, YARN, Sqoop, and Oozie.

Responsibilities

  • Design and develop scalable Spark-based data processing applications.
  • Build and maintain ETL pipelines for structured and semi-structured data.
  • Collaborate with architects and data teams to deliver data solutions.

Skills

Apache Spark (PySpark / Scala / Java)
SQL
Hadoop ecosystem (HDFS, Hive, Impala, YARN, Sqoop, Oozie)
Python programming
Java programming
Linux shell scripting

Tools

Apache Airflow
Kafka
AWS EMR
Docker
Kubernetes

Job description

We are seeking a highly skilled Senior Big Data Developer / Spark Developer to design and develop scalable data processing applications in a Hadoop-based ecosystem. The role focuses on building high-performance ETL pipelines and distributed data processing solutions using Spark.

This is a hands-on engineering role where you will contribute to solution design at the module and pipeline level while working closely with architects and data teams.

Key Responsibilities

  • Design and develop scalable Spark-based data processing applications using PySpark / Scala / Java.
  • Build and maintain ETL pipelines for structured and semi-structured data.
  • Design data transformation logic and processing workflows based on business requirements.
  • Implement batch and real-time data ingestion pipelines into data lake and data mart environments.
  • Optimize Spark jobs for performance, memory utilization, and execution efficiency.
  • Develop SQL queries for data validation, reconciliation, and reporting.
  • Debug production issues and resolve data pipeline failures.
  • Collaborate with architects, data engineers, and analytics teams to deliver data solutions.
  • Follow coding standards, testing practices, and CI/CD deployment processes.
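
To illustrate the kind of ETL work the role involves, here is a minimal, hypothetical sketch of handling semi-structured data: flattening nested JSON events into flat rows of the sort a Spark job would write to a Hive table. Plain Python is used so the sketch is self-contained; in practice this logic would live in a PySpark transformation, and all field names below are invented.

```python
import json

def flatten(record, prefix=""):
    """Recursively flatten a nested dict into a single-level dict
    with dot-separated keys (e.g. {"user": {"id": 1}} -> {"user.id": 1})."""
    flat = {}
    for key, value in record.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, prefix=f"{name}."))
        else:
            flat[name] = value
    return flat

# A semi-structured event, as it might arrive from an ingestion pipeline.
raw = '{"event": "login", "user": {"id": 42, "geo": {"country": "SG"}}}'
row = flatten(json.loads(raw))
print(row)  # {'event': 'login', 'user.id': 42, 'user.geo.country': 'SG'}
```

The same flatten-then-load pattern applies whether the target is a data lake file format or a data mart table; only the sink changes.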

Required Skills & Experience

  • 6–10 years of experience in Big Data application development.
  • Strong hands-on development experience with Apache Spark (PySpark / Scala / Java).
  • Strong programming skills in Python, Java, and Scala.
  • Solid experience with Hadoop ecosystem: HDFS, Hive, Impala, YARN, Sqoop, Oozie.
  • Strong SQL skills for data processing and validation.
  • Experience building large-scale batch data pipelines.
  • Good understanding of data lake and data warehouse concepts.
  • Experience working in Linux environments with shell scripting.
  • Knowledge of job scheduling using cron or workflow orchestration tools.
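
The SQL validation and reconciliation work mentioned above can be sketched with a small, hypothetical example. Here sqlite3 stands in for Hive/Impala so the sketch is self-contained, and the table and column names are invented:

```python
import sqlite3

# In production this reconciliation would run against Hive/Impala;
# an in-memory sqlite3 database stands in here.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE source_orders (order_id INTEGER, amount REAL);
    CREATE TABLE target_orders (order_id INTEGER, amount REAL);
    INSERT INTO source_orders VALUES (1, 10.0), (2, 20.0), (3, 30.0);
    INSERT INTO target_orders VALUES (1, 10.0), (2, 20.0);
""")

# Row-count and amount-sum reconciliation: flag any mismatch between
# the ingested (target) table and the upstream (source) table.
query = """
    SELECT s.cnt - t.cnt AS missing_rows, s.total - t.total AS amount_diff
    FROM (SELECT COUNT(*) AS cnt, SUM(amount) AS total FROM source_orders) s,
         (SELECT COUNT(*) AS cnt, SUM(amount) AS total FROM target_orders) t
"""
missing_rows, amount_diff = conn.execute(query).fetchone()
print(missing_rows, amount_diff)  # 1 30.0
```

A non-zero result in either column would typically fail the pipeline run or raise an alert before downstream reports consume the data.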

Good to Have

  • Experience with Apache Airflow, NiFi, or similar orchestration tools.
  • Exposure to Kafka or real-time streaming frameworks.
  • Experience with cloud big data platforms (AWS EMR, Azure HDInsight, GCP Dataproc).
  • Familiarity with Docker and Kubernetes.
  • Knowledge of CI/CD pipelines for Spark jobs.

Role Type

  • Senior Developer / Individual Contributor
  • Hands-on coding role
  • Design responsibility limited to modules, pipelines, and data workflows
  • No people management

What We Offer

  • Opportunity to work on large-scale enterprise data platforms
  • Exposure to cutting-edge big data and cloud technologies
  • Competitive salary and benefits
  • Strong engineering culture and learning environment