Big Data Engineer

LION & ELEPHANTS CONSULTANCY PTE. LTD.

Singapore

On-site

SGD 80,000–120,000

Full time

Job summary

A leading consultancy firm in Singapore seeks a Senior Big Data Engineer with 7–12 years of experience. The role involves designing large-scale data processing systems using Java, Apache Spark, and Hadoop, along with leading data ingestion and ETL/ELT pipeline development. Candidates should possess strong programming skills and a solid understanding of data architecture. This full-time position is open only to Singapore Citizens and Permanent Residents; no visa sponsorship is available.

Qualifications

  • 7–12 years of experience in big data engineering or backend data systems.
  • Proven experience with Apache Spark, Hadoop, and related tools.
  • Strong problem-solving and analytical skills.

Responsibilities

  • Design and optimize distributed data processing systems using Apache Spark and Hadoop.
  • Lead the development of data ingestion and ETL/ELT pipelines.
  • Work with cross-functional teams to gather requirements.

Skills

Java
Apache Spark
Hadoop
Scala
Python
SQL
Kafka
Data governance

Education

Bachelor’s or Master’s degree in Computer Science or Data Engineering

Tools

HDFS
YARN
Spark Streaming
Flink
Docker
Kubernetes

Job description

Job Title: Big Data Engineer (Java, Spark, Hadoop)

Location: Singapore

Experience: 7–12 years

Employment Type: Full-Time

Open to Singapore Citizens and Permanent Residents (SPRs) only | No visa sponsorship available

Job Summary

We are looking for a Senior Big Data Engineer with 7–12 years of experience to join our growing data engineering team. The ideal candidate will bring deep expertise in Java, Apache Spark, and Hadoop ecosystems, and have a strong track record of designing and building scalable, high-performance big data solutions. This role is critical to ensuring robust data processing and delivering clean, actionable data for business insights and advanced analytics.

Key Responsibilities
  • Design, build, and optimize large-scale, distributed data processing systems using Apache Spark, Hadoop, and Java.
  • Lead the development and deployment of data ingestion, ETL/ELT pipelines, and data transformation frameworks.
  • Work with cross-functional teams to gather data requirements and translate them into scalable data solutions.
  • Ensure high performance and reliability of big data systems through performance tuning and best practices.
  • Manage and monitor batch and real-time data pipelines from diverse sources including APIs, databases, and streaming platforms like Kafka.
  • Apply deep knowledge of Java to build efficient, modular, and reusable codebases.
  • Mentor junior engineers, participate in code reviews, and enforce engineering best practices.
  • Collaborate with DevOps teams to build CI/CD pipelines and automate deployment processes.
  • Ensure data governance, security, and compliance standards are maintained.
Required Qualifications
  • 7–12 years of experience in big data engineering or backend data systems.
  • Strong hands-on programming skills in Java; exposure to Scala or Python is a plus.
  • Proven experience with Apache Spark, Hadoop (HDFS, YARN, MapReduce), and related tools.
  • Solid understanding of distributed computing, data partitioning, and optimization techniques.
  • Experience with data access and storage layers like Hive, HBase, or Impala.
  • Familiarity with data ingestion tools like Apache Kafka, NiFi, Flume, or Sqoop.
  • Comfortable working with SQL for querying large datasets.
  • Good understanding of data architecture, data modeling, and data lifecycle management.
  • Experience with cloud platforms like AWS, Azure, or Google Cloud Platform.
  • Strong problem-solving, analytical, and communication skills.
Preferred Qualifications
  • Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.
  • Experience with streaming data frameworks such as Spark Streaming, Kafka Streams, or Flink.
  • Knowledge of DevOps practices, CI/CD pipelines, and infrastructure as code (e.g., Terraform).
  • Exposure to containerization (Docker) and orchestration (Kubernetes).
  • Certifications in Big Data technologies or Cloud platforms are a plus.

Please note that we are an equal opportunities employer.
