Big Data Consultant

HR Tech

Singapore

On-site

SGD 80,000 - 100,000

Full time

30+ days ago

Job summary

An innovative HR Tech company is seeking a skilled Hadoop Developer with a strong background in PySpark, Scala, and Java. This role involves developing modules, ingesting data into Big Data platforms, and ensuring project quality and timelines are met. The ideal candidate will have excellent communication skills, a proactive attitude, and the ability to work collaboratively with offshore teams across different time zones. If you are passionate about big data technologies and eager to contribute to impactful projects, this opportunity is perfect for you!

Qualifications

  • 4+ years of experience in Hadoop programming with PySpark/Scala/Java.
  • Strong SQL skills, including complex SQL scripting.

Responsibilities

  • Develop modules and ensure timely closure of tasks in Hadoop projects.
  • Ingest data into Big Data platforms using Spark and Hive.

Skills

Hadoop Programming
PySpark
Scala
Java
Shell Scripting
SQL Scripting
Data Analysis
Team Collaboration
Communication Skills

Education

Bachelor's Degree in Computer Science or related field

Tools

Databricks
Confluence
JIRA

Job description

Job Responsibilities
  1. Strong technical experience; able to understand and develop modules and bring tasks to closure on time.
  2. At least 4 years of development experience in Hadoop programming (HDFS) using PySpark/Scala/Java on Hive-based data warehouse projects, along with solid Shell Scripting experience.
  3. Able to understand requirements and ingest data into the Big Data platform using Spark with Hive.
  4. Able to write SQL scripts, including complex ones, to process data, with exposure to SCD (Slowly Changing Dimension) handling.
  5. Responsible for meeting quality and scope as per project plans; must be able to meet stringent timelines.
  6. Flexible to work regional hours.
  7. Excellent communication and documentation skills.
  8. A strong team player, flexible to work with offshore teams in different time zones based on project needs.
  9. Willingness to obtain the Databricks Certified Developer: Apache Spark 3.X certification.
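
As a rough illustration of the SCD handling mentioned in point 4, below is a minimal sketch of Slowly Changing Dimension Type 2 logic in plain Python. In an actual Hadoop project this would typically be expressed in Spark SQL or HiveQL (e.g. a `MERGE` into a Hive dimension table); all table and field names here are hypothetical.

```python
from datetime import date

def apply_scd2(dimension, incoming, load_date=date(2024, 1, 1)):
    """Minimal SCD Type 2 sketch (hypothetical schema).

    dimension: list of dicts with keys id, attr, valid_from, valid_to, current
    incoming:  list of dicts with keys id, attr
    """
    result = list(dimension)
    for rec in incoming:
        current = next((r for r in result
                        if r["id"] == rec["id"] and r["current"]), None)
        if current is None:
            # New business key: insert a fresh current row.
            result.append({"id": rec["id"], "attr": rec["attr"],
                           "valid_from": load_date, "valid_to": None,
                           "current": True})
        elif current["attr"] != rec["attr"]:
            # Changed attribute: expire the old version, open a new one.
            current["valid_to"] = load_date
            current["current"] = False
            result.append({"id": rec["id"], "attr": rec["attr"],
                           "valid_from": load_date, "valid_to": None,
                           "current": True})
        # Unchanged records are left untouched, preserving history.
    return result
```

The same pattern — expire the current row, insert a new versioned row — is what a Hive `MERGE` or a Spark join-and-union job would implement at scale.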

Nice to Have

  1. Proactive, with good communication skills to articulate technical issues.
  2. Exposure to Confluence/JIRA.
  3. Ability to work independently; prior experience with databases such as Oracle or SQL Server, or with ETL tools, will be an added advantage.
  4. At least 1 year of Module Lead experience.
  5. Hands-on experience in data analysis and debugging SQL issues.
  6. Experience in performance tuning of HiveQL queries and cluster nodes, as well as functional testing, test planning, and execution.