Big Data Operations Engineer (Contract)

NTT SINGAPORE PTE. LTD.

Singapore

On-site

SGD 80,000 - 100,000

Full time

Job summary

A technology services company in Singapore is looking for a Data Engineer to manage data pipelines and support operations. The ideal candidate will have over 5 years of experience in cluster operations, proficiency in Python and SQL, and strong communication skills. Responsibilities include monitoring service performance and ensuring data availability. Prior experience in the financial sector is preferred, and familiarity with tools like Dataiku and Tableau is a plus.

Qualifications

  • 5+ years of experience in cluster operations.
  • Proficient in Python and SQL for data warehousing.
  • Experience working in the financial industry is preferred.

Responsibilities

  • Support and troubleshoot data pipelines from ingestion to consumption.
  • Ensure data availability thresholds are met.
  • Monitor service availability and performance.

Skills

  • Cluster operations (Kubernetes, Kafka, Spark, S3/HDFS)
  • Proficient in Python
  • Proficient in SQL
  • Strong communication skills
  • Planning skills
  • Teamwork skills
  • Experience with Linux and scripting
  • CI/CD pipelines

Education

Degree in Computer Science or equivalent

Tools

  • Dataiku
  • Tableau

Job description

Key Responsibilities
  • Daily support and troubleshooting of data pipelines (ingestion to consumption).
  • Ensure data availability thresholds are met (see the sketch after this list).
  • Support release cycles and prevent production disruptions.
  • Maintain JSON schemas and metadata for reuse.
  • Act as Data Engineer for corrective measures (historization, dependencies, quality checks).
  • Manage Common Data Model tables in a separate access zone.
  • Provide 3rd-level support for incidents and problems.
  • Monitor service availability and performance; collaborate with 1st/2nd level support.
  • Continuously improve service availability, performance, and capacity.
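
For illustration only (not part of the original posting): a minimal Python sketch of the kind of availability-threshold check described above. The table name, row counts, and the 95% threshold are hypothetical assumptions, not details from the role.

    # Minimal sketch of a data-availability threshold check (illustrative only).
    # The table name, row counts, and 95% threshold below are hypothetical.

    AVAILABILITY_THRESHOLD = 0.95  # hypothetical SLA: 95% of expected rows present

    # Stand-ins for counts that would come from the pipeline's metadata store/DWH.
    ROW_COUNTS = {"trades_daily": 930_000}
    EXPECTED_ROWS = {"trades_daily": 1_000_000}

    def check_availability(table: str) -> bool:
        """Return True if the table meets the availability threshold, else alert."""
        actual = ROW_COUNTS.get(table, 0)
        expected = EXPECTED_ROWS.get(table, 0)
        ratio = actual / expected if expected else 0.0
        if ratio < AVAILABILITY_THRESHOLD:
            # In production this would open an incident for 3rd-level support.
            print(f"ALERT: {table} at {ratio:.1%}, below {AVAILABILITY_THRESHOLD:.0%}")
            return False
        return True

    if __name__ == "__main__":
        check_availability("trades_daily")  # 93.0% < 95% -> prints an alert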

Requirements

  • Degree in Computer Science or equivalent.
  • Strong communication, planning, and teamwork skills.
  • Proactive and customer-focused approach.
  • 5+ years in cluster operations (Kubernetes, Kafka, Spark, S3/HDFS).
  • Proficient in Python and SQL (DWH and distributed environments); see the sketch after this list.
  • Experience with Linux, scripting, CI/CD pipelines.
  • Knowledge of Data Ops processes and monitoring tools.
  • Financial industry experience preferred.
  • Nice to have: Dataiku, Tableau.
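
Likewise for illustration only, a short PySpark sketch of "Python and SQL in distributed environments": a SQL quality check over data landed on S3/HDFS. The input path, view name, column, and 1% null-rate tolerance are hypothetical placeholders.

    # Illustrative PySpark sketch: a SQL quality check over a distributed dataset.
    # The input path, view name, and 1% tolerance are hypothetical placeholders.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("dq-check-sketch").getOrCreate()

    # Read JSON landed on S3/HDFS (placeholder path).
    df = spark.read.json("s3a://example-bucket/ingest/trades/")
    df.createOrReplaceTempView("trades")

    # Share of rows with a missing trade_id, computed SQL-side.
    row = spark.sql("""
        SELECT SUM(CASE WHEN trade_id IS NULL THEN 1 ELSE 0 END) / COUNT(*) AS null_rate
        FROM trades
    """).first()

    if row and row["null_rate"] is not None and row["null_rate"] > 0.01:
        print(f"Quality check failed: null_rate={row['null_rate']:.2%}")

    spark.stop()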