Data Operation Engineer (ID: 690961)

PERSOL

Kuala Lumpur

On-site

MYR 150,000 - 200,000

Full time

2 days ago

Job summary

A leading data solutions company in Kuala Lumpur is seeking a professional to lead big data governance initiatives. The role involves ensuring data accuracy, maintaining big data platforms, and resolving system issues. Solid experience with big data technologies and strong proficiency in SQL and Linux operations are required. The company offers comprehensive medical coverage and various types of leave; working hours are Monday to Friday, 9am to 6pm.

Benefits

Annual, medical, hospitalization, maternity/paternity leave
Unlimited general practitioner visits
Specialist coverage
Life/PA insurance

Qualifications

  • Solid experience with core big data technologies including Hadoop, Spark, Hive, and more.
  • Strong proficiency in Linux operations.
  • Proficient in SQL with hands-on experience in relational databases.

Responsibilities

  • Lead big data governance initiatives, ensuring data accuracy and reliability.
  • Maintain and optimize big data platforms and applications.
  • Conduct system maintenance and diagnose faults to ensure stability.

Skills

Hadoop
Spark
Hive
HBase
ZooKeeper
Flink
Kafka
Redis
Pulsar
ClickHouse
Elasticsearch
Linux
SQL

Job description

Lead big data governance initiatives, including data asset management and quality monitoring, ensuring data accuracy, consistency, and reliability.

Maintain and optimize both the underlying big data platforms and upper-layer data applications.

Conduct daily system maintenance, diagnose faults, and resolve issues to ensure stable and uninterrupted operation of big data systems and data products.

Analyze system performance and implement optimization strategies, including developing automated operational scripts and scheduling cleanup of redundant data and tasks.
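As a minimal illustration of the scheduled-cleanup duty described above, the sketch below flags files that have aged out of a retention window. The 14-day policy, the staging paths, and the `find_stale` helper are all illustrative assumptions, not details from the posting.

```python
from datetime import datetime, timedelta

# Hypothetical retention policy: anything untouched for 14 days is redundant.
RETENTION = timedelta(days=14)

def find_stale(entries, now):
    """Return paths whose last-modified time falls outside the retention window.

    `entries` is an iterable of (path, last_modified_datetime) pairs, e.g. as
    listed from a warehouse staging directory.
    """
    cutoff = now - RETENTION
    return [path for path, mtime in entries if mtime < cutoff]

if __name__ == "__main__":
    now = datetime(2024, 6, 1)
    entries = [
        ("/staging/tmp/run_0401", datetime(2024, 4, 1)),   # past cutoff: stale
        ("/staging/tmp/run_0530", datetime(2024, 5, 30)),  # within window: keep
    ]
    print(find_stale(entries, now))  # -> ['/staging/tmp/run_0401']
```

In practice a script like this would run from a scheduler (e.g. cron or an orchestration DAG) and delete or archive the flagged paths rather than just print them.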

Collaborate with cross-functional teams to address technical challenges and improve overall data platform efficiency.

Job requirements

Solid experience with core big data technologies, including but not limited to Hadoop, Spark, Hive, HBase, ZooKeeper, Flink, Kafka, Redis, Pulsar, ClickHouse, and Elasticsearch.

Strong proficiency in Linux operations and a good understanding of its fundamental concepts.

Proficient in SQL, with hands‑on experience in HiveSQL, SparkSQL, Oracle, MySQL, PostgreSQL, or other relational databases.

Able to design and implement deployment and operations plans tailored to business needs.

Skilled in system monitoring, metrics development, and using data-driven insights to identify and resolve platform issues.

Strong problem‑solving skills with a proactive approach to maintaining system stability and performance.

Hours: Mon–Fri, 9am–6pm

Leave: Annual, medical, hospitalization, maternity/paternity

Medical & Insurance: GP unlimited, specialist coverage, hospitalization & surgery, life/PA insurance
