Data Engineer

Hytech Consulting Management

Kuala Lumpur

On-site

MYR 70,000 - 90,000

Full time

Posted yesterday

Job summary

A leading IT consulting company in Kuala Lumpur is seeking a Data Engineer to build scalable data pipelines and support analytics. The ideal candidate will have 2–5 years of experience, a Bachelor’s degree in Computer Science, and strong proficiency in SQL and Python. Responsibilities include designing ETL/ELT pipelines, collaborating with data teams, and ensuring data quality. The position offers competitive compensation and opportunities for cross-functional collaboration.

Benefits

Competitive compensation
Opportunities for collaboration
Diverse data scenarios

Qualifications

  • 2–5 years of experience in data engineering or backend engineering.
  • Hands-on experience in large-scale data processing.
  • Knowledge of distributed data processing technologies.

Responsibilities

  • Design, develop, and maintain scalable ETL/ELT data pipelines.
  • Collaborate with cross-functional teams to translate needs into data models.
  • Implement data quality monitoring and ensure system stability.

Skills

SQL
Python
Data processing
Communication
Problem-solving

Education

Bachelor’s degree in Computer Science

Tools

Spark
AWS
Airflow

Job description

Overview

Hytech is a leading IT company specializing in cutting‑edge financial technology solutions. Our innovative platforms and applications empower our clients to manage their finances efficiently, securely, and with unparalleled convenience. As a market leader in the industry, we are dedicated to driving digital transformation and shaping the future of financial technology.

Position

We are seeking a highly skilled Data Engineer with experience in cloud‑based data platforms to build scalable, reliable data pipelines and robust data models. This role will work closely with data teams, AI teams, and business stakeholders to ensure a solid data foundation that supports analytics, reporting, machine learning, and downstream data products.

Job Responsibilities

  • Design, develop, and maintain scalable ETL/ELT data pipelines, including ingestion, cleaning, transformation, and loading into data lakes and data warehouses.
  • Collaborate with Data Science, BI, Product, and Backend teams to translate business and analytical needs into reliable data models and table structures.
  • Build and optimize Bronze, Silver, and Gold layers to ensure data consistency, performance, and usability (see the sketch after this list).
  • Manage batch and streaming data processing frameworks such as Spark, Flink, or Kafka, ensuring system stability and efficiency.
  • Implement and maintain data quality monitoring, including schema validation, row‑count checks, anomaly detection, and pipeline automation.
  • Provide foundational datasets and feature pipelines to support AI and analytics teams.
  • Work with platform and infrastructure teams to ensure availability, security, and scalability of the data platform.
  • Contribute to data governance practices, including metadata management, data cataloging, field definitions, and versioning standards.
  • Continuously improve pipeline performance, reduce processing costs, and enhance maintainability.
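
As an illustration of the Bronze-to-Silver work described above, here is a minimal PySpark sketch of a cleaning step with a simple row-count quality gate. The paths, column names, and threshold are assumptions for the example, not details from the posting, and a real pipeline would target whichever stack the team runs (Spark, Flink, or Kafka).

    # Hypothetical Bronze -> Silver step with a row-count quality gate (PySpark).
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("bronze_to_silver").getOrCreate()

    # Bronze: raw ingested events (path and schema are assumed examples).
    bronze = spark.read.parquet("s3://example-lake/bronze/events/")

    # Silver: deduplicated records, non-null keys, typed timestamp column.
    silver = (
        bronze
        .dropDuplicates(["event_id"])
        .filter(F.col("event_id").isNotNull())
        .withColumn("event_ts", F.to_timestamp("event_ts"))
    )

    # Quality gate: fail loudly if the cleaning step drops too many rows.
    in_rows, out_rows = bronze.count(), silver.count()
    if in_rows > 0 and out_rows / in_rows < 0.95:  # threshold is illustrative
        raise ValueError(f"Row-count check failed: kept {out_rows} of {in_rows} rows")

    silver.write.mode("overwrite").parquet("s3://example-lake/silver/events/")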

Qualifications

  • 2–5 years of experience in data engineering or backend engineering, with hands‑on experience in large‑scale data processing.
  • Bachelor’s degree or above in Computer Science, Information Systems, Data Engineering, or related fields.
  • Strong proficiency in SQL and experience with Python or Scala for data processing.
  • Experience with at least one major cloud provider (AWS / GCP / Azure); familiarity with S3, Glue, Lambda, Databricks, or similar platforms.
  • Knowledge of distributed data processing technologies such as Spark, Flink, or Kafka.
  • Solid understanding of data warehousing concepts and data modeling (Star Schema, Data Vault, Medallion Architecture).
  • Experience with ETL/ELT pipeline orchestration tools such as Airflow, dbt, or Dagster (see the example after this list).
  • Strong communication skills and ability to collaborate with cross‑functional stakeholders.
  • Detail‑oriented and proactive, with a strong problem‑solving mindset.
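
To make the orchestration requirement concrete, here is a minimal Airflow sketch of the kind of daily DAG this role would own. The DAG id, schedule, and task bodies are placeholders, and the same flow could equally be expressed in dbt or Dagster.

    # Hypothetical daily pipeline DAG (Airflow 2.x); task bodies are stubs.
    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("ingest raw data into the Bronze layer")

    def transform():
        print("clean and model data into Silver/Gold tables")

    def validate():
        print("run schema and row-count checks")

    with DAG(
        dag_id="example_daily_pipeline",  # name is illustrative
        start_date=datetime(2024, 1, 1),
        schedule="@daily",  # Airflow >= 2.4; older versions use schedule_interval
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        validate_task = PythonOperator(task_id="validate", python_callable=validate)
        extract_task >> transform_task >> validate_task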

What We Offer

  • Clear role definition with well‑defined objectives
  • Extensive cross‑functional and cross‑regional collaboration opportunities
  • Diverse data scenarios with challenging product strategy initiatives
  • Fast‑paced and dynamic industry environment
  • Strong sense of ownership
  • Competitive compensation package within a performance‑driven culture