Data Engineer (Risk Control)

Hytech Consulting Management

Kuala Lumpur

On-site

MYR 70,000 - 90,000

Full time

Job summary

A leading data consulting firm is seeking a Data Engineer skilled in cloud-based data platforms to design scalable data pipelines. Responsibilities include developing ETL processes, collaborating with teams to model data, and ensuring data quality. The ideal candidate has 3-5 years' experience in data engineering, strong SQL skills, and cloud platform knowledge. Competitive package offered in a fast-paced environment.

Benefits

Competitive compensation package
Cross-functional collaboration opportunities
Dynamic industry environment

Qualifications

  • 3–5 years of experience in data engineering or backend engineering.
  • Hands-on experience in large-scale data processing.
  • Strong proficiency in SQL and experience with Python or Scala.

Responsibilities

  • Design, develop, and maintain ETL/ELT data pipelines.
  • Collaborate with various teams for data modeling.
  • Implement and maintain data quality monitoring.

Skills

Data processing
SQL
Python
Scala
Cloud computing
Data modeling
ETL/ELT orchestration
Data governance
Collaboration

Education

Bachelor’s degree in Computer Science or related fields

Tools

AWS
GCP
Azure
Spark
Flink
Kafka
Airflow
dbt
Dagster

Job description

We are seeking a highly skilled Data Engineer with experience in cloud-based data platforms to build scalable, reliable data pipelines and robust data models. This role will work closely with data teams, AI teams, and business stakeholders to ensure a solid data foundation that supports analytics, reporting, machine learning, and downstream data products.

Job Responsibilities
  • Design, develop, and maintain scalable ETL/ELT data pipelines, including ingestion, cleaning, transformation, and loading into data lakes and data warehouses.
  • Collaborate with Data Science, BI, Product, and Backend teams to translate business and analytical needs into reliable data models and table structures.
  • Build and optimize Bronze, Silver, and Gold layers to ensure data consistency, performance, and usability.
  • Manage batch and streaming data processing frameworks such as Spark, Flink, or Kafka, ensuring system stability and efficiency.
  • Implement and maintain data quality monitoring, including schema validation, row-count checks, anomaly detection, and pipeline automation (a minimal illustrative sketch follows this list).
  • Provide foundational datasets and feature pipelines to support AI and analytics teams.
  • Work with platform and infrastructure teams to ensure availability, security, and scalability of the data platform.
  • Contribute to data governance practices, including metadata management, data cataloging, field definitions, and versioning standards.
  • Continuously improve pipeline performance, reduce processing costs, and enhance maintainability.
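
As a purely illustrative aside on the Medallion-layer and data quality work above: the following is a minimal PySpark sketch of a Bronze-to-Silver cleaning step with a schema check and a row-count check. The table names (bronze.events, silver.events), the expected columns, and the 50% row-count threshold are hypothetical placeholders, not details of this role.

    # Minimal sketch only; names, columns, and thresholds below are hypothetical.
    from pyspark.sql import SparkSession, functions as F
    from pyspark.sql.types import StructType, StructField, StringType, TimestampType

    EXPECTED_SCHEMA = StructType([
        StructField("event_id", StringType(), False),
        StructField("event_ts", TimestampType(), True),
        StructField("payload", StringType(), True),
    ])

    spark = SparkSession.builder.appName("bronze_to_silver").getOrCreate()
    bronze = spark.table("bronze.events")

    # Schema validation: fail fast if the ingested schema drifted from expectations.
    missing = {f.name for f in EXPECTED_SCHEMA} - set(bronze.columns)
    if missing:
        raise ValueError(f"Schema drift detected, missing columns: {missing}")

    # Cleaning: deduplicate on the key and drop rows without one.
    silver = bronze.dropDuplicates(["event_id"]).filter(F.col("event_id").isNotNull())

    # Row-count check: a crude anomaly guard against empty or over-filtered loads.
    bronze_count, silver_count = bronze.count(), silver.count()
    if bronze_count == 0 or silver_count < 0.5 * bronze_count:
        raise ValueError(f"Row-count anomaly: bronze={bronze_count}, silver={silver_count}")

    silver.write.mode("overwrite").saveAsTable("silver.events")
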
Qualifications
  • 3–5 years of experience in data engineering or backend engineering, with hands‑on experience in large‑scale data processing.
  • Bachelor’s degree or above in Computer Science, Information Systems, Data Engineering, or related fields.
  • Strong proficiency in SQL and experience with Python or Scala for data processing.
  • Experience with at least one major cloud provider (AWS / GCP / Azure); familiarity with S3, Glue, Lambda, Databricks, or similar platforms.
  • Knowledge of distributed data processing technologies such as Spark, Flink, or Kafka.
  • Solid understanding of data warehousing concepts and data modeling (Star Schema, Data Vault, Medallion Architecture).
  • Experience with ETL/ELT pipeline orchestration tools such as Airflow, dbt, or Dagster (see the orchestration sketch after this list).
  • Strong communication skills and ability to collaborate with cross‑functional stakeholders.
  • Detail‑oriented and proactive, with a strong problem‑solving mindset.
  • Experience with financial technology platforms or risk‑related data is a strong advantage.
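
As a minimal sketch of the orchestration experience mentioned above (assuming a recent Airflow 2.x release), the DAG below wires an extract task into a transform task on a daily schedule. The DAG id, schedule, and placeholder callables are hypothetical and only illustrate the pattern.

    # Minimal Airflow 2.x sketch; DAG id, schedule, and task bodies are placeholders.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        # Placeholder: land raw source data in the Bronze layer.
        pass

    def transform():
        # Placeholder: clean and model data into Silver/Gold tables.
        pass

    with DAG(
        dag_id="daily_risk_data_pipeline",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        extract_task >> transform_task
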
What we offer
  • Clear role definition with well‑defined objectives
  • Extensive cross‑functional and cross‑regional collaboration opportunities
  • Diverse data scenarios with challenging product strategy initiatives
  • Fast‑paced and dynamic industry environment
  • Strong sense of ownership
  • Competitive compensation package within a performance‑driven culture