Data Engineer

Cradle Fund

Kuala Lumpur

On-site

MYR 48,000 - 60,000

Full time

Job summary

A leading funding organization in Kuala Lumpur is seeking a Data Engineer to help design and develop robust data pipelines. The role involves optimizing queries and pipeline performance, ensuring data quality, and collaborating with analytics teams and stakeholders to provide timely access to data. Candidates must hold a bachelor's degree in a relevant field and have practical experience with SQL and a programming language such as Python. Familiarity with ETL concepts and data visualization tools is an advantage. The position offers room to grow in a dynamic environment.

Skills

SQL
Python
Data modeling
ETL concepts
Data visualization (Power BI, Tableau)

Education

Bachelor’s degree in Computer Science or related field

Tools

Airflow
Prefect
Spark
Hadoop
Azure
AWS
GCP

Job description

Overview

The Data Engineer is responsible for assisting in the design and development of data pipelines that enable reliable, timely, and scalable data flow across the organization. This role ensures that business intelligence and analytics teams have access to clean, consistent, and well-structured data to support data-driven decision-making.

Responsibilities

  • Assist in designing, building, and maintaining ETL/ELT pipelines to move and transform data from multiple sources into our data platform (a minimal sketch of this pattern follows this list).
  • Collaborate with analysts and stakeholders to understand data requirements and ensure data availability for reporting and analytics.
  • Support the integration of structured and unstructured data from internal and external sources.
  • Ensure data quality, consistency, and integrity by implementing validation and monitoring processes.
  • Optimize database queries and pipeline performance for efficiency and scalability.
  • Document data workflows, processes, and best practices for repeatability and transparency.
  • Participate in troubleshooting, debugging, and resolving data pipeline issues in a timely manner.
  • Stay updated on emerging data engineering tools and practices to improve workflows.
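To make the first responsibility concrete, here is a minimal extract-transform-load sketch in Python, the posting's preferred language. It uses only the standard library; the grants.csv source file, the warehouse.db target, and the grants table schema are illustrative assumptions, not details from this posting.

    import csv
    import sqlite3

    def extract(csv_path):
        """Read raw rows from a source CSV file."""
        with open(csv_path, newline="") as f:
            return list(csv.DictReader(f))

    def transform(rows):
        """Normalise types and drop rows that fail a basic validation check."""
        clean = []
        for row in rows:
            amount = (row.get("amount") or "").strip()
            if not amount:  # data-quality rule: reject rows with no amount
                continue
            clean.append((row["grant_id"], float(amount)))
        return clean

    def load(rows, db_path):
        """Write validated rows into the target table."""
        with sqlite3.connect(db_path) as conn:
            conn.execute(
                "CREATE TABLE IF NOT EXISTS grants (grant_id TEXT, amount REAL)"
            )
            conn.executemany("INSERT INTO grants VALUES (?, ?)", rows)

    if __name__ == "__main__":
        # Hypothetical file and database names, for illustration only.
        load(transform(extract("grants.csv")), "warehouse.db")

In practice the same three stages would read from production sources and write to the organization's data platform, with the validation step expanded into the quality monitoring described above.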

Requirements

  • Bachelor’s degree in Computer Science, Information Technology, Data Engineering, or related field (or equivalent practical experience).
  • Basic experience with SQL and at least one programming language (Python preferred).
  • Familiarity with ETL concepts, data modeling, and relational databases.
  • Exposure to cloud platforms (Azure, AWS, or GCP) is an advantage.
  • Strong analytical and problem-solving skills with attention to detail.
  • Eagerness to learn, adapt, and grow in a fast-paced environment.
  • Experience with data pipeline/orchestration tools (e.g., Airflow, Prefect); a short Airflow sketch follows this list.
  • Familiarity with distributed data frameworks (e.g., Spark, Hadoop).
  • Knowledge of data visualization tools (Power BI, Tableau) to support downstream consumers.
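Since Airflow appears in both the requirements and the tools list, here is a minimal daily DAG wiring the same three pipeline stages together. It assumes Airflow 2.4 or later (for the schedule argument); the DAG id and the task bodies are placeholders, not anything specified by this posting.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pull rows from the source systems")

    def transform():
        print("validate and reshape the rows")

    def load():
        print("write the rows to the warehouse")

    with DAG(
        dag_id="daily_grants_pipeline",  # hypothetical name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",               # requires Airflow 2.4+
        catchup=False,
    ) as dag:
        t_extract = PythonOperator(task_id="extract", python_callable=extract)
        t_transform = PythonOperator(task_id="transform", python_callable=transform)
        t_load = PythonOperator(task_id="load", python_callable=load)

        # Run the three stages in order on each daily schedule tick.
        t_extract >> t_transform >> t_load

Prefect, also listed above, expresses the same dependency graph with flow and task decorators; choosing between the two is usually an infrastructure decision rather than a pipeline-design one.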