
Data Engineer

Averis Sdn Bhd

Kuala Lumpur

On-site

MYR 60,000 - 80,000

Full time

2 days ago

Job summary

A global IT services firm located in Kuala Lumpur is seeking a Data Engineer to join their team. The successful candidate will build and maintain data pipelines, ensuring data quality and supporting production operations. Candidates should have 1–3 years of relevant experience, proficiency in SQL and at least one programming language, and a strong understanding of data processing tools. This role offers mentorship from senior engineers and opportunities for career development in data engineering.

Benefits

Mentorship opportunities
Career development in Data Engineering

Qualifications

  • 1–3 years of hands-on experience in data engineering.
  • Proficiency in SQL and at least one programming language (Python, Java, or Scala).
  • Understanding of ETL/ELT workflows and orchestration concepts.

Responsibilities

  • Build and maintain data pipelines to ingest and transform data.
  • Write efficient SQL and Python/Scala for processing.
  • Collaborate with analytics teams for curated datasets.

Skills

SQL
Python
Data modeling
Attention to detail

Education

Bachelor's degree in a relevant field

Tools

Apache Spark
Git
Databricks

Job description

Grow your career with us

Here at Averis, our common purpose is to improve lives by developing resources sustainably. Our people are crucial in helping us realise our vision to be one of the best Global Business Solution (GBS) organizations, supporting our customers in creating value for the Community, Country, Climate, Customer and Company.

Hello, we are Averis!

Averis Information Technology is a global IT services organization headquartered in Kuala Lumpur that designs and delivers IT solutions for large enterprises to drive economies of scale and business transformation. Our core areas of expertise are infrastructure and networking, ERP and procurement systems, cybersecurity and data governance, and digital workplace and end-user management. Our journey started in 2006, and today more than 300 IT professionals serve major customers in resource-based manufacturing industries such as paper, packaging and tissue, edible oils, and energy, which collectively encompass $35B of assets and 80,000 employees across 32 locations globally.

Role Summary

  • Join our Data Engineering team to help build reliable data pipelines and analytics foundations that power decision-making across the business.
  • You will develop ETL/ELT workflows, maintain data quality, and collaborate with senior engineers, analysts, and data scientists in a fast‑paced environment.

Key Responsibilities
  • Build, test, and maintain data pipelines to ingest, transform, and load data from diverse sources (batch and streaming).
  • Write clean, efficient SQL and Python/Scala for data processing on platforms such as Apache Spark/Databricks.
  • Contribute to data quality checks, monitoring, alerting, and documentation.
  • Support production operations: investigate issues, perform root‑cause analysis, and implement fixes.
  • Create reusable components and follow coding standards, version control (Git), and CI/CD practices.
  • Collaborate with analytics teams to deliver curated datasets for dashboards and advanced analytics.
  • Follow security, privacy, and compliance best practices to protect sensitive data.
  • Stay curious: learn new tools and propose improvements to performance, cost, and reliability.

Required Qualifications
  • 1–3 years of hands‑on experience in data engineering or closely related roles, or strong internships/capstone projects demonstrating practical data engineering skills (exceptional fresh graduates are welcome to apply).
  • Proficiency in SQL and at least one programming language (Python, Java, or Scala).
  • Familiarity with distributed data processing (e.g., Apache Spark) and data lake/warehouse concepts.
  • Experience with relational databases and basic data modeling (star schema, normalization).
  • Understanding of ETL/ELT workflows and orchestration concepts (e.g., Airflow/Prefect).
  • Comfort with Git and collaborative development workflows.
  • Clear communication, growth mindset, and attention to detail.

Nice to Have
  • Exposure to Databricks, dbt, or cloud warehouse technologies (e.g., BigQuery, Redshift, Snowflake).
  • Experience with finance/ERP data (e.g., SAP FI/CO, SD, MM) and basic understanding of GL, AP/AR, Cost/Profit Centers.
  • Familiarity with data quality frameworks, testing (e.g., Great Expectations), and observability/monitoring tools.
  • Basics of containerization and CI/CD (Docker, GitHub Actions) and data security best practices.

What You’ll Gain
  • Mentorship from senior data engineers and exposure to end‑to‑end data platform work.
  • Opportunities to ship production features, learn Databricks/Spark at scale, and develop your career path in Data Engineering.

When you send us your resume and personal details, you are deemed to have consented to us keeping or storing your information in our database. All information you provide is used only for the recruitment process. Averis will only collect, use, process or disclose personal information where and when allowed to under applicable laws.
Only shortlisted candidates will be contacted for an interview. We endeavour to respond to every applicant. However, if you receive no response from us within 60 days, please consider your application for this specific position unsuccessful. We may contact you in the future if there are opportunities that match your qualifications and experience. Thank you for considering a career with Averis.
