Data Engineer (PySpark & AWS)
EPS Malaysia
Kuala Lumpur (On-site)
MYR 100,000 - 150,000
Part time

Job summary

A leading data services firm in Kuala Lumpur is seeking a Data Engineer specializing in PySpark for an oil and gas project. The role involves designing and maintaining data architectures, optimizing ETL processes, and ensuring data quality. Candidates should have a Bachelor’s Degree in a related field and at least 4 years of relevant experience, particularly with AWS technologies. This position is contract-based and requires adaptability in fast-paced environments.

Qualifications

  • Bachelor’s Degree in Computer Science, Information Technology, or a related field.
  • Minimum 4 years of experience in a Data Engineering role.
  • Expert proficiency in PySpark (mandatory).
  • Strong experience with AWS, particularly AWS Glue (highly preferred); an illustrative Glue job sketch follows this list.
  • Advanced working knowledge of SQL and relational databases.
  • Experience with Databricks Platform.
  • Familiarity with Amazon Athena and Grafana is an advantage.
  • Experience in cloud data platforms such as AWS (S3, Glue) or Microsoft Azure.
  • Comfortable working in fast-paced environments and pushing changes live when required.
  • Willing and able to work on a contract basis.
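
For illustration only (not part of the original posting): the PySpark and AWS Glue skills above typically come together in a Glue ETL job along the lines of the minimal sketch below. The catalog database, table, column, and S3 path names are hypothetical placeholders.

    import sys

    from awsglue.context import GlueContext
    from awsglue.job import Job
    from awsglue.utils import getResolvedOptions
    from pyspark.context import SparkContext

    # Standard Glue job bootstrap: resolve the job name passed in by the Glue runtime.
    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_context = GlueContext(SparkContext.getOrCreate())
    spark = glue_context.spark_session
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Read a source table registered in the Glue Data Catalog
    # (database and table names are placeholders).
    source = glue_context.create_dynamic_frame.from_catalog(
        database="raw_db", table_name="production_readings"
    )

    # Basic SQL transformation: deduplicate and drop rows with missing readings.
    source.toDF().createOrReplaceTempView("readings")
    cleaned = spark.sql("""
        SELECT DISTINCT well_id, reading_ts, pressure_psi
        FROM readings
        WHERE pressure_psi IS NOT NULL
    """)

    # Write the cleaned data back to S3 as Parquet (path is a placeholder).
    cleaned.write.mode("overwrite").parquet(
        "s3://example-bucket/curated/production_readings/"
    )

    job.commit()

In practice a job like this would usually be scheduled through a Glue trigger, workflow, or an external orchestrator, with the catalog tables maintained by crawlers or infrastructure code.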

Responsibilities

  • Design, develop, test, and maintain scalable data architectures and data pipelines based on business requirements.
  • Build and optimize ETL/ELT processes across multiple data layers within the Enterprise Data Hub (EDH); see the illustrative pipeline sketch after this list.
  • Identify performance bottlenecks in existing data pipelines and lead optimization and automation initiatives.
  • Support and resolve data integration issues at application, infrastructure, and network levels.
  • Ensure data reliability, quality, and efficiency using appropriate programming languages and data processing tools.
  • Collaborate with cross-functional teams to support analytics, reporting, and business use cases.
  • Deploy changes to production environments and support live data operations.
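
As a rough illustration of the multi-layer ETL and data-quality responsibilities above (assuming a bronze/silver/gold layering and column names the posting does not actually specify), a plain PySpark pipeline step might look like the sketch below; the paths and the quality threshold are hypothetical.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("edh-curation-example").getOrCreate()

    # Bronze layer: raw landed data (path and schema are placeholders).
    raw = spark.read.parquet("s3://example-bucket/bronze/sensor_events/")

    # Silver layer: typed, deduplicated, with a simple data-quality gate.
    silver = (
        raw.dropDuplicates(["event_id"])
           .withColumn("event_ts", F.to_timestamp("event_ts"))
           .filter(F.col("event_ts").isNotNull())
    )

    # Fail fast if too many rows were dropped by the quality gate,
    # rather than silently publishing incomplete data.
    raw_count, silver_count = raw.count(), silver.count()
    if raw_count > 0 and silver_count / raw_count < 0.95:
        raise ValueError(f"Quality gate failed: {silver_count}/{raw_count} rows passed")

    # Gold layer: aggregated, analytics-ready output partitioned by day.
    gold = (
        silver.groupBy(F.to_date("event_ts").alias("event_date"), "site_id")
              .agg(F.avg("pressure_psi").alias("avg_pressure_psi"))
    )

    gold.write.mode("overwrite").partitionBy("event_date").parquet(
        "s3://example-bucket/gold/daily_site_pressure/"
    )

The 95% pass-rate threshold is an arbitrary example; a real gate would encode whatever data-quality rules the EDH teams have agreed on.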