Data Engineer

Zenika Singapore

Singapore

Hybrid

SGD 60,000–80,000

Full time


Job summary

A leading tech consultancy in Singapore is seeking a Data Engineer Consultant. You'll design and implement data platforms in public sector and enterprise environments, working with AWS and Databricks. The ideal candidate has 5–7 years of experience, strong skills in ETL/ELT processes, and is adept at mentoring junior engineers. This role offers a flexible work arrangement and a comprehensive benefits package.

Benefits

20 days of annual leave + up to 5 LEAP days
Dedicated Learning & Development budget
Comprehensive international medical insurance
Flexible benefits package

Qualifications

  • 5–7 years of experience in data engineering, ideally on AWS-native platforms.
  • Proven experience building ETL/ELT pipelines.
  • Hands-on expertise in Databricks, PySpark, and SQL.
  • Strong understanding of data warehouse design and data governance.

Responsibilities

  • Design and build enterprise-scale data architectures.
  • Develop and maintain high-performance ETL/ELT pipelines.
  • Implement and optimise data transformations in Databricks.
  • Collaborate and mentor junior engineers.

Skills

AWS
Databricks
Python
PySpark
SQL
ETL/ELT pipelines
Data security
Data governance
DevOps
Collaboration

Education

Bachelor’s degree in Computer Science, Data Engineering, or a related field
Master’s degree (preferred)

Tools

AWS Glue
AWS Redshift
AWS S3
Kafka
Kinesis

Job description

Is there a Zenika in you?

Let’s talk skills and passion first.

You’re a data enthusiast who thrives on transforming complexity into clarity. With deep technical expertise in AWS and Databricks, you design robust, scalable, and high-performance data pipelines that power intelligent decisions. Curious by nature, you’re constantly exploring new tools, automation techniques, and architectures to deliver meaningful data solutions that scale.

Your Role as a Zenika Consultant:

As a Data Engineer Consultant, you’ll play a key role in designing and implementing data platforms for our clients — particularly in public sector and enterprise environments. You’ll work hands-on with technologies like AWS, Databricks, and PySpark, and collaborate with cross-functional teams to deliver scalable, production-ready data solutions.

You’ll work on projects that will allow you to:

  • Design and build enterprise-scale data architectures — including data lakes, warehouses, and real-time streaming pipelines.
  • Develop and maintain high-performance ETL/ELT pipelines that process large volumes of structured and unstructured data.
  • Implement and optimise data transformations in Databricks using PySpark, Python, and SQL, ensuring quality, scalability, and cost-efficiency.
  • Collaborate and mentor — guide junior engineers, review code, and drive adoption of best practices in data engineering and DevOps.
  • Integrate and automate data flows using APIs, AWS services (Glue, Redshift, S3, EMR, Lambda), and real-time tools like Kafka or Kinesis.
  • Monitor and troubleshoot performance bottlenecks, ensuring reliability, consistency, and security across all data operations.

What You Bring

  • 5–7 years of experience in data engineering, ideally on AWS-native platforms.
  • Proven experience building ETL/ELT pipelines, migrating data solutions, and managing real-time streaming architectures.
  • Hands‑on expertise in Databricks, PySpark, Python, and SQL for large‑scale data transformation.
  • Strong understanding of data warehouse design (Redshift, Snowflake) and data governance principles.
  • Familiarity with DevOps concepts, including CI/CD workflows and version control (GitLab, GitHub).
  • Experience with serverless compute (AWS Lambda, Azure Functions) and automation scripting.
  • Solid grasp of data security, performance optimisation, and cost management in cloud environments.
  • Excellent communication and collaboration skills — you translate complex technical details into clear business language.
  • Bonus: Experience with public sector or HR analytics projects, as well as mentoring and leading data teams.
  • Bachelor’s degree in Computer Science, Data Engineering, or a related field (Master’s preferred).
  • Certifications such as: AWS Certified Data Engineer – Associate or AWS Certified Solutions Architect; Databricks Certified Data Engineer (Associate/Professional); Snowflake or Redshift certifications (a plus); Agile/Scrum Master certification (preferred).

About Zenika

Founded by developer Carl Azoury, Zenika is a consultancy built around community, transparency, and craftsmanship. We are passionate technophiles advising clients with deep expertise in open‑source tech and modern solutions.

Why Join Zenika?
  • Work with a global client base across 11 locations, benefiting from over 28,000 Zenika‑led training sessions
  • Partner with tech giants like Google Cloud and Scrum.org, and engage in research, open‑source contributions, and conferences outside client projects
  • Participate in Zenika tech conferences (TechnoZaures) to learn, share, and grow together
  • Hybrid work arrangement
  • 20 days of annual leave + up to 5 LEAP (Learning, Education, Advancement, Progress) days
  • Dedicated Learning & Development (L&D) budget to support your growth
  • Flexible benefits package to cater to your well‑being and lifestyle needs
  • Comprehensive international medical insurance package

Ready to code your story with us? Apply NOW!
