Data Engineer (Immediate Starter)

SPACE EXECUTIVE PTE. LTD.

Singapore

On-site

SGD 60,000 - 80,000

Full time

Job summary

A leading technology recruitment firm in Singapore is seeking a Data Engineer to design and maintain ETL/ELT data pipelines across cloud and on-prem environments. The ideal candidate is proficient in SQL and Python, has experience with a cloud platform such as AWS or Azure, and is familiar with data orchestration tools. The role involves developing automated workflows for data ingestion, ensuring data quality, and building analytical datasets to support business intelligence. Competitive compensation is offered.

Qualifications

  • Proficiency in SQL for data manipulation and performance-optimised queries.
  • Hands-on experience with Python for ETL and data transformation.
  • Experience with at least one cloud platform like AWS, Azure, or GCP.
  • Familiarity with data orchestration or workflow tools like Airflow.
  • Experience with relational databases such as MySQL or PostgreSQL.

Responsibilities

  • Design, build, and maintain ETL/ELT data pipelines.
  • Develop automated workflows for data ingestion and cleaning.
  • Build and maintain data marts and warehouse tables.
  • Implement data quality checks and monitoring dashboards.
  • Collaborate with teams to understand data requirements.

Skills

SQL
Python
AWS
Azure
GCP
Airflow
Data Lakes

Tools

MySQL
PostgreSQL
Spark
Databricks

Job description

Key Responsibilities

  • Design, build, and maintain ETL/ELT data pipelines across cloud or on-prem environments.

  • Develop automated workflows to ingest, clean, validate, and transform structured and unstructured data.

  • Build and maintain data marts, data warehouse tables, and analytical datasets to support business intelligence and reporting.

  • Implement data quality checks, monitoring dashboards, and alerting mechanisms for pipeline reliability (a minimal sketch follows this list).

  • Work with APIs, streaming data, or batch processes to integrate data from multiple internal and external sources.

  • Support troubleshooting, incident investigation, and optimisation of pipeline performance.

  • Collaborate with analysts, product teams, and business units to understand data requirements and deliver usable datasets.

  • Manage cloud resources, storage layers, and compute workloads (AWS/Azure/GCP).

  • Participate in documentation, version control, code reviews, and CI/CD practices.
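
The sketch below is one minimal way such an ingest-validate-load workflow might look in Python. The API endpoint, table layout, and the 5% quality gate are illustrative assumptions rather than details from this posting, and sqlite3 stands in for a production database such as MySQL or PostgreSQL.

```python
# Minimal ingest -> validate -> load sketch. Endpoint, schema, and the
# quality threshold are assumptions for illustration only.
import requests
import sqlite3  # stand-in for MySQL/PostgreSQL in this sketch

API_URL = "https://api.example.com/orders"  # hypothetical source API

def extract() -> list[dict]:
    """Pull one batch of records from the upstream API."""
    resp = requests.get(API_URL, timeout=30)
    resp.raise_for_status()
    return resp.json()

def validate(rows: list[dict]) -> list[dict]:
    """Drop malformed rows; fail the run if too many are bad."""
    clean = [r for r in rows if r.get("order_id") and r.get("amount", -1) >= 0]
    if len(clean) < 0.95 * len(rows):  # alert if more than 5% of the batch is bad
        raise ValueError(f"Quality gate failed: {len(rows) - len(clean)} bad rows")
    return clean

def load(rows: list[dict]) -> None:
    """Upsert the validated batch into a warehouse table."""
    con = sqlite3.connect("warehouse.db")
    con.execute(
        "CREATE TABLE IF NOT EXISTS orders (order_id TEXT PRIMARY KEY, amount REAL)"
    )
    con.executemany(
        "INSERT OR REPLACE INTO orders (order_id, amount) VALUES (:order_id, :amount)",
        rows,
    )
    con.commit()
    con.close()

if __name__ == "__main__":
    load(validate(extract()))
```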

Skills & Experience We’re Looking For

Core Requirements

  • Proficiency in SQL for data manipulation, modelling, and performance‑optimised queries.

  • Hands‑on experience with Python for ETL scripting, data transformation, or API integrations.

  • Experience with at least one cloud platform: AWS, Azure, or GCP.

  • Familiarity with data orchestration or workflow tools (e.g., Airflow, ADF, Step Functions, Cloud Composer, cron); see the Airflow sketch after this list.

  • Experience with relational databases (MySQL, PostgreSQL, SQL Server, etc.).

  • Ability to design and maintain data pipelines across batch or near‑real‑time processes.
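
To make the orchestration requirement concrete, here is a minimal Airflow 2.x sketch that schedules a daily batch ETL. The DAG id, schedule, and empty task bodies are assumptions for illustration; they are not part of this role's actual stack.

```python
# Minimal Airflow DAG sketch: three tasks in a linear extract -> transform
# -> load chain, run once a day. All names here are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(): ...    # placeholder task bodies for the sketch
def transform(): ...
def load(): ...

with DAG(
    dag_id="daily_orders_etl",        # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                # "schedule_interval" on Airflow < 2.4
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load  # declare the dependency chain
```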

Good To Have (Bonus)

  • Experience with Spark / Databricks (PySpark or Scala).

  • Exposure to data lake architecture (bronze/silver/gold), Delta Lake, or Snowflake; a PySpark sketch of the layering follows this list.

  • Web scraping tools (BeautifulSoup, Selenium) or API integration experience.

  • Knowledge of BI tools such as Power BI, Tableau, QuickSight, or Looker.

  • Understanding of data modelling (star schema, fact/dimension tables).

  • Familiarity with CI/CD pipelines, Git, Docker, or serverless functions.

  • Experience handling large datasets and optimising performance.
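
As a rough illustration of the bronze/silver/gold layering mentioned above, the PySpark sketch below reads raw data, cleans it, and aggregates an analytics-ready table. The paths, column names, and dedup/aggregation rules are assumptions for illustration only.

```python
# Minimal medallion-architecture sketch in PySpark: bronze (raw) ->
# silver (cleaned) -> gold (aggregated). Paths and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion_sketch").getOrCreate()

# Bronze: raw ingested records, kept as landed.
bronze = spark.read.json("s3://bucket/bronze/orders/")  # hypothetical path

# Silver: deduplicated, typed, and filtered records.
silver = (
    bronze
    .dropDuplicates(["order_id"])
    .filter(F.col("amount") >= 0)
    .withColumn("order_date", F.to_date("created_at"))
)

# Gold: a daily fact table ready for BI tools.
gold = silver.groupBy("order_date").agg(
    F.count("order_id").alias("orders"),
    F.sum("amount").alias("revenue"),
)

gold.write.mode("overwrite").parquet("s3://bucket/gold/daily_orders/")
```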
