
Data Engineer

Seacare Manpower Services

Singapore

On-site

SGD 50,000 - 80,000

Full time

Yesterday

Job summary

A leading manpower service provider in Singapore is looking for a Data Engineer to develop infrastructure improvements for their data platform. The successful candidate will have expertise in SQL and Python, and will be responsible for designing and maintaining data pipelines while ensuring the reliability and security of the data. This is a 1-year contract role, requiring 42 hours of work per week, with a focus on continuous process enhancement and mentorship within the team.

Qualifications

  • Experience managing data infrastructure on cloud providers.
  • Driven and motivated team player capable of working independently.
  • Passion for championing improvement initiatives.

Responsibilities

  • Develop infrastructure improvements for the data platform.
  • Design and maintain data pipelines across the medallion lifecycle.
  • Implement data governance policies to manage data access.
  • Collaborate on data quality pipelines with domain owners.
  • Ensure reliability and security of the data platform.
  • Provide guidance on building and optimizing data pipelines.

Skills

  • Strong skills in SQL
  • Python
  • Knowledge of Spark
  • Experience with data modelling
  • Experience with data platform tools
  • Data architecture concepts
  • Data management concepts

Tools

  • Databricks
  • Snowflake
  • Cloudera
  • AWS
  • Azure
  • GCP

Job description
About the job: Data Engineer (Labrador Park)
  • Develop infrastructure improvements and initiatives for the data platform, including areas such as data observability, Infrastructure-as-Code, and DataOps.
  • Design, build, and maintain data pipelines across the medallion lifecycle, from raw (bronze) to consumption-ready (gold) data.
  • Develop data models using appropriate methodologies (e.g., Data Vault 2.0, Kimball).
  • Implement data governance policies to manage data access and ensure compliance.
  • Collaborate with domain owners and data stewards on data quality pipelines to monitor and remediate data issues.
  • Co-develop master and reference data tables and publish them for organizational use.
  • Develop domain expertise across organizational functions such as HR, Finance, Product, and Infrastructure.
  • Ensure the reliability and security of the data platform to minimize downtime for critical data products.
  • Act as a subject matter expert for the platform technology stack, including Databricks, Tableau, and MicroStrategy.
  • Provide guidance to teams on building, maintaining, and optimizing data pipelines in Spark.
  • Conduct feasibility assessments, proofs of concept, and production rollouts for best-in-industry tooling (e.g., dbt, Neo4j, Fivetran, Apache Atlas).
  • Support development and mapping of the organization's data estate across all units.

Requirements:

  • Strong skills in SQL and Python (knowledge of Spark or distributed computing is a bonus).
  • Experience with data modelling and data architecture concepts.
  • Experience with data platform tools such as Databricks, Snowflake, and Cloudera.
  • Experience managing data infrastructure on cloud providers such as AWS, Azure, or GCP.
  • Knowledge of data management concepts, including data access, data quality, and data security.
  • Driven and motivated team player who can work independently.
  • Passion for championing improvement initiatives and a desire to continuously enhance processes.
  • Willingness to learn, share knowledge, and mentor other team members.

Duration / working hours:

  • 1-year contract (immediate start)
  • 42 hours per week

We regret that only shortlisted candidates will be notified.
