Data Engineer

ONE NORTH CONSULTING PTE. LTD.

Singapore

On-site

SGD 70,000 - 100,000

Full time

3 days ago

Job summary

A technology consulting firm in Singapore is seeking a Data Engineer with at least 5 years of experience to support its Data Engineering team. Responsibilities include building data pipelines, collaborating with teams for data availability, and managing project timelines. Expertise in Databricks, experience with Azure or AWS, and knowledge of NoSQL databases are essential. Join a dynamic environment focused on innovative data solutions.

Qualifications

  • 5+ years of experience with Enterprise IT applications in cloud platforms.
  • Experience with data transformations within Data Lake.
  • Experience in building streaming data pipelines.

Responsibilities

  • Build pipelines to bring in data from multiple sources.
  • Collaborate with teams to make data available for consumption.
  • Manage timelines and escalations with stakeholders.
  • Ensure successful handover of projects to business processes.

Skills

Expertise in Databricks
Experience with Azure or AWS
Building data pipelines with Apache Spark
Knowledge of NoSQL databases
Expertise in Cosmos DB
Experience with CI/CD tools
Basic knowledge of scripting
Excellent problem analysis skills

Education

Certifications in Data and Analytics

Tools

Apache Spark
Hive
Hadoop
Jenkins
Git

Job description

Job Title and Overview

One North Consulting, a Singapore-based firm specializing in Technology Solutions, is currently hiring Data Engineers (Singapore Citizen / Singapore PR / PLOC) with about 5 to 10 years of experience, as per the details given below.

Job Description & Requirements

As a Data Engineer, you will support the Data Engineering team in setting up the Data Lake on the cloud and implementing a standardized Data Model that provides a single view of the customer.

You will develop data pipelines for new sources, perform data transformations within the Data Lake, implement GraphQL, work with NoSQL databases, handle CI/CD, and deliver data as per business requirements.

Responsibilities
  • Build pipelines to bring in a wide variety of data from multiple sources within the organization, as well as from social media and public data sources.
  • Collaborate with cross-functional teams to source data and make it available for downstream consumption.
  • Work with the team to provide an effective solution design to meet business needs.
  • Ensure regular communication with key stakeholders; understand any key concerns about how the initiative is being delivered, and any risks or issues that have either not yet been identified or are not being progressed.
  • Ensure dependencies and challenges (risks) are escalated, and that critical issues are raised to the Sponsor and/or the Head of the Data Engineering team.
  • Ensure timelines (milestones, decisions and delivery) are managed and achieved within budget and without compromising quality.
  • Ensure an appropriate and coordinated communications plan, both internal and external, is in place for initiative execution and delivery.
  • Ensure final handover of the initiative to business-as-usual processes, carry out a post-implementation review (as necessary) to confirm that the initiative's objectives have been delivered, and incorporate any lessons learnt into future processes.
Who we are looking for

Competencies & Personal Traits

  • Expertise in Databricks
  • Experience with at least one cloud infrastructure provider (Azure/AWS)
  • Experience in building batch data pipelines with Apache Spark (Spark SQL, DataFrame API) or Hive Query Language (HQL)
  • Experience in building streaming data pipelines using Apache Spark Structured Streaming or Apache Flink on Kafka and a Data Lake
  • Knowledge of NoSQL databases
  • Expertise in Cosmos DB, RESTful APIs and GraphQL
  • Knowledge of big data ETL processing tools, data modelling and data mapping
  • Experience with Hive and Hadoop file formats (Avro / Parquet / ORC)
  • Basic knowledge of scripting (shell / bash)
  • Experience working with multiple data sources, including relational databases (SQL Server / Oracle / DB2 / Netezza), NoSQL / document databases and flat files
  • Experience with CI/CD tools such as Jenkins, JIRA, Bitbucket, Artifactory, Bamboo and Azure DevOps
  • Basic understanding of DevOps practices using Git version control
  • Ability to debug, fine-tune and optimize large-scale data processing jobs
  • Excellent problem analysis skills
Experience
  • 5+ years (no upper limit) of experience working with Enterprise IT applications in cloud platforms and big data environments.
Professional Qualifications

Certifications related to Data and Analytics would be an added advantage.
