A leading company is seeking a Data Engineer in Singapore to manage the extract/transform/load processes. Responsibilities include designing efficient ETL pipelines in Azure Data Factory, ensuring data accuracy, and participating in data modeling. The ideal candidate should have 3-5 years of experience in data engineering, proficiency in analytics languages, and familiarity with big data technologies like Snowflake and Hadoop.
Objectives of this position:
The objective of the position is to manage the extract/transform/load processes, ensuring data availability.
Responsibilities:
The holder of the position is mainly responsible for the following areas, in coordination with their superior:
•Design, create, and modify extract/transform/load (ETL) pipelines in Azure Data Factory, ensuring efficient data flow from source to destination.
•Ensure data accuracy and data integrity throughout the ETL processes via data validation, cleansing, deduplication, and error handling, so that reliable, usable data is ingested.
•Monitor the ETL processes and optimize ETL pipelines for speed and efficiency, addressing bottlenecks and ensuring the ETL system can handle the volume, velocity, and variety of data.
•Participate in data modeling, designing the data structures and schema in the data warehouse to optimize query performance and align with business needs.
•Work closely with different departments and the IT team to understand data requirements and deliver the data infrastructure that supports business goals.
•Provide technical support for ETL systems, troubleshooting issues and ensuring the continuous availability and reliability of data flows.
•Ensure proper documentation of data sources, ETL processes, and data architecture.
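To illustrate the kind of work the validation bullet above describes, here is a minimal sketch of a validation/cleansing/deduplication step such as an ETL pipeline might apply before loading. The record shape, field names (`id`, `amount`, `name`), and rules are hypothetical, not taken from this posting.

```python
def clean_records(records):
    """Validate, cleanse, and deduplicate a batch of source records.

    Returns (clean, errors): records passing validation, and records
    quarantined for error handling rather than silently dropped.
    """
    seen = set()                      # primary keys already emitted
    clean, errors = [], []
    for rec in records:
        rec_id = rec.get("id")
        amount = rec.get("amount")
        # Validation: reject records missing a key or with a non-numeric amount.
        if rec_id is None or not isinstance(amount, (int, float)):
            errors.append(rec)
            continue
        # Deduplication: keep only the first occurrence of each id.
        if rec_id in seen:
            continue
        seen.add(rec_id)
        # Cleansing: normalise types and trim whitespace in text fields.
        clean.append({
            "id": rec_id,
            "amount": float(amount),
            "name": str(rec.get("name", "")).strip(),
        })
    return clean, errors


rows = [
    {"id": 1, "amount": 10, "name": " Alice "},
    {"id": 1, "amount": 10, "name": "Alice"},   # duplicate id: skipped
    {"id": None, "amount": 5, "name": "Bob"},   # invalid: quarantined
]
clean, errors = clean_records(rows)
```

In a production Azure Data Factory setup, logic like this would typically live in a pipeline activity (for example a Databricks or Azure Function step) rather than a standalone script, but the validation/dedup/quarantine pattern is the same.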
Requirements:
3 to 5 years of data engineering experience with Snowflake
3 to 5 years in the upstream/downstream Retail industry and/or the Supply Chain / Manufacturing domain
Sound understanding of data quality principles and data governance best practices
Proficiency in data analytics languages like Python, Java, Scala, etc.
Knowledge of big data technologies like Hadoop, Spark, and distributed computing frameworks for managing large-scale data processing
Proficient in using version control systems like Git for managing code and configurations.
SnowPro Core Certification and SnowPro Advanced Certification will be an advantage