

Data Engineer

FPT Asia Pacific

Singapore

On-site

SGD 60,000 - 80,000

Full time



Job summary

A leading technology firm in Singapore is seeking a Data Engineer to develop and maintain data engineering products for Business Intelligence. The ideal candidate will design ETL pipelines, collaborate with stakeholders, and ensure adherence to quality processes. Proficiency in SQL, cloud technologies, and big data frameworks is essential for success in this role, which offers a dynamic work environment focused on data-driven solutions.

Qualifications

  • Proficient in general data cleaning and transformation using tools such as SQL and R.
  • Experience in building ETL pipelines and cloud technologies such as AWS.
  • Familiar with big data frameworks and tools like Hadoop and Spark.

Responsibilities

  • Develop and maintain data engineering solutions for BI and data projects.
  • Collaborate with various stakeholders to ensure alignment on data products.
  • Manage incident response and provide support for deployed systems.

Skills

Data cleaning and transformation
Building ETL pipelines
Database design
Cloud technologies
Production-grade data pipelines
Data modelling
Big data frameworks
Scripting languages
Communication skills

Tools

SQL Server Integration Services
AWS Lambda
PostgreSQL
Apache Spark
BeautifulSoup

Job description

Work closely with Product Owner for the development, implementation and maintenance of data engineering products for Business Intelligence (BI) and Data warehouse projects. Scope of work includes:

  • Design, develop and deploy data engineering solutions (including data tables, views, marts, etc.) across data pipelines, data warehouses, operational data stores, data lakes and virtualisation.
  • Perform data extraction, cleaning, transformation, and flow. Web scraping may also be part of the data extraction scope.
  • Design, build, launch and maintain efficient and reliable large-scale batch and real-time data pipelines with data processing frameworks.
  • Integrate and collate data silos in a manner which is both scalable and compliant.
  • Ensure adherence to quality processes throughout the data product lifecycle, including development, deployment, validation, change management and documentation, with particular emphasis on production environment standards.
  • Ensure proper data practices and maintain compliance with security policies and regulatory requirements.
  • Collaborate with Project Managers, Data Architects, Business Analysts, Frontend Developers, Designers and Data Analysts to build scalable data-driven products.
  • Be responsible for developing backend APIs & working on databases to support the applications.
  • Work in an Agile Environment that practices Continuous Integration and Delivery.
  • Work closely with fellow developers through pair programming and code review process.
  • Collaborate with business stakeholders to gather and analyse requirements for data engineering solutions, ensuring alignment between technical implementation and business objectives.
  • Support business users in designing and developing front-end reports, dashboards and interactive visualisations to enhance organisational data analysis capabilities.
  • Provide ongoing operational maintenance and technical support for deployed data engineering products and systems.
  • Manage incident response and service requests, ensuring timely resolution within established Service Level Agreements.
  • Participate in regular project audits and provide training support to team members and stakeholders as required.
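The extraction, cleaning, transformation and loading work described above can be sketched as a minimal ETL flow using only the Python standard library; the CSV data, column names and table name are hypothetical placeholders, not part of the role's actual systems.

```python
import csv
import io
import sqlite3

# Extract: read raw records from a CSV source (an in-memory string here;
# in practice this would be a file, API response, or scraped page).
raw_csv = """id,name,amount
1,  Alice ,100
2,Bob,
3,Carol,250
"""
rows = list(csv.DictReader(io.StringIO(raw_csv)))

# Transform: trim whitespace, cast types, and drop records with a missing amount.
clean = [
    {"id": int(r["id"]), "name": r["name"].strip(), "amount": int(r["amount"])}
    for r in rows
    if r["amount"].strip()
]

# Load: write the cleaned records into an operational data store
# (an in-memory SQLite database stands in for the real target).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, name TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (:id, :name, :amount)", clean)
total = conn.execute("SELECT SUM(amount) FROM sales").fetchone()[0]
print(total)  # 350
```

A production pipeline would add the concerns the posting calls out separately: scheduling, validation, change management and documentation.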

Knowledge and Skills

  • Proficient in general data cleaning and transformation (e.g. SQL, VQL, pandas, R) to ensure data accuracy and consistency.
  • Proficient in building ETL pipelines (e.g. SQL Server Integration Services (SSIS), AWS Database Migration Service (DMS), Python, AWS Lambda, ECS container tasks, EventBridge, AWS Glue, Spring).
  • Proficient in database design and a range of databases (e.g. SQL, PostgreSQL, AWS S3, Athena, MongoDB, PostGIS, MySQL, SQLite, VoltDB, Cassandra).
  • Experience in cloud technologies such as GPC, GCC (i.e. AWS, Azure, Google Cloud).
  • Experience in, and passion for, data engineering in a big data environment on cloud platforms such as GPC, GCC (i.e. AWS).
  • Experience with building production-grade data pipelines, ETL/ELT data integration.
  • Knowledge about system design, data structure and algorithms.
  • Familiar with data modelling, data access, and data storage infrastructure like Data Mart, Data Lake, Data Virtualisation and Data Warehouse for efficient storage and retrieval.
  • Familiar with REST APIs and web requests/protocols in general.
  • Familiar with big data frameworks and tools (e.g. Hadoop, Spark, Kafka, RabbitMQ).
  • Familiar with the W3C Document Object Model and customised web scraping (e.g. BeautifulSoup, CasperJS, PhantomJS, Selenium, Node.js).
  • Familiar with data policies, access control and security best practices.
  • Comfortable in at least one scripting language (e.g. SQL, Python).
  • Comfortable in both Windows and Linux development environments.
  • Interest in being the bridge between engineering and analytics.
  • Good communication skills to work closely with stakeholders, technical lead and fellow team members.
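
As a small illustration of the DOM-based web scraping listed above: the posting names BeautifulSoup, but the same idea can be sketched with the standard-library html.parser so the example is self-contained. The HTML snippet and the "job-title" class are hypothetical.

```python
from html.parser import HTMLParser

class TitleScraper(HTMLParser):
    """Collect the text of every <h2 class="job-title"> element."""
    def __init__(self):
        super().__init__()
        self.titles = []
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        # Only start capturing inside the target element.
        if tag == "h2" and ("class", "job-title") in attrs:
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "h2":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.titles.append(data.strip())

page = """
<html><body>
  <h2 class="job-title">Data Engineer</h2>
  <p>Singapore, on-site</p>
  <h2 class="job-title">Data Analyst</h2>
</body></html>
"""
scraper = TitleScraper()
scraper.feed(page)
print(scraper.titles)  # ['Data Engineer', 'Data Analyst']
```

With BeautifulSoup the same extraction would be a one-liner over `soup.find_all("h2", class_="job-title")`; Selenium or a headless browser would be needed only when the page is rendered by JavaScript.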