Data Engineer, Senior

JOBSTER PRIVATE LTD.

Singapore

On-site

SGD 80,000 - 120,000

Full time

Job summary

A data solutions company in Singapore is seeking a Data Engineer to design and manage data pipelines, utilizing AWS services including Glue and Redshift. The ideal candidate will have a strong background in data engineering, experience with SQL and Python, and knowledge of CI/CD practices. This role offers a dynamic environment focused on data optimization and infrastructure support.

Qualifications

  • Strong background in data engineering and data pipeline development.
  • Extensive experience with AWS data services and data warehouse design.
  • Knowledge of data modeling and database design principles.

Responsibilities

  • Design, build, and maintain robust data pipelines using AWS Glue.
  • Manage AWS Redshift data warehouse operations.
  • Develop and maintain CI/CD pipelines for data workflows using GitLab.
  • Set up monitoring and alerting for data pipeline health.

Skills

Data pipeline development
SQL
Python
Shell scripting
AWS data services
CI/CD knowledge
Infrastructure as Code
Data modeling

Tools

AWS Glue
GitLab
AWS Redshift
AWS S3
Terraform

Job description

Scope of Work:

Data Pipeline Development & Management

  • Design, build, and maintain robust data pipelines using AWS Glue
  • Implement ETL/ELT processes for data ingestion from multiple sources
  • Optimize data workflows for performance and scalability
  • Monitor and troubleshoot data pipeline failures and performance issues

Data Infrastructure & Engineering

  • Manage and optimize AWS Redshift data warehouse operations
  • Configure and maintain data storage solutions (AWS S3, data lakes)
  • Implement data partitioning, indexing, and compression strategies
  • Support Infrastructure as Code (IaC) for data infrastructure deployment

CI/CD & DevOps for Data

  • Develop and maintain CI/CD pipelines for data workflows using GitLab
  • Implement automated testing for data pipelines and data quality
  • Support version control and deployment strategies for data assets
  • Configure AWS Lambda functions for data processing automation

Monitoring & Support

  • Set up monitoring and alerting for data pipeline health
  • Provide technical support for data-related issues
  • Collaborate with technical teams on data architecture requirements
  • Optimize query performance and database operations

Documentation & Reporting

  • Document data pipeline architectures and technical specifications
  • Maintain runbooks and operational procedures
  • Conduct monthly progress meetings (1 hour) to report on system health
  • Track engineering tasks through SHIP-HATS Jira
  • Maintain technical documentation on SHIP-HATS Confluence

Required Skills & Experience:

  • Strong background in data engineering and data pipeline development
  • Proficiency in SQL, Python, and shell scripting
  • Extensive experience with AWS data services (Redshift, S3, Glue, Lambda, CloudWatch)
  • Data warehouse design and optimization experience
  • Strong CI/CD pipeline knowledge (GitLab preferred)
  • Infrastructure as Code (IaC) experience (Terraform, CloudFormation)
  • Knowledge of data modeling and database design principles
  • Strong troubleshooting and performance optimization skills