
A data engineering company in Singapore is seeking a Data Engineer to design, develop, and maintain ETL/ELT pipelines and build data warehouses. The ideal candidate will have strong SQL and Python skills, experience with Big Data technologies, and a proven ability to safeguard data quality and integrity. The role involves close collaboration with Data Analysts, Data Scientists, and Business Stakeholders to deliver effective solutions. Candidates with a Bachelor's degree in a related field and relevant certifications are encouraged to apply.
Design, develop, and maintain robust ETL/ELT pipelines to ingest data from multiple sources (databases, APIs, flat files, streaming sources); see the illustrative sketch after this list.
Build and optimize data warehouses and data lakes to support business intelligence and analytics use cases.
Ensure data quality, integrity, accuracy, and availability through validation, monitoring, and alerting mechanisms.
Collaborate closely with Data Analysts, Data Scientists, and Business Stakeholders to understand data requirements and deliver scalable solutions.
Optimize data processing performance, including query tuning and pipeline efficiency.
Implement and maintain data security, access controls, and governance standards.
Automate workflows and data operations to improve reliability and reduce manual intervention.
Troubleshoot and resolve data pipeline, performance, and production issues.
Document data models, pipeline architecture, and operational processes.
Support data migration, integration, and modernization initiatives.
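The responsibilities above centre on building, validating, and scheduling pipelines. As an illustration only, the sketch below shows what a minimal daily ETL with a validation step might look like in a recent Apache Airflow 2.x deployment (Airflow is one of the orchestrators named later in this posting); the DAG id, file paths, and column names are hypothetical and not part of the role description.

```python
# Minimal illustrative DAG: extract a CSV, validate it, then load it.
# Assumes a recent Apache Airflow 2.x install with pandas available;
# dag_id, paths, and column names are hypothetical.
from datetime import datetime

import pandas as pd
from airflow import DAG
from airflow.operators.python import PythonOperator

RAW_PATH = "/tmp/sales_raw.csv"      # hypothetical source extract
CLEAN_PATH = "/tmp/sales_clean.csv"  # hypothetical validated output


def extract() -> None:
    # In a real pipeline this would pull from a database, API, or file drop.
    pd.DataFrame(
        {"order_id": [1, 2, 3], "amount": [120.0, 75.5, 230.0]}
    ).to_csv(RAW_PATH, index=False)


def validate() -> None:
    # Basic data-quality gates: non-empty extract and no null amounts.
    df = pd.read_csv(RAW_PATH)
    if df.empty:
        raise ValueError("extract produced no rows")
    if df["amount"].isna().any():
        raise ValueError("null amounts found")
    df.to_csv(CLEAN_PATH, index=False)


def load() -> None:
    # Stand-in for a warehouse load (e.g. a COPY into a staging table).
    print(pd.read_csv(CLEAN_PATH).to_string(index=False))


with DAG(
    dag_id="daily_sales_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_validate = PythonOperator(task_id="validate", python_callable=validate)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_validate >> t_load
```

The same extract-validate-load shape applies regardless of orchestrator; failing the validation task is what keeps bad data out of downstream tables.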
Strong experience with SQL and relational databases (e.g., MySQL, PostgreSQL, SQL Server, Oracle).
Hands-on experience with Python (preferred) or Scala/Java for data engineering tasks.
Proven expertise in building ETL pipelines using tools such as Apache Airflow, Talend, Informatica, or similar.
Experience with Big Data technologies (e.g., Hadoop, Spark).
Solid understanding of data warehousing concepts, dimensional modeling, and schema design (see the star-schema sketch after this list).
Experience working with cloud platforms (AWS, Azure, or GCP), including cloud-native data services.
Familiarity with REST APIs, data ingestion, and integration patterns.
Knowledge of version control systems (Git) and CI/CD practices.
Strong analytical and problem-solving abilities.
Excellent communication and stakeholder coordination skills.
Ability to work independently and within cross-functional teams.
Detail-oriented with a strong focus on data accuracy and reliability.
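To make the dimensional-modeling requirement concrete, here is a small, self-contained sketch of a star schema (one fact table, one dimension) and a typical aggregate query. It uses Python's built-in sqlite3 module purely for illustration; the table and column names are hypothetical, and a production warehouse would use one of the platforms listed above.

```python
# Illustrative star schema: one dimension table, one fact table, one rollup query.
# Uses an in-memory SQLite database so the example is fully self-contained.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
CREATE TABLE dim_customer (
    customer_key  INTEGER PRIMARY KEY,
    customer_name TEXT,
    country       TEXT
);
CREATE TABLE fact_sales (
    sales_key    INTEGER PRIMARY KEY,
    customer_key INTEGER REFERENCES dim_customer(customer_key),
    order_date   TEXT,
    amount       REAL
);
""")

cur.execute("INSERT INTO dim_customer VALUES (1, 'Acme Pte Ltd', 'SG')")
cur.execute("INSERT INTO fact_sales VALUES (1, 1, '2024-01-15', 250.0)")
cur.execute("INSERT INTO fact_sales VALUES (2, 1, '2024-01-16', 120.5)")

# Typical analytics query: aggregate the fact table by a dimension attribute.
for row in cur.execute("""
    SELECT d.country, SUM(f.amount) AS total_sales
    FROM fact_sales AS f
    JOIN dim_customer AS d ON d.customer_key = f.customer_key
    GROUP BY d.country
"""):
    print(row)
```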
Experience with streaming platforms (Kafka, Kinesis, Pub/Sub); see the consumer sketch after this list.
Exposure to BI tools (Power BI, Tableau, Looker).
Knowledge of data governance, metadata management, and data cataloging tools.
Experience supporting machine learning or advanced analytics pipelines.
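For the streaming-platform item above, the fragment below sketches a minimal Kafka consumer loop in Python. It assumes the kafka-python package and a broker reachable at localhost:9092; the topic name, group id, and message shape are hypothetical.

```python
# Minimal illustrative Kafka consumer (assumes the kafka-python package and a
# broker at localhost:9092; topic and group names are hypothetical).
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders",                          # hypothetical topic
    bootstrap_servers="localhost:9092",
    group_id="demo-ingest",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for message in consumer:
    # Each record would typically be validated and written to a staging area
    # before landing in the warehouse or lake.
    print(message.topic, message.offset, message.value)
```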
Bachelor’s degree in Computer Science, Information Technology, Engineering, Mathematics, or a related field.
Relevant certifications in cloud platforms or data engineering are a plus.