Singapore
On-site
SGD 70,000 - 90,000
Full time
Job summary
A leading engineering firm in Singapore seeks a Data Engineer to develop and maintain scalable data pipelines. The candidate should have 2-3 years of experience and strong skills in Python, SQL, and AWS technologies, and will collaborate with business and analytics teams to optimize data models and data quality. This role is key to driving data-driven decision-making within the organization.
Qualifications
- 2-3 years of experience in data engineering or a similar role.
- Strong programming skills in Python and SQL, and experience with AWS and the related tech stack.
- Good understanding of basic machine learning concepts (e.g., Amazon SageMaker).
Responsibilities
- Develop and maintain scalable data pipelines and build out new integrations.
- Collaborate with analytics and business teams to refine data models.
- Take end-to-end ownership of data quality in core datasets.
Skills
Python
AWS
SQL
Data Pipelines
Data Engineering
Education
Bachelor's degree in Computer Science, Engineering, or a related field
Tools
Glue
Airflow
Kafka
Spark
Snowflake
dbt
Responsibilities
- Develop, maintain, and optimize scalable data pipelines using Python and AWS services (e.g., S3, Lambda, ECS, EKS, RDS, SNS/SQS, Vector DB) to support continuing increases in data volume and complexity
- Build out new integrations, including with AI platforms such as Bedrock and Google, to enhance data accessibility
- Collaborate with analytics and business teams to create and improve data models for business intelligence
- Take end-to-end ownership of data quality in our core datasets and data pipelines
- Participate in code reviews and contribute to DevOps / DataOps / MLOps practices
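As an illustration of the pipeline work described above, here is a minimal extract-transform-load sketch in plain Python. It is a hypothetical example only: the `orders` table, its columns, and the sample data are invented for demonstration, and a production pipeline would use the services named in this posting (e.g., Glue or Airflow for orchestration, S3/RDS for storage) rather than in-memory SQLite.

```python
# Hypothetical ETL sketch: all table and field names are illustrative.
import csv
import io
import sqlite3


def extract(raw_csv: str) -> list[dict]:
    """Parse raw CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(raw_csv)))


def transform(rows: list[dict]) -> list[tuple]:
    """Normalize types, trim whitespace, and drop rows missing an order id."""
    out = []
    for r in rows:
        if not r.get("order_id"):
            continue  # data-quality rule: skip incomplete records
        out.append((int(r["order_id"]),
                    r["customer"].strip().lower(),
                    float(r["amount"])))
    return out


def load(rows: list[tuple], conn: sqlite3.Connection) -> int:
    """Write transformed rows into the target table; return the row count."""
    conn.execute("CREATE TABLE IF NOT EXISTS orders "
                 "(order_id INTEGER PRIMARY KEY, customer TEXT, amount REAL)")
    conn.executemany("INSERT OR REPLACE INTO orders VALUES (?, ?, ?)", rows)
    conn.commit()
    return conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]


# Sample run: one row is dropped for a missing order_id.
raw = "order_id,customer,amount\n1, Alice ,19.90\n,Bob,5.00\n2,Carol,42.00\n"
conn = sqlite3.connect(":memory:")
count = load(transform(extract(raw)), conn)
```

The same extract/transform/load split maps directly onto an Airflow DAG (one task per stage) or a Glue job, which is where the orchestration tools listed in this posting come in.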
Job Requirements
- Bachelor's degree in Computer Science, Engineering, or a related field
- 2-3 years of experience in data engineering or a similar role
- Strong programming skills in Python and SQL, and experience with AWS and the related tech stack
- Experience building scalable data pipelines with technologies such as Glue, Airflow, Kafka, and Spark
- Experience with Snowflake, dbt, or Bedrock is a plus
- Good understanding of basic machine learning concepts (e.g., Amazon SageMaker)