Data Scientist (Cloud data)

User Experience Researchers Pte Ltd (Singapore)

Singapore

On-site

SGD 50,000 - 70,000

Full time

Job summary

A data-driven company in Singapore is seeking a skilled Data Scientist to design and implement data-driven solutions using AWS technologies. The role involves statistical analysis, developing predictive models, and creating dashboards in AWS QuickSight. The ideal candidate will possess strong technical expertise in Python, SQL, and AWS tools, with a focus on analytics and data governance. This is a full-time position suitable for entry-level candidates looking to grow in the data science field.

Qualifications

  • Strong background in data science, statistics, and machine learning.
  • Proficiency in data analysis tools such as Python, R, and SQL.
  • Experience with AWS data services including Redshift, QuickSight, and Glue.
  • Ability to develop and troubleshoot data pipelines.

Responsibilities

  • Perform data analysis and statistical modeling using AWS Redshift.
  • Develop predictive models and machine learning algorithms.
  • Create and maintain interactive dashboards in AWS QuickSight.
  • Monitor data pipelines and flag data quality issues.
  • Document analytical methodologies and findings.

Skills

Data analysis tools
Statistical analysis
Machine learning
AWS data services
Data visualization
Python
SQL
R

Tools

AWS Redshift
AWS QuickSight
AWS Glue
GitLab
Confluence
Jira

Job description
About The Data Scientist Role

We are seeking a skilled Data Scientist to design and implement data‑driven solutions using AWS technologies for cloud data. The role involves performing statistical analysis, developing predictive models, and creating interactive dashboards in AWS QuickSight to deliver actionable business insights. You will support data pipelines, CI/CD workflows, and infrastructure automation while ensuring data quality and governance. The ideal candidate combines strong technical expertise in Python, SQL, and AWS data tools with a deep understanding of analytics, visualization, and operational excellence in cloud environments.
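
For a sense of the day-to-day analysis work this involves, the snippet below is a minimal, illustrative sketch: it samples a Redshift table and flags columns with a high share of missing values. The connection details, table name, and threshold are hypothetical placeholders, and it assumes the redshift_connector Python driver.

```python
# Illustrative sketch only: host, credentials, table, and threshold are placeholders.
import redshift_connector

QUERY = "SELECT * FROM analytics.daily_events LIMIT 10000"  # hypothetical table
NULL_RATE_THRESHOLD = 0.05  # flag columns with more than 5% missing values


def check_null_rates() -> dict:
    """Sample a Redshift table and return the null rate per column."""
    conn = redshift_connector.connect(
        host="example-cluster.xxxx.ap-southeast-1.redshift.amazonaws.com",  # placeholder
        database="analytics",
        user="data_scientist",
        password="change-me",  # use IAM auth or Secrets Manager in practice
    )
    try:
        cursor = conn.cursor()
        cursor.execute(QUERY)
        columns = [desc[0] for desc in cursor.description]
        rows = cursor.fetchall()
    finally:
        conn.close()

    total = len(rows) or 1
    null_rates = {
        col: sum(1 for row in rows if row[i] is None) / total
        for i, col in enumerate(columns)
    }
    flagged = {col: rate for col, rate in null_rates.items() if rate > NULL_RATE_THRESHOLD}
    if flagged:
        print(f"Data quality warning, high null rates: {flagged}")
    return null_rates


if __name__ == "__main__":
    check_null_rates()
```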

Responsibilities
  • Data Analysis & Insights
    • Perform data analysis and statistical modelling using AWS Redshift data
    • Develop predictive models and machine learning algorithms
    • Generate actionable insights from large datasets
    • Conduct data quality assessments and validation
  • Dashboard & Visualization Development
    • Create and maintain interactive dashboards in AWS QuickSight
    • Design data visualizations to support business decision‑making
    • Optimize dashboard performance and user experience
    • Ensure data accuracy in reporting and visualizations
  • Data Pipeline & Engineering Support
    • Monitor and troubleshoot AWS Glue jobs and data ingestion processes
    • Support CI/CD pipelines with data‑focused monitoring and validation
    • Assist with GitLab pipeline configurations for data workflows
    • Support AWS Lambda functions related to data processing
    • Collaborate on Infrastructure as Code (IaC) for data infrastructure
  • Data Science Operations
    • Monitor data pipelines and flag data quality issues
    • Collaborate with technical teams on data requirements
    • Support data governance and best practices implementation
    • Assist in data model validation and testing
  • Documentation & Reporting
    • Document analytical methodologies and findings
    • Prepare regular reports on data insights and model performance
    • Conduct monthly progress meetings (1 hour) to present findings
    • Maintain project documentation on SHIP-HATS Confluence
    • Track analytical tasks through SHIP-HATS Jira
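
The pipeline-monitoring duties above, such as checking AWS Glue job runs and flagging failures for follow-up, might look something like the minimal sketch below using boto3; the Glue job names are hypothetical placeholders.

```python
# Illustrative sketch only: the Glue job names are hypothetical placeholders.
import boto3

GLUE_JOBS = ["ingest_daily_events", "transform_customer_profiles"]


def check_glue_jobs(job_names=GLUE_JOBS):
    """Check the most recent run of each Glue job and flag anything not SUCCEEDED."""
    glue = boto3.client("glue")
    issues = []
    for name in job_names:
        runs = glue.get_job_runs(JobName=name, MaxResults=1).get("JobRuns", [])
        if not runs:
            issues.append((name, "NO_RUNS", "job has never run"))
            continue
        latest = runs[0]
        state = latest["JobRunState"]  # e.g. SUCCEEDED, FAILED, TIMEOUT, RUNNING
        if state != "SUCCEEDED":
            issues.append((name, state, latest.get("ErrorMessage", "")))
    for name, state, error in issues:
        print(f"Flag for follow-up: {name} -> {state} {error}".rstrip())
    return issues


if __name__ == "__main__":
    check_glue_jobs()
```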
Requirements
  • Strong background in data science, statistics, and machine learning
  • Proficiency in data analysis tools (Python, R, SQL)
  • Experience with AWS data services (Redshift, QuickSight, S3, Glue, Lambda)
  • Data pipeline development and troubleshooting experience
  • Basic CI/CD pipeline knowledge (GitLab preferred)
  • Infrastructure as Code (IaC) familiarity for data environments
  • Data visualization and dashboard development skills
  • Strong analytical thinking and problem‑solving abilities
  • Excellent documentation and presentation skills
Seniority Level
  • Entry level
Employment Type
  • Full‑time
Job Function
  • Engineering and Information Technology