Data Analyst/Engineer

GrainCorp

Singapore

On-site

SGD 70,000 - 90,000

Full time


Job summary

A leading data solutions provider in Singapore is looking for a Data Engineer to manage data pipelines and dashboards. You will build reliable data hubs and help teams self-serve data insights while ensuring data quality and security. The ideal candidate has strong SQL skills and experience with data modeling and BI tools. This role offers mentorship and competitive compensation.

Benefits

Autonomy and ownership of data stack
Competitive compensation
Birthday leave
Mentorship and growth opportunities

Qualifications

  • Proficiency in writing efficient SQL queries.
  • Experience with data modeling concepts.
  • Familiarity with cloud warehouse technologies.

Responsibilities

  • Build data pipelines that pull data from various business systems.
  • Create and manage the metrics layer for business metrics.
  • Design and ship clear visual dashboards.

Skills

Strong SQL
Data modeling
Modern stack
BI tools
Quality
Stakeholder work
Python scripting

Job description

You tame messy data, model truth, and ship dashboards people actually use. Fast clock speed, sleeves rolled. Build our Data Hub so every team pulls from one clean, performant source—freeing product engineers to ship product.

Mission

Own the pipelines, models, and metrics layer that power decisions at Grain. Turn chaos into clarity: reliable datasets, crisp dashboards, and self-serve tools that scale.

Outcomes (first 90 days)
  • Data Hub foundation: documented sources, core datasets modelled and tested
  • Leadership has reliable data for top decision areas (demand forecasting, product and channel performance, customer retention): no more "let me check and get back to you"
  • Team productivity: 30% reduction in ad-hoc requests as teams self-serve common questions
Responsibilities
  • Build data pipelines: pull data from our systems (sales, inventory, finance, marketing), clean it, and make it queryable
  • Create the metrics layer: define key business metrics once so everyone uses the same definitions (no more "which revenue number is right?"); see the sketch after this list
  • Ship dashboards people use: fast, clear visualisations that answer real questions; teach teams to find their own answers
  • Keep it fast and cheap: optimize queries, manage warehouse costs, monitor performance
  • Ensure quality: write tests, set up alerts when data breaks, establish freshness expectations
  • Protect customer data: handle PII safely, control who sees what, maintain audit trails
  • Push data where it's needed: send clean data back to sales, marketing, and support tools
  • Raise the bar: write docs people actually read, run demos, make everyone more data-literate
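
To make "define key business metrics once" concrete, here is a minimal sketch of the kind of shared model we have in mind, assuming a generic warehouse; the schema, table, and column names below are purely illustrative, not our actual systems:

    -- Hypothetical example: one shared definition of daily net revenue.
    -- raw.orders / raw.order_items and their columns are made-up names.
    create or replace view metrics.daily_net_revenue as
    select
        date_trunc('day', o.ordered_at)          as order_date,
        o.sales_channel,
        sum(oi.quantity * oi.unit_price)         as gross_revenue,
        sum(coalesce(oi.discount_amount, 0))     as total_discounts,
        sum(oi.quantity * oi.unit_price)
          - sum(coalesce(oi.discount_amount, 0)) as net_revenue
    from raw.orders o
    join raw.order_items oi on oi.order_id = o.order_id
    where o.status = 'completed'
    group by 1, 2;

Every dashboard and ad-hoc query then reads net_revenue from this one model instead of re-deriving it, so "which revenue number is right?" has exactly one answer.
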
Competencies
  • Strong SQL: efficient, readable queries that answer tough questions. You understand execution plans and can optimize slow queries
  • Data modeling: fact/dimension tables, slowly changing dimensions, event schemas - and when to break the rules (illustrative sketch after this list)
  • Modern stack: dbt + orchestration (Airflow/Dagster/Prefect), cloud warehouse (BigQuery/Snowflake/Redshift/Postgres)
  • BI tools: Built dashboards people actually use (Looker, Metabase, Superset, or similar)
  • Quality: Testing, lineage, monitoring - you catch issues before stakeholders do
  • Stakeholder work: Turn "show me engagement" into concrete metrics and actionable insights
  • Bonus points: Python scripting, CI/CD, infrastructure-as-code (Terraform), event tracking, privacy-by-design thinking
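
If you are wondering what fact/dimension modeling looks like in practice here, the sketch below is a generic star-schema example in plain SQL; the names and the type-2 history columns are illustrative assumptions, not our actual models:

    -- Hypothetical star-schema sketch: a fact table keyed to a customer dimension.
    -- All names are illustrative only.
    create table dim_customer (
        customer_key   bigint primary key,
        customer_id    text not null,       -- natural key from the source system
        customer_name  text,
        signup_channel text,
        valid_from     date not null,       -- type-2 slowly changing dimension
        valid_to       date,                -- null means this is the current row
        is_current     boolean not null
    );

    create table fct_orders (
        order_key      bigint primary key,
        order_id       text not null,
        customer_key   bigint references dim_customer (customer_key),
        order_date     date not null,
        sales_channel  text,
        net_revenue    numeric(12, 2)       -- measured once, reused everywhere
    );

Queries join fct_orders to dim_customer on customer_key, and valid_from/valid_to let you report against the customer attributes that were true at the time of each order.
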
How we work
  • Problem → Prototype → Prove value → Productionise.
  • One source of truth > many spreadsheets.
  • Stewardship over flash: reliable, observable, cost-aware.
  • We value integrity, excellence, and service: use data to uplift people.
What’s in it for you
  • Autonomy and ownership of the data stack.
  • Ship work used daily by every function.
  • Mentorship and growth into a Staff/Analytics Eng or Data Platform Eng path.
  • Competitive compensation and birthday leave.
What to include in your application
  • CV or LinkedIn + GitHub (if any).
  • 2–3 dashboards or repos you’ve built (screenshots/links) with a short note on impact.
  • A one-pager: your approach to building a “Data Hub” in 90 days.
Interview process (typical)
  • Intro (45m): values, motivations, how you approach messy data.
  • Technical deep dive (60–90m): SQL/design exercise + performance tuning discussion.
  • Take-home (3–4h max): model a tiny domain in dbt + a dashboard; include tests & docs.
  • Stakeholder panel (45m): walk-through, trade-offs, storytelling.
  • References.