Data Engineer

Tigerlab

Selangor

On-site

MYR 60,000 - 80,000

Full time

Job summary

A leading technology company in Malaysia is seeking a Data Engineer to play a pivotal role in architecting and scaling its data ecosystem. You will work closely with founders and various teams to turn raw data into strategic insights. The ideal candidate has 4+ years of data engineering experience, is proficient in SQL and Python, and understands cloud-based data technologies. Join a supportive team that values creativity and offers a competitive salary, training opportunities, and career progression.

Benefits

Competitive salary and benefits package
Comprehensive training and professional development opportunities
Collaborative and innovative work environment
Career progression opportunities

Qualifications

  • 4+ years in a data engineering role, preferably in high-growth environments.
  • Experience optimizing queries for speed and cost.
  • Strong business acumen to interpret data through a customer lens.

Responsibilities

  • Collaborate with founders and teams to build foundational datasets.
  • Design and expand event-driven data infrastructure.
  • Build pipelines for internal dashboards and analytics.

Skills

Data engineering
SQL
Python
Cloud-based data stacks (AWS S3, Athena)
Data quality
Data privacy
Data analytics

Tools

Kafka
BigQuery
Snowflake
dbt
Airflow
Power BI

Job description

As one of tigerlab’s first Data Engineers, you will play a foundational role in architecting and scaling our data ecosystem. You’ll work hands‑on across the entire stack: ingesting raw event streams, shaping data models, building analytics pipelines, and enabling self‑serve insights for all teams. You will collaborate closely with the founders as well as our Sales, Product, and Delivery teams to turn raw data into strategic intelligence, fueling AI‑driven underwriting, smarter product decisions, and customer insights. Your work will also help define how tigerlab delivers scalable, reusable, and trustworthy data services that accelerate growth and operational excellence for insurers, MGAs, retailers, and innovators. This is a rare opportunity to lay the groundwork for a modern analytics ecosystem, shape a self‑serve data culture, and directly impact how tigerlab and our clients make data‑driven decisions.

Key Responsibilities
  • Work directly with founders and leaders across Sales, Product, and Delivery to understand their analytical requirements and build foundational datasets.
  • Design, optimize, and expand our event‑driven data infrastructure (Kafka, Firehose, S3, Athena).
  • Ensure accurate, compliant, and scalable storage of structured and semi‑structured data.
  • Build and maintain pipelines powering internal dashboards, external analytics, pricing models, underwriting insights, and AI‑driven features.
  • Create reusable dashboards, datasets, and visualizations for operational performance and customer intelligence.
  • Prepare large datasets for analysis, machine learning, and predictive modeling.
  • Support business teams with deep dives and ad‑hoc data investigations.
  • Define and maintain data contracts in partnership with engineering.
  • Implement data observability, monitoring, and alerting to ensure reliability.
  • Occasionally participate in client discussions to understand their data needs or present analytical findings.
  • Help establish tigerlab’s data culture as the first member of the Data team.

Required Qualifications
  • You have 4+ years of experience in a data engineering role, ideally in high‑growth or data-intensive environments.
  • You’ve optimized queries for speed and cost at scale; billions of rows/day is familiar territory.
  • You have strong business acumen and can interpret data through a sales or customer lens.
  • You’re experienced with cloud-based data stacks: AWS S3, Athena, Redshift, BigQuery, or Snowflake.
  • You write SQL and Python fluently; experience with Pandas, NumPy, or PySpark is a plus.
  • You care deeply about accuracy, reliability, and data quality.
  • You’re excited about the modern data stack and empowering teams with self‑serve analytics.
  • You have experience with dbt, Airflow, n8n, or other transformation/orchestration tools.
  • You’re comfortable with version control, CI/CD for analytics, and BI tools like Metabase, Looker, or Power BI.
  • You understand data privacy, anonymization techniques, and GDPR compliance.
  • You’ve worked with insurance-related data (policies, quotes, events, pricing, loss ratios) or are eager to learn.

What We Offer
  • Work on real‑world digital products with global clients.
  • Learn directly from experienced designers and engineers.
  • Be part of a supportive, agile team that values creativity and growth.
  • Build your career in the fast‑growing world of insurance technology.
  • Competitive salary and benefits package.
  • Comprehensive training and professional development opportunities.
  • Opportunity to work with cutting‑edge insurance technology.
  • Collaborative and innovative work environment.
  • Career progression opportunities within a growing company.