Senior Data Engineer

Steelcase Manufacturing (Malaysia) Sdn Bhd

Kuala Lumpur

On-site

MYR 60,000 - 90,000

Full time

Job summary

A global office solutions company is seeking a Senior Data Engineer in Kuala Lumpur, Malaysia. The role involves collaborating with the Data Science team to design and maintain data pipelines, support advanced analytics, and enhance the data platform infrastructure. Candidates should have a Bachelor's degree in Computer Science, familiarity with cloud platforms, and a strong passion for leveraging data. Join a dynamic environment, work collaboratively with diverse teams, and make a meaningful impact.

Qualifications

  • Hands-on programming experience in Python, Scala, or Java.
  • Experience with relational and/or NoSQL databases.
  • Practical experience with Big Data tools and frameworks.

Responsibilities

  • Collaborate to define scope and MVPs.
  • Design and maintain scalable data pipelines.
  • Monitor and troubleshoot data pipeline performance.

Skills

Analytical and problem-solving skills
Familiarity with data governance principles
Collaboration and relationship-building
Ability to communicate technical concepts

Education

Bachelor's degree in Computer Science or related field

Tools

Python
Scala
Java
Databricks
Azure Data Factory
Spark
Snowflake
AWS Redshift

Job description
In this role, you will collaborate closely with the Data Science team to gather data from diverse sources, develop scalable data pipelines, and support advanced analytics initiatives. You will play a key role in building and maintaining the data platform infrastructure, enabling real-time and batch data processing, and supporting machine learning operations.

Desired Skills and Experience
  • Strong analytical and problem-solving skills with the ability to interpret and visualize data effectively.
  • Eagerness to explore Big Data technologies and environments.
  • Passion for leveraging data to influence business decisions and tell compelling stories.
  • Ability to communicate highly technical concepts to non-technical stakeholders.
  • Strong collaboration and relationship-building skills across internal and external teams.
  • Familiarity with data governance principles, data quality standards, and best practices.

Education & Experience
  • Bachelor’s degree in Computer Science or a related field (required).
  • Hands-on programming experience in Python, Scala, or Java (Spark preferred).
  • Experience with relational and/or NoSQL databases, including modeling and writing complex queries.
  • Practical experience with Big Data tools and frameworks such as Databricks, Azure Data Factory, Spark, and PySpark.
  • Exposure to public cloud platforms (Azure, AWS, or GCP) preferred.
  • Experience with large-scale distributed systems, data pipelines, and data processing.
  • Familiarity with cloud data warehouses (e.g., Snowflake, Azure Synapse, AWS Redshift, GCP BigQuery).
  • Knowledge of CI/CD pipelines and Infrastructure-as-Code (IaC) is an advantage.

What You Will Be Doing
  • Collaborate with product owners, managers, and engineers to define scope and Minimum Viable Products (MVPs).
  • Design, build, and maintain scalable and robust data pipelines integrating data from diverse sources, APIs, and applications.
  • Apply modern data architecture patterns (e.g., microservices, event-driven, data lake) to ensure scalability and performance.
  • Perform data mapping, establish data lineage, and document information flows for observability and traceability.
  • Partner with analytics stakeholders and data scientists to streamline data acquisition and curation processes.
  • Monitor, optimize, and troubleshoot data pipeline performance issues, coordinating resolution with relevant teams.
  • Research and implement new tools and techniques to enhance the data platform, including proof-of-concept development.
  • Support MLOps teams with deployment and optimization of machine learning models for batch, streaming, and API scenarios.
  • Architect and manage the data platform infrastructure, ensuring high availability, scalability, and security using IaC and CI/CD practices.

Who We Are

Steelcase is a global design and thought leader in the world of work. Along with our expansive community of brands, we design and manufacture innovative furnishings and solutions to help people do their best work in the many places where work happens.

Why People Choose to Work with Us

At Steelcase, we put people at the center of everything we do. We believe work can bring meaning and purpose to life, and we support our employees in all aspects of their journey. Together, we make a lasting impact through our work and our communities.

What Matters to Us

More than qualifications, we value talent, potential, and diverse perspectives. We welcome applicants who are open-minded, respectful, and comfortable interacting with people different from themselves, building mutual respect and positive relationships across our global community.
