Data Engineer

Boost Bank

Kuala Lumpur

On-site

MYR 70,000 - 100,000

Full time

4 days ago

Job summary

Boost Bank, Malaysia's first homegrown digital bank, is seeking a Data Engineer to build and maintain efficient data pipelines. The role involves supporting analytics teams and helping design data systems in a dynamic banking environment that fosters innovation and financial well-being.

Qualifications

  • At least 3 years of experience with data storage and analytics solutions.
  • Proficient in Python, Bash Shell scripting, and SQL.
  • Experience with AWS, ETL processes, and data warehousing.

Responsibilities

  • Design and maintain scalable data pipelines.
  • Develop tools for analytics and data science teams.
  • Ensure data integration aligns with business needs.

Skills

Data modelling
Python
Bash Shell
ETL pipelines
SQL
AWS Cloud Infrastructure
Big data tools
UNIX/Linux

Education

Bachelor’s degree in Computer Science, Engineering, or IT

Tools

MySQL
PostgreSQL
AWS services
Spark
MongoDB

Job description

Boost Bank Berhad (formerly Boost Berhad) has received regulatory approval from Bank Negara Malaysia (BNM) and the Ministry of Finance (MOF) to commence operations on 15 January 2024. As the first homegrown digital bank, our mission is to revolutionize banking and make financial wellbeing accessible and seamless for all Malaysians.

We are seeking an experienced Data Engineer to join the Boost Bank Data Engineering team. The ideal candidate is a skilled builder of data pipelines and a data operations specialist, passionate about designing, optimizing, and constructing robust data systems tailored for a digital banking environment.

Your responsibilities include designing, developing, and maintaining efficient, scalable, and flexible data pipelines capable of processing large data volumes. You will develop tools and solutions to empower our analytics and data science teams, aiding in building and optimizing products that establish us as an industry leader.

Part of your role involves constructing robust infrastructure to support the extraction, transformation, and loading of data from diverse sources, utilizing big data technologies. You will also contribute to integrating technical and application components aligned with business needs, ensuring all development stages follow established methodologies and standards, including documentation and maintenance.

Experience

  • At least 3 years of hands-on experience in operating and optimizing distributed, large-scale data storage and analytics solutions.
  • Proficiency in data modelling for designing, implementing, and maintaining logical and physical data models for structured and unstructured data.
  • Proficiency in scripting languages such as Python and Bash Shell. Experience with Java and Scala is a plus.
  • Strong background in ETL pipelines and data warehousing, with experience handling unstructured or semi-structured data streams and repositories like CSV, JSON, Excel, XML, and AWS services such as Glue, S3, RDS.
  • Advanced SQL skills and experience with relational databases like MySQL, PostgreSQL, MS SQL; familiarity with MongoDB and DynamoDB is advantageous.
  • Experience with AWS Cloud Infrastructure, including IAM, S3, Glue, Athena, EC2, and Security Groups.
  • Expertise in big data tools such as Spark, HDFS, AWS Redshift, and related technologies.
  • Experience in designing and deploying data warehouses and real-time stream-processing systems, preferably using open-source solutions.
  • Solid understanding of computer science fundamentals, including object-oriented design, data structures, and algorithms.
  • Familiarity with professional software engineering practices, including SDLC, coding standards, code reviews, source control, and testing.
  • Proficiency in UNIX/Linux environments, including system commands, package management, and server monitoring.
  • Knowledge of security best practices and data protection measures.
  • Familiarity with the FinTech industry is a plus.

Attributes

  • Strong data modelling skills to support complex analytics and reporting.
  • Passion for product improvement and delivering excellent user experiences.
  • Focus on software quality, maintainability, scalability, performance, and security.

Education

  • Bachelor’s degree or equivalent in Computer Science, Engineering, IT, or related fields.
  • Strong foundation in data structures, data architecture, algorithm design, problem-solving, and complexity analysis.