A leading gaming technology company in Singapore is looking for a Data Engineer to design, build, and maintain high-volume big data processing systems. The ideal candidate will have significant experience in developing data pipelines and managing cloud data services. Strong proficiency in programming and data processing techniques is essential. This role offers an opportunity to work closely with cross-functional teams and contribute to AI capabilities.
Qualifications
5+ years of relevant professional experience.
Ability to lead and mentor junior colleagues.
Experience in building and maintaining production pipelines for analytics.
Responsibilities
Design, develop, and maintain data systems and data pipelines.
Monitor and troubleshoot data workflows.
Manage and scope projects involving collaboration with data scientists.
Skills
Python
SQL
Java
Data processing techniques
AI technologies
Project scoping
Problem-solving
Communication skills
Education
Degree in computer science, engineering, mathematics or equivalent
Tools
AWS Redshift
Snowflake
BigQuery
Azure Data Lake
Terraform
Docker
Kubernetes
Airflow
Spark
Data Build Tool (DBT)
Job description
Joining Razer will place you on a global mission to revolutionize the way the world games. Razer is a place to do great work, offering you the opportunity to make a global impact while working with a team located across five continents. Razer is also a great place to work, providing you a unique, gamer-centric experience that will accelerate your growth, both personally and professionally.
Job Responsibilities
This role is required to design, build, and maintain high-volume big data processing systems that enable the organization to collect, manage, and convert raw data into usable information for data scientists and business analysts, and that enable the use of Artificial Intelligence (AI) capabilities. He/she is responsible for developing and maintaining data pipelines, data storage systems, and cloud infrastructure, while working closely with data scientists, data analysts, and internal stakeholders to utilize data for analytics and AI.
Essential Duties And Responsibilities
Own and improve the data stack used in the team to enhance its data processing capabilities.
Design, develop, and maintain data systems and data pipelines that enable the organization to store, process, and analyze large volumes of data. This involves developing data pipelines, designing data storage systems, and ensuring that data is integrated effectively to support AI applications.
Manage data lakes and data warehouses by populating and operationalizing them. This involves creating and managing table schemas, views, and materialized views, including tokenization and vectorization techniques for Gen AI.
Monitor and troubleshoot data workflows, resolving failures promptly and rerunning failed jobs to ensure data completeness.
Leverage modern build tools to enhance automation, data quality, testing, and deployment of data pipelines.
Design and build AI-powered and GenAI applications collaboratively with data scientists, data analysts, product managers, and business users.
Develop and implement cloud infrastructure that is in line with the company's security policies and practices, as well as cost-optimization practices.
Manage and scope projects that involve collaboration with data scientists, data analysts, and business users, understanding the data needs of stakeholders across the organization and implementing appropriate solutions.
Mentor interns and junior engineers in the team.
Prerequisites
Qualifications and Skills
Degree in computer science, engineering, mathematics or equivalent experience.
5+ years of relevant professional experience.
Ability to scope projects and effectively lead and mentor more junior colleagues.
Ability to write clean, maintainable, scalable, and robust code using Python, SQL, and Java.
Proven experience in building and maintaining pipelines in production for advanced analytics use cases.
Experience with cloud data services such as AWS Redshift, Snowflake, BigQuery, or Azure Data Lake.
Experience using Infrastructure as Code (IaC) tools such as Terraform, containerization tools like Docker, and container orchestration platforms like Kubernetes.
Experience using orchestration tools like Airflow, distributed computing frameworks like Spark or Dask, and data transformation tools like Data Build Tool (DBT).
Experience with CI/CD pipelines for data engineering workflows.
Experience with various data processing techniques (streaming, batch, event-based) and with managing and optimizing data storage (data lakes, data warehouses, vector data stores, SQL and NoSQL databases) is essential.
Excellent problem-solving and analytical skills, with an understanding of AI technologies and their applications.
Excellent written and verbal communication skills for coordinating across teams.
Are you game?