An empowering career at Singtel begins with a Hello. Our purpose, to Empower Every Generation, connects people to the possibilities they need to excel. Every "hello" at Singtel opens doors to new initiatives, growth, and BIG possibilities that take your career to new heights. So, when you say hello to us, you are really empowered to say… “Hello BIG Possibilities”.
Be a Part of Something BIG!
- Design, build, and maintain scalable data pipelines and processing systems that power analytics and AI use cases across a hybrid data platform
- Contribute to design conversations for AIDA’s new data and AI platform
- Collaborate with platform, analytics, and governance teams to deliver high-quality, secure, and well-documented data assets
- Lead a team of data engineers to ensure the timely availability of accurate, well-documented data for AI use cases
Make An Impact By
- Design and implement batch and streaming data ingestion pipelines from diverse sources (e.g., files, APIs, Kafka, databases)
- Develop real-time and near-real-time data workflows using tools such as Apache Flink and Kafka Streams
- Optimize performance for high-volume and high-velocity datasets
- Implement data quality checks and automated monitoring to ensure data is consistently accurate and available
- Design and manage storage solutions such as Microsoft Fabric, Delta Lake, and Databricks
- Apply best practices for schema design, partitioning, and data lifecycle management
- Support data discovery and cataloguing in coordination with governance tools
- Work with data scientists, analysts, and business users to understand data needs and translate them into technical solutions
- Partner with DevSecOps and platform engineers to automate deployment and orchestration of data pipelines
- Document data flows, transformations, and quality checks in accordance with governance standards
Skills to Succeed
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field
- 5 years of experience in data engineering, including 1 year in a senior technical role delivering data engineering projects
- Deep expertise in Spark, Databricks, and data processing frameworks
- Strong knowledge of streaming technologies such as Apache Kafka, Apache Flink, or Azure Event Hub
- Experience working with data lake and/or lakehouse architectures such as Hadoop, Delta Lake, Iceberg, and Microsoft OneLake
- Proficient in Python and SQL
- Familiar with workflow orchestration (e.g., Apache Airflow) and CI/CD principles
- Analytical mindset with a focus on data quality, performance, and maintainability
- Able to work independently and collaboratively in a dynamic environment
- Strong communication and documentation skills to support cross-functional collaboration
- Knowledge of data types across network and IT domains in a telco environment
Rewards that Go Beyond
- Full suite of health and wellness benefits
- Ongoing training and development programs
- Internal mobility opportunities
Your Career Growth Starts Here. Apply Now!
We are committed to a safe and healthy environment for our employees and customers and will require all prospective employees to be fully vaccinated.