We're looking for a Lead Data Engineer / Data Engineering Manager to join our client and help design, build, and evolve large-scale, cloud-native data platforms that power real products and business outcomes. This is a hands‑on technical role for someone who enjoys solving complex data problems, shaping architecture, and still getting close to the code.
You’ll work in a product‑led environment, partnering closely with engineering, analytics, and business teams to modernize data infrastructure, enable advanced analytics or AI use cases, and build scalable platforms.
What You’ll Do
- Lead the architecture and hands‑on development of scalable, cloud‑based data platforms (primarily AWS, with some Azure)
- Design, build, and optimize batch and real‑time data pipelines supporting analytics, products, and AI use cases
- Own end‑to‑end data solutions, from ingestion and processing to storage, modeling, and consumption
- Build and evolve modern data platforms using technologies such as Databricks, Snowflake, and Teradata
- Define and implement reusable data architectures, patterns, and best practices
- Develop secure, reliable infrastructure for structured and unstructured data at scale
- Enable product teams with high‑quality, well‑modeled, and accessible data
- Ensure strong data governance, quality, lineage, and access controls are embedded by design
- Support both greenfield platform builds and legacy data modernization initiatives
- Mentor engineers, review designs and code, and raise the overall engineering bar
What You Bring
- 10+ years of experience in data engineering, data platform development, or data architecture
- Strong hands‑on engineering background with the ability to design, build, and troubleshoot complex systems
- Deep technical experience designing and implementing cloud‑native data architectures (primarily AWS; Azure a plus)
- Proven expertise with modern data platforms such as Databricks, Snowflake, and Teradata
- Advanced proficiency in Python and SQL, with solid experience in Java and/or Scala
- Strong experience with distributed data processing and storage technologies (e.g., Spark, data lakes, lakehouse architectures)
- Hands‑on experience building batch and real‑time data pipelines using modern ingestion and processing patterns
- Experience with orchestration and DevOps tooling, including Airflow, Docker, Git, Terraform, and CI/CD pipelines
- Practical experience designing data models, performance tuning, and optimizing large‑scale analytical workloads
- Exposure to AI/ML data workloads (feature engineering, data preparation, or enabling model training and inference) is highly desirable
- Solid understanding of data governance, security, lineage, and access controls in production environments
- Ability to review architecture and code critically, make trade‑offs, and set technical direction
- Strong communication skills and the ability to align engineers and stakeholders around technical decisions
- A product‑focused mindset, driven by delivering reliable data capabilities that create measurable business impact
Personal data collected will be used for recruitment purposes only.
Only shortlisted candidates will be notified / contacted.
EA Registration No: R21101138
"Sanderson-iKas" is the brand name for iKas International (Asia) Pte Ltd, a company incorporated in Singapore under Company UEN No.: 200914065E with EA license number 16S8086.
Website: www.sanderson-ikas.sg