YOUR ROLE
The Senior Data Engineer will be responsible for acquiring, storing, governing, and processing large sets of structured and unstructured data. You will help shape the big data solutions landscape by selecting optimal architecture components and implementing enterprise data foundations such as data lakes. You will also collaborate closely with experts across Data Intelligence, Research, UX Design, Digital Technology, and Agile teams to deliver high-impact data solutions.
YOUR RESPONSIBILITIES
- Design, implement, and maintain robust, scalable data pipelines to ingest, transform, and process structured and unstructured data from diverse sources.
- Build and manage data warehouses and data lakes, implementing efficient storage and retrieval mechanisms.
- Design data models that support business requirements, analytics use cases, and long-term scalability.
- Leverage cloud platforms (e.g., AWS, Azure, GCP) to design and deploy scalable data infrastructure optimized for performance, cost, and reliability.
- Implement data quality checks, validation frameworks, and governance processes to ensure data accuracy, integrity, and compliance with security standards.
- Continuously monitor and optimize pipelines to improve processing speed, reduce latency, and enhance overall system performance.
- Collaborate with data scientists, analysts, and business stakeholders to understand data needs and deliver solutions aligned with business objectives.
- Build data presentation layers, including dashboards and visualizations using tools such as Tableau or Power BI.
- Stay up to date on emerging technologies and industry trends in big data and data engineering, evaluating innovative solutions to enhance data capabilities.
- Support the design and optimization of underlying data infrastructure and data platform components.
WHO YOU ARE
- Bachelor’s, Master’s, or Ph.D. in IT, Information Management, Computer Science, or a related field, with at least 6 years of relevant experience.
- Strong understanding of big data technologies and concepts related to distributed storage and computing.
- Hands-on experience with big data frameworks (e.g., Hadoop, Spark) and distributions (Cloudera, Hortonworks, MapR).
- Experience building batch and ETL pipelines to ingest and process data from multiple sources.
- Proficiency with NoSQL databases (e.g., Cassandra, MongoDB, Neo4j, Elasticsearch).
- Experience using querying tools (e.g., Hive, Spark SQL, Impala).
- Experience with Power BI or similar visualization tools.
- Interest or experience in real-time stream processing using Kafka, Amazon Kinesis, Flume, or Spark Streaming.
- Interest or experience in DevOps/DataOps principles (e.g., Infrastructure as Code, automation of data workflows).
- High-level understanding of Data Science concepts (model building, training, deployment).
- Passionate about technology, constant learning, and staying current with industry advancements.
If you believe you meet the requirements for this role, please submit your application below or email us directly, quoting the job title.
Due to an anticipated high volume of applicants, we regret that only shortlisted candidates will be notified. The information provided is for recruitment purposes only.
Know someone who would be a great fit for this role? Refer them to us and get rewarded.
Cornerstone Global Partners (EA License Number: 19C9859) is an affirmative equal-opportunity employer and recruitment firm. We evaluate qualified applicants without regard to race, colour, religion, creed, gender, sexual orientation, gender identity, marital status, national origin, age, veteran status, disability, or any other protected class.
Eugene Then
eugene.then@cornerstoneglobalpartners.com
EA Registration Number: R22104742.
Cornerstone Global Partners Pte Ltd (EA License: 19C9859)