
Data Engineering Lead - Big Data Technologies - Vice President

Citigroup Inc.

Singapore

On-site

SGD 120,000 - 150,000

Full time

Today

Job summary

A leading international financial institution in Singapore is seeking an Engineering Lead Analyst to oversee engineering activities and manage a data engineering team. The ideal candidate will have 10-15 years of experience in big data technologies, strong analytical skills, and a proven track record in data governance. This role involves ensuring quality standards, designing scalable solutions, and collaborating with stakeholders to enhance data management processes. Competitive benefits and a dynamic work environment are offered.

Qualifications

  • 10-15 years of hands-on experience with big data frameworks.
  • 4+ years of experience with relational SQL and NoSQL databases.
  • Strong proficiency in Python and Spark (Java).
  • Experience building and optimizing big data pipelines.

Responsibilities

  • Define and execute the data engineering roadmap.
  • Lead and develop a team of data engineers.
  • Oversee design and implementation of data pipelines.
  • Monitor and optimize data pipelines for performance.

Skills

Hadoop
Scala
Java
Spark
Hive
Kafka
SQL
Python
Unix Scripting
Data Integration
ETL

Education

Bachelor’s degree
Master’s degree preferred

Tools

Confluent Kafka
Docker
AWS
OpenShift
Kubernetes
Git
Jira

Job description

The Engineering Lead Analyst is a senior-level position responsible for leading a variety of engineering activities, including the design, acquisition, and deployment of hardware, software, and network infrastructure in coordination with the Technology team. The overall objective of this role is to lead efforts to ensure quality standards are met within existing and planned frameworks.

Responsibilities
  • Strategic Leadership: Define and execute the data engineering roadmap for Global Wealth Data, aligning with overall business objectives and technology strategy. This includes understanding the data needs of portfolio managers, investment advisors, and other stakeholders in the wealth management ecosystem.
  • Team Management: Lead, mentor, and develop a high-performing, globally distributed team of data engineers, fostering a culture of collaboration, innovation, and continuous improvement.
  • Architecture and Design: Oversee the design and implementation of robust and scalable data pipelines, data warehouses, and data lakes, ensuring data quality, integrity, and availability for global wealth data. This includes designing solutions for handling large volumes of structured and unstructured data from various sources.
  • Technology Selection and Implementation: Evaluate and select appropriate technologies and tools for data engineering, staying abreast of industry best practices and emerging trends specific to wealth management data.
  • Performance Optimization: Continuously monitor and optimize data pipelines and infrastructure for performance, scalability, and cost-effectiveness, ensuring optimal access to global wealth data.
  • Collaboration: Partner with business stakeholders, data scientists, portfolio managers, and other technology teams to understand data needs and deliver effective solutions that support investment strategies and client reporting.
  • Data Governance: Implement and enforce data governance policies and procedures to ensure data quality, security, and compliance with relevant regulations, particularly around sensitive financial data.
Qualifications
  • 10-15 years of hands‑on experience with Hadoop, Scala, Java, Spark, Hive, Kafka, Impala, Unix scripting, and other big data frameworks.
  • 4+ years of experience with relational SQL and NoSQL databases: Oracle, MongoDB, HBase.
  • Strong proficiency in Python and Spark (Java), with knowledge of core Spark concepts (RDDs, DataFrames, Spark Streaming, etc.), as well as Scala and SQL.
  • Data integration, migration, and large-scale ETL experience with common ETL platforms such as PySpark, DataStage, or Ab Initio, including ETL design and build, data handling, reconciliation, and normalization.
  • Data modeling experience (OLAP, OLTP, logical/physical modeling, and normalization), with knowledge of performance tuning.
  • Experience working with multiple large datasets and data warehouses.
  • Experience building and optimizing big data pipelines, architectures, and datasets.
  • Strong analytic skills and experience working with unstructured datasets.
  • Ability to effectively use complex analytical, interpretive, and problem‑solving techniques.
  • Experience with Confluent Kafka, Red Hat jBPM, and CI/CD build pipelines and toolchains: Git, Bitbucket, Jira.
  • Experience with external cloud platforms such as OpenShift, AWS, and GCP.
  • Experience with container technologies (Docker, Pivotal Cloud Foundry) and supporting frameworks (Kubernetes, OpenShift, Mesos).
  • Experience integrating search solutions with middleware and distributed messaging (Kafka).
  • Highly effective interpersonal and communication skills with technical and non‑technical stakeholders.
  • Experience with the full software development life cycle.
  • Excellent problem‑solving skills and a strong mathematical and analytical mindset.
  • Ability to work in a fast‑paced financial environment.
Education
  • Bachelor’s degree/University degree or equivalent experience.
  • Master’s degree preferred.

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law.

If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi.

View Citi’s EEO Policy Statement and the Know Your Rights poster.
