
GCP Data Engineer (Python, SQL) / Software Engineer

HSBC

Hyderabad City Taluka

On-site

PKR 2,000,000 - 2,750,000

Full time


Job summary

A leading global bank is seeking a Software Engineer to design and maintain scalable data pipelines. You will work with GCP, Apache Airflow, and Python, and experience in data transformation and workflow automation is essential. The role offers the chance to develop your career in a dynamic environment.

Qualifications

  • Experience in designing and maintaining scalable data pipelines.
  • Ability to automate data workflows on GCP using services such as Dataproc, BigQuery, and Dataflow.
  • Proficiency in Python and SQL for large-scale data processing.

Responsibilities

  • Design, develop, and maintain scalable data pipelines on GCP.
  • Optimize and automate data workflows using GCP services such as Dataproc, BigQuery, Dataflow, Cloud Storage, and Pub/Sub.
  • Build and maintain ETL processes for data transformation.

Skills

Designing data pipelines
Optimizing data workflows
Python programming
SQL querying
Working with GCP

Tools

Dataproc
BigQuery
Apache Airflow
GitHub
Jenkins
Terraform
Ansible

Job description

Overview

Some careers shine brighter than others.

If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Software Engineer.

Responsibilities
  • Design, develop, and maintain scalable data pipelines on Google Cloud Platform (GCP).
  • Optimize and automate data workflows using Dataproc, BigQuery, Dataflow, Cloud Storage, and Pub/Sub.
  • Build and maintain ETL processes for data ingestion, transformation, and loading into data warehouses.
  • Ensure the reliability and performance of data pipelines by using Apache Airflow for orchestration (an illustrative sketch follows this list).
  • Collaborate with stakeholders to gather and translate requirements into technical solutions.
  • Work on multiple data warehousing projects, applying a thorough understanding of concepts such as dimensions, facts, and slowly changing dimensions (SCDs).
  • Develop scripts and applications in Python and PySpark to handle large-scale data processing tasks.
  • Write optimized SQL queries for data analysis and transformations.
  • Use GitHub, Jenkins, Terraform, and Ansible to deploy and manage code in production.
  • Troubleshoot and resolve issues related to data pipelines, ensuring high availability and scalability.
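
The posting itself contains no code. Purely as an illustration of the kind of work described above, the minimal sketch below shows how such a pipeline might be orchestrated with Apache Airflow on GCP: a Dataproc PySpark transformation step followed by a BigQuery MERGE that applies a simple SCD-style upsert. Every project, region, cluster, bucket, dataset, and table name is a hypothetical placeholder, and the sketch assumes a recent Airflow 2.x installation with the apache-airflow-providers-google package.

    # Illustrative Airflow DAG: run a PySpark job on Dataproc, then merge the
    # staged output into a warehouse table in BigQuery. All names are placeholders.
    import pendulum
    from airflow import DAG
    from airflow.providers.google.cloud.operators.dataproc import DataprocSubmitJobOperator
    from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

    PROJECT_ID = "example-project"   # placeholder, not from the posting
    REGION = "europe-west2"          # placeholder
    CLUSTER_NAME = "etl-cluster"     # placeholder

    with DAG(
        dag_id="daily_customer_etl",
        schedule="@daily",
        start_date=pendulum.datetime(2024, 1, 1, tz="UTC"),
        catchup=False,
    ) as dag:
        # Step 1: transform raw files with a PySpark job running on Dataproc.
        transform = DataprocSubmitJobOperator(
            task_id="transform_raw_data",
            project_id=PROJECT_ID,
            region=REGION,
            job={
                "reference": {"project_id": PROJECT_ID},
                "placement": {"cluster_name": CLUSTER_NAME},
                "pyspark_job": {"main_python_file_uri": "gs://example-bucket/jobs/transform.py"},
            },
        )

        # Step 2: upsert the staged output into the warehouse dimension table.
        load = BigQueryInsertJobOperator(
            task_id="merge_into_warehouse",
            configuration={
                "query": {
                    "query": """
                        MERGE `example-project.warehouse.dim_customer` AS target
                        USING `example-project.staging.customer_updates` AS source
                        ON target.customer_id = source.customer_id
                        WHEN MATCHED THEN
                          UPDATE SET target.segment = source.segment
                        WHEN NOT MATCHED THEN
                          INSERT (customer_id, segment)
                          VALUES (source.customer_id, source.segment)
                    """,
                    "useLegacySql": False,
                }
            },
        )

        transform >> load

A full SCD Type 2 implementation would also track effective-from and effective-to dates rather than overwriting attributes in place; the two-clause MERGE here is only meant to show the shape of the dependency between the transform and load tasks.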