Security Services Data Modelling and Engineering - Vice President

JPMorgan Chase & Co.

Bournemouth

On-site

GBP 60,000 - 80,000

Full time

Today

Job summary

A leading financial services firm in the UK is looking for a skilled Data Engineer to design and optimize data pipelines using Databricks and Python. The role involves collaborating with Data Architects and Business Analysts to develop data models and ensure compliance with governance standards. Strong proficiency in Python and experience with Spark are essential. This position offers an opportunity to work in a dynamic environment and contribute to significant data projects.

Qualifications

  • Proven experience developing data pipelines and solutions on Databricks.
  • Strong proficiency in Python, including libraries for data transformation.
  • Solid understanding of ETL concepts, data modelling, and pipeline design.

Responsibilities

  • Design, build, and optimize data pipelines and transformation workflows on Databricks.
  • Collaborate with Data Architects and Business Analysts.
  • Implement data quality checks and validation modules using Python.

Skills

Data pipeline development
Python programming
ETL concepts
Spark experience
Data modelling
Communication skills
Collaboration

Tools

Databricks
Jira
Cloud data platforms
Python libraries (e.g., pandas)

Job description

Responsibilities

  • Design, build, and optimize data pipelines and transformation workflows on Databricks, leveraging Python and Spark.
  • Collaborate with Data Architects and Business Analysts to develop robust data models and clearly document data flows and ETL logic.
  • Implement and execute data quality checks and validation modules using Python.
  • Maintain transparency and accountability by tracking work and progress in Jira.
  • Ensure datasets and pipelines are accurately registered in relevant catalogues and consoles, meeting governance and privacy standards.
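The data-quality checks mentioned above could take many forms; a minimal pandas sketch of the idea is shown below. The column names (`trade_id`, `settlement_date`, `notional`) are illustrative assumptions, not details from the posting:

```python
import pandas as pd

def validate_trades(df: pd.DataFrame) -> list[str]:
    """Run basic data-quality checks and return a list of failure messages."""
    failures = []
    # Completeness: key columns must not contain nulls
    for col in ("trade_id", "settlement_date"):
        if df[col].isna().any():
            failures.append(f"null values in {col}")
    # Uniqueness: trade_id must be unique
    if df["trade_id"].duplicated().any():
        failures.append("duplicate trade_id values")
    # Validity: notional amounts must be non-negative
    if (df["notional"] < 0).any():
        failures.append("negative notional amounts")
    return failures

# Hypothetical clean batch: all checks should pass
clean = pd.DataFrame({
    "trade_id": [1, 2, 3],
    "settlement_date": ["2024-01-02"] * 3,
    "notional": [100.0, 250.0, 75.5],
})
assert validate_trades(clean) == []
```

In practice such checks would typically run as a validation step inside a Databricks pipeline, with failures logged or routed to a quarantine table rather than raised as assertions.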

Qualifications

  • Proven experience developing data pipelines and solutions on Databricks.
  • Strong proficiency in Python, including libraries for data transformation (e.g., pandas).
  • Solid understanding of ETL concepts, data modelling, and pipeline design.
  • Experience with Spark and cloud data platforms.
  • Ability to document data flows and transformation logic to a high standard.
  • Familiarity with project management tools such as Jira.
  • Collaborative mindset and strong communication skills.

Preferred qualifications

  • Experience in financial services or large enterprise data environments.
  • Knowledge of data governance, privacy, and compliance requirements.
  • Exposure to business analysis and requirements gathering.