PySpark Developer

DCV Technologies

Greater London

On-site

GBP 60,000 - 80,000

Contract

Job summary

A technology consulting firm is seeking an experienced PySpark Developer in London for a contract role focused on building and optimising data pipelines in the financial markets sector. Candidates should possess strong skills in Microsoft Fabric and Azure engineering, with hands-on experience in developing secure data solutions and optimising large-scale data processes. The role requires collaboration with analysts to deliver scalable solutions. Apply now for immediate consideration.

Qualifications

  • Strong hands-on experience with PySpark, Spark SQL, and Spark Streaming.
  • Experience with Microsoft Fabric, including dataflows and pipelines.
  • Familiarity with Azure services such as ADLS, and with cloud data engineering.

Responsibilities

  • Design, build, and optimise Spark-based data pipelines.
  • Develop Fabric dataflows and implement complex transformations.
  • Troubleshoot and improve reliability and workload performance.

Skills

  • PySpark
  • Spark SQL
  • Spark Streaming
  • Microsoft Fabric
  • Azure Data Engineering
  • Python Programming
  • Data Lake Optimisation
  • DevOps Practices
  • Troubleshooting

Job description

We are looking for an experienced PySpark Developer with strong Microsoft Fabric and Azure engineering skills to join a major transformation programme within the financial-markets domain. This role is fully hands‑on, focused on building and optimising large‑scale data pipelines, dataflows, semantic models, and lakehouse components.

Key Responsibilities
  • Design, build and optimise Spark‑based data pipelines for batch and streaming workloads
  • Develop Fabric dataflows, pipelines, and semantic models
  • Implement complex transformations, joins, aggregations and performance tuning
  • Build and optimise Delta Lake / Delta tables
  • Develop secure data solutions including role‑based access, data masking and compliance controls
  • Implement data validation, cleansing, profiling and documentation
  • Work closely with analysts and stakeholders to translate requirements into scalable technical solutions
  • Troubleshoot and improve reliability, latency and workload performance

Essential Skills
  • Strong hands‑on experience with PySpark, Spark SQL, Spark Streaming, and DataFrames
  • Microsoft Fabric (Fabric Spark jobs, dataflows, pipelines, semantic models)
  • Azure: ADLS, cloud data engineering, notebooks
  • Python programming; Java exposure beneficial
  • Delta Lake / Delta table optimisation experience
  • Git/GitLab, CI/CD pipelines, DevOps practices
  • Strong troubleshooting and problem‑solving ability
  • Experience with lakehouse architectures, ETL workflows, and distributed computing
  • Familiarity with time‑series data, market data, transactional data, or risk metrics

Nice to Have
  • Power BI dataset preparation
  • OneLake, Azure Data Lake, Kubernetes, Docker
  • Knowledge of financial regulations (GDPR, SOX)

Details
  • Location: London (office‑based)
  • Type: Contract
  • Duration: 6 months
  • Start: ASAP
  • Rate: Market rates

If you are a PySpark/Fabric/Azure Data Engineer looking for a high‑impact contract role, apply now for immediate consideration.
