Data Engineer

Barclay Simpson

United Kingdom

Remote

GBP 70,000 - 90,000

Full time

6 days ago

Job summary

A leading cybersecurity firm in the United Kingdom is seeking an experienced Data Engineer to design, build, and scale data systems and pipelines. This role involves collaborating with product teams while tackling complex data challenges and driving innovation in how data is used. Candidates should have over 7 years of software engineering experience, with a minimum of 4 years in data engineering, particularly with frameworks such as Spark and Kafka, as well as proficiency in SQL and cloud data services on AWS.

Job description

We’re seeking an experienced Data Engineer to design, build, and scale robust data systems and pipelines for an innovative AI-based start-up. You’ll shape the data infrastructure from the ground up, driving innovation in how data is collected, processed, and utilized for cybersecurity solutions.

Responsibilities
  • Design and maintain scalable, secure data pipelines and architectures.
  • Own the full data lifecycle, from ingestion and storage to processing and visualization.
  • Collaborate with engineers, data scientists, and product teams to support product and analytical needs.
  • Monitor and optimize data performance, scalability, and reliability.
  • Define and enforce data quality standards and best practices.
  • Rapidly prototype and iterate on new data solutions.
  • Mentor junior engineers and contribute to technical reviews.
Requirements
  • 7+ years in software engineering, including 4+ years in data engineering.
  • Strong experience with data frameworks (e.g., Spark, Kafka, Airflow) and ETL workflows.
  • Proficiency with SQL and NoSQL databases, including query optimization.
  • Experience with cloud data services (e.g., AWS Redshift, S3, Glue, EMR) and CI/CD for data pipelines.
  • Strong programming skills in Python, Java, or Scala.
  • Excellent problem-solving and collaboration skills.
  • Ability to thrive in a fast-paced, dynamic environment.
Why You’ll Love This Role
  • Tackle complex, large-scale data challenges in cybersecurity.
  • Work with a team of experienced engineers and technical leaders.
  • Make a real impact by enabling proactive threat detection and risk mitigation.