Data Engineer

JD GROUP

Bury

On-site

GBP 40,000 - 55,000

Full time


Job summary

A technology company in the United Kingdom seeks a Data Engineer to work on the automation and maintenance of data pipelines in a cloud-based environment. The ideal candidate will have strong skills in SQL and Python, with opportunities to specialize in different areas like infrastructure or ML/AI. This role involves analyzing large data sets and ensuring the integrity of data analysis for the team.

Qualifications

  • 1-2 years of experience in cloud data solutions required.
  • Strong experience in SQL and Python is essential.
  • Experience with PySpark or an equivalent framework is beneficial.

Responsibilities

  • Assist in automation and maintenance of data pipelines.
  • Analyze large data sets using Python and SQL.
  • Set up new pipelines for data processing.

Skills

SQL
Python
GCP/Azure
PySpark
Problem-solving
Attention to detail
Stakeholder management
Communication skills
Job description
Overview

Data Engineering team – Data Engineer

Working with a wide array of tools, you will manage the full streaming, enrichment, and curation of data into a cloud-based data environment, verifying data and ensuring links to other key data sets are in place so that our insight team and data scientists can carry out simple, effective analysis.

The role will also give you the opportunity to explore whether you would like to specialise in a particular field, such as infrastructure, curation, or ML/AI, to further progress your career.

Responsibilities
  • Assisting in the automation and maintenance of pipelines within a cloud-based environment; you will also be involved in sourcing data using a range of methods and verifying that the data is acceptable for ingestion.
  • Analysis of large data sets using tools such as Python & SQL
  • Setting up new pipelines for the full stream/enrichment/curation process
  • Upkeep of source code locations
Objectives & KPIs
  • Analysis of large data sets using tools such as Python & SQL
  • Creation of stream/enrichment/curation processes utilising a wide variety of data sources
  • Upkeep of source code locations/GitHub repositories
  • Set up of tables/views/procedures
  • Data aggregation & manipulation
  • Building of large scale analytical data sets
  • Investigation of new/alternative technology
Competencies and Behaviours
  • Deciding and Initiating Action
  • Working with People
  • Relating and Networking
  • Applying Expertise and Technology
  • Learning and Researching
  • Planning and Organising
  • Achieving Personal Work Goals and Objectives
Skills and Experience
  • An excellent level of experience with tools such as SQL and Python
  • Ability to decide on the overall concept and vision
  • Creating road maps
  • 1-2 years of cloud data solution experience in GCP / Azure
  • Experience with PySpark coding or equivalent
  • Excellent problem-solving skills
  • Strong attention to detail
  • Strong stakeholder management
  • Strong communication skills