
Data Engineer

JD GROUP

Bury

On-site

GBP 40,000 - 60,000

Full time



Job summary

A cloud-based data solutions company in the United Kingdom is seeking a skilled Data Engineer to work with various tools for data curation in a cloud environment. In this role, you will analyze large data sets, automate pipelines, and collaborate with stakeholders. Applicants should have 1-2 years of experience in cloud data solutions and proficiency in SQL and Python. Strong problem-solving and communication skills are essential. This position offers an opportunity for career progression in specialized fields such as ML/AI.

Qualifications

  • Strong proficiency in tools such as SQL and Python.
  • 1-2 years of cloud data solution experience in GCP or Azure.
  • Experience with PySpark or an equivalent framework.

Responsibilities

  • Assist in the automation and maintenance of pipelines in a cloud-based environment.
  • Analyze large data sets using tools such as Python and SQL.
  • Set up new pipelines for the stream/enrichment/curation process.

Skills

SQL
Python
PySpark
Problem solving
Stakeholder management
Communication

Tools

GCP
Azure

Job description
Data Engineering team – Data Engineer

Working with a wide array of tools, you will handle the full streaming, enrichment and curation of data into a cloud-based data environment, verifying data and ensuring that links to other key data sets are in place to allow simple, effective data analysis for our insight team and data scientists.

The role will also give you the chance to explore whether you would like to specialise in a field such as infrastructure, curation or ML/AI in order to further progress your career.

Responsibilities

Assisting in the automation and maintenance of pipelines within a cloud-based environment, you will also be involved in sourcing data using a range of different methods, verifying that the data is acceptable for ingestion.

  • Analysis of large data sets using tools such as Python & SQL
  • Setting up new pipelines for the full stream/enrichment/curation process
  • Upkeep of source code locations
Role objectives and KPIs
  • Analysis of large data sets using tools such as Python & SQL
  • Creation of stream/enrichment/curation processes utilising a wide variety of data sources
  • Upkeep of source code locations/GitHub repositories
  • Set up of tables/views/procedures
  • Data aggregation & manipulation
  • Building of large scale analytical data sets
  • Investigation of new/alternative technology
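To give a flavour of the table/view set-up, verification and aggregation work these objectives describe, here is a minimal, hypothetical sketch using Python's built-in sqlite3 as a stand-in for a cloud warehouse such as BigQuery on GCP. All table and column names are invented for illustration and are not taken from the job description.

```python
import sqlite3

# Illustrative only: curate raw events into an aggregated analytical set.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Set up a table (the raw ingestion target).
cur.execute("CREATE TABLE raw_events (user_id TEXT, amount REAL, status TEXT)")
cur.executemany(
    "INSERT INTO raw_events VALUES (?, ?, ?)",
    [("u1", 10.0, "ok"), ("u1", 5.0, "ok"), ("u2", 7.5, "failed")],
)

# Verification step as a view: only rows acceptable for ingestion pass through.
cur.execute(
    "CREATE VIEW curated_events AS "
    "SELECT * FROM raw_events WHERE status = 'ok'"
)

# Aggregation and manipulation: per-user totals for the analytical data set.
totals = dict(
    cur.execute(
        "SELECT user_id, SUM(amount) FROM curated_events GROUP BY user_id"
    )
)
print(totals)  # {'u1': 15.0}
```

In a real cloud pipeline the same pattern (raw table, curated view, aggregated output) would be expressed in warehouse SQL or PySpark rather than sqlite3.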
Competencies and behaviours
  • Deciding and Initiating Action
  • Working with People
  • Relating and Networking
  • Applying Expertise and Technology
  • Learning and Researching
  • Planning and Organising
  • Achieving Personal Work Goals and Objectives
Skills and Experience
  • Strong proficiency in tools such as SQL and Python
  • Ability to decide on the overall concept and vision
  • Creating road maps
  • 1-2 years of cloud data solution experience in GCP / Azure
  • Experience with PySpark or an equivalent framework
  • Excellent problem solving skills
  • Strong attention to detail
  • Strong stakeholder management
  • Strong communication skills