We are looking for a data engineer with strong technical abilities in data modelling, transformation, and visualisation, and proficiency in data processing technologies and methods. Ideally, candidates will have experience with event-driven architecture and analytics modelling.
Main Responsibilities
Create data pipelines that combine internal and external data from multiple APIs and databases (relational and non-relational), preparing and refining data for analysis.
Ensure the accuracy and integrity of data at all times, including by testing and validating data pipelines.
Create clear, compelling, and accurate visualisations and dashboards that make complex data easy to understand.
Develop database structures that are both efficient and scalable.
Engage with business stakeholders to assess their requirements, propose outcomes, and translate business objectives into effective data visualisation solutions.
Requirements
Coding experience in Python and SQL (or similar languages).
Proficiency with Git and with CI/CD and DevOps tools.
Experience with shell scripting (e.g., Bash).
Proficiency in ETL processes and data modelling.
Proficiency in visualisation tools such as Tableau or Power BI.
Skilled with relational databases (SQL Server, Oracle, or PostgreSQL).
Person Specification
Strong communication skills, both written and verbal.
Effective problem-solving and critical-thinking abilities to tackle data-related challenges.
Able to multitask, switch focus, and prioritise tasks.
Able to take ownership of issues as they arise and resolve them quickly on their own initiative.
Beneficial Requirements
Experience in the commodities industry or prior knowledge of a trading environment.
Experience with Docker and containers.
Broad experience with cloud computing (e.g., Microsoft Azure or AWS).