As a Senior Data Engineer, you are a key player in designing, implementing, and maintaining data pipelines. You work closely with clients to understand their data needs and realise tailored solutions that drive business insights. Your technical expertise and methodical approach help you lead your projects and clients to success.
How you’ll make an impact
- Acting as a trusted advisor, guiding our customers towards successful technical solutions to their data challenges.
- Communicating the what, why and how of proposed solutions to technical and non-technical stakeholders.
- Developing, testing, and monitoring distributed data processing pipelines.
- Integrating varied data sets and sources to produce high-quality, reproducible datasets in a scalable and maintainable way.
- Collaborating with ease across other data roles such as Architects, Software Engineers and Data Scientists.
- Understanding the needs of many types of producers and consumers for our data services, ensuring our products meet their requirements.
- Delivering projects in an Agile way, building iteratively to produce value from data early and frequently.
- Keeping yourself technically sharp and staying open to learning new concepts and technologies.
What’s important to us
- You have a university degree in computer science, software engineering, data science or a comparable education.
- At least 3 years of experience in data or software engineering positions.
- Experience designing, building and maintaining data products that meet the needs of data consumers.
- An understanding of common approaches to data analysis, data visualization and, optionally, data science, so you can produce the right data for consumers.
- Experience with a variety of data architectures (e.g. Data Lake, Data Mesh, Data Warehouse, streaming, batch processing).
- Experience with cloud data platforms such as Databricks, Microsoft Fabric or Amazon SageMaker.
- Practical data programming skills in Python and SQL.
- Hands-on skills in, or a keen interest in, technologies such as Apache Spark, Airflow, Kafka and Kubernetes, as well as Java, TypeScript or .NET.
- Hands-on experience with both relational and non-relational databases.
- Familiarity with big data infrastructures and concepts for storing and processing large and/or heterogeneous data volumes.
- Practical knowledge of handling varied types of structured and unstructured data (text, tabular, graph, time-series, geospatial, image, etc.).
- Experience with agile development and DevOps methodologies.