A Senior Data Engineer opportunity to join a well-known financial services company on a remote-first basis (one day in the office per month on average).
Senior Data Engineer - Remote First (offices in Lake District) - Up to £65,000 + Great Benefits
Purpose of Job
Advise on and maximise the return of value from the company's data assets by applying best-practice data engineering techniques that support delivery of the company's data management strategy and roadmap, in line with regulatory requirements.
Contribute towards the support, maintenance, and improvement of the company's platform, including data pipelines, ensuring data quality and governance.
Responsibilities
- Advise on the design, implementation, and maintenance of complex data engineering solutions.
- Ensure the implementation and maintenance of data solutions to acquire and prepare data, adhering to Extract, Transform, Load (ETL) best practices.
- Contribute to and ensure the delivery of data pipelines that connect data within and between data stores, applications, and organizations.
- Ensure the delivery of complex data quality checks and remediation.
- Identify data sources, processing concepts, and methods.
- Design and implement on-premises, cloud-based, and hybrid data engineering solutions.
- Structure and store data for uses including analytics, machine learning, data mining, and sharing with applications and organizations.
- Harvest structured and unstructured data.
- Integrate, consolidate, and cleanse data.
- Migrate and convert data.
- Apply ethical principles and regulatory requirements in handling data.
- Ensure appropriate storage of data in line with legislation and company requirements.
- Guide and contribute to the development of junior and trainee Data Engineers.
- Provide technical guidance to Data Engineers.
Knowledge, Experience, and Qualifications
- Excellent knowledge of and experience with Azure data services, including Data Lake, Data Factory, Databricks, and Azure SQL (indicative experience: 5+ years).
- Experience building and testing processes that support data extraction, transformation, data structures, metadata, dependency, and workload management.
- Knowledge of Spark architecture and modern Data Warehouse/Data Lake/Lakehouse techniques.
- Ability to build transformation tables using SQL.
- Moderate knowledge of Python/PySpark or equivalent programming languages.
- Experience with Power BI data gateways, dataflows, and permissions.
- Experience creating, optimizing, and maintaining relational (SQL) and NoSQL databases.
- Experience working with CI/CD tools such as Azure DevOps or GitHub, including repositories, source control, pipelines, and actions.
- Awareness of Informatica or similar data governance tools (desirable).
- Experience working in Agile (Scrum) and waterfall delivery teams.
- Experience with Confluence and Jira.
- Experience in a financial services or other highly regulated environment.
- Interest or experience in AI is desirable.