Salary: Competitive + 28% pension contributions
Job type: Permanent/full-time or 6-12 month contract (both options available)
Essential Experience:
- Strong Python programming skills, ideally including PySpark
- Knowledge of the Azure Databricks platform and its associated functionality
- Adaptable, with a willingness to work flexibly as the needs of the organisation evolve
- Ability to work well within a team and closely with internal and external stakeholders
- A logical, analytical mindset, with a pragmatic, collaborative approach to problem-solving
- Adept at communicating technical concepts to a non-technical audience
- Awareness of the modern data stack and associated methodologies
Key Responsibilities:
- Building and developing reusable pipelines for analytics and AI projects
- Pushing for innovation within the platform to enable efficiencies and detailed insights
- Leading key relationships between IT and Data to grow the platform and release new capabilities
- Deploying production AI models with automated monitoring from data pipeline to outputs
- Supporting team responsibilities such as:
- Extracting, Loading & Transforming (ELT) data sets across the enterprise technology stack, with a focus on Extract & Load
- Monitoring data workflows, identifying and mitigating risks, setting SLIs, and configuring alerts
- Adopting data governance best practices, including maintaining data catalogues, data dictionaries, and logical data models
- Developing coding standards for Python across the Data function
---
Fusion People are committed to promoting equal opportunities regardless of age, gender, religion, belief, race, sexuality, or disability. We operate as an employment agency and employment business. You'll find a wide selection of vacancies on our website.