Overview
We're looking for a Senior AI Engineer to build and scale cutting‑edge AI capabilities that drive smarter decisions and greater efficiency across Chambers and Partners. In this brand‑new role you’ll design and implement intelligent systems that enhance how we use data, streamline operations, and power innovation across our digital platforms. You’ll play a key role in establishing the technical foundations for AI at Chambers, developing scalable solutions that push our technology forward and transform how we deliver value to our users.
Main Duties and Responsibilities
Data & Retrieval
- Build robust ingestion pipelines for PDF, Word, Excel, audio, JSON and other semi‑structured sources.
- Design RAG systems: chunking strategies, document schemas, metadata, hybrid/dense retrieval, re‑ranking, and grounding (see the retrieval sketch after this list).
- Manage vector/keyword indexes (e.g. Azure AI Search, pgvector, Pinecone/Weaviate).
- Develop and deploy advanced NLP, information retrieval and recommendation systems to enhance Chambers and Partners’ research and product offerings, including document understanding, automatic summarisation, topic modelling, semantic search, entity recognition and relationship extraction.
- Design and implement intelligent tagging and metadata enrichment frameworks to categorise and organise legal and market data, improving search, discoverability and insight accuracy.
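For illustration, a minimal sketch of the hybrid‑retrieval work this involves: fusing a keyword ranking and a dense‑vector ranking with reciprocal rank fusion before re‑ranking. The document IDs and result lists are placeholders, not the output of any specific index.

```python
from collections import defaultdict
from typing import Iterable

def reciprocal_rank_fusion(
    ranked_lists: Iterable[list[str]],  # each list holds document IDs, best match first
    k: int = 60,                        # commonly used RRF damping constant
) -> list[tuple[str, float]]:
    """Merge several ranked result lists into a single hybrid ranking."""
    scores: dict[str, float] = defaultdict(float)
    for ranking in ranked_lists:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

# Hypothetical results from a keyword (BM25) index and a dense vector index.
keyword_hits = ["doc-12", "doc-7", "doc-3"]
dense_hits = ["doc-7", "doc-9", "doc-12"]

fused = reciprocal_rank_fusion([keyword_hits, dense_hits])
top_candidates = [doc_id for doc_id, _ in fused[:10]]  # these would then go to a re-ranker
```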
LLM & Machine Learning Application Engineering
- Design, build and maintain traditional ML and LLM models and pipelines.
- Build LLM apps using LangGraph/LangChain: tools/function calling, structured outputs (JSON schema), agents and multi‑step reasoning (see the structured‑output sketch after this list).
- Implement ASR/TTS and multimodal capabilities where relevant (e.g. Whisper for transcription).
- Choose customisation paths pragmatically: prompt engineering, system prompts, tools, adapters/LoRA, and selective fine‑tuning only when needed.
- Fine‑tune and optimise ML models and LLMs to enhance performance, efficiency and relevance for Chambers’ research, analytics and product applications. Apply best practices for model adaptation, evaluation and deployment, ensuring solutions are scalable, reliable and aligned with business objectives.
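As a rough sketch of the structured‑output pattern above, assuming a hypothetical `call_llm` wrapper around a chat deployment (injected by the caller, not a real client API) and an illustrative Pydantic schema:

```python
from typing import Callable
from pydantic import BaseModel, ValidationError

class RankingEntry(BaseModel):
    """Illustrative target schema the model is asked to return as JSON."""
    firm_name: str
    practice_area: str
    band: int

PROMPT = (
    "Extract the firm name, practice area and band from the text below. "
    "Reply with a single JSON object only.\n\n{text}"
)

def extract_entry(
    text: str,
    call_llm: Callable[[str], str],  # assumed wrapper around an LLM deployment, supplied by the caller
    max_attempts: int = 2,
) -> RankingEntry:
    """Request JSON from the model and validate it against the schema, retrying once on failure."""
    last_error: ValidationError | None = None
    for _ in range(max_attempts):
        raw = call_llm(PROMPT.format(text=text))
        try:
            return RankingEntry.model_validate_json(raw)
        except ValidationError as exc:
            last_error = exc  # in practice the error could be fed back into the next prompt
    raise RuntimeError(f"Model output failed schema validation: {last_error}")
```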
Platform & Operations (LLMOps)
- Deploy and operate services on Azure (AKS, ACI, Azure Functions, API Management).
- Implement CI/CD (GitHub Actions, Azure DevOps), Infrastructure as Code (Bicep, Terraform), secrets via Azure Key Vault, private networking.
- Add observability: tracing/telemetry (OpenTelemetry, LangSmith), metrics, logs, cost and token usage monitoring and alerts.
- Apply evaluation & QA: regression suites, offline evaluation sets/golden data, RAG evals (faithfulness, answer relevance, citation correctness), A/B tests, win‑rate testing.
- Ensure reliability: rate‑limit handling, retries/backoff, idempotency, circuit breakers, caching (e.g. Redis/semantic cache), fallbacks and degradations.
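A minimal sketch of the retries/backoff piece of that reliability work, in plain Python; the retriable exception types and delay values are illustrative assumptions:

```python
import random
import time
from typing import Callable, TypeVar

T = TypeVar("T")

def with_backoff(
    call: Callable[[], T],
    retriable: tuple[type[Exception], ...] = (TimeoutError, ConnectionError),  # illustrative defaults
    max_attempts: int = 5,
    base_delay: float = 0.5,   # seconds
    max_delay: float = 30.0,
) -> T:
    """Retry a flaky call (e.g. a rate-limited LLM request) with capped exponential backoff and jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return call()
        except retriable:
            if attempt == max_attempts:
                raise
            # Exponential growth capped at max_delay; full jitter spreads out concurrent retries.
            delay = min(max_delay, base_delay * 2 ** (attempt - 1))
            time.sleep(random.uniform(0, delay))
    raise AssertionError("unreachable")
```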
Governance, Safety & Security
- Enforce PII handling, data minimisation, redaction, access controls and auditability (see the redaction sketch after this list).
- Mitigate prompt injection/jailbreak risks; apply content filters/guardrails and track data residency.
- Establish and drive best practices for model versioning, reproducibility, performance monitoring, bias mitigation, data governance and ethical AI use.
- Document architectural decisions, runbooks and operational procedures.
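For illustration only, a toy redaction pass of the kind the PII bullet implies; real redaction would rely on vetted PII tooling and locale‑aware rules rather than these ad‑hoc regexes:

```python
import re

# Illustrative patterns only; production redaction would use vetted PII tooling.
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
PHONE = re.compile(r"(?<!\d)(?:\+44[\s-]?|0)(?:\d[\s-]?){9,10}")

def redact_pii(text: str) -> str:
    """Mask obvious emails and phone numbers before text is logged or sent to a model."""
    text = EMAIL.sub("[REDACTED_EMAIL]", text)
    text = PHONE.sub("[REDACTED_PHONE]", text)
    return text

print(redact_pii("Contact jane.doe@example.com or 020 7123 4567."))
# -> "Contact [REDACTED_EMAIL] or [REDACTED_PHONE]."
```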
Software Engineering & Collaboration
- Write clean, tested, maintainable code in Python (and optionally .NET).
- Apply SOLID, TDD/BDD where sensible, code reviews, refactoring and performance profiling.
- Collaborate in an Agile environment; contribute to technical specs and implementation plans.
- Build PoCs to de‑risk architecture and showcase value; harden PoCs into production services.
- Mentor and guide junior engineers and other team members, review code and contribute to technical design reviews; raise the collective standard of the team.
- Stay abreast of the AI/ML research landscape and legal‑tech/legal‑analytics domain to bring relevant innovations into our stack.
Skills and Experience
Professional experience
- Demonstrable experience in software engineering, with 2+ years building LLM/AI applications in production.
- Strong in Python, API design, asynchronous programming and integration patterns.
- Proven ability to scale LLMs and other AI models for high‑volume, real‑world applications, including optimising inference, managing computational resources and ensuring reliability and maintainability.
Programming & ML/LLM Frameworks
- Strong expertise in Python and relevant ML/LLM libraries/frameworks (PyTorch, TensorFlow, scikit‑learn).
- Hands‑on with LangGraph/LangChain, LlamaIndex or Semantic Kernel for orchestration (tools, agents, guardrails, structured I/O).
- Familiarity with Azure OpenAI and at least one open model stack (Llama/Mistral via vLLM/TGI/Ollama).
- Proficient with front‑end frameworks such as Angular for integrating AI‑powered applications.
- Experience with graph databases (e.g. Neo4j) for knowledge graphs and tool routing.
Cloud deployment & MLOps
- Production deployments on Azure (AKS, ACI, Functions), CI/CD and Infrastructure as Code (Bicep/Terraform).
Data & Information Management
- Experience with relational/semi‑structured databases (e.g. MS SQL, Cosmos DB) and vector search indexing (Azure AI Search, pgvector).