About Tymit
At Tymit, we were born on a mission to revolutionize credit, making it smarter, more flexible, and always in the best interest of the customer.
We are embarking on a new phase to redefine payments and shopping experiences, giving more to the customer. By bringing together financing, loyalty, and personalization driven by AI, we’re creating a smarter and more rewarding way to shop. Fully funded by Frasers Group – one of the largest retail groups in the UK – we aspire to become a leading payment and shopping product in the UK market.
We believe credit should work for people, not against them. Our technology is redefining how people shop, pay, and manage their spending. With teams across the UK and Spain, we foster a culture of innovation, collaboration, and a customer‑first mindset. As we scale, we will partner with the UK’s most prominent retail brands to deliver the most seamless and rewarding shopping experiences.
Want to see how our app works under the hood? See the Tymit app in action here!
Note: this role is remote, but you must be based in Spain.
Compensation & Perks
- The salary range for this role is €50,000 to €60,000, depending on experience.
- 26 days of paid holiday plus bank holidays.
- Your birthday off.
- Private health insurance.
- Budget for home office setup.
- Monthly home working allowance.
- Tymit is fully remote, but we have offices in Madrid and London if you prefer to work alongside other Tymiteers in person.
- Flexible working hours.
- Referral program.
- €1,000 annual learning and development budget.
- Length of service recognition program.
What You’ll Do
- Ship AI features: Build production‑ready LLM applications (RAG pipelines, agents, orchestration layers) that deliver real business impact.
- Ingest & normalize: Design and implement data ingestion and transformation pipelines for diverse inputs.
- Stand up retrieval: Implement and optimize vector databases (Qdrant, Milvus, Pinecone, pgvector), define chunking/embedding strategies and improve retrieval quality.
- Orchestrate agents: Build and optimize multi‑step agentic workflows using frameworks like LangChain, LangGraph, Agno or CrewAI.
- Tune & optimize: Improve prompts, fine‑tune open‑source models for specific use cases and optimize inference stacks for speed, cost and reliability.
- Engineer context: Design and optimize context injection strategies (prompt templates, retrieval augmentation, dynamic context windows, memory management) to improve the quality, reliability, safety, and cost‑effectiveness of LLM responses.
- Observability & evals: Use tools like LangFuse, LangSmith (or similar) to monitor pipelines, run automated evals and track cost, latency and quality.
- Secure the stack: Defend against prompt injection, data leakage and unsafe outputs by implementing guardrails, monitoring and escalation/fallback mechanisms.
- Collaborate: Work closely with Data and Product to identify opportunities and deliver high‑value AI capabilities.
- Learn & adapt: Stay ahead of the curve in LLM orchestration, observability, security practices and emerging open‑source frameworks.
What We’re Looking For
- Proven experience shipping LLM‑powered systems into production (not just demos).
- Strong software engineering background with fluency in Python (APIs, data pipelines, backend services).
- Hands‑on experience with LangChain, LangGraph, Agno, CrewAI or similar frameworks.
- Comfort with vector databases (Qdrant, Milvus, Pinecone, pgvector).
- Experience integrating multiple LLM providers (OpenAI, Anthropic, Gemini, AWS Bedrock) and fine‑tuned open‑source models.
- Familiarity with observability tools (LangFuse, LangSmith or similar) and designing evals for AI systems.
- Experience working on AWS cloud infrastructure.
- Deep interest in security: awareness of LLM‑specific threats (prompt injection, data exfiltration, malicious tool use) and strategies to mitigate them.
- Pragmatic startup‑minded engineer: you deliver working systems quickly and iterate.
- We love engineers who tinker. If you’ve built your own AI tools, played with agentic frameworks or contributed to open‑source AI projects, share it. Your GitHub tells us a lot about how you think.
Nice to Have
- Experience fine‑tuning or hosting open‑source LLMs.
- Contributions to open‑source AI projects.
- Background in fintech, retail or regulated industries.
- Experience with MLOps best practices (deployment, monitoring, CI/CD for ML).
What You Can Expect From Our Hiring Process
Stage 1
45 min. video‑call with our People team to understand your career plan and what motivates you about Tymit.
Stage 2
45 min. video‑call with a member of the team for a technical discussion to understand more about the role and your skills.
Stage 3
90 min practical engineering session via video‑call with other members of the team. Pre‑work will be set 48 hours before the session.
Stage 4
Offer!
To meet our regulatory obligations as a licensed financial services company in the UK, Tymit carries out background checks (criminal and credit checks) on new hires to help us safeguard our users. If you have any concerns about this process, please discuss them with our People Team.
Our Beliefs & Culture
Tymit is made up of people from various backgrounds, and you are welcome for who you are, no matter where you come from or what you look like. We seek to create a culture where everyone can belong, because we believe people do their best work when they can show up every day as their authentic selves. So bring us your personal experience, your perspectives, and your background.
We do not make hiring or employment decisions based on race, religion, age, national origin, gender, gender identity or expression, sexual orientation, marital status, disability, pregnancy status, or any other difference. If you have a disability, please let us know whether there are any adjustments we can make to our process to be more inclusive.