An innovative company is looking for an AI Engineer to work at the forefront of AI development. In this exciting role, you will collaborate with experienced engineers and researchers to design and implement intelligent systems that act, plan, and learn in enterprise environments. You will have the opportunity to build agentic workflows that solve real-world tasks by coordinating tools and APIs. If you are passionate about technology and want to work in a dynamic environment, this is the perfect opportunity for you.
Our mission is to supercharge enterprise IT organizations with custom AI solutions. Headquartered in the heart of Munich with a hub in Lisbon, we’re a team of builders, dreamers, and problem-solvers tackling some of the most exciting and complex challenges in AI-native development adoption.
We operate on speed, adaptability, and extreme ownership. That means we move fast, stay flexible, and take full responsibility for our impact. Our clients trust us because we don’t ship generic tools — we embed AI into real-world enterprise workflows with precision and empathy.
As an AI Engineer, you’ll work alongside experienced engineers and researchers to design and implement intelligent systems that act, plan, and learn in enterprise environments. You’ll contribute to building agentic workflows: autonomous or semi-autonomous AI agents that complete real-world tasks by coordinating tools, APIs, and reasoning steps.
Collaborate with the team to prototype and build agentic systems powered by LLMs and retrieval-augmented generation (RAG).
Help design agents that can interact with enterprise tools (Jira, Notion, internal APIs, etc.) to solve specific user problems.
Implement tools and frameworks to chain LLM calls, maintain memory/state, and handle multi-turn interactions (a minimal sketch follows this list).
Support training, fine-tuning, and evaluation of LLMs on internal and client datasets.
Write production-ready code and contribute to robust backend systems that support real-time AI workflows.
Document your experiments, communicate tradeoffs, and be involved in making technical decisions.
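For illustration only (not part of the original posting), here is a minimal sketch of the kind of agentic loop described above, assuming the OpenAI Python SDK's chat-completions tool-calling interface; the lookup_ticket helper, the Jira-style ticket, and the model name are hypothetical placeholders.

```python
# Minimal agentic loop sketch: the model decides when to call a tool, we execute it,
# feed the result back as a tool message, and repeat until it answers in plain text.
# Assumes the OpenAI Python SDK; lookup_ticket is a hypothetical example tool.
import json
from openai import OpenAI

client = OpenAI()

def lookup_ticket(ticket_id: str) -> str:
    # Placeholder for a real integration (e.g. a Jira API call).
    return json.dumps({"id": ticket_id, "status": "in progress"})

tools = [{
    "type": "function",
    "function": {
        "name": "lookup_ticket",
        "description": "Fetch the current status of a ticket by its ID.",
        "parameters": {
            "type": "object",
            "properties": {"ticket_id": {"type": "string"}},
            "required": ["ticket_id"],
        },
    },
}]

messages = [{"role": "user", "content": "What is the status of ticket ABC-123?"}]

while True:
    response = client.chat.completions.create(
        model="gpt-4o-mini", messages=messages, tools=tools
    )
    message = response.choices[0].message
    if not message.tool_calls:
        print(message.content)  # final answer, no further tool use requested
        break
    messages.append(message)  # keep the assistant's tool-call request in the history
    for call in message.tool_calls:
        args = json.loads(call.function.arguments)
        result = lookup_ticket(**args)  # single tool here; real agents dispatch by name
        messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
```

In this sketch the memory/state lives entirely in the messages list; a production agent would add persistence, error handling, and guardrails around tool execution.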
We don’t expect you to know everything — but we do expect you to learn fast and be ready to get your hands dirty.
Solid programming skills in Python or a similar language, and experience working with LLMs (OpenAI, Hugging Face, etc.).
Familiarity with tools for orchestrating LLM chains or agents (LangChain, CrewAI, AutoGen, or similar).
Curiosity and understanding of how agents can interact with tools and external APIs to complete multi-step tasks.
Comfortable working with JSON, APIs, and task orchestration logic.
A startup mindset — eager to experiment, build, and iterate quickly in uncertain environments.
Experience with vector databases (like Weaviate, Pinecone, or FAISS) or RAG pipelines (a minimal sketch follows this list).
Exposure to prompt engineering and function calling with modern LLM APIs.
Experience deploying backend services (FastAPI, Flask) or using Docker.
Side projects, GitHub contributions, or open-source involvement in the AI space.
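As a hedged illustration of the vector-database and RAG items above (again, not part of the original posting): a minimal retrieval sketch assuming the faiss-cpu package and the OpenAI embeddings API; the documents, query, and model name are toy placeholders.

```python
# Minimal RAG retrieval sketch: embed a few documents, index them with FAISS,
# and fetch the closest ones for a query to use as context in an LLM prompt.
# Assumes faiss-cpu and the OpenAI embeddings API; documents are toy examples.
import faiss
import numpy as np
from openai import OpenAI

client = OpenAI()

documents = [
    "Deployment runbook for the billing service.",
    "Onboarding checklist for new engineers.",
    "Incident postmortem: search latency spike in March.",
]

def embed(texts):
    response = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in response.data], dtype="float32")

doc_vectors = embed(documents)
index = faiss.IndexFlatL2(doc_vectors.shape[1])  # exact L2 search over the embeddings
index.add(doc_vectors)

query = "How do we deploy billing?"
distances, indices = index.search(embed([query]), 2)  # top-2 nearest documents
context = "\n".join(documents[i] for i in indices[0])
print(context)  # would be injected into the LLM prompt as retrieved context
```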
Work at the bleeding edge of AI applied to real enterprise use cases.
Be part of a small, fast-moving team where your voice matters.
Get mentorship from world-class engineers and researchers.
Shape the future of AI-native software inside real organizations.
We value ownership — you’ll be trusted to run with your ideas and drive impact.