🧠 About the Role
We are building intelligent systems that can see, speak, understand, and act. As an AI Research Engineer, you will work at the frontier of LLM-based agents and multimodal AI, helping us design and deploy interactive systems that reason, adapt, and collaborate with humans.
You’ll join a fast-moving team of researchers and engineers working on next-generation agentic architectures, exploring how LLMs can use tools, maintain memory, plan, and respond dynamically across language, vision, and other sensory inputs.
This is not just model tweaking. You’ll design full-stack AI behaviors, from prompt design to memory systems to multimodal grounding, that power real-world AI applications.
🔧 Key Responsibilities
- Design, develop, and optimize LLM-based and multimodal agent architectures, integrating language, vision, audio, and structured data.
- Conduct experiments in prompt engineering, fine-tuning, RAG (retrieval-augmented generation), multimodal fusion, and reasoning.
- Prototype interactive AI agents that perceive context, understand intent, and perform goal-directed actions.
- Work closely with engineering and research teams on model evaluation, pipeline design, and large-scale experimentation.
- Keep abreast of the latest advancements in LLMs, multimodal models, and agentic workflows.
- Contribute to technical documentation, internal research reports, and (optionally) external publications.
- Collaborate cross-functionally with product, design, and infrastructure teams to bring cutting-edge AI into production.
✅ Requirements
- Master’s or PhD in Computer Science, Artificial Intelligence, Machine Learning, or a related field.
- Solid foundations in algorithms, data structures, and system architecture.
- Proficiency in Python, with hands-on experience using PyTorch or similar frameworks.
- Experience with LLM tooling and APIs (e.g., Hugging Face Transformers, LangChain, vLLM, or the OpenAI API).
- Familiarity with multimodal models (vision-language, audio-language, video-language) and integrating them into AI pipelines.
- Practical experience in fine-tuning, RAG, or agent workflows (e.g., tool-use, memory, planning).
- Strong analytical mindset and ability to reason from data and experiments.
🌟 Nice to Have
- Publications or open-source contributions related to LLMs, multimodal models, or agentic AI.
- Experience with vector databases, knowledge graphs, or reasoning modules.
- Hands-on work building interactive AI apps (e.g., virtual assistants, autonomous agents, research copilots).
- Understanding of reinforcement learning, LLM evaluation frameworks, or human-AI interaction design.
🎁 What We Offer
- Work at the bleeding edge of AI research and real-world deployment.
- Collaborate with world-class talent across AI, product, and design teams.
- Competitive salary, performance bonus, and equity options (for qualified candidates).
- Flexible working arrangements and opportunities to publish, present, or contribute to open-source.
- A mission-driven environment building AI that understands and helps humans, rather than merely answering queries.