AI Project

Freelancing.my

Remote

MYR 100,000 - 150,000

Full time

Job summary

A dynamic IT consulting company in Kuala Lumpur is looking for a talented AI Engineer to develop and maintain LLM-powered applications. You will work on cutting-edge AI projects and integrate advanced AI solutions into production environments. The ideal candidate should have proven experience with LangChain and LangFlow, and be proficient in Python. This role offers a remote work setup and the chance to contribute to innovative AI solutions that empower businesses.

Qualifications

  • Proven experience in building LLM applications.
  • Strong knowledge of agentic AI concepts.
  • Hands-on experience with AI observability platforms.
  • Proficiency in Python for backend integration.
  • Experience with vector databases and embedding models.

Responsibilities

  • Design and maintain AI-powered applications using LLMs.
  • Develop workflows with LangChain and LangGraph.
  • Integrate vector databases for semantic search.
  • Build robust prompt engineering frameworks.
  • Develop multi-agent AI systems and ensure performance tuning.

Skills

  • Building LLM applications
  • Agentic AI concepts
  • Proficiency in Python
  • Experience with Docker and Kubernetes
  • Strong debugging skills

Tools

  • LangChain
  • LangGraph
  • LangFlow
  • LangFuse
  • Pinecone
  • Weaviate
  • FastAPI
  • Flask
  • Django

Job description

Develab is an IT consulting company operating in Malaysia, Singapore, and Indonesia. We continuously pursue innovation, with a mission to help businesses realize their dreams through quality digital solutions and affordable IT consulting services. Our core services are Software Development, IT Consultancy, Project Management, and Cloud Computing. Website: develab.io

We are seeking a talented and motivated AI Engineer based in Malaysia or Indonesia to join our dynamic team. The ideal candidate will have hands‑on experience building LLM‑powered applications, implementing agentic AI workflows, and integrating AI into production environments.

You will work on projects involving LangChain, LangGraph, LangFlow, LangFuse, vector databases, embeddings, prompt engineering, and multi‑agent systems. A strong understanding of both AI model orchestration and application‑level deployment is essential. If you are passionate about AI, automation, and delivering production‑ready intelligent systems, we would love to meet you. Apply now: ***********@develab.io

Responsibilities
  • Design, implement, and maintain AI‑powered applications using LLMs (e.g., OpenAI, Anthropic, Mistral, LLaMA, Claude, Gemini).
  • Develop agentic AI workflows with LangChain and LangGraph, enabling multi‑step reasoning, memory, and dynamic tool usage.
  • Create interactive AI applications using LangFlow for visual orchestration and LangFuse for observability, debugging, and analytics.
  • Integrate vector databases (Pinecone, Weaviate, Milvus, ChromaDB, etc.) for embedding storage, semantic search, and retrieval‑augmented generation (RAG); a minimal retrieval sketch follows this list.
  • Build robust prompt engineering frameworks and prompt optimization strategies to ensure accuracy, reliability, and consistency in AI responses.
  • Implement and optimize retrieval pipelines combining embeddings, search algorithms, and metadata filtering.
  • Develop multi‑agent AI systems with specialized roles, inter‑agent communication, and autonomous task planning.
  • Utilize Docker and Kubernetes for containerization, orchestration, and scaling AI workloads across environments.
  • Collaborate with backend teams to integrate AI services via RESTful APIs or gRPC.
  • Monitor, log, and fine‑tune AI models in production using observability tools like LangFuse, Weights & Biases, or OpenTelemetry.
  • Apply AI safety best practices, guardrails, and policy‑based filtering to ensure responsible deployment.
  • Conduct performance tuning of AI pipelines for latency, throughput, and cost optimization.
  • Stay up‑to‑date with emerging AI frameworks, research papers, and model releases.
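
As a rough illustration of the retrieval‑augmented generation work described above, a minimal pipeline might look like the sketch below. It assumes the langchain-openai, langchain-community, and chromadb packages plus an OPENAI_API_KEY environment variable; the sample documents, model names, and exact module paths are placeholder assumptions (module paths vary across LangChain versions), not details from this posting.

```python
# Minimal RAG sketch (illustrative only): embed a few documents, retrieve the
# most relevant ones for a question, and ground the LLM answer in that context.
# Package names and model IDs below are assumptions, not part of the posting.
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
from langchain_community.vectorstores import Chroma

documents = [
    "Develab offers software development, IT consultancy, project management and cloud computing.",
    "Retrieval-augmented generation grounds LLM answers in documents retrieved from a vector store.",
]

# Embed the documents and index them in a local Chroma collection.
vector_store = Chroma.from_texts(documents, embedding=OpenAIEmbeddings())
retriever = vector_store.as_retriever(search_kwargs={"k": 2})

question = "What services does Develab provide?"
context = "\n".join(doc.page_content for doc in retriever.invoke(question))

# Ask the chat model, instructing it to answer only from the retrieved context.
llm = ChatOpenAI(model="gpt-4o-mini")
answer = llm.invoke(f"Answer using only this context:\n{context}\n\nQuestion: {question}")
print(answer.content)
```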

Requirements
  • Proven experience in building LLM applications with frameworks like LangChain, LangGraph, and LangFlow.
  • Strong knowledge of agentic AI concepts, including planning, reasoning, tool usage, and long‑term memory.
  • Hands‑on experience with LangFuse or similar AI observability platforms.
  • Proficiency in Python (FastAPI, Flask, or Django) for backend integration of AI services; a minimal endpoint sketch follows this list.
  • Experience with vector databases (Pinecone, Weaviate, Milvus, ChromaDB) and embedding models.
  • Understanding of RAG pipelines, semantic search, and knowledge base construction.
  • Experience with Docker and Kubernetes for deploying scalable AI applications.
  • Familiarity with CI/CD pipelines (GitLab, GitHub Actions) for AI deployment.
  • Knowledge of prompt engineering techniques and model fine‑tuning workflows.
  • Strong debugging skills with tools like LangFuse, Postman, and Python debuggers.
  • Ability to work in a remote, fast‑paced environment, both independently and collaboratively.
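
As an illustration of the backend‑integration requirement above, a minimal FastAPI endpoint wrapping an LLM call might look like the sketch below. The /ask route, request model, and gpt-4o-mini model name are hypothetical choices for the example; it assumes the fastapi, uvicorn, and openai packages and an OPENAI_API_KEY environment variable.

```python
# Minimal FastAPI sketch (illustrative only): expose an LLM call as a JSON API.
from fastapi import FastAPI
from pydantic import BaseModel
from openai import OpenAI

app = FastAPI()
client = OpenAI()  # reads OPENAI_API_KEY from the environment


class Question(BaseModel):
    question: str


@app.post("/ask")  # hypothetical route chosen for this example
def ask(payload: Question) -> dict:
    # Forward the question to the model and return its reply as JSON.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": payload.question}],
    )
    return {"answer": response.choices[0].message.content}
```

Run locally with `uvicorn main:app --reload` and POST a JSON body like {"question": "..."} to /ask.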

Preferred to Have
  • Experience with model fine‑tuning (LoRA, PEFT, QLoRA) and custom dataset preparation.
  • Exposure to multi‑modal AI (text, image, audio) applications.
  • Knowledge of agentic orchestration with frameworks like CrewAI, AutoGPT, BabyAGI, or OpenAI Assistants API.
  • Familiarity with AI cost optimization strategies (token budgeting, hybrid model routing).
  • Hands‑on experience with cloud AI services (AWS Bedrock, Azure AI, Google Vertex AI).
  • Contributions to open‑source AI projects.
  • Strong communication skills with the ability to articulate complex AI concepts to both technical and non‑technical stakeholders.