ABOUT US
Billigence Pty Ltd specializes in delivering market-leading Business Intelligence and CRM solutions. We are headquartered in Sydney, Australia, with offices in Prague, London, Frankfurt, and Singapore. Our passion is data, and our focus is delivering end-to-end solutions through a talented team of professionals.
We partner with leading software platforms including Tableau, Alteryx, Collibra, Snowflake, GCP, and Salesforce.
Key Responsibilities
- Architect and develop scalable GenAI pipelines, APIs, and microservices for real-time and batch AI applications using frameworks such as FastAPI, Ray, or LangServe.
- Design robust prompt strategies for instruction-following, reasoning, and multi-turn conversations, focusing on RAG architectures for personalized, domain-specific use cases.
- Lead embedding model selection and tuning to optimize semantic search and RAG performance.
- Oversee LLMOps workflows, including model orchestration, evaluation, deployment, rollback strategies, and monitoring in production environments.
- Drive model fine-tuning efforts to customize LLMs for proprietary datasets and regulated industries.
- Establish and govern AI testing frameworks, covering functional testing, regression testing, hallucination detection, safety filters, and output quality assessment.
- Implement enterprise-grade observability, lineage tracking, and CI/CD automation using tools such as MLflow, Databricks, Azure ML, or Vertex AI.
- Lead continuous improvement initiatives based on telemetry, user feedback, and cost-performance trade-offs.
- Demonstrate expertise in Python, with deep proficiency in GenAI frameworks, vector search systems, and MLOps toolchains.
Qualifications
- Minimum 5 years’ experience architecting and deploying scalable AI/ML and GenAI solutions in enterprise environments.
- Deep expertise in machine learning, deep learning, and generative AI technologies, including hands-on experience with frameworks like TensorFlow, PyTorch, and modern LLM orchestration tools.
- Strong familiarity with cloud platforms (AWS, Azure, GCP) and MLOps practices for end-to-end machine learning lifecycle management.
- Demonstrated leadership in managing agile, cross-functional teams and collaborating with stakeholders.
- Significant experience in prompt engineering and prompt design for LLMs and GenAI applications.
- Bachelor’s or Master’s degree in Computer Science, Data Science, Engineering, or a related field; advanced degrees or certifications (e.g., Azure AI Engineer) are advantageous.
- Experience with personalization, recommendation systems, or conversational AI is highly desirable.
If this sounds like something you are interested in, please apply with your most up-to-date CV, and we will be in touch. Please note that only successful candidates will be contacted.