Overview
We are seeking 5 Graduate AI Engineers for an early-career opportunity within our client's newly established AI transformation initiative.
This is an opportunity to build AI capability from the ground up, working across the full AI stack—from infrastructure and data engineering through to application development and business implementation.
You’ll start by working across multiple areas to understand the AI ecosystem, then progressively specialise in one or two areas aligned with your strengths and our strategic needs.
Responsibilities
- AI Solution Development & Deployment – Design, train, and fine‑tune ML/DL models (NLP, computer vision, time series, generative AI). Build and deploy AI applications using LLMs and ML frameworks. Implement prompt engineering, RAG architectures, and agent‑based systems. Build APIs and integrations, and deploy models to production as microservices. Write production‑quality Python code.
- Data Engineering & Pipeline Management – Gather, clean, transform, and engineer features from raw data. Design and implement data pipelines using modern ETL/ELT frameworks. Work with SQL, NoSQL, and vector databases. Build and optimise data warehouse architectures. Ensure data quality and governance, and prepare data for ML models. Build real‑time data processing workflows and implement data observability.
- AI Operations & Infrastructure – Deploy and manage ML models in production. Integrate AI models with existing systems. Implement model evaluation, experimentation frameworks, and A/B testing. Build CI/CD pipelines for AI deployment. Work with containerisation (Docker) and orchestration (Kubernetes). Set up model versioning and lifecycle management. Monitor performance, detect drift, and manage retraining cycles. Manage cloud infrastructure for AI workloads. Debug production issues.
- Platform Engineering – Build an internal AI platform and tooling infrastructure. Implement infrastructure‑as‑code. Work with microservices architecture and API gateway patterns. Build workflow orchestration for complex AI pipelines. Deploy solutions on cloud platforms. Manage GPU computing resources.
- Business Collaboration – Work with product managers, business stakeholders, software engineers, and data scientists. Translate business problems into AI use cases and technical solutions. Identify automation opportunities. Evaluate the business value and feasibility of AI use cases. Apply responsible AI and ethical practices. Ensure models are fair, explainable, and compliant with privacy laws. Implement data governance frameworks.
- Research & Continuous Learning – Keep up with the latest AI/ML advances and emerging technologies. Experiment with new architectures, techniques, and frameworks. Evaluate and recommend tools and approaches. Share learnings with the team.
Qualifications
- Education – Bachelor’s or Honours degree in Computer Science, Data Science, Engineering, Mathematics, Statistics, Information Systems, or a related quantitative field. Graduated within the last 18 months or graduating soon. Strong academic record.
- Methodologies & Approaches – Understanding of software development lifecycle and best practices. Familiarity with Agile methodologies (Scrum, Kanban). Awareness of DevOps and CI/CD principles. Structured problem‑solving approach and analytical thinking.
- Essential Attributes – Strong interest in AI engineering and willingness to learn across the full technology stack. Comfort with ambiguity in a rapidly evolving field. Ability to learn new technologies rapidly. Excellent communication skills for both technical and business audiences. Self‑starter who can work independently and collaboratively. Willingness to take ownership of technical challenges.
Advanced Education (Advantageous)
- Master’s degree in Computer Science, AI, Data Science, or a related field.
- Final-year project work involving AI, ML, or data engineering.
- Research experience in AI/ML domains.
Technical Experience
- Experience with ML frameworks (TensorFlow, PyTorch, scikit‑learn, XGBoost). Hands‑on work with LLMs or generative AI tools. Experience building REST APIs.
- Familiarity with data visualisation tools (matplotlib, seaborn, Tableau, Power BI). Understanding of data warehousing concepts. Exposure to big data technologies (Spark, Hadoop, Kafka).
- Experience with Docker or Kubernetes. Knowledge of infrastructure‑as‑code tools (Terraform, CloudFormation). Understanding of Linux/Unix systems. Experience with workflow orchestration tools (Airflow, Prefect, Dagster).
- Experience with MLOps platforms (MLflow, Weights & Biases, DVC). Understanding of vector databases (Pinecone, Weaviate, ChromaDB). Knowledge of prompt engineering and LLM fine‑tuning. Familiarity with model explainability and fairness frameworks. Exposure to AutoML platforms. Understanding of feature stores and feature engineering practices. Knowledge of model evaluation and experimentation frameworks. Awareness of responsible AI and AI ethics principles.
Project Work & Demonstrated Interest
- Demonstrable projects in AI/ML (GitHub repository, portfolio, or technical work). Participation in data science competitions (Kaggle) or hackathons. Contributions to open‑source projects. Published research or technical blog posts. Internship experience in AI, data science, or software engineering. Online courses or certifications in AI/ML (Coursera, Fast.ai, DeepLearning.AI, etc.).
Benefits
- Highly competitive salary.
- Opportunity to build and shape a modern AI capability from the ground up.
- Professional development through training, certifications, and conferences.
- Clear specialisation pathways in AI/ML domains.
- Collaborative, technically rigorous culture.
- Cohort‑based approach – you’ll be learning alongside 4 peers.