Overview
As a QA Tester focused on Generative AI and Cloud Solutions, you will be responsible for the comprehensive quality assurance of our intelligent products, from the underlying cloud infrastructure to the outputs of our generative AI models. You will design, execute, and automate tests to ensure the functionality, performance, reliability, security, and ethical behaviour of our AI-powered cloud applications. This role requires a strong understanding of both traditional software testing principles and the unique challenges presented by Generative AI and distributed cloud systems.
Responsibilities
- Test Strategy & Planning: Develop comprehensive test plans, strategies, and test cases for Generative AI features and cloud-based applications, covering functional, non-functional, integration, performance, security, and regression testing.
- Generative AI Output Validation: Design and execute specialized test cases to evaluate the quality, coherence, diversity, creativity, factual accuracy (where applicable), and ethical compliance of content generated by AI models (e.g., text, images, code, audio). This may involve manual review, comparative analysis, and, where practical, automated content-analysis tools.
- Cloud Infrastructure Testing: Validate the functionality, scalability, resilience, and performance of cloud-native components and services (e.g., APIs, databases, microservices, serverless functions, message queues) deployed on platforms like AWS, Azure, or GCP.
- Data Pipeline & MLOps Testing: Verify the integrity and reliability of data ingestion pipelines, model training workflows, model versioning, and deployment processes within the MLOps/LLMOps framework.
- Performance & Load Testing: Conduct performance, load, and stress testing on both backend cloud services and generative AI model inference endpoints to ensure responsiveness and stability under various loads.
- Security Testing: Collaborate with security teams to identify vulnerabilities and ensure compliance with security best practices in cloud environments and AI model usage.
- Automation: Develop and maintain automated test scripts and frameworks for backend APIs and cloud services and, where feasible, automate aspects of AI model output evaluation (a minimal sketch of one such check follows this list).
- Defect Management: Identify, document, prioritize, and track bugs and issues using defect management tools, collaborating with development teams for timely resolution.
- Regression Testing: Perform thorough regression testing to ensure new features or bug fixes do not introduce unintended side effects.
- Collaboration: Work closely with AI/ML engineers, backend developers, product managers, and UI/UX designers to understand requirements, provide timely feedback, and ensure quality throughout the development lifecycle.
- Ethical AI Testing: Actively participate in testing for bias, fairness, robustness against adversarial attacks, and other ethical considerations specific to generative AI.
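To make the automation and output-validation responsibilities concrete, the sketch below shows the kind of automated check involved: a pytest test that exercises a text-generation endpoint and applies simple quality and policy heuristics to its output. The endpoint URL, request schema, response field, and banned-phrase list are illustrative assumptions, not a description of our actual services.

```python
import requests

GENERATE_URL = "https://api.example.com/v1/generate"   # hypothetical endpoint
BANNED_PHRASES = ["as an ai language model"]           # illustrative policy list


def test_generate_endpoint_returns_usable_text():
    payload = {"prompt": "Summarise the benefits of unit testing.", "max_tokens": 128}
    response = requests.post(GENERATE_URL, json=payload, timeout=30)

    # Functional check: the service responds successfully.
    assert response.status_code == 200

    # The "text" response field is an assumption about the API contract.
    text = response.json().get("text", "")

    # Quality heuristics: output exists and is not trivially short.
    assert text.strip(), "model returned empty output"
    assert len(text.split()) >= 5, "output suspiciously short"

    # Simple policy heuristic; real ethical-AI testing goes far beyond this.
    lowered = text.lower()
    assert not any(phrase in lowered for phrase in BANNED_PHRASES)
```

In practice, output evaluation layers richer techniques on top of checks like this, such as comparative analysis against reference outputs and model-based scoring.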
Qualifications
- Bachelor's degree in Computer Science, Software Engineering, or a related field.
- Minimum of 3 years of experience in Software Quality Assurance (QA) or Testing.
- Demonstrable experience with testing cloud-based applications and services on platforms like AWS, Azure, or GCP.
- Strong understanding of the software development lifecycle (SDLC) and QA methodologies (Agile, Scrum).
- Proficiency in designing and executing test cases, creating test plans, and managing defects.
- Experience with API testing tools (e.g., Postman, Swagger, curl).
- Familiarity with at least one programming/scripting language for test automation (e.g., Python, Java, JavaScript, Go).
- Experience with version control systems (e.g., Git).
- Excellent analytical, problem-solving, and debugging skills.
- Strong attention to detail and a methodical approach to testing.
- Good communication and interpersonal skills, with the ability to articulate complex issues clearly.
Preferred Skills (Bonus Points)
- Direct experience testing Generative AI models, Large Language Models (LLMs), or other complex AI systems.
- Experience with specific Generative AI frameworks or tools (e.g., Hugging Face, LangChain, Stable Diffusion).
- Experience with MLOps concepts and testing CI/CD pipelines for machine learning models.
- Hands-on experience with automated testing frameworks for backend services (e.g., Pytest, JUnit, Mocha).
- Familiarity with containerization (Docker) and orchestration (Kubernetes).
- Knowledge of performance testing tools (e.g., JMeter, LoadRunner, k6); a minimal latency-check sketch follows this list.
- Relevant cloud certifications (e.g., AWS Certified Cloud Practitioner, Azure Fundamentals).
- Experience in a highly regulated industry or with security compliance testing.
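For the performance-testing skills above, the sketch below illustrates the simplest form of the work: a latency smoke check that sends concurrent requests to an inference endpoint and asserts a 95th-percentile budget. It uses only the standard library plus requests; the URL, payload, concurrency, and budget are hypothetical, and dedicated tools such as JMeter or k6 remain the right choice for realistic load profiles.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

INFERENCE_URL = "https://api.example.com/v1/generate"  # hypothetical endpoint
CONCURRENCY = 10          # parallel workers (assumed light-load profile)
TOTAL_REQUESTS = 50       # sample size for the latency distribution
P95_BUDGET_SECONDS = 2.0  # assumed latency budget


def timed_call(_):
    """Send one request and return its wall-clock latency in seconds."""
    start = time.perf_counter()
    response = requests.post(
        INFERENCE_URL, json={"prompt": "ping", "max_tokens": 8}, timeout=30
    )
    response.raise_for_status()
    return time.perf_counter() - start


def test_inference_latency_under_light_load():
    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        latencies = list(pool.map(timed_call, range(TOTAL_REQUESTS)))

    # quantiles(n=20) yields cut points at 5% steps; the last one is the p95.
    p95 = statistics.quantiles(latencies, n=20)[-1]

    # Stability check: the endpoint stays within budget under concurrent load.
    assert p95 <= P95_BUDGET_SECONDS, f"p95 latency {p95:.3f}s exceeds budget"
```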
What We Offer
- The opportunity to be at the forefront of Generative AI and Cloud technology in Singapore.
- A challenging and stimulating work environment with diverse testing opportunities.
- A collaborative team that values quality and innovation.
- Competitive salary and comprehensive benefits package.
- Opportunities for continuous learning and professional growth in AI/ML and cloud domains.