Enhancing Learning Assistant Quality Through Automated Feedback Analysis and Systematic Testing in the LAMB Framework
Date
2025-06-22
Authors
Alier-Forment, Marc
Pereira-Valera, Juanan
Casañ-Guerrero, María José
García-Peñalvo, Francisco José
Publisher
Springer
Abstract
The Learning Assistant Manager and Builder (LAMB) is an open-source software framework that lets educators build and deploy AI learning assistants within institutional Learning Management Systems (LMS) without coding expertise. It addresses critical challenges in educational AI by providing privacy-focused integration, controlled knowledge bases, and seamless deployment through standard protocols. This paper presents major enhancements that enable systematic quality assurance and continuous improvement of these learning assistants.
The new LAMB includes mechanisms for collecting structured feedback on real-world assistant behavior and transforming it into a test suite of curated prompts paired with expected correct and incorrect responses. When changes are made—such as prompt engineering, retrieval-augmented generation optimization, or knowledge base expansion—this suite enables automated validation of their impact.
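A test suite built from such feedback might be represented as records of curated prompts with expected and counterexample responses. The following is a minimal sketch; the class and field names are hypothetical illustrations, not taken from the LAMB codebase.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical shape of a feedback-derived test case: a prompt observed in
# real use, a response the educator approved, and (optionally) an incorrect
# response the assistant actually produced.
@dataclass
class AssistantTestCase:
    prompt: str                      # curated from real learner interactions
    expected: str                    # educator-approved correct response
    counterexample: Optional[str]    # observed incorrect response, if any

# A suite is simply a list of such cases, re-run after each change.
suite = [
    AssistantTestCase(
        prompt="What is retrieval-augmented generation?",
        expected="RAG retrieves passages from a knowledge base to ground answers.",
        counterexample="RAG is a fine-tuning technique.",
    ),
]
```

Each prompt-engineering or knowledge-base change can then be validated by replaying the suite and comparing new responses against the expected ones.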
A key innovation is using frontier large language models (LLMs) to evaluate responses automatically, generating detailed reports that reveal improvement areas and confirm performance gains. This systematic feedback-driven testing fosters continuous refinement while preserving quality standards.
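The evaluation loop described above can be sketched as follows. Here `judge` stands in for a call to a frontier LLM returning a score; all function and field names are illustrative assumptions, and the demo substitutes a trivial keyword-overlap judge so the sketch runs without an API.

```python
from typing import Callable, Dict, List

# Sketch of an automated evaluation pass. `get_response` queries the assistant
# under test; `judge` maps (prompt, expected, actual) -> score in [0, 1] and
# would, in practice, wrap a frontier-LLM call.
def evaluate_suite(cases: List[Dict[str, str]],
                   get_response: Callable[[str], str],
                   judge: Callable[[str, str, str], float],
                   threshold: float = 0.7) -> Dict:
    failures = []
    for case in cases:
        actual = get_response(case["prompt"])
        score = judge(case["prompt"], case["expected"], actual)
        if score < threshold:
            failures.append({"prompt": case["prompt"], "score": score})
    # The returned report supports the kind of detailed improvement analysis
    # the paper describes.
    return {"total": len(cases), "failed": len(failures), "failures": failures}

# Demo: a keyword-overlap judge standing in for the LLM evaluator.
def keyword_judge(prompt: str, expected: str, actual: str) -> float:
    expected_words = set(expected.lower().split())
    actual_words = set(actual.lower().split())
    return len(expected_words & actual_words) / max(len(expected_words), 1)

cases = [{"prompt": "Define RAG", "expected": "retrieval augmented generation"}]
report = evaluate_suite(cases,
                        get_response=lambda p: "retrieval augmented generation",
                        judge=keyword_judge)
```

Because the judge is pluggable, the same loop can compare assistant versions before and after a change and flag regressions automatically.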
Validation studies show measurable improvements in reliability and consistency. Across varied educational contexts, the framework identifies edge cases, maintains consistency across iterations, and provides actionable insights. Automated testing is especially beneficial for assistants with extensive knowledge bases and complex interaction patterns.
This work advances educational AI by providing a robust methodology for quality assurance and ongoing improvement of learning assistants. Its structured feedback and automated evaluations ensure alignment with educational goals while refining assistants over time. The enhanced LAMB framework offers a scalable and reliable solution for educators aiming to integrate AI-driven support into their LMS environments.
Keywords
Learning assistants, artificial intelligence in education, automated testing, quality assurance, continuous improvement, retrieval-augmented generation, prompt engineering, LLM Evals
Citation
Alier-Forment, M., Pereira-Valera, J., Casañ-Guerrero, M. J., & García-Peñalvo, F. J. (2025). Enhancing Learning Assistant Quality Through Automated Feedback Analysis and Systematic Testing in the LAMB Framework. In B. K. Smith & M. Borge (Eds.), Learning and Collaboration Technologies: 12th International Conference, LCT 2025, Held as Part of the 27th HCI International Conference, HCII 2025, Gothenburg, Sweden, June 22–27, 2025, Proceedings, Part II (pp. 3–12). Springer Nature Switzerland AG. https://doi.org/10.1007/978-3-031-93567-1_1