Retrieval-Augmented Tutoring for Algorithm Tracing and Problem-Solving in AI Education
Abstract: Students learning algorithms often need support as they interpret traces, debug reasoning errors, and apply procedures across unfamiliar problem instances. In this paper, we present KITE (Knowledge-Informed Tutoring Engine), a Retrieval-Augmented Generation (RAG)-based intelligent tutoring system designed to serve as a classroom teaching assistant for algorithmic reasoning and problem-solving tasks. KITE uses an intent-aware Socratic response strategy to tailor support to different student needs, responding with targeted hints, guiding questions, and progressive scaffolding intended to strengthen students' algorithmic problem-solving ability. To keep responses aligned with course content, KITE uses a multimodal RAG pipeline that retrieves relevant information from course materials. We evaluate KITE using three forms of assessment: RAGAs-based metrics for response grounding and quality, expert evaluation of pedagogical quality, and a simulated student pipeline in which a weaker language model interacts with KITE across two-turn dialogues and produces revised answers after receiving feedback. Results indicate that KITE produces contextually grounded and pedagogically appropriate responses. Further, using simulated students, KITE's feedback helped the student models produce more accurate follow-up responses on procedural and tracing questions, suggesting that its scaffolding can support algorithmic problem-solving. This work contributes a tutoring architecture and an evaluation approach for assessing retrieval-grounded explanations and scaffolded problem-solving feedback.
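To make the architecture concrete, the core loop the abstract describes (retrieve from course materials, classify the student's intent, respond Socratically) can be sketched as follows. This is an illustrative toy only, not the paper's implementation: the bag-of-words retriever stands in for KITE's multimodal RAG pipeline, the keyword router stands in for its intent classifier, and all names (`retrieve`, `classify_intent`, `socratic_reply`, `COURSE_CHUNKS`) are assumptions introduced here.

```python
# Toy sketch of a retrieval-grounded Socratic tutoring turn.
# NOTE: bag-of-words cosine retrieval and keyword intent routing are
# stand-ins for KITE's actual RAG pipeline and intent classifier.
from collections import Counter
from math import sqrt

COURSE_CHUNKS = [
    "Dijkstra's algorithm pops the unvisited node with the smallest tentative distance.",
    "Merge sort splits the array in half, sorts each half, then merges them.",
    "Binary search halves the search interval each step, so it runs in O(log n).",
]

def _vec(text):
    """Bag-of-words term counts."""
    return Counter(text.lower().split())

def _cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    """Return the k course chunks most similar to the student's message."""
    q = _vec(query)
    ranked = sorted(COURSE_CHUNKS, key=lambda c: _cosine(q, _vec(c)), reverse=True)
    return ranked[:k]

def classify_intent(message):
    """Crude keyword routing in place of a learned intent classifier."""
    text = message.lower()
    if "trace" in text:
        return "tracing"
    if "why" in text or "stuck" in text:
        return "conceptual"
    return "procedural"

def socratic_reply(message):
    """Ground a guiding question in the retrieved course material."""
    context = retrieve(message, k=1)[0]
    templates = {
        "tracing": "Walk through the next step yourself. Given that {ctx} What changes in your trace table?",
        "conceptual": "Consider the course note: {ctx} How does that explain what you are seeing?",
        "procedural": "Recall: {ctx} Which part of that rule applies to your current step?",
    }
    return templates[classify_intent(message)].format(ctx=context)

print(socratic_reply("I'm stuck tracing Dijkstra's algorithm on this graph"))
```

The key design point the sketch illustrates is that the reply template never states the answer: retrieval supplies course-aligned context, and the intent label selects which kind of guiding question wraps it, mirroring the hint/question/scaffold progression described above.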
Comments: Paper accepted to the 21st Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2026), co-located with ACL 2026
Subjects: Artificial Intelligence (cs.AI); Computers and Society (cs.CY); Information Retrieval (cs.IR)
Cite as: arXiv:2605.12988 [cs.AI] (or arXiv:2605.12988v1 [cs.AI] for this version)
DOI: https://doi.org/10.48550/arXiv.2605.12988 (arXiv-issued DOI via DataCite, pending registration)
Submission history
From: Griffin Pitts
[v1] Wed, 13 May 2026 04:37:45 UTC (314 KB)
— Originally published at arxiv.org