CogBench: Benchmarking Cognitive Alignment of Large Language Models in Educational Question Answering

Apr 7, 2026
Tong Lu, Zhichun Wang*, Sunyuan Hao, Yaoyu Zhou, Mingrui Li, Yiming Guan, Zhiyong Bai
Abstract
Large language models (LLMs) possess strong capabilities in language understanding and generation, as well as remarkable problem-solving abilities. In the educational domain, a representative application is to employ LLMs as learning assistants that answer students’ questions and support their learning processes. In such scenarios, it is crucial for the model to perceive a student’s cognitive level and provide explanations that are appropriate to that level. However, whether LLMs can effectively accomplish this task has not yet been thoroughly investigated. To address this gap, we introduce CogBench, an evaluation benchmark designed to assess the cognitive alignment capabilities of LLMs in educational QA. CogBench comprises 2.1K mathematics questions, each associated with multiple valid solutions that rely on knowledge and reasoning at different cognitive levels. Building on this structure, we formulate three cognition-aware evaluation tasks and propose three complementary metrics to quantify cognitive alignment from multiple perspectives. Extensive experiments on 11 representative LLMs reveal that, while models can often produce correct answers, they still struggle to consistently generate explanations that are aligned with the intended cognitive level. These results highlight substantial room for improvement and establish CogBench as a diagnostic benchmark for advancing cognitively aligned educational AI systems.
Type
Publication
In the 64th Annual Meeting of the Association for Computational Linguistics
Tong Lu
PhD candidate
I am a second-year Ph.D. candidate in the School of Artificial Intelligence at Beijing Normal University, Beijing, China. I obtained a Bachelor of Science degree from Hebei GEO University and a Master of Engineering degree from Yunnan University. My research focuses on natural language processing.