Large Language Models as Cooperative Cognitive Tools: Interface Design and Evaluation
Abstract
Large language models are increasingly integrated into writing assistants, programming tools, and decision-support systems. However, current interfaces are optimized primarily for single-turn interactions and offer little support for long-term human–machine collaboration. This paper presents the design and evaluation of interaction mechanisms that facilitate iterative reasoning, integration of user feedback, and contextual memory management. Several interface prototypes were developed to explore different interaction metaphors, including conversation timelines, confidence indicators, and revision histories. Observations from pilot user studies suggest that transparency and controllability play a central role in user trust and sustained engagement. Rather than treating language models as autonomous agents, the study emphasizes their role as cooperative cognitive tools.