User Adaptation and System Effectiveness in Large Language Model Applications
Abstract
Large language models are increasingly embedded in productivity tools, customer service systems, and educational platforms. Despite extensive benchmarking of model accuracy, relatively little empirical evidence exists regarding how users actually interact with these systems. Interaction logs from a commercial writing assistant were collected over five months, covering 42,600 users and more than 8.9 million prompts. Query length, revision frequency, feedback patterns, and session duration were examined. More than 63% of users reformulated queries at least twice, and nearly 28% abandoned sessions after receiving low-confidence responses. A controlled interface experiment compared default prompting with guided interaction templates. Users with structured guidance completed tasks 19% faster and reported lower cognitive load in post-study surveys; expert users, however, showed weaker dependence on interface support. These results suggest that user adaptation is as critical as model capability in determining practical system effectiveness.
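The abstract does not describe the analysis pipeline, but the two headline metrics (the share of users who reformulated queries at least twice, and the share of sessions abandoned after a low-confidence response) are straightforward aggregations over session records. A minimal illustrative sketch, assuming a hypothetical `Session` record with fields of our own invention:

```python
from dataclasses import dataclass

@dataclass
class Session:
    user_id: str
    reformulations: int  # times the user rewrote their query in this session
    abandoned: bool      # session ended right after a low-confidence response

# Hypothetical sample of interaction-log sessions (not the study's data)
sessions = [
    Session("u1", 3, False),
    Session("u2", 0, True),
    Session("u3", 2, True),
    Session("u4", 1, False),
]

def reformulation_rate(sessions, threshold=2):
    """Fraction of distinct users who reformulated at least `threshold` times."""
    users = {s.user_id for s in sessions}
    reformulators = {s.user_id for s in sessions if s.reformulations >= threshold}
    return len(reformulators) / len(users)

def abandonment_rate(sessions):
    """Fraction of sessions abandoned after a low-confidence response."""
    return sum(s.abandoned for s in sessions) / len(sessions)

print(reformulation_rate(sessions))  # 0.5 (u1 and u3 out of 4 users)
print(abandonment_rate(sessions))    # 0.5 (u2 and u3 out of 4 sessions)
```

On the study's scale (8.9 million prompts), the same aggregation would run as a grouped count over the full log rather than an in-memory list, but the metric definitions are identical.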