Key Takeaways:
- Large language models (LLMs) assume that the user already knows what they are trying to know, treating prompts as incomplete expressions of a hidden intention.
- This assumption is rooted in the mathematical underpinnings of LLMs, which rely on pattern recognition and optimization.
- Human cognition, on the other hand, often begins with "productive incoherence" and involves a process of discovery and construction of understanding.
- The use of LLMs can create a mismatch between human and artificial intelligence, potentially eroding the psychological space for not-knowing and the process of becoming intelligent.
- The reliance on LLMs can lead to a false sense of coherence and completion, short-circuiting the process of learning and discovery.
Introduction to the Assumption
The conversation between humans and large language models (LLMs) is built on an assumption that is never explicitly stated, but is inherent in the mathematics that drives the exchange. As the author notes, "The LLM’s assumption is that ‘You already know what you are trying to know.’" This assumption treats the user’s prompt or question as a "noisy or incomplete version of a fully formed intention that already exists in your mind." In other words, the LLM assumes that the user has a clear idea of what they are looking for, and that the model’s task is simply to help refine and clarify that idea. As the author puts it, "Your prompt or question is treated as an incomplete expression of a hidden intention. Your follow-ups are interpreted as refinements."
The Curious Nature of the Prompt
From a computational perspective, this assumption makes perfect sense. A system trained to infer patterns must assume that there is a pattern to infer, and that noise can be reduced and gradients can be followed. However, human cognition often operates in a very different regime. As the author notes, "Much of what we call thinking does not begin with clarity that is merely obscured. It begins with a kind of ‘productive incoherence.’" This means that humans do not start with a clear idea of what they are looking for, but rather stumble into it through a process of discovery and exploration. The author suggests that this process of "productive incoherence" is the engine of learning, and that it is through this process that we construct our understanding of the world.
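To make that computational stance concrete, here is a minimal, purely illustrative Python sketch (generic gradient descent, not anything from the article): the loop treats its starting point as a noisy estimate of a target that is assumed to exist in advance, and every step is interpreted as refinement toward it. The target value, learning rate, and loss are invented for illustration.

```python
import random

# Illustrative sketch only: a toy "refinement" loop in the spirit of gradient
# descent. The key assumption baked in is that a fixed target already exists;
# every step is read as reducing noise relative to that target.

TARGET = 7.0          # the "latent intent" the system assumes is already there
LEARNING_RATE = 0.1

def loss(estimate: float) -> float:
    """Squared distance between the current estimate and the assumed target."""
    return (estimate - TARGET) ** 2

def gradient(estimate: float) -> float:
    """Gradient of the loss: it always points back toward the fixed target."""
    return 2 * (estimate - TARGET)

estimate = random.uniform(-10, 10)  # a "noisy prompt": a degraded version of the target
for step in range(50):
    estimate -= LEARNING_RATE * gradient(estimate)

print(f"final estimate: {estimate:.3f}, loss: {loss(estimate):.6f}")
```

The point of the sketch is simply that the loop can only converge on a destination fixed before it starts; nothing in it can represent the absence of a target, which is exactly the contrast the author draws with "productive incoherence."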
The Human Difference
The author highlights the difference between human cognition and the assumptions of LLMs, noting that "We don’t refine toward a known destination; we stumble into one. We circle the idea and feel our way through half-formed intuitions, and only gradually does something recognizable as a ‘question’ take shape." This process of discovery is not just a psychological nuance, but is the fundamental way in which humans learn and understand the world. As the author puts it, "Understanding is not uncovered; it’s constructed." The author also notes that "We become who we are, in part, by wrestling ideas into form. It’s the journey of discovery where friction allows us to get our cognitive footing."
An Assumption That’s Not There
The assumption that the user already knows what they are trying to know is not a minor detail; it shapes how we interact with LLMs. As the author notes, "Every prompt is treated as a degraded encoding of something already there. Every iteration is a step along a path that is assumed to exist in advance." The LLM cannot represent the stance "I do not yet know what I am trying to know." The closest it can come is "The latent intent has not yet been decoded." This creates a subtle but consequential mismatch between human and artificial intelligence.
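A toy sketch (again illustrative, not from the article) can show why that stance is unrepresentable. In the decoder below, the hypothesis space contains only fully formed intents, so whatever the prompt looks like, some intent must be selected; the prior, the keyword-overlap likelihood, and the intent labels are all hypothetical.

```python
from typing import Dict

# Illustrative sketch only: a toy decoder that always assumes some latent
# intent generated the prompt. "No intent has formed yet" is not a hypothesis
# it can entertain.

PRIOR: Dict[str, float] = {          # hypothetical prior over latent intents
    "ask for a definition": 0.40,
    "ask for an example": 0.35,
    "ask for a comparison": 0.25,
}

def likelihood(prompt: str, intent: str) -> float:
    """Crude likelihood: word overlap between the prompt and the intent label."""
    prompt_words = set(prompt.lower().split())
    intent_words = set(intent.split())
    return 1.0 + len(prompt_words & intent_words)

def decode_intent(prompt: str) -> str:
    """Return the most probable latent intent; the model must pick one."""
    return max(PRIOR, key=lambda intent: PRIOR[intent] * likelihood(prompt, intent))

print(decode_intent("can you give me an example of this?"))
```

However vague or unformed the prompt, decode_intent returns an intent; the option of answering "there is no intent yet" simply is not in its space, which is the mismatch the author describes.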
The Borrowed Mind
The author describes this mismatch as the "danger of the borrowed mind," where the external coherence provided by the LLM replaces the internal construction of understanding. As the author notes, "The system does not merely provide answers. It quietly projects a theory of mind onto the user, one in which knowledge is always already there, merely awaiting retrieval and polish." This can lead to a false sense of coherence and completion, and can short-circuit the process of learning and discovery. The author suggests that "Real cognition is often anterior to knowledge. It is pre-epistemic. It lives in uncertainty, affect, tension, and provisional meaning."
The Risk of Anti-Intelligence
The author names a further risk of relying on LLMs: "anti-intelligence," in which structure arrives without the cost of its formation and genuine understanding and insight fail to follow. As the author puts it, "Anti-intelligence is not the absence of information. It is the presence of structure without the cost of formation." Under these conditions, "we may begin to feel that knowing is something that arrives, rather than something that is made. That questions are merely incomplete answers. That iteration is a tuning process, not a voyage into the unknown."
Human Cognition Has Never Worked That Way
The author emphasizes that human cognition has never worked the way LLMs assume. As the author notes, "We are not gradient descent machines. We are organisms whose understanding is shaped by the lived moment that includes the richness of time, the tumult of emotion, and the battle of contradiction." The struggle and uncertainty inherent in human cognition are not incidental; they are constitutive of who we are. The author suggests that with continued reliance on LLMs, "we will begin to act as if" we already know what we are trying to know, rather than embracing the uncertainty and discovery that are fundamental to human learning.
https://www.psychologytoday.com/us/blog/the-digital-self/202601/when-ai-assumes-we-already-know


