Key Takeaways:
- Artificial intelligence (AI) does not think like humans; it operates in a way that runs counter to human reasoning and learning.
- AI represents objects as mathematical vectors in a hyperdimensional space, rather than understanding them in a human sense.
- The use of AI can create an illusion of expertise, making users feel smarter and more productive, even as their underlying skills erode.
- Excessive and poorly designed AI use can lead to "quiet cognitive erosion," where humans lose confidence in their own abilities.
- The real risk of the AI era is not smarter machines, but humans learning to think backward and relying too heavily on AI for decision-making.
Introduction to Anti-Intelligence
The concept of artificial intelligence (AI) is often shrouded in mystery, with many people believing that it is a thinking machine edging closer to human intelligence. However, according to John Nosta, an innovation theorist and founder of NostaLab, this could not be further from the truth. Nosta argues that large language models do not think like humans at all and instead operate in a way that is counter to human reasoning, learning, and building understanding. As Nosta told Business Insider, "My conclusion is that artificial intelligence is antithetical to human cognition. I even call it anti-intelligence."
The Limitations of AI Understanding
At the heart of Nosta’s argument is the claim that AI does not understand objects in the same way that humans do. When people think about an object, such as an apple, they place it in space, time, memory, culture, and lived experience. In contrast, a large language model represents the word "apple" as a mathematical object inside an enormous, hyperdimensional space and searches for patterns that statistically align. As Nosta explained, "An apple doesn’t exist as an apple. It exists as a vector in a hyperdimensional space." This distinction matters, as it means that AI outputs are optimized for coherence rather than comprehension. The system is not reasoning its way to an answer, but rather producing the response that best fits a pattern of language.
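The vector idea Nosta describes can be made concrete with a toy sketch. The vectors below are invented for illustration (real LLM embeddings have hundreds or thousands of dimensions and learned values); the point is that the model relates words only through geometric proximity, not through meaning, memory, or lived experience:

```python
import math

# Toy word vectors, invented for illustration only. In a real language
# model, "apple" would be a learned point in a space with hundreds or
# thousands of dimensions.
embeddings = {
    "apple":  [0.9, 0.1, 0.3, 0.0],
    "orange": [0.8, 0.2, 0.4, 0.1],
    "car":    [0.1, 0.9, 0.0, 0.7],
}

def cosine_similarity(a, b):
    """How closely two vectors point in the same direction (1.0 = identical)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# The model "knows" an apple is like an orange only as nearness in this
# space -- a statistical alignment, not an understanding of fruit.
print(cosine_similarity(embeddings["apple"], embeddings["orange"]))  # high (~0.98)
print(cosine_similarity(embeddings["apple"], embeddings["car"]))     # low  (~0.17)
```

This is the sense in which, as Nosta puts it, an apple "exists as a vector in a hyperdimensional space": similarity is a number, and the system's outputs are whatever best fits the pattern.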
The Inversion of Human Thinking
Nosta believes that AI is quietly reshaping how people think, especially at work. Human cognition typically follows a familiar path: confusion, exploration, tentative structure, and finally confidence. However, AI flips this sequence on its head. As Nosta noted, "With AI, we start with structure. We start with coherence, fluency, a sense of completeness, and afterwards we find confidence." This inversion creates a powerful illusion, as AI-generated answers sound polished and authoritative, leading people to accept them immediately without questioning or fully understanding them. As Nosta warned, "Coming to the answer first is an inversion of human cognitive process. That’s antithetical to human thought."
The Danger of Smooth Answers
The danger of AI is not that it will outperform humans in raw computation, but rather that people will outsource the most valuable parts of thinking to machines. As Nosta explained, "It’s the stumbles, it’s the roughness, it’s the friction that allows us to get to observations and hypotheses that really develop who we are." When companies push employees to rely heavily on AI for writing, analysis, and decision-making, they risk mistaking speed and fluency for understanding. Used as a partner, AI can enhance human thinking, but used as a shortcut, it can quietly weaken it. As Nosta said, "The magic isn’t necessarily AI. It’s the iterative dynamic between humans and machines."
A Growing Concern
Concerns about the impact of AI on human thinking are not limited to Nosta. Researchers at Oxford University Press have found that AI is making students faster and more fluent, but at the cost of depth and critical thinking. A report from the Work AI Institute also noted that generative AI can create an illusion of expertise, making users feel smarter and more productive, even as their underlying skills erode. Mehdi Paryavi, CEO of the International Data Center Authority, warned that excessive and poorly designed AI use is driving a "quiet cognitive erosion." As he told Business Insider, "If you come to believe that AI writes better than you and thinks smarter than you, you will lose your own confidence in yourself." The real risk of the AI era is not smarter machines, but humans learning to think backward and relying too heavily on AI for decision-making.
https://www.businessinsider.com/ai-human-intelligence-impact-at-work-2026-1

