AI vs. a 4‑Year‑Old: Why Human Intelligence Still Wins


Key Takeaways

  • Alison Gopnik argues that current large‑scale AI models are better understood as “Stone Soup” collective enterprises rather than autonomous “golem” agents.
  • Human intelligence is not a single, general faculty but a suite of distinct cognitive capacities—exploration, exploitation, care, and transmission—that are organized across the extended human life history.
  • Childhood functions as a protected period of exploration, driven by intrinsic motivations, chief among them empowerment, which fuels causal learning and, later, effective action.
  • Elders, especially post‑menopausal grandparents, contribute crucially to caregiving and cultural transmission, enabling the intergenerational accumulation of knowledge.
  • For artificial intelligence to develop human‑like intelligence, it must be embedded in care relationships that provide goals, feedback, and opportunities for exploration, mirroring the way children learn through interaction with attentive caregivers.

Introduction and Context
Alison Gopnik opened the Berkeley Distinguished Faculty Lecture by thanking the audience for braving a rainy California day and framing her talk around three interlocking questions: how children learn so much, what that reveals about artificial intelligence, and how thinking about AI can inform our understanding of human development. She noted that the lecture series, once the Moses Memorial Lecture, now honors faculty from the Social Sciences division, and she highlighted her own credentials—professor of psychology, affiliate professor of philosophy, leader in cognitive science, and author of influential works such as The Scientist in the Crib and The Gardener and the Carpenter. “She is Berkeley’s finest, and we are very proud to have her give this lecture today,” said Marion Fourcade, the lecture’s host, before yielding the floor to Gopnik.


The Golem Metaphor: A Misleading Story of AI
Gopnik began by contrasting two narratives about AI. The first, the “golem” story, depicts an artificial agent magically brought to life that inevitably ends badly. “Most of you may know about this as the Rabbi of Prague’s story… the TLDR… is that it never ends well. It’s always a bad idea,” she explained, noting that this metaphor is pervasive in popular conceptions of AI, even among those building the systems. However, she argued that the golem picture is a profoundly wrong way to think about current large models, which do not possess independent agency or intentions but rather emerge from collective data and learning processes.


The Stone Soup Story: AI as Collective Intelligence
The second narrative, which Gopnik finds far more accurate, is the Stone Soup folktale. In the classic tale, travelers convince villagers to contribute ingredients to a communal pot, yielding a feast greater than any could produce alone. She retold the tale in AI terms: “A bunch of tech wizards went to the village of computer users and they said, ‘We’ve invented artificial general intelligence just from next‑token prediction and transformers and deep learning.’ … And the villagers all said, ‘That sounds great… but it would be even better if we had more data.’” The story proceeds with contributions of data, reinforcement learning from human feedback, prompt engineering, and finally the proclamation that “We’ve got artificial general intelligence and it was just made with a few algorithms.” Gopnik highlighted the moral: by pooling the “food”—data, labor, and feedback—of many individuals, the resulting soup (the AI system) provides real advantages, even if it is not an autonomous, self‑sufficient agent.


Exploration vs. Exploitation: Two Kinds of Intelligence
Moving beyond metaphor, Gopnik dissected intelligence into constituent capacities. She rejected the folk notion of a single, mysterious intelligence, proposing instead that there are multiple, often competing, cognitive systems. “Exploitation… enables you to go out into the world, to have a goal, to plan, to achieve that goal,” she said, while “Exploration… is about trying to figure out what the environment around you is like.” These two modes are in intrinsic tension: being effective at exploiting known rewards can hinder the discovery of novel information, and vice versa. She referenced the explore‑exploit trade‑off familiar in computer science (e.g., simulated annealing) as a formal expression of this tension.
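The tension Gopnik describes is the classic explore‑exploit trade‑off from machine learning. A minimal illustration (not from the lecture; the parameter values and reward setup are illustrative) is an epsilon‑greedy agent on a multi‑armed bandit: most of the time it exploits the arm it currently believes is best, but a fraction of the time it explores at random, which is what lets it discover that a different arm is actually better.

```python
import random

def run_bandit(true_means, epsilon=0.1, steps=1000, seed=0):
    """Epsilon-greedy agent on a multi-armed bandit.

    With probability epsilon the agent explores (pulls a random arm);
    otherwise it exploits (pulls the arm with the highest estimated mean).
    Rewards are the arm's true mean plus Gaussian noise.
    """
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms          # pulls per arm
    estimates = [0.0] * n_arms     # running mean reward per arm
    total_reward = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)                            # explore
        else:
            arm = max(range(n_arms), key=lambda a: estimates[a])   # exploit
        reward = true_means[arm] + rng.gauss(0, 1)
        counts[arm] += 1
        # Incremental update of the running mean for this arm.
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total_reward += reward
    return counts, estimates, total_reward

counts, estimates, total = run_bandit([0.2, 0.5, 0.9])
```

Setting `epsilon` to zero makes the agent a pure exploiter, which can lock onto a mediocre arm forever; setting it to one makes it a pure explorer, which never cashes in on what it has learned. That is the formal version of Gopnik's point that the two modes undermine each other.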


Life History and the Distribution of Cognitive Talents
Gopnik linked these intelligences to the human life history, noting that our species exhibits an unusually long period of childhood and a distinctive post‑menopausal elder stage. “Humans have shorter interbirth intervals than other primates… we have a lot of these immature helpless creatures who need [to be] taken care of all the time… And if you look lurking behind those three beautiful grandchildren, there’s a postmenopausal grandmother.” She argued that evolution has arranged these periods to allocate cognitive strengths: children are primed for exploration, adults for exploitation, and elders for caregiving and cultural transmission. This life‑history framework resolves the apparent paradox of why we invest heavily in seemingly non‑productive youth and why elders remain alive long after reproductive capacity ends.


Empowerment as the Engine of Childhood Exploration
To explain how exploration operates mechanistically, Gopnik introduced the concept of empowerment from reinforcement learning and AI. “Empowerment, what you’re trying to do is maximize the mutual information between your actions and the outcomes of those actions… you want things that you can control.” She illustrated this with studies of infants kicking to move a mobile, showing that babies seek control over contingencies rather than mere rewards. The empowerment drive leads to causal learning, as seen when a toddler experiments with a xylophone and discovers the relationship between striking bars and producing tones. Importantly, empowerment is balanced by risk; children’s impulsive, novelty‑seeking behavior can lead to injury, which is mitigated by the presence of attentive caregivers who signal safety.
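Empowerment has a precise definition: the mutual information between an agent's actions and their observed outcomes. A small sketch (illustrative only; the "mobile" scenario loosely mirrors the infant studies Gopnik cites, and the distributions are made up) shows why an empowerment‑driven learner prefers a controllable toy to one that behaves randomly, even when neither delivers any extrinsic reward.

```python
import math
from collections import defaultdict

def mutual_information(joint):
    """I(A;O) in bits, from a joint distribution {(action, outcome): prob}."""
    p_a, p_o = defaultdict(float), defaultdict(float)
    for (a, o), p in joint.items():
        p_a[a] += p   # marginal over actions
        p_o[o] += p   # marginal over outcomes
    mi = 0.0
    for (a, o), p in joint.items():
        if p > 0:
            mi += p * math.log2(p / (p_a[a] * p_o[o]))
    return mi

# Controllable mobile: kicking reliably moves it, resting leaves it still.
controllable = {("kick", "moves"): 0.5, ("rest", "still"): 0.5}

# Random mobile: it moves or stays still regardless of what the baby does.
uncontrollable = {("kick", "moves"): 0.25, ("kick", "still"): 0.25,
                  ("rest", "moves"): 0.25, ("rest", "still"): 0.25}

print(mutual_information(controllable))    # 1.0 bit: outcome fully tracks action
print(mutual_information(uncontrollable))  # 0.0 bits: action tells you nothing
```

An empowerment‑maximizing agent gravitates toward the first toy, because its actions carry a full bit of information about what happens next; in Gopnik's terms, the baby keeps kicking the mobile not for a reward but for the contingency itself.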


Caregiving: The Hidden Intelligence
Gopnik then turned to caregiving, a form of intelligence largely invisible in traditional social sciences. She described caregiving as a relationship where the agent with more resources “donates those resources to trying to pursue the goals of the agent who has less… just because that agent has fewer resources.” Caregiving’s goal is not merely to satisfy objective utilities but to enhance the care‑recipient’s empowerment—giving them the means to determine and pursue their own goals. She invoked a poignant Rijksmuseum painting of a sick child during a plague, underscoring the moral profundity of care. Moreover, she noted that caregiving extends beyond infant‑parent dyads to include mentorship of graduate students, support for elders, and even concern for non‑human animals and the planet.


Cultural Transmission and the Role of Elders
Building on caregiving, Gopnik argued that elders serve as crucial nodes of cultural transmission. Post‑menopausal grandparents, like the killer‑whale matriarchs that lead pods to historic feeding grounds, preserve and pass on knowledge—stories, songs, recipes, and practical know‑how—that enables each generation to build on the last. She cited her own observation of a grandmother reading a 100‑year‑old copy of Winnie the Pooh to her grandchildren as a concrete example of this transmission. In this view, cultural tools such as language, writing, printing, and now large AI models are technologies that amplify the elder’s role in distributing accumulated wisdom across time.


AI, Alignment, and the Need for Care
Finally, Gopnik connected these ideas to the AI alignment problem. She referenced Ted Chiang’s novella The Lifecycle of Software Objects, in which AI “babies” are raised, taught, and eventually released by human caregivers. “If we do ever end up with an AI that’s an autonomous agent in the way that humans are, those AIs are going to need mothers. They’re going to need humans to be in a care relationship with them.” She suggested that alignment emerges not from top‑down rule‑setting but from reciprocal care relationships that shape goals, provide feedback, and foster exploratory learning—paralleling the way human children develop through guided interaction with attentive adults.


Conclusion
Alison Gopnik’s lecture wove together folklore, developmental psychology, evolutionary biology, and AI research to reframe intelligence as a suite of complementary abilities distributed across the lifespan. Childhood’s explorative empowerment, adulthood’s exploitative focus, and elders’ caregiving and transmission together create a system capable of continual learning and cultural accumulation. Current large‑scale AI models, far from being autonomous golems, resemble Stone Soup—a collective product of human data, feedback, and interaction. For AI to achieve human‑like robustness, it must be situated within care relationships that provide the same scaffolding of exploration, goal‑setting, and cultural transmission that has nurtured human intelligence for millennia. As Gopnik concluded, the future of intelligent machines may depend less on solitary algorithms and more on the collective, nurturing practices we have long reserved for our children and elders.

https://news.berkeley.edu/2026/04/17/berkeley-talks-alison-gopnik/
