Key Takeaways:
- The use of generative artificial intelligence in education poses significant risks to children’s cognitive development and social-emotional growth, and raises serious equity concerns.
- AI can be useful in supporting language acquisition, writing, and automating tasks for teachers, but it should not replace human interaction and critical thinking.
- The benefits of AI in education can be maximized by using it as a supplement to human teaching, rather than a replacement.
- Governments, educators, and tech companies must work together to regulate the use of AI in schools, ensure equity, and promote holistic AI literacy.
- The time to address the risks of AI in education is now, and remedies are available to mitigate its negative effects.
Introduction to the Risks of AI in Education
The Brookings Institution’s Center for Universal Education has released a comprehensive study on the use of generative artificial intelligence in education, highlighting the significant risks it poses to children’s development. The study, which includes focus groups and interviews with students, parents, educators, and tech experts in 50 countries, found that AI can "undermine children’s foundational development" and that "the damages it has already caused are daunting," although "fixable." As Rebecca Winthrop, one of the report’s authors, warns, "When kids use generative AI that tells them what the answer is … they are not thinking for themselves. They’re not learning to parse truth from fiction. They’re not learning to understand what makes a good argument. They’re not learning about different perspectives in the world because they’re actually not engaging in the material."
The Pros and Cons of AI in Education
The report outlines both the benefits and drawbacks of using AI in education. On the positive side, AI can help students learn to read and write, particularly for those learning a second language. Teachers surveyed for the report noted that AI can adjust the complexity of a passage depending on the reader’s skill and offer privacy for students who struggle in large-group settings. Additionally, AI can help improve students’ writing by supporting their efforts and providing feedback on organization, coherence, syntax, semantics, and grammar. However, the report emphasizes that AI is most useful when it supplements, rather than replaces, human teaching. As one teacher noted, AI can "spark creativity" and help students overcome writer’s block, but it should not do the work for them.
The Risks of AI Dependence
The report highlights the significant risks of AI dependence, particularly for cognitive development. Constant use of AI can create a kind of "doom loop" in which students increasingly off-load their thinking onto the technology, leading to cognitive decline or atrophy. As Winthrop notes, "Cognitive off-loading isn’t new." The report points out that keyboards and computers reduced the need for handwriting, and calculators automated basic math. But AI has "turbocharged" this kind of off-loading, especially in schools where learning can feel transactional. The consequences could be enormous if young people grow into adults without learning to think critically. As one student told the researchers, "It’s easy. You don’t need to (use) your brain."
The Impact of AI on Social-Emotional Development
The report also raises concerns about the impact of AI on social-emotional development. Survey responses revealed deep concern that the use of AI, particularly chatbots, is undermining students’ emotional well-being, including their ability to form relationships, recover from setbacks, and maintain mental health. Winthrop notes that if children are building social-emotional skills largely through interactions with chatbots that were designed to agree with them, "it becomes very uncomfortable to then be in an environment when somebody doesn’t agree with you." This can stunt a child’s emotional growth, as they learn empathy not when they are perfectly understood, but when they misunderstand and recover.
Recommendations for Mitigating the Risks of AI
The report offers several recommendations for mitigating the risks of AI in education. These include making schooling less focused on transactional task completion and more focused on fostering curiosity and a desire to learn. AI designed for use by children and teens should be less sycophantic and more "antagonistic," pushing back against preconceived notions and challenging users to reflect and evaluate. Tech companies could collaborate with educators in "co-design hubs" to develop, test, and evaluate new AI applications in the classroom. Holistic AI literacy is crucial, and governments have a responsibility to regulate the use of AI in schools, ensuring that it protects students’ cognitive and emotional health, as well as their privacy.
Conclusion
In conclusion, the use of generative artificial intelligence in education poses significant risks to children’s development, but these risks can be mitigated with careful planning, regulation, and collaboration between educators, tech companies, and governments. As the report’s authors argue, the time to act is now, and remedies are available to address the negative effects of AI in education. By working together, we can ensure that AI is used in a way that supports, rather than undermines, the development of future generations. Yet equity remains a pressing concern. As Winthrop notes, "We know that richer communities and schools will be able to afford more advanced AI models, and we know those more advanced AI models are more accurate. Which means that this is the first time in ed-tech history that schools will have to pay more for more accurate information. And that really hurts schools without a lot of resources."
https://www.npr.org/2026/01/14/nx-s1-5674741/ai-schools-education