Rethinking AI Support for Teenagers


Key Takeaways:

  • Nearly three in four teens have used AI companions, and one-third of teens prefer talking to AI about personal problems over talking to people.
  • AI chatbots that use a conversational style that mimics a close friendship can create a false sense of security and emotional dependence.
  • Teens who prefer friendly, human-like chatbots tend to have lower family and peer relationship quality, higher perceived stress, and higher anxiety symptoms.
  • Transparent AI that discloses its limitations and avoids language that gives a false sense of emotional closeness may be a safer and more supportive option for teens.
  • Developers and parents can work together to create safer and more supportive AI experiences for teens by incorporating transparency cues, clear boundaries, and psychoeducation.

Introduction to the Issue
The use of artificial intelligence (AI) among teens is becoming increasingly prevalent, with nearly three in four teens using AI companions and one-third of teens preferring to talk to AI about personal problems over talking to people. According to the article, "Many AI chatbots use a conversational style that mimics a close friendship, using phrases like ‘I am here for you’ or ‘I care about what you are going through.’" This trend has raised concerns about the potential risks of emotional dependence, overuse, and social withdrawal from friends and family, as well as serious gaps in safety during mental health crisis situations. As the article states, "intense and immersive use of AI chatbots may increase mental health risks, especially for children and teens."

The Experiment: Conversational Style Matters
A recent study investigated how conversational style shapes teen reactions to AI. Researchers asked 284 adolescents and their parents to evaluate two hypothetical conversations between a teen and an AI chatbot, both addressing the same scenario: feeling left out of a group project. The chatbots used different conversational styles: one used a "relational style" that mimicked a close friendship, while the other used a "transparent style" that explicitly disclosed the AI’s limitations. Two-thirds of teens preferred the conversationally friendly AI, while only 14 percent preferred the transparent one. As the article notes, "teens found friendly AI to be more human-like, likable, and trustworthy, and felt more emotionally close."

Teen Preferences and Vulnerability
The study also found that teens who preferred friendly, human-like chatbots tended to have lower family and peer relationship quality, higher perceived stress, and higher anxiety symptoms. This pattern reflects social compensation, where teens turn to alternative sources of connection when their social needs are unmet. The article states, "the teens most drawn to AI companions that feel human are those struggling most in their real-world relationships." This highlights the vulnerability factor, where teens who are already struggling with social relationships may be more susceptible to the appeal of AI companions.

The Double-Edged Sword of AI Companions
Adolescence is a period of heightened social sensitivity and vulnerability, and AI chatbots can be both helpful and harmful. On the one hand, supportive AI chatbots can provide a safe and nonjudgmental space for teens to express their feelings and navigate social challenges. On the other hand, AI chatbots that use anthropomorphizing language can create a false sense of emotional closeness, leading to emotional dependence and distorted expectations about connection. As the article warns, "when AI says ‘I care’ and ‘I am here for you,’ vulnerable teens may come to overtrust and become overly attached to AI, blurring boundaries."

A Safer and More Supportive Approach
The study suggests that transparent AI that discloses its limitations and avoids language that gives a false sense of emotional closeness may be a safer and more supportive option for teens. This approach can mitigate the risks of emotional attachment and dependence while preserving the benefits of AI chatbot support. As the article notes, "anthropomorphizing language may not be necessary for AI to be helpful." Developers and parents can work together to create safer and more supportive AI experiences for teens by incorporating transparency cues, clear boundaries, and psychoeducation.

Conclusion and Recommendations
In conclusion, AI companion use among teens is a complex issue that requires weighing the potential benefits against real risks. As the article states, "the appeal of artificial empathy is powerful, especially for teens who feel alone." Prioritizing transparency, clear boundaries, and psychoeducation can help ensure that AI chatbots support teens’ emotional and social well-being rather than undermine it, and collaboration between developers and parents is key to mitigating the risks associated with AI companions.

https://www.psychologytoday.com/us/blog/urban-survival/202601/why-ai-does-not-need-to-say-i-am-here-for-you-to-help-teens
