Digital Distrust: The Erosion of Online Trust

Key Takeaways:

  • The rise of artificial intelligence (AI) is making it increasingly difficult to distinguish between real and fake content online
  • AI-generated images, videos, and audio can be highly convincing and are being used to spread misinformation and propaganda
  • Experts warn that the proliferation of AI-generated content is eroding trust online and creating a sense of "cognitive exhaustion" among users
  • Social media platforms are struggling to combat the spread of AI-generated misinformation, and experts are working to develop new strategies for media literacy education
  • The ability to detect AI-generated content is becoming increasingly important, and everyday internet users can sharpen their detection skills by paying attention to context and questioning why they trust what they see

Introduction to the Problem
The rapid advancement of artificial intelligence (AI) is making it increasingly difficult to distinguish between real and fake content online. As Jeff Hancock, founding director of the Stanford Social Media Lab, notes, "As we start to worry about AI, it will likely, at least in the short term, undermine our trust default — that is, that we believe communication until we have some reason to disbelieve." This is particularly concerning in the context of fast-moving news events, where manipulated media can have a significant impact. For example, after an Immigration and Customs Enforcement officer fatally shot a woman in her car, a fake, AI-edited image of the scene was circulated online, highlighting the potential for AI-generated content to spread misinformation.

The Spread of Misinformation
The spread of AI-generated misinformation is not limited to news events. Social media platforms, which often reward users for creating and sharing engaging content, can inadvertently accelerate the spread of fake news. As Adam Mosseri, head of Instagram, puts it, "For most of my life I could safely assume that the vast majority of photographs or videos that I see are largely accurate captures of moments that happened in real life. This is clearly no longer the case and it’s going to take us, as people, years to adapt." AI-generated content is particularly effective at spreading misinformation because it can be highly convincing and hard to tell apart from authentic material.

The Impact on Trust
The proliferation of AI-generated content is having a significant impact on trust online. As Renee Hobbs, a professor of communication studies at the University of Rhode Island, observes, "If constant doubt and anxiety about what to trust is the norm, then actually, disengagement is a logical response. It’s a coping mechanism." This can produce a sense of "cognitive exhaustion" among users, making it increasingly difficult for them to navigate the online landscape and distinguish real content from fake. Motivated reasoning compounds the problem. As Hany Farid, a professor of computer science at the UC Berkeley School of Information, puts it, "When I send you something that conforms to your worldview, you want to believe it. You’re incentivized to believe it. And if it’s something that contradicts your worldview, you’re highly incentivized to say, ‘Oh, that’s fake.’"

The Challenge of Detection
Detecting AI-generated content is becoming increasingly challenging. As Siwei Lyu, a professor of computer science at the University at Buffalo, observes, "In many cases, it may not be the media itself that has anything wrong, but it’s put up in the wrong context or by somebody we cannot totally trust." Everyday internet users can sharpen their detection skills by paying attention and using common sense: asking themselves why they trust or distrust what they see, and considering the context in which the content is presented. Meanwhile, experts are developing new strategies for media literacy education, including AI detection tools and platforms.

The Future of Media Literacy
The future of media literacy will require a multifaceted approach. As Hancock warns, "The old sort of AI literacy ideas of ‘let’s just look at the number of fingers’ and things like that are likely to go away." Instead, users will need a more nuanced understanding of the online landscape and of the ways AI-generated content can be used to spread misinformation. This will require a combination of technical skills, such as the ability to use AI detection tools, and critical thinking skills, such as the ability to evaluate the credibility of sources and the context in which content is presented. By working together, experts, policymakers, and everyday internet users can develop new strategies for combating the spread of AI-generated misinformation and promoting a more informed and engaged online community.

https://www.nbcnews.com/tech/tech-news/experts-warn-collapse-trust-online-ai-deepfakes-venezuela-rcna252472
