Key Takeaways
- Hyper-realistic AI videos of dead celebrities, created with apps such as OpenAI’s Sora, have raised concerns over the control of dead people’s likenesses
- OpenAI’s app has unleashed a flood of videos of historical figures and celebrities, including Winston Churchill, Michael Jackson, and Elvis Presley
- The spread of synthetic content, also known as AI slop, could ultimately drive users away from social media and lead to a decrease in trust in real news
- Experts warn that the unchecked spread of AI-generated content could have real consequences, including traumatizing families of deceased individuals and spreading misinformation
Introduction to AI-Generated Content
The rapid spread of hyper-realistic AI videos of dead celebrities has sparked a heated debate over who controls a dead person’s likeness. OpenAI’s easy-to-use app, Sora, lets users create realistic videos of historical figures and celebrities, including Queen Elizabeth II, Winston Churchill, and Michael Jackson. These videos have been shared widely online, some depicting the deceased in humorous and unexpected situations. Not all have been well received, however, with some prompting outrage and condemnation from families of the deceased.
Concerns Over Synthetic Content
The creation and dissemination of synthetic content, including AI-generated videos, have raised concerns about the potential consequences for families of deceased individuals. Professor Constance de Saint Laurent of Ireland’s Maynooth University warns that interacting with artificial objects that are so human-like can trigger unease, and that receiving videos of a dead family member could be traumatic. The children of the late actor Robin Williams, comedian George Carlin, and activist Malcolm X have condemned the use of Sora to create synthetic videos of their fathers. Ms Zelda Williams, the daughter of Robin Williams, has pleaded with users to stop sending her AI videos of her father, calling the content "maddening".
OpenAI’s Response to Concerns
OpenAI has acknowledged the concerns over synthetic content and has taken steps to address them. The company blocked users from creating videos of Martin Luther King Jr after the civil rights icon’s estate complained about disrespectful depictions. OpenAI has also stated that public figures and their families should have ultimate control over their likeness, and that authorised representatives or estate owners can request that a likeness not be used in Sora. However, experts warn that these safeguards may not be enough to prevent misuse. Professor Hany Farid, co-founder of GetReal Security and a professor at the University of California, Berkeley, notes that while OpenAI may have stopped the creation of videos of Martin Luther King Jr, it is not stopping users from co-opting the identities of many other celebrities.
The Risks of Synthetic Content
The risks of synthetic content are not limited to public figures and celebrities. As advanced AI tools proliferate, the vulnerability is no longer confined to well-known individuals: dead non-celebrities may also have their names, likenesses, and words repurposed for synthetic manipulation. Researchers warn that the unchecked spread of synthetic content could ultimately drive users away from social media and erode trust in real news. Professor de Saint Laurent notes that the main danger of misinformation is not that people believe it, but that people see real news and no longer trust it. The spread of synthetic content could worsen this problem, making it increasingly difficult for users to distinguish real information from fake.
The Future of AI-Generated Content
The future of AI-generated content is uncertain, but the technology is clearly evolving fast. As AI tools become more advanced and widely available, the potential for misuse grows, and experts warn the problem will only worsen as new AI models emerge without the safeguards built into OpenAI’s Sora. The alleged murder of Hollywood director Rob Reiner in December highlighted how vulnerable individuals are to synthetic content, with AI-generated clips using his likeness spreading online.
Conclusion
The spread of hyper-realistic AI videos of dead celebrities raises important questions about who controls a dead person’s likeness and about the broader consequences of synthetic content. While OpenAI’s Sora has made it easy for users to create realistic videos of historical figures and celebrities, the company must also take responsibility for the risks the technology creates. As the technology continues to evolve, it is essential that developers, policymakers, and users work together to ensure that synthetic content is used responsibly and that the rights of individuals, both living and dead, are protected.