Key Takeaways
- Hyper-realistic AI videos of dead celebrities have spread online, prompting debate over the control of deceased people’s likenesses.
- OpenAI’s app, Sora, has been used to create videos of historical figures and celebrities, including Queen Elizabeth II, Winston Churchill, and Michael Jackson.
- Families and estates, including those of Martin Luther King Jr., Robin Williams, and Malcolm X, have objected to synthetic depictions of their relatives; OpenAI now lets estates of recently deceased figures request that their likeness not be used.
- Experts have warned that the unchecked spread of synthetic content could ultimately drive users away from social media and erode trust in real news.
Introduction to AI-Generated Videos
In a parallel reality, Queen Elizabeth II is shown praising "delightfully orange" cheese puffs, while Saddam Hussein is depicted strutting into a wrestling ring with a gun. These hyper-realistic AI videos, created with apps such as OpenAI’s Sora, have rapidly spread online, prompting debate over who controls the likenesses of the dead. Since its launch in September, OpenAI’s app has unleashed a flood of videos of historical figures, including Winston Churchill, as well as celebrities such as Michael Jackson and Elvis Presley. The clips have been met with a mix of amusement and concern: some users create respectful tributes, while others use the technology to mock or demean their subjects.
The Ethics of AI-Generated Content
Sora’s capabilities have intensified concerns about misuse and about who should control the likenesses of deceased individuals. In October, OpenAI blocked users from creating videos of Martin Luther King Jr. after the civil rights icon’s estate complained about disrespectful depictions; some users had made clips of King making monkey noises during his celebrated "I Have a Dream" speech. The episode illustrates how easily users can portray public figures at will, making them say or do things they never did. Experts warn that such content can have real consequences, particularly for the families of the deceased. Constance de Saint Laurent, a professor at Ireland’s Maynooth University, noted that interacting with artificial objects that are so human-like can trigger unease, and that receiving videos of a deceased family member could be traumatizing.
The Impact on Families and Individuals
The children of late actor Robin Williams, comedian George Carlin, and activist Malcolm X have condemned the use of Sora to create synthetic videos of their fathers. Zelda Williams, Robin Williams’s daughter, recently pleaded on Instagram to "stop sending me AI videos of dad," calling the content "maddening." An OpenAI spokesman told AFP that while there were "strong free speech interests in depicting historical figures," public figures and their families should have ultimate control over their likeness. For "recently deceased" figures, he added, authorized representatives or estate owners can now request that their likeness not be used in Sora. Experts caution, however, that such measures may not be enough to prevent misuse, and that stronger protections for the likenesses of the deceased are still needed.
The Broader Implications of AI-Generated Content
As advanced AI tools proliferate, the vulnerability is no longer confined to public figures: deceased non-celebrities may also have their names, likenesses, and words repurposed for synthetic manipulation. Researchers warn that an unchecked flood of synthetic content could ultimately drive users away from social media and erode trust in genuine news. De Saint Laurent noted that the deeper problem with misinformation is not that people believe it, but that they stop trusting real news when they see it. AI-generated content could compound that problem by making it ever harder to tell what is real from what is fake. Hany Farid, co-founder of GetReal Security and a professor at the University of California, Berkeley, warned that even if OpenAI puts some safeguards in place, another AI model may not, and the problem will only get worse.
The Need for Greater Control and Regulation
The rise of Sora underscores the need for greater control and regulation of AI-generated content. OpenAI has taken steps to address misuse of its technology, but experts argue they fall short of protecting the likenesses of deceased individuals or stemming the spread of synthetic content. They have called for greater transparency and accountability in how AI tools are developed and deployed, along with more effective mechanisms for reporting and removing abusive content. Ultimately, the technology raises hard questions about balancing free speech against the need to protect individuals and their likenesses from exploitation, and those safeguards will have to keep pace as the tools continue to evolve.