Digital Resurrections of Deceased Icons Spark Fascination and Outrage

Key Takeaways

  • Hyper-realistic AI videos of dead celebrities have sparked debate over control of deceased people’s likenesses
  • OpenAI’s Sora app has unleashed a flood of videos of historical figures and celebrities, including Queen Elizabeth II, Winston Churchill, and Michael Jackson
  • Experts warn that the unchecked spread of synthetic content fuels misinformation, erodes trust in online information, and could ultimately drive users away from social media

Introduction to the Issue
In a parallel reality, it’s possible to see Queen Elizabeth II raving about cheese puffs, a gun-toting Saddam Hussein strutting into a wrestling ring, and Pope John Paul II attempting to skateboard. This is all thanks to hyper-realistic AI videos, created with apps such as OpenAI’s Sora, that have spread rapidly online and prompted a debate over who controls the likenesses of the dead. The app, launched in September, has been dubbed a deepfake machine and has unleashed a flood of videos of historical figures, including Winston Churchill, as well as celebrities like Michael Jackson and Elvis Presley.

The Concerns and Consequences
Not all videos created with Sora have been well-received, however. In October, OpenAI blocked users from creating videos of Martin Luther King Jr. after the estate of the civil rights icon complained about disrespectful depictions. Some users had created videos depicting King making monkey noises during his celebrated "I Have a Dream" speech, illustrating how users can portray public figures at will, making them say or do things they never did. This has raised concerns about the potential for misinformation and the impact on public trust in social media. According to Constance de Saint Laurent, a professor at Ireland’s Maynooth University, "We’re getting into the ‘uncanny valley,’" where interactions with artificial objects are so human-like that they trigger unease. "If suddenly you started receiving videos of a deceased family member, this is traumatizing," she said.

The Reaction from Families and Experts
In recent weeks, the children of the late actor Robin Williams, the comedian George Carlin, and the activist Malcolm X have condemned the use of Sora to create synthetic videos of their fathers. Zelda Williams, Robin Williams’s daughter, recently pleaded on Instagram to "stop sending me AI videos of dad," calling the content "maddening." An OpenAI spokesman said that while there were "strong free speech interests in depicting historical figures," public figures and their families should have ultimate control over their likenesses. For "recently deceased" figures, authorized representatives or estate owners can now request that their likeness not be used in Sora. However, experts such as Hany Farid, co-founder of GetReal Security and a professor at the University of California, Berkeley, argue that OpenAI’s measures do not go far enough to protect the likenesses of public figures.

The Broader Implications
The issue with synthetic content is not limited to public figures: deceased non-celebrities may also have their names, likenesses, and words repurposed for synthetic manipulation. Researchers warn that the unchecked spread of such content could ultimately drive users away from social media. According to de Saint Laurent, "The issue with misinformation in general is not so much that people believe it. A lot of people don’t. The issue is that they see real news and they don’t trust it anymore. And this (Sora) is going to massively increase that."

The Future of Synthetic Content
As advanced AI tools proliferate, the potential for synthetic content to spread and manipulate public opinion will only grow. That reality was underscored after Hollywood director Rob Reiner was allegedly murdered, when AFP fact-checkers found AI-generated clips using his likeness circulating online. As the technology continues to evolve, safeguards will be needed to protect public figures and private individuals alike from manipulation and exploitation.
