Key Takeaways
- Deepfakes are AI‑generated synthetic media that can convincingly mimic a person’s voice, face, or movements, blurring the line between reality and fiction.
- Their rapid growth fuels disinformation, erodes public trust, and is increasingly weaponized for online abuse, especially against women and children.
- Surveys show 38% of women have experienced online violence and 85% have witnessed it, while deepfake pornography accounts for 98% of all deepfake videos circulating online.
- At least 1.2 million children in 11 countries have had their images manipulated into sexually explicit deepfakes, highlighting a grave threat to child safety.
- Legal protections lag behind technology: fewer than half of countries have laws addressing online abuse, and even fewer cover AI‑generated deepfake content.
- Effective response requires stronger international cooperation, clear legislation, platform accountability, law‑enforcement training, and proactive safety‑by‑design in AI development.
What are deepfakes?
Deepfakes are “synthetic media generated or modified by artificial intelligence,” as the source explains: images, audio, or video that make it appear a person said or did something they never did. UNESCO describes them as “digital forgeries” realistic enough to mimic a person’s voice or likeness, blurring the line between reality and fiction as the technology becomes more convincing, accessible, and scalable.
How are deepfakes created?
These forgeries are produced by AI systems trained on large datasets of images, videos, or audio recordings. The models learn to replicate human features such as faces, voices, and movements, allowing them to generate highly realistic synthetic media. As the technology matures, the barrier to creation drops, and even people with limited technical expertise can produce convincing deepfakes.
Why are deepfakes dangerous?
Beyond creating false realities, deepfakes “erode the very mechanisms by which societies construct shared understanding,” contributing to a growing “crisis of knowing.” We are approaching a synthetic reality threshold at which distinguishing authentic from fabricated content requires technological assistance. This erosion undermines trust in digital information ecosystems, and deepfakes can be exploited for fraud, with cybercriminals imitating voices or video to breach security barriers and steal credentials.
Deepfakes and the spread of misinformation
Because deepfakes are difficult for the human eye or existing cybersecurity systems to detect, they are ideal tools for spreading disinformation and manipulating public opinion. The source notes that deepfakes can “reinforce societal biases and stereotypes, trigger gender‑based violence, or escalate ethnic, religious, and political divisions,” especially in regions with low digital literacy, where audiences may lack the tools to question what they see.
How AI is changing online abuse
The rise of powerful AI image‑generation tools has simplified the production and dissemination of harmful content, including sexual abuse material. Deepfakes enable abuse to become more scalable, targeted, and difficult to detect, disproportionately affecting women and children. As the article states, “deepfake abuse can include sharing private images without consent or creating AI‑generated sexual content through morphing, splicing, or superimposing photographs and videos.”
Disproportionate impact on women
Survey data reveal a stark gender gap: “38 per cent of women have experienced online violence, and 85 per cent have witnessed it.” Many deepfake tools are developed by male teams and are not even designed to work on images of men, underscoring the gendered nature of the technology. Consequently, deepfake pornography dominates the landscape, accounting for “98 per cent of all deepfake videos online,” with prevalence rising 550 per cent between 2019 and 2023.
Threats to children
A 2025 study across 11 countries by UNICEF, INTERPOL, and ECPAT found that “at least 1.2 million children have disclosed having had their images manipulated into sexually explicit deepfakes.” In some nations this translates to roughly one child per classroom. Most of these children worry that AI could be used to create fake sexual images or videos, and the proliferation of such material fuels demand for abusive content while complicating law‑enforcement efforts to identify genuine victims.
The justice gap
Survivors of deepfake abuse often find justice out of reach. Underreporting remains a major barrier, and when cases do proceed, the justice system can retraumatize victims by focusing scrutiny on them rather than perpetrators. Prosecutions are rare, enforcement lags, and online platforms frequently fail to act. Notably, “fewer than half of countries have laws that address online abuse,” and even fewer have legislation specifically covering AI‑generated deepfake content, leaving an estimated 1.8 billion women and girls without legal protection from technology‑facilitated harassment.
What needs to change?
Addressing AI‑driven digital abuse demands stronger international cooperation and more effective legal frameworks. Governments should adopt legislation with clear definitions of AI‑generated abuse and ensure sanctions for child sexual exploitation offenses. Law‑enforcement agencies require training, resources, and dedicated capacity to collect and preserve digital evidence, supported by trauma‑informed professionals and free legal aid. Tech companies must be legally obligated to proactively monitor and remove abusive content, facing real financial consequences for inaction. Across the AI value chain—from dataset providers to model developers—safety‑by‑design must be embedded, and transparency about training data provenance is essential to anticipate issues of accuracy, bias, or licensing. Schools, parents, and caregivers also need education on AI‑enabled exploitation and the tools to support affected children.
What is the UN doing?
United Nations agencies are advocating for safer AI design, greater platform accountability, better detection tools, and stronger laws. UN Women has warned that AI is accelerating technology‑facilitated violence against women and continues to push for stronger protections. The International Telecommunication Union is developing global standards for watermarking, content authentication, and digital verification to help users distinguish authentic from AI‑generated media. These initiatives aim to close the detection and accountability gaps highlighted throughout the report.
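Content authentication of the kind the ITU is standardizing generally rests on cryptographic provenance: a publisher signs a digest of the original media so that any later copy can be checked for tampering. The Python sketch below is purely illustrative and is not the ITU standard or any platform’s actual implementation; the file name, key handling, and helper functions are placeholder assumptions used only to show the general idea.

```python
# Illustrative sketch of content authentication via cryptographic signing.
# NOT the ITU standard; it only shows the concept: a publisher signs a hash
# of the original media, and anyone can later verify that a received file
# has not been altered since it was signed.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519


def media_digest(path: str) -> bytes:
    """Return the SHA-256 digest of a media file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.digest()


# Publisher side: sign the digest of the authentic file.
# "original_video.mp4" is a hypothetical placeholder file name.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()
signature = private_key.sign(media_digest("original_video.mp4"))


def is_authentic(path: str, signature: bytes, public_key) -> bool:
    """Consumer side: check a received file against the published signature."""
    try:
        public_key.verify(signature, media_digest(path))
        return True
    except InvalidSignature:
        return False  # file was altered or the signature does not match
```

In a real deployment, the signature and public key would travel with the media as standardized provenance metadata (alongside watermarks and other verification signals) rather than as loose values in a script.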
Taken together, these figures and expert assessments convey the scale of the deepfake threat, its gendered and child‑focused harms, the current legal shortcomings, and the multifaceted response required from governments, tech firms, law enforcement, educators, and the international community.
https://unric.org/en/artificial-intelligence-what-are-deepfakes/

