Key Takeaways
- The Department of Communications and Digital Technologies withdrew its draft National Artificial Intelligence Policy after discovering it was generated using AI tools that produced fictitious academic citations.
- Two senior officials have been placed on precautionary suspension pending an internal investigation into the matter.
- Ministers emphasized that the incident underscores the necessity of vigilant human oversight when employing AI in public‑policy drafting.
- The policy’s original aim was to strengthen responsible AI regulation, spur innovation, create jobs, expand access to digital skills, and ensure ethical use of the technology.
- An internal review process has been launched, and the department pledges to communicate the investigation’s outcomes once completed.
Overview of the Incident
The Department of Communications and Digital Technologies (DCDT) took swift action this week by placing two officials on precautionary suspension and withdrawing the draft National Artificial Intelligence Policy from public consultation. The decision followed revelations that the policy document had been compiled with the assistance of artificial intelligence tools, which fabricated citations to non‑existent academic journal articles. The department characterized this as an “irresponsible use of AI tools” that compromised the integrity of the policy and triggered an immediate internal review to ascertain the facts and uphold accountability.
Background of the National Artificial Intelligence Policy
The draft policy was first gazetted on 10 April 2025, opening a window for public comment and stakeholder input. Intended as a cornerstone of South Africa’s strategy to harness AI responsibly, the document outlined plans for regulating AI deployment, fostering innovation, stimulating job creation, expanding access to digital skills, and ensuring ethical safeguards. Its release signaled the government’s commitment to positioning the nation as a competitive player in the global AI landscape while addressing potential risks associated with rapid technological adoption.
Discovery of AI‑Generated Content
During routine quality‑check procedures, officials noticed that several references cited in the draft pointed to articles that could not be located in any academic database. Further scrutiny revealed that the text exhibited patterns typical of large‑language‑model output, including overly generic phrasing and a lack of nuanced contextual analysis. The discovery prompted the department to acknowledge that the policy had been substantially drafted with AI assistance but without adequate human verification, leading to the inclusion of fabricated scholarly sources.
Responsibility and Integrity Concerns
In a formal statement, the DCDT asserted that “the irresponsible use of AI tools compromised the integrity of the policy document.” The department stressed that policy‑making demands rigorous fact‑checking, peer review, and expert validation—processes that were bypassed when reliance on AI supplanted human judgment. By highlighting the breach, the department sought to reaffirm its commitment to maintaining high standards of credibility and transparency in all governmental publications.
Precautionary Suspensions
As a precautionary measure, two senior officials directly involved in the drafting and approval process have been suspended with immediate effect. The suspensions are intended to preserve the integrity of the ongoing investigation, prevent potential interference, and signal that the department treats the breach with utmost seriousness. While the individuals’ names have not been disclosed, the move underscores the gravity with which the DCDT views lapses in procedural diligence.
Internal Review Process
Concurrent with the suspensions, the department has instituted an internal review tasked with determining how AI was employed, who authorized its use, and why standard oversight mechanisms failed. The review will examine workflow documents, communication trails, and the specific AI tools utilized. The DCDT emphasized that this initial step is part of a broader pledge to accountability, promising that findings will inform corrective actions and policy revisions to prevent recurrence.
Ministerial Reaction – Solly Malatsi
Communications and Digital Technologies Minister Solly Malatsi condemned the episode, stating, “This should not have happened. This unacceptable lapse proves why vigilant human oversight over the use of artificial intelligence is critical.” His remarks highlighted a growing consensus among policymakers that while AI can enhance efficiency, it must never replace the discernment, ethical reasoning, and subject‑matter expertise that human officials bring to policy formulation. Malatsi’s comment serves as a call to institutionalize stronger governance frameworks around AI‑assisted work.
Policy Intent – Khumbudzo Ntshavheni
Minister in the Presidency Khumbudzo Ntshavheni explained that the withdrawn policy had been designed to strengthen the government’s capacity to regulate and adopt AI responsibly. The envisioned outcomes included stimulating innovation, fostering job creation within the tech sector, improving public access to digital skills training, and ensuring that AI applications adhere to ethical standards that protect citizens’ rights and privacy. Her remarks underscore the policy’s aspirational goals, which remain valid despite the current setback.
Implications for AI Governance in South Africa
The incident exposes a vulnerability in the nascent AI governance landscape: the temptation to expedite document production through AI without adequate safeguards. It signals the need for clear guidelines on permissible AI use in official drafting, mandatory human review checkpoints, and training for officials on recognizing AI‑generated inaccuracies. Moreover, the episode may affect public trust in government‑issued AI policies, making transparent remediation essential to restore confidence in the state’s ability to steward emerging technologies responsibly.
Current Status and Next Steps
As of the latest update, the investigation remains ongoing, with the DCDT committing to communicate its outcomes “in due course.” The department has not disclosed a timeline but indicated that the internal review will be thorough and that any disciplinary actions will be based on its findings. In the interim, the draft policy remains withdrawn, and stakeholders awaiting further guidance on South Africa’s AI regulatory framework are advised to monitor official channels for announcements.
Conclusion
The suspension of officials and withdrawal of the draft National Artificial Intelligence Policy serve as a stark reminder that AI, while a powerful tool, must be deployed with stringent oversight in governmental contexts. The episode highlights the importance of marrying technological efficiency with human expertise to preserve the integrity of policy documents. Moving forward, South Africa’s experience can inform the development of robust AI‑use protocols that uphold accountability, ensure factual accuracy, and sustain public confidence in the state’s stewardship of artificial intelligence.