Italy Closes Antitrust Probe Into DeepSeek After AI Disclosure Commitments
Key Takeaways:
- The Italian Competition Authority (AGCM) has closed its investigation into the Chinese artificial intelligence system DeepSeek after the company agreed to make changes to better inform users about potential inaccuracies.
- DeepSeek’s AI system was found to have the potential to produce false or misleading information, which the AGCM referred to as "hallucinations".
- The company has accepted a set of binding commitments to improve how it communicates the risks of hallucinations to users.
- The AGCM has deemed these measures sufficient to address its concerns and has closed the consumer protection investigation.
Introduction to the Investigation
The Italian Competition Authority (AGCM) has recently concluded an investigation into the Chinese artificial intelligence system DeepSeek, as reported by Reuters. The investigation was launched in June of last year, with the AGCM expressing concerns that DeepSeek did not adequately warn users about the potential for its AI system to produce false or misleading information. According to Reuters, the AGCM has now closed the case after the companies behind DeepSeek accepted a set of binding commitments aimed at improving transparency and disclosure.
The Concerns Surrounding DeepSeek
DeepSeek is jointly owned and operated by Hangzhou DeepSeek Artificial Intelligence and Beijing DeepSeek Artificial Intelligence. The AGCM’s investigation centered on the company’s failure to properly inform users about the risks associated with its AI system, which can generate outputs that are inaccurate, misleading, or entirely fabricated in response to user prompts. These so-called "hallucinations" can have serious consequences, and the AGCM was concerned that DeepSeek was not doing enough to warn users about this potential. As the AGCM stated in its bulletin, "the commitments presented by DeepSeek make disclosures about the risk of hallucinations easier, more transparent, intelligible, and immediate."
The Outcome of the Investigation
The AGCM’s investigation has resulted in DeepSeek agreeing to change how its system informs users about the potential for hallucinations. According to Reuters, the commitments focus on improving how the risks of hallucinations are communicated to users. The AGCM deemed these measures sufficient, saying the steps proposed by the companies addressed its concerns, and closed the case. As Reuters reports, the decision was made public in the authority’s weekly bulletin released on Monday.
The Importance of Transparency in AI
The outcome of the AGCM’s investigation highlights the importance of transparency in AI systems. As AI technology becomes increasingly prevalent, companies must prioritize transparency and disclosure, particularly about the risks and limitations of their systems. The case serves as a reminder that providers must ensure users are aware of the potential for inaccurate or misleading output; the AGCM accepted DeepSeek’s commitments precisely because they make disclosures about the risk of hallucinations "easier, more transparent, intelligible, and immediate."
Conclusion
In conclusion, the Italian Competition Authority’s investigation into DeepSeek has ended with the company agreeing to better inform users about potential inaccuracies in its AI system. The case underscores the importance of transparency and disclosure in AI, and reminds companies that users must be made aware of the risks and limitations of these systems. It also shows that regulators are scrutinizing AI companies’ practices, and that companies must be prepared to make changes to comply with consumer protection rules.