Key Takeaways
- Singapore’s IMDA released a non‑binding Model AI Governance Framework for Agentic AI in January 2026, complemented by the Cyber Security Agency’s “Securing Agentic AI” discussion paper.
- Agentic AI systems combine planning, multi‑step execution, memory, external tools, and inter‑agent communication, operating on a spectrum from closely supervised to highly autonomous.
- Risks stem from traditional software flaws, LLM‑specific hallucinations, tool misuse, communication‑protocol vulnerabilities, cascading errors, and real‑world harms such as data breaches or unfair outcomes.
- The Framework structures mitigation into four dimensions: upfront risk assessment and bounding, meaningful human accountability, technical controls across the lifecycle, and end‑user responsibility.
- Practical measures include least‑privilege tool access, agent identity management, sandboxed development, systematic testing, gradual rollout, continuous monitoring, transparent user disclosures, and training/oversight programs.
- The Framework is a living document; IMDA invites feedback and case studies to refine guidance over time.
Introduction to Agentic AI Governance in Singapore
In January 2026, Singapore’s Infocomm Media Development Authority (IMDA) unveiled a non‑binding Model AI Governance Framework for Agentic AI, just months after the Cyber Security Agency published its discussion paper titled “Securing Agentic AI.” Together, these documents give organizations a structured, operational roadmap for tackling the security and governance challenges that arise as agentic AI moves from research labs into enterprise workflows. The Framework builds on IMDA’s 2020 Model AI Governance Framework, adapting its principles to the unique characteristics of agents that can plan, act, and interact autonomously.
Definition and Characteristics of Agentic AI
The Framework defines “Agentic AI” as systems capable of planning across multiple steps, taking actions, and interfacing with external systems or other agents to fulfil user‑defined goals. Core components include a central reasoning and planning engine—often a large language model (LLM)—a set of instructions, memory, tools for external interaction, and protocols for inter‑agent communication (e.g., the Agent2Agent Protocol). Autonomy varies by design: some agents operate under tight human supervision, while others execute complex workflows with minimal oversight.
Security and Governance Risks Overview
Agentic AI inherits familiar software vulnerabilities and LLM‑specific risks, but these manifest differently because of the agents’ planning, autonomy, and action‑taking abilities. Key risk categories include:
- Multi‑layer risks: hallucinated plans, inadvertent or malicious tool misuse (via prompt or code injection), biased tool calls, and weaknesses in emerging communication protocols that could be exploited to exfiltrate data.
- Cascading effects: a single agent’s mistake can propagate through multi‑agent pipelines, while parallel agents may unintentionally compete or coordinate, creating bottlenecks or conflicting actions.
- Real‑world harms: erroneous or unauthorized actions, biased outcomes, data breaches, and disruption of connected systems.
Assess and Bound Risks Upfront
The first dimension urges organizations to evaluate whether an agentic use case is appropriate by weighing impact and likelihood. Relevant factors encompass the domain and specific use case, access to sensitive data or external systems, the scope and reversibility of the agent’s actions, the level of autonomy granted, and overall task complexity. To “bound” risks, the Framework recommends defining agent limits at design time—such as granting only the minimum necessary tools and data, enforcing strict access controls, and implementing agent identity management. Each agent receives a traceable identity linked to a human accountable party, with permissions granted by that person, a measure also highlighted in the Discussion Paper.
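As a purely illustrative sketch of what agent identity management with least‑privilege permissions might look like, the following Python example registers each agent under a traceable identifier linked to an accountable human, who alone may grant tools. All class and function names here are assumptions for the example, not terms from the Framework:

```python
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    agent_id: str                  # traceable identifier for audit trails
    accountable_human: str         # the person responsible for this agent
    permissions: set = field(default_factory=set)  # least-privilege tool set

class AgentRegistry:
    """Tracks agent identities and enforces who may grant permissions."""

    def __init__(self):
        self._agents = {}

    def register(self, agent_id, accountable_human):
        self._agents[agent_id] = AgentIdentity(agent_id, accountable_human)

    def grant(self, agent_id, tool, granted_by):
        ident = self._agents[agent_id]
        # Only the accountable human may extend the agent's permissions.
        if granted_by != ident.accountable_human:
            raise PermissionError(f"{granted_by} is not accountable for {agent_id}")
        ident.permissions.add(tool)

    def is_allowed(self, agent_id, tool):
        return tool in self._agents[agent_id].permissions
```

In this sketch an agent starts with no permissions at all, so every tool it can reach is an explicit, auditable grant traceable to a named person.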
Make Humans Meaningfully Accountable
Both the Framework and the Discussion Paper stress that responsibility for an agent’s actions remains with the deploying organization and the humans overseeing it. Clear allocation of duties across internal and external actors is essential, and end users must receive sufficient information to hold the organization accountable and to fulfil their own obligations. Human approval should be obtained at significant checkpoints—especially for high‑stakes or irreversible actions—and approval prompts should be designed so that users can understand them and act on them effectively.
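One way such an approval checkpoint could be wired in is sketched below; the action names, risk labels, and function signature are illustrative assumptions, not prescribed by the Framework:

```python
# Hypothetical set of actions an organization has classified as high-stakes
# or irreversible; real deployments would derive this from a risk assessment.
HIGH_STAKES = {"transfer_funds", "delete_records", "send_external_email"}

def execute_action(action, params, approve):
    """Run `action`; for high-stakes actions, first present a human-readable
    summary to the `approve` callback and proceed only on explicit consent."""
    if action in HIGH_STAKES:
        summary = f"Agent requests '{action}' with {params}. Approve?"
        if not approve(summary):
            return {"status": "blocked", "action": action}
    return {"status": "executed", "action": action}
```

The key design point is that the approval callback receives a plain‑language summary rather than raw parameters, reflecting the Framework’s emphasis on prompts that humans can actually understand.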
Implement Technical Controls and Processes – Design & Development
During design and development, the Framework advises applying least‑privilege access to tools and data, constructing agents in secure, sandboxed environments with whitelisted servers, and standardizing communication protocols where appropriate. Organizations should have agents confirm their understanding of instructions, summarizing it and requesting clarification where needed, and should log each agent’s plan and reasoning for user review. These logs serve as a basis for verification and early detection of misalignment.
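Two of these design‑time controls—a server allowlist enforced before any outbound tool call, and a reviewable log of the agent’s plan—might be sketched as follows. Server names and function names are illustrative assumptions:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent.plan")

# Hypothetical whitelist of servers the sandboxed agent may contact.
ALLOWED_SERVERS = {"internal-crm.example.com", "docs.example.com"}

def record_plan(agent_id, plan_steps):
    """Persist the agent's stated plan so humans can review it for
    misalignment before and after execution."""
    for i, step in enumerate(plan_steps, 1):
        log.info("%s step %d: %s", agent_id, i, step)

def call_tool(server, endpoint):
    """Refuse any tool call whose target server is not whitelisted."""
    if server not in ALLOWED_SERVERS:
        raise PermissionError(f"{server} is not on the whitelist")
    return f"called {server}/{endpoint}"  # placeholder for the real call
```

Because the allowlist check sits in front of every tool call, a prompt‑injected instruction to contact an attacker‑controlled server fails closed rather than open.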
Implement Technical Controls and Processes – Pre‑Deployment Testing
Before deployment, rigorous testing is essential. The Framework recommends evaluating task execution accuracy, policy compliance, proper tool usage, and robustness against errors and edge cases. Testing must cover both individual agents and multi‑agent systems, using realistic environments and diverse datasets to uncover interaction‑related issues that might not appear in isolated unit tests.
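A minimal evaluation harness in this spirit might run the agent over a suite of cases, including edge cases, and report both task accuracy and policy compliance. This is a sketch under assumed interfaces, not a testing methodology the Framework mandates:

```python
def evaluate(agent, cases, policy_check):
    """Run `agent` over test `cases` and report the fraction of correct
    outputs and the fraction that pass the `policy_check` predicate."""
    correct = compliant = 0
    for case in cases:
        result = agent(case["input"])
        if result == case["expected"]:
            correct += 1
        if policy_check(result):
            compliant += 1
    n = len(cases)
    return {"accuracy": correct / n, "compliance": compliant / n}
```

Tracking compliance separately from accuracy matters: an agent can produce the right answer by a disallowed route, and only a dedicated policy check will catch that.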
Implement Technical Controls and Processes – Deployment & Post‑Deployment
After launch, continuous monitoring and testing remain critical. The Framework suggests a gradual rollout—limiting early deployment by user group, tool access, or system exposure—to contain potential fallout. Real‑time monitoring mechanisms should enable immediate intervention, incident review, debugging, and regular audits to confirm the system behaves as expected. Organizations must define what to log, set alert thresholds based on risk, and establish risk‑based response processes to address emergent or unexpected behaviors swiftly.
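A rolling‑window monitor with a risk‑based alert threshold could look like the sketch below; the window size and threshold are assumptions that a real deployment would tune to its own risk assessment:

```python
from collections import deque

class AgentMonitor:
    """Tracks recent agent outcomes and flags when the error rate
    crosses a configurable, risk-based threshold."""

    def __init__(self, window=100, error_threshold=0.05):
        self.events = deque(maxlen=window)   # rolling window of outcomes
        self.error_threshold = error_threshold

    def record(self, ok: bool):
        self.events.append(ok)

    def error_rate(self):
        if not self.events:
            return 0.0
        return self.events.count(False) / len(self.events)

    def should_alert(self):
        # Signal that human intervention or rollback review is needed.
        return self.error_rate() > self.error_threshold
```

A gradual rollout pairs naturally with this: early cohorts run under a tight threshold and small window, which are relaxed only as monitored behavior proves stable.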
Enable End‑User Responsibility
The Framework distinguishes between two user categories. For users who merely interact with agents (e.g., with a customer‑service chatbot), transparency is paramount: disclose the agent’s capabilities and data access, and provide a human point of contact. For users who embed agents into their workflows (e.g., coding assistants), transparency must be complemented by education and training on oversight best practices, common failure modes, and the potential impact on tradecraft, such as the erosion of foundational skills when agents take over entry‑level tasks. The Discussion Paper further advises that, in sensitive contexts, end users act as auditors or red‑team testers, scrutinizing approval prompts and probing the system for weaknesses.
Conclusion and Living Framework
The Framework’s Annex A lists additional resources from industry leaders, offering further reading for practitioners seeking deeper insight. Recognizing that agentic AI technology and its associated risks will evolve, IMDA positions the Framework as a living document, inviting feedback and case studies on best‑practice implementations. By adhering to its four‑dimensional guidance—upfront risk bounding, accountable human oversight, technical safeguards, and empowered end users—organizations can harness the power of agentic AI while mitigating its security and governance challenges.

