AI Governance Framework
A World-First Blueprint for Safe & Accountable Autonomous AI
Version 1.0
Published by IMDA on 22 Jan 2026
1. Assess & Bound Risks Upfront
- Define agent objectives and autonomy limits
- Restrict access to tools, systems & data
- Use risk assessments tailored for agentic AI
2. Make Humans Meaningfully Accountable
- Assign clear roles to humans & AI teams
- Require approval for sensitive or high-stakes actions
- Mitigate automation bias through oversight
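The approval requirement above can be sketched as a simple gate in code. This is a minimal illustration, not part of the framework itself; the `AgentAction` record, the `high_stakes` flag, and the approval callback are all assumed names for this example.

```python
from dataclasses import dataclass

# Hypothetical action record; fields are illustrative, not from the framework.
@dataclass
class AgentAction:
    name: str
    high_stakes: bool

def execute_with_oversight(action: AgentAction, human_approves) -> str:
    """Block high-stakes actions unless a human explicitly approves them."""
    if action.high_stakes and not human_approves(action):
        return "blocked"
    return "executed"

# Low-stakes actions pass through; high-stakes ones require sign-off.
print(execute_with_oversight(AgentAction("send_report", False), lambda a: False))   # executed
print(execute_with_oversight(AgentAction("transfer_funds", True), lambda a: False)) # blocked
```

The key design point is that the gate sits outside the agent: the agent proposes, but the deployer's code decides whether a human must sign off before anything runs.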
3. Implement Technical Controls & Monitor
- Adapt testing & evaluation for agentic AI
- Conduct continuous monitoring in production
- Log behaviour, enable alerts & auditing
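As a rough sketch of the logging-and-alerting idea, the snippet below records each agent step to an audit trail and raises an alert flag when the observed failure rate crosses a threshold. The `record_step` helper and the `ERROR_RATE_ALERT` threshold are assumptions for illustration; real deployments would tune these per system.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent_audit")

# Hypothetical anomaly threshold; tune per deployment.
ERROR_RATE_ALERT = 0.2

def record_step(step: str, ok: bool, history: list) -> bool:
    """Append an audit record, log it, and return True when an alert should fire."""
    history.append({"step": step, "ok": ok})
    log.info("step=%s ok=%s", step, ok)
    error_rate = sum(1 for h in history if not h["ok"]) / len(history)
    return error_rate > ERROR_RATE_ALERT

history = []
record_step("fetch_data", True, history)
alert = record_step("call_tool", False, history)  # 1 failure in 2 steps -> alert
```

Keeping the audit trail as structured records (rather than free text) is what makes later auditing and automated alerting possible.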
4. Enable End-User Responsibility
- Provide transparency on agent capabilities & limits
- Educate users on safe agent usage
- Define user responsibilities clearly
Key Agentic AI Risks
- Hallucinations & unpredictable outcomes
- Prompt injection
- Unauthorized actions
- Rogue tool use
- Data breaches
- Cascading system failures
4 Levels of Human Involvement
1. Agent proposes, human operates
2. Agent & human collaborate
3. Agent operates, human approves
4. Agent operates, human observes
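The four levels above form an ordered ladder of autonomy, which a deployer might encode directly. This is an illustrative encoding only; the enum names and the `needs_approval` rule are assumptions, not part of the framework.

```python
from enum import IntEnum

# Illustrative encoding of the four involvement levels.
class Involvement(IntEnum):
    AGENT_PROPOSES = 1   # agent suggests, human executes every action
    COLLABORATE = 2      # agent and human share the work
    HUMAN_APPROVES = 3   # agent acts only after human sign-off
    HUMAN_OBSERVES = 4   # agent acts autonomously; human monitors

def needs_approval(level: Involvement) -> bool:
    """Levels below 4 keep a human in the loop before execution."""
    return level < Involvement.HUMAN_OBSERVES
```

Encoding the level as an ordered type lets the rest of the system enforce the chosen oversight policy mechanically rather than by convention.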
Coming into Effect
22 January 2026
All new agentic AI systems should align with the Model AI Governance Framework for Agentic AI to ensure trust and operational security.
Agentic AI can only be trusted if it is governable.