Government’s AI blueprint calls for a national-level institutional framework
New governance guidelines propose an innovation-led AI framework backed by robust safeguards to ensure safety, accountability, and public trust
India has unveiled a comprehensive AI governance framework that positions the technology as a national driver of inclusive growth, digital empowerment and global competitiveness, while instituting strong safeguards to manage emerging risks to citizens and institutions. The guidelines seek to accelerate large-scale AI adoption by balancing innovation with safety and societal resilience.
Grounded in seven core principles—trust, human-centric design, fairness, accountability, transparency, responsible innovation and sustainability—the framework envisions an ecosystem where AI empowers people, operates responsibly across sectors and is equipped with guardrails to prevent harm.
At the centre of the document is a six-pillar strategy spanning infrastructure, capacity building, policy and regulation, risk mitigation, accountability and institutional coordination. It calls for wider access to quality datasets, subsidised compute, and deeper integration of AI with Digital Public Infrastructure to drive adoption across agriculture, healthcare, education, governance and other sectors.
The report underscores the need for extensive skilling programmes for citizens, regulators and public officials to enhance AI literacy and organisational readiness. It notes that many AI-related harms can be addressed through existing laws, while recommending targeted amendments, particularly in the areas of digital platforms, data protection and copyright, to close regulatory gaps and spur innovation. A significant portion focuses on risk mitigation through India-specific evaluation models, incident reporting systems and voluntary commitments supported by techno-legal measures. It flags concerns such as deepfakes, algorithmic bias, misinformation, national security vulnerabilities and loss-of-control scenarios, advocating layered safeguards, human oversight and privacy-preserving tools.
To ensure effective implementation, the framework proposes a national institutional mechanism anchored by a new AI Governance Group, supported by a Technology and Policy Expert Committee and the AI Safety Institute. These bodies will coordinate policy, set standards, assess risks and guide responsible deployment across sectors.
Overall, the guidelines establish a flexible, future-ready governance model designed to help India scale AI responsibly while protecting individuals, institutions and national interests.