PSA’s white paper urges techno-legal AI governance with safety audits, lifecycle controls, and a deepfake crackdown—just ahead of the global AI summit in Delhi
The office of India’s Principal Scientific Advisor (PSA) has called for the creation of a National AI Risk Registry—a centralized database to log safety failures, algorithmic bias, security breaches, and misuse of AI systems. This recommendation, laid out in a newly released white paper, proposes a sweeping shift to techno-legal AI governance, favoring real-time oversight over rigid, one-size-fits-all laws.
Why It Matters Now
India is rapidly emerging as a global AI player. With the India AI Impact Summit 2026 set to host leaders like Sam Altman, Sundar Pichai, Jensen Huang, and Dario Amodei in New Delhi next month, the timing of this policy push isn’t coincidental.
“We need agile guardrails, not blunt regulations,” the report notes, underscoring the need for a nimble, evidence-backed approach to AI oversight.
So, what’s India’s AI governance playbook shaping up to look like?
Building the AI Risk Registry
At the heart of the proposal is a national database that would allow India to log, classify, and analyze AI-related incidents across sectors. This includes:
- India-specific AI risk taxonomy
- Tracking systemic threats and bias patterns
- Enabling data-driven audits and regulatory action
- Supporting post-deployment monitoring of AI models
Who would feed into this system? Everyone from government bodies and tech firms to researchers and civil society. The registry would be built with global best practices, but tailored for India’s socio-technical context.
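The white paper does not prescribe a data model for the registry, but its core functions—logging, classifying, and analyzing incidents across sectors—can be sketched as a minimal record type. All names below (`IncidentRecord`, `RiskCategory`, the example fields) are hypothetical illustrations, not from the report:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class RiskCategory(Enum):
    # Illustrative buckets; the paper calls for an India-specific taxonomy
    SAFETY_FAILURE = "safety_failure"
    ALGORITHMIC_BIAS = "algorithmic_bias"
    SECURITY_BREACH = "security_breach"
    MISUSE = "misuse"

@dataclass
class IncidentRecord:
    """One entry in a hypothetical national AI risk registry."""
    system_name: str
    sector: str                      # e.g. "finance", "healthcare"
    category: RiskCategory
    description: str
    reported_by: str                 # regulator, firm, researcher, civil society
    reported_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Example: a bias incident filed by an independent researcher
incident = IncidentRecord(
    system_name="loan-scoring-v2",
    sector="finance",
    category=RiskCategory.ALGORITHMIC_BIAS,
    description="Higher rejection rates for applicants from certain districts",
    reported_by="independent researcher",
)
print(incident.category.value)  # → algorithmic_bias
```

Structured categories like these are what would make the paper's "data-driven audits" and bias-pattern tracking queryable rather than anecdotal.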
Can a national database actually catch rogue AI before it causes harm?
Why Not a Standalone AI Law?
Surprisingly, the PSA advises against enacting a standalone AI law—at least for now. Instead, the white paper proposes:
- Sector-specific guidelines (e.g., finance, healthcare)
- Targeted legal amendments to existing laws
- A shift from “command-and-control” compliance to techno-legal enforcement
That means legal duties should be embedded into AI system design—including algorithmic checks, kill switches, and audit trails.
Think of it as encoding the Constitution into the codebase.
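The report names these controls—algorithmic checks, kill switches, audit trails—without specifying implementations. Purely as a sketch, the latter two can be wrapped around any model callable; `GovernedModel`, `halt`, and `predict` are invented names, not the paper's:

```python
from datetime import datetime, timezone

class GovernedModel:
    """Wraps a model callable with a kill switch and an append-only audit trail.

    Illustrative only: a real deployment would persist the log and gate
    halt() behind regulator- or operator-controlled credentials.
    """

    def __init__(self, model_fn):
        self.model_fn = model_fn
        self.halted = False
        self.audit_log = []          # append-only record of every event

    def halt(self, reason: str):
        """Kill switch: block all further inference."""
        self.halted = True
        self.audit_log.append((datetime.now(timezone.utc), "HALT", reason))

    def predict(self, x):
        if self.halted:
            raise RuntimeError("model halted by governance control")
        y = self.model_fn(x)
        self.audit_log.append((datetime.now(timezone.utc), "PREDICT", x))
        return y

model = GovernedModel(lambda x: x * 2)
model.predict(21)                          # executed and logged
model.halt("bias incident under review")   # further predict() calls now raise
```

The point of embedding the control in the wrapper, rather than in policy documents, is exactly the paper's techno-legal shift: the duty is enforced by the code path itself.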
Governance Stack: PSA’s Multi-Pronged Proposal
To operationalize this vision, the report recommends several new institutional structures:
- AI Governance Group (AIGG): Chaired by the PSA, this body would coordinate across ministries and regulators.
- Tech & Policy Expert Committee (TPEC): Housed under MeitY, this cross-disciplinary team would cover law, ML, cybersecurity, and ethics.
- AI Safety Institute: Tasked with evaluating high-risk AI systems, developing safety tools, and facilitating capacity building.
Could India’s AI Safety Institute rival the UK’s or the U.S. NIST approach in scale?
From Deepfakes to Lifecycle Controls
The report singles out deepfakes as a systemic AI threat, calling for a techno-legal containment model:
- Mandatory content provenance
- Cryptographic metadata at generation
- Repeat-offender detection and incident logging
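The report does not specify a provenance mechanism; production systems typically use public-key standards such as C2PA. Solely to illustrate what "cryptographic metadata at generation" means, here is a stdlib-only HMAC sketch—`attach_provenance`, `verify_provenance`, and the demo key are assumptions, not the paper's design:

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # in practice, a protected key held by the generator

def attach_provenance(content: bytes, generator_id: str) -> dict:
    """Attach signed provenance metadata at generation time."""
    meta = {
        "generator": generator_id,
        "sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(meta, sort_keys=True).encode()
    meta["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return meta

def verify_provenance(content: bytes, meta: dict) -> bool:
    """Check that content matches its signed metadata."""
    claimed = dict(meta)
    sig = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(sig, expected)
            and hashlib.sha256(content).hexdigest() == claimed["sha256"])

meta = attach_provenance(b"synthetic image bytes", "gen-model-x")
assert verify_provenance(b"synthetic image bytes", meta)      # authentic
assert not verify_provenance(b"tampered image bytes", meta)   # altered content
```

Any platform receiving the content can then reject or flag media whose metadata fails verification—the detection half of the containment model described above.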
Beyond that, the PSA wants end-to-end lifecycle governance, including:
- Mandatory human oversight
- Kill switches for agentic AI
- Standardized audits, disclosures, and logs
- Lower compliance costs using India’s Digital Public Infrastructure (DPI)
Notably, the paper also criticizes the use of Western AI benchmarks that ignore India’s linguistic and cultural diversity.
What’s Next?
With India poised to host its most high-profile AI summit yet—and global companies eyeing India for expansion—the PSA’s white paper acts as both a blueprint and a warning.
AI must serve India’s pluralistic society without becoming another layer of digital exclusion or unchecked automation. The message is clear: balance ambition with accountability.
TL;DR:
India’s top science advisor calls for a national AI risk database and techno-legal governance instead of a standalone AI law. The report emphasizes lifecycle controls, audits, deepfake safeguards, and institution-building—just ahead of the AI Impact Summit 2026.