India's AI Governance Gap: Who Regulates the Algorithm?
No binding AI law. Digital India Act still in consultation. AI already determining credit, welfare, and hiring for hundreds of millions. India cannot lead global AI governance without governing it at home.
In February 2023, the Indian government issued an advisory requiring AI platforms to obtain government approval before deploying models capable of generating potentially harmful content in India. The advisory was widely criticised — by the technology industry for its vagueness and potential chilling effect, by civil liberties organisations for its implications for free expression, and by AI researchers for its technical unworkability. Within weeks, the government withdrew it, stating that no approval mechanism had been intended and that the advisory was guidance, not regulation.
The episode was revealing. India's government had recognised, however imperfectly, that AI systems operating at scale in India required some form of oversight. It had no regulatory architecture to deliver that oversight. And its improvised attempt to create one through an advisory notice collapsed under the weight of its own ambiguity.
Three years later, that fundamental gap remains. India has no binding AI regulatory framework. The Digital India Act, which was expected to include AI governance provisions when its drafting process began in 2022, remains in consultation. The Ministry of Electronics and Information Technology has published AI principles, digital governance frameworks, and consultation papers. It has not published enforceable rules. In a country where AI systems already make decisions affecting access to credit, government welfare, healthcare triage, and employment screening for hundreds of millions of people, the regulatory vacuum is not an academic concern. It is a live governance failure.
What India Has, and What It Lacks
India is not without AI governance instruments. The IT Act's intermediary liability framework creates accountability, in some circumstances, for AI platforms that host or generate harmful content. The Digital Personal Data Protection Act, 2023 creates obligations for the processing of personal data that apply to AI systems using such data. And sectoral regulators have issued guidance for AI use within their respective domains: the RBI for fintech AI, IRDAI for insurance AI, and SEBI for trading algorithms.
But a framework of partial sectoral guidance and repurposed legacy regulation is not AI governance. It leaves unaddressed the most consequential applications of AI in India — the foundation models that power everything from search to content generation to government service delivery, the algorithmic systems that determine credit scores and welfare eligibility, and the autonomous decision systems that are beginning to appear in policing, urban management, and border security.
The EU AI Act — the world's most comprehensive AI regulation, now in its implementation phase — provides a reference framework that India has examined but not adopted. Its risk-based architecture, which places the most stringent requirements on high-risk AI applications while allowing lower-risk systems to operate under lighter-touch obligations, is the right design logic for a country like India that wants both AI innovation and AI accountability.
The Stakes for India
India's AI governance gap has three specific consequences that are already materialising.
The first is algorithmic discrimination. AI systems used in credit scoring, hiring, and benefit eligibility determination in India have been documented, in several cases, to produce outcomes that systematically disadvantage women, lower-caste applicants, and rural users. This is not because the systems were designed to discriminate, but because they were trained on historical data that reflects existing discrimination, and deployed without the audit and accountability mechanisms that would identify and correct the bias. Without a regulatory requirement for algorithmic auditing, these systems will continue to scale discrimination digitally.
The second is foundation model governance. Sarvam AI, Krutrim, and the growing cohort of India-developed large language models operate without any regulatory oversight of their training data, safety testing, or content policies. The risks — hallucination in high-stakes domains like healthcare advice, bias in content generation, susceptibility to misuse for electoral interference — are not hypothetical. They are documented failure modes of deployed AI systems globally. India's models are not exempt.
The third is the international credibility cost. India aspires to shape global AI governance through its G20 AI principles, its participation in international AI safety conversations, and its ambition to be a trusted AI exporter. A country that cannot demonstrate credible domestic AI governance is not a credible voice internationally. The EU requires AI systems deployed within its market to comply with the AI Act, including systems developed in India. India's own regulatory vacuum limits its ability to negotiate market access terms for its AI exports.
What the Digital India Act Must Include
The Digital India Act, whenever it is finalised, must include an AI governance chapter that addresses at least four requirements. First, a risk classification framework that identifies high-risk AI applications and imposes mandatory conformity assessment, transparency, and human oversight obligations. Second, an algorithmic accountability mechanism that requires AI systems making consequential decisions about individuals to provide explanations and enable contestation. Third, a registration and safety testing requirement for large-scale foundation models deployed in India. And fourth, an AI incident reporting system that builds a national database of AI failures, bias events, and safety incidents, which regulators and researchers can use to improve governance over time.
India has the talent to build world-class AI. It needs the governance architecture to ensure that AI builds a world-class India — not one stratified by algorithmic discrimination and ungoverned by institutions capable of accountability.
The Hind covers policy, power, and strategic affairs from India's perspective. Views expressed are analytical and editorial.