The Indian government has issued an advisory requiring technology companies to obtain approval before releasing artificial intelligence (AI) tools that are either in the trial phase or deemed “unreliable.” The advisory, issued by India’s Ministry of Information Technology on Friday, underscores the necessity for such tools, including generative AI, to be explicitly permitted by the government before being made accessible on the Indian internet.

Governments worldwide are racing to formulate regulations governing the use of AI, and India is no exception. Notably, the country has been tightening rules for social media companies, which regard it as a crucial growth market.

This advisory follows criticism of Google’s Gemini AI tool on February 23 by a senior minister, who accused it of delivering responses aligning with allegations that Indian Prime Minister Narendra Modi’s policies were “fascist.” Google responded the next day, acknowledging the tool’s potential unreliability, particularly concerning current events and political topics. Deputy IT Minister Rajeev Chandrasekhar emphasised on social media that platform safety and trust are legal obligations, and that labelling a tool “Sorry Unreliable” does not exempt a platform from legal compliance.

Highlighting the upcoming general elections in India this summer, the advisory also directs platforms to ensure that their AI tools do not compromise the integrity of the electoral process. With the ruling Hindu nationalist party anticipated to secure a clear majority, there is a heightened focus on preventing any potential misuse of AI in influencing political discourse.

As countries grapple with the ethical and legal implications of AI, India’s move to proactively regulate the release of AI tools reflects a broader trend in shaping guidelines for responsible and accountable use of artificial intelligence technologies.