India Mandates Tech Firms to Obtain Approval for AI Tools Deemed ‘Unreliable’
In a significant move to regulate artificial intelligence (AI) technologies, the Indian government has issued an advisory requiring tech firms to obtain official approval before publicly releasing AI tools that are still under trial or may be considered “unreliable.” The directive, announced last Friday by the country’s IT ministry, stipulates that such tools must be clearly labeled to warn users that they may return incorrect answers to their queries.
The advisory underscores India’s commitment to ensuring the safety and reliability of AI platforms, particularly generative AI, on the Indian Internet. This measure aligns with global efforts as countries worldwide strive to establish regulatory frameworks for the burgeoning AI industry. India, in particular, has been progressively tightening regulations for social media companies, which view the South Asian nation as a key market for growth.
This development follows an incident involving Google’s Gemini AI tool, which drew criticism on February 23 from a high-ranking minister after it generated a response suggesting that Indian Prime Minister Narendra Modi had implemented policies characterized by some as “fascist.” Google responded promptly, acknowledging that the tool may be unreliable, particularly on current events and political topics.
Deputy IT Minister Rajeev Chandrasekhar took to the social media platform X to address Google’s statement, emphasizing that “Safety and trust is platforms legal obligation. ‘Sorry Unreliable’ does not exempt from law.”
Beyond reliability concerns, the advisory also calls on platforms to ensure their AI tools do not compromise the integrity of the electoral process. With India’s general elections approaching this summer, in which the ruling Hindu nationalist party is expected to retain a strong majority, the government is taking no chances with technologies that could influence electoral outcomes.
As India positions itself at the forefront of AI regulation, tech firms operating within its jurisdiction are now obliged to navigate this new approval process to ensure compliance with the government’s expectations for safety and accuracy in AI applications.