India’s New AI Regulation: Government Approval Required For Model Launches

India has made a significant policy shift in the field of artificial intelligence (AI) by issuing a new advisory that requires tech companies to obtain government approval before launching new AI models. India’s Ministry of Electronics and IT released the advisory, which also stresses that AI services and products must not exhibit bias or discrimination, or threaten the integrity of the electoral process.

Key Takeaway

India has issued a new advisory requiring tech firms to obtain government approval before launching new AI models, signaling a significant shift in the country’s approach to AI regulation.

Reversal of Approach

The advisory, while not legally binding, marks a notable departure from India’s previous hands-off approach to AI regulation. Less than a year ago, the ministry had opted not to regulate AI growth, citing the sector’s importance to India’s strategic interests. The new advisory signals a shift toward a more proactive regulatory stance.

Compliance and Reporting

The advisory cites the authority granted to the ministry under the IT Act, 2000 and the IT Rules, 2021. It requires tech firms to comply immediately and to submit an “Action Taken-cum-Status Report” to the ministry within 15 days. The firms are also urged to clearly label the potential fallibility or unreliability of the output generated by their AI models.

Industry Response

India’s move has surprised many industry executives. Startups and venture capitalists in India have voiced concerns that the new requirement could hinder the country’s competitiveness in the global AI race.
