Description
Context
- On March 1, 2024, the Ministry of Electronics and Information Technology (MeitY) issued an advisory to the Artificial Intelligence industry regarding generative AI products.
- The advisory stated that generative AI products, such as large language models like Google’s Gemini, must be made available to users in India only with the explicit permission of the Government of India if they are under testing or unreliable.
Government’s Stand
- The advisory represents a significant shift in the government's approach to AI research and policy.
- It came shortly after the Minister of State for Electronics and Information Technology, Rajeev Chandrasekhar, criticized Google’s Gemini chatbot for its response to a query about whether PM Narendra Modi was a fascist, citing a violation of India’s IT law.
Key Takeaways from the Advisory
Permission a must for AI models still in the testing stage
- The advisory says, "The use of under-testing/unreliable artificial Intelligence model(s) /LLM/ generative AI, software(s) or algorithm(s) and its availability to the users on Indian Internet must be done so with the explicit permission of the Government of India and be deployed only after appropriately labeling the possible and inherent fallibility or unreliability of the output generated."
- This part of the one-page advisory has proved to be the most contentious, with startup founders criticizing the directive and describing it as a "bad move" and "demotivating".
- The outcry prompted Minister of State for Electronics and Information Technology Rajeev Chandrasekhar to clarify that the directive would not apply to startups and was restricted to significant platforms.
AI platforms can’t threaten poll process, spread misinformation
- The advisory says that platforms have to ensure that AI models do not permit users to publish or host any unlawful content as defined under Rule 3(1)(b) of the Information Technology Rules, 2021.
- Platforms also have to ensure that their "computer resource" does not permit any bias or discrimination "or threaten the integrity of the electoral process including via the use of artificial intelligence model(s)/ LLM/Generative AI/software(s) or algorithm(s)".
‘Permanent unique identifier’ for AI-generated content
- The advisory says that if a platform creates any synthetic content that can be used to spread misinformation or deepfakes, "it is advised that such information… is labeled or embedded with a permanent unique metadata or identifier..." This metadata or identifier can be used to identify the "creator or first originator of such misinformation or deep fake", the advisory says.
- The government has also advised platforms to use a "consent popup" mechanism to inform users about possible inaccuracies in AI-generated output. A rough illustration of such metadata labeling follows.
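The advisory does not prescribe any technical standard for these identifiers. As a purely illustrative sketch, the Python snippet below shows one way a platform could embed a unique identifier in AI-generated imagery using PNG text chunks via the Pillow library; the key names ("ai-generated", "content-id") and the helper function are assumptions for illustration, not anything the advisory mandates.

```python
import uuid

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_ai_image(src_path: str, dst_path: str) -> str:
    """Attach a unique identifier to a generated image's PNG metadata.

    Hypothetical sketch: the key names are illustrative, not a mandated standard.
    """
    content_id = str(uuid.uuid4())        # unique identifier for traceability
    meta = PngInfo()
    meta.add_text("ai-generated", "true")
    meta.add_text("content-id", content_id)
    with Image.open(src_path) as img:
        img.save(dst_path, pnginfo=meta)  # dst_path should end in .png
    return content_id

# Reading the label back from the saved PNG:
# with Image.open("labeled.png") as img:
#     print(img.text.get("content-id"))
```

One caveat worth noting: PNG text chunks are discarded whenever the image is re-encoded, so metadata of this kind is not truly "permanent"; more durable approaches embed the identifier in the pixel data itself, as discussed under the labeling section below.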
Users ‘dealing’ with unlawful information can be punished
- The government has asked AI platforms to communicate to users that if they "deal" with unlawful information, it can lead to suspension or removal of their account from the platform, and may also attract punishment under applicable laws.
Non-compliance can lead to penal consequences
- "It is reiterated that non-compliance to the provisions of the IT Act and/or IT Rules would result in potential penal consequences to the intermediaries or platforms or its users when identified, including but not limited to prosecution under IT Act and several other statutes of the criminal code," the advisory read.
Reception
- The advisory has divided industry and observers on whether it was a mere reminder or a mandate.
- Prasanth Sugathan, Legal Director at the Delhi-based Software Freedom Law Centre, stated that it sounded like a mandate.
- The advisory instructed large tech platforms, including Google, to submit an action taken-cum-status report to the Ministry within 15 days.
Criticism and Legal Concerns
- Legal experts, including technology lawyer Pranesh Prakash, find the advisory legally unsound.
- They compare it to the draft National Encryption Policy of 2015, a proposal quickly withdrawn due to controversy.
- The advisory also offers little explanation of how IT laws can be applied to automated AI systems in this context.
Labeling Requirement for AI-Generated Imagery
- The advisory also included a requirement for AI-generated imagery to be labeled as such.
- However, the industry has been hesitant to invest seriously in labeling. Amazon Web Services, for example, has experimented with an 'invisible' watermark but expressed concerns about its effectiveness; a minimal sketch of one such technique follows.
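To illustrate what an 'invisible' watermark can look like, and why its effectiveness is in doubt, here is a minimal, hypothetical sketch of least-significant-bit (LSB) watermarking in Python. This is not AWS's actual scheme; it simply shows the general idea, and the fragility noted in the final comment is the kind of concern the industry has raised.

```python
import numpy as np
from PIL import Image

def embed_watermark(src: str, dst: str, tag: bytes) -> None:
    """Hide `tag` in the least significant bits of the pixel data."""
    img = np.array(Image.open(src).convert("RGB"))
    bits = np.unpackbits(np.frombuffer(tag, dtype=np.uint8))
    flat = img.reshape(-1)                  # contiguous view of all channel values
    assert bits.size <= flat.size, "tag too large for this image"
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    Image.fromarray(img).save(dst, format="PNG")  # lossless, so the LSBs survive

def read_watermark(path: str, n_bytes: int) -> bytes:
    """Recover the first n_bytes hidden by embed_watermark."""
    img = np.array(Image.open(path).convert("RGB"))
    bits = img.reshape(-1)[: n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

# Fragility: re-saving the watermarked image as JPEG (or resizing it)
# quantizes the pixel values and destroys the hidden bits, which is why
# naive watermarks of this kind are easy to strip.
```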
Calls for Permissive Approach
- Technology lawyer Rahul Matthan urged a more permissive approach to AI systems, highlighting the benefits of innovation.
- He compared it to the aviation industry, where sharing information on failures led to improved air safety.
Government's Changing Approach to AI Industry
- The government previously shared optimism about AI growth and stated it was not considering regulating artificial intelligence. However, Minister Chandrasekhar has since expressed dissatisfaction with AI models producing uncomfortable responses, leading to a shift in the approach towards AI regulation.
Impact on Local Developers
- Initially, there was criticism from local AI firm co-founder Aakrit Vaish, who described the advisory as poorly communicated.
- However, he later saw it as an opportunity for local developers, stating that it points to a need for local AI stacks, datasets, and GPUs, and a chance for local developers to eventually surpass global giants like Microsoft and Google.
PRACTICE QUESTION
Q. What are the challenges and implications of regulating the Artificial Intelligence (AI) industry in India, and how can the government ensure a balance between innovation and ethical standards in AI development?