AI platforms need permits, may be denied in case of misinformation or bias risk: Government

ISLAM GAMAL | 2 March 2024



NEW DELHI: With general elections on the anvil, and following a controversy over unsubstantiated comments on PM Modi by Google’s AI platform Gemini, govt on Saturday said it has issued an advisory for artificial intelligence-led internet companies to label any unverified information as potentially false and error-prone.
Any AI-led public information platform will need to have a permit before being allowed in India, said govt, while warning that it will not hesitate to deny permission in case of potential risks of misinformation or bias.

The notice comes almost two-and-a-half months after govt issued an advisory on deepfakes, following several incidents of synthetically made content flowing into social media and other internet channels. The latest advisory says AI-led platforms should not throw up unlawful, misinformed or biased content that has the potential to threaten the integrity of the nation or of the electoral process.

IT & electronics MoS Rajeev Chandrasekhar said AI platforms such as OpenAI and Google’s Gemini will have to make disclosures about the nature of their responses to govt as well as India’s digital citizens, clearly mentioning that the content can be false, error-prone, and unlawful as the model is still under trial and testing.

“If you are an untested platform and you think the platform is still in early stages of training and therefore is unreliable, you need to do three things. Firstly, you need to tell govt that I am deploying it. Second, you need to inform consumers by having a disclaimer that I am a platform under trial. Third, you need to explicitly mention this to the consumer who is using it, and you need to get his or her consent for using the platform. Take Google Gemini as an example. It will have to tell the government before launching that this is a bit buggy platform.”
Asked whether govt will have powers to refuse the launch of such a platform in the country if it finds it unreliable, Chandrasekhar told TOI, “We can reject it if we find there is more risk. It is very clear.”
He said companies have often apologised after discovering that their platforms were throwing up wrong and unreliable information or biased results. “That is not a defence that a company can take if it makes an unsafe car, or if there is a medicine that you take and it gives you after-effects.”
The minister said the fresh advisory talks about eliminating bias and discrimination on the public internet. “The advisory says you cannot have models that output unlawful content and then take the defence that it is untested and unreliable. If it is unreliable and untested, disclose upfront to consumer and govt.”
The advisory clearly warns against hosting unsubstantiated and unreliable content. “All intermediaries or platforms to ensure that use of Artificial Intelligence model(s) /LLM/Generative AI, software(s) or algorithm(s) on or through its computer resource does not permit its users to host, display, upload, modify, publish, transmit, store, update or share any unlawful content… Non-compliance with provisions would result in penal consequences.”



