“I want to address the recent issues with problematic text and image responses in the Gemini app,” Pichai wrote in a letter to staff, which was published by the news website Semafor.
“I know that some of its responses have offended our users and shown bias – to be clear, that’s completely unacceptable and we got it wrong,” he added.
The controversy emerged within weeks of Google’s high-profile rebranding of its ChatGPT-style AI to ‘Gemini’. Social media users mocked and criticised the company for historically inaccurate images generated by Gemini, such as depictions of US senators from the 1800s who were ethnically diverse and included women.
In India, days after Gemini came under severe censure over its unsubstantiated allegations in response to a query on PM Modi, IT minister Ashwini Vaishnaw last Monday warned that the govt would “not tolerate” any racial and other biases in the running of platforms based on AI.
A Google spokesperson confirmed to AFP that Sundar Pichai’s letter to staff was authentic.
Pichai said Google’s teams were working “around the clock” to fix these issues but did not say when the image-generating feature would be available again. “No AI is perfect, especially at this emerging stage of the industry’s development, but we know the bar is high for us and we will keep at it for however long it takes,” he wrote.
Vaishnaw, India’s communications and IT minister, had said that companies such as Google, which are developing AI-based solutions, need to train the underlying models powering their platforms “properly” in order to avoid biases and misinformation. His remarks followed a similar censure by minister of state for IT & electronics Rajeev Chandrasekhar, who termed the results thrown up by Gemini “violations” of certain provisions of the IT Act as well as the criminal code.
Tech companies see generative artificial intelligence models as the next big step in computing and are racing to infuse them into everything from searching the internet and automating customer support to creating music and art.
But AI models, and not just Google’s, have long been criticised for perpetuating racial and gender biases in their results. Google said last week that the problematic responses from Gemini were a result of the company’s efforts to remove such biases. Gemini was calibrated to show diverse people but did not adjust for prompts where that should not have been the case, and it also became too cautious with some otherwise harmless requests, Google’s Prabhakar Raghavan wrote in a blog post. “These two things led the model to overcompensate in some cases, and be over-conservative in others, leading to images that were embarrassing and wrong,” he said.

Many concerns about AI have emerged since the explosive success of ChatGPT.
Experts and governments have warned that AI also carries the risk of major economic upheaval, especially job displacement, and industrial-scale disinformation that can manipulate elections and spur violence.
Source: Agencies