Be careful what you tell a chatbot. Your conversation might be used to improve the artificial intelligence system that it’s built on.
If you ask ChatGPT for advice about your embarrassing medical condition, beware that anything you disclose could be used to tweak OpenAI’s algorithms that underpin its AI models. The same goes if, for example, you upload a sensitive company report to Google’s Gemini to summarize for a meeting.
It’s no secret that the AI models underpinning popular chatbots have been trained on enormous troves of information scraped from the internet, like blog posts, news articles and social media comments, so they can predict the next word when coming up with a response to your question.
This training was often done without consent, raising copyright concerns. And, experts say, given the opaque nature of AI models, it’s probably too late to remove any of your data that might have been used.
But what you can do going forward is stop your chatbot interactions from being used for AI training. It’s not always possible, but some companies give users the option:
Google Gemini
Google keeps your conversations with its Gemini chatbot to train its machine learning systems. For users 18 or older, chats are kept by default for 18 months, though that can be adjusted in settings. Human reviewers can also access the conversations to improve the quality of the generative AI models that power Gemini. Google warns users not to tell Gemini any confidential information or give it any data they don’t want a human reviewer to see.
To opt out, go to the Gemini website and click the Activity tab. Click the Turn Off button, and from the drop-down menu you can choose to stop recording all future chats or delete all your previous conversations. The company warns that any conversations already selected for human review won’t be deleted and are stored separately. Whether you turn your activity off or leave it on, Google says all chats with Gemini will be kept for 72 hours to “provide the service and process any feedback.”
Gemini’s help page also details the process for iPhone and Android app users.
Meta AI
Meta has an AI chatbot that’s been butting into conversations on Facebook, WhatsApp and Instagram, powered by its open-source AI language models. The company says those models are trained on information shared on its platforms including social media posts and photos and caption info, but not your private messages with friends and family. They’re also trained on publicly available information scraped from other parts of the web by “third parties.”
Not everyone can opt out. People in the 27-nation European Union and the United Kingdom, which have strict privacy regulations, have the right to object to their information being used to train Meta’s AI systems. From the Facebook privacy page, click Other Policies and Articles from the list on the left side, then click the section on generative AI. Scroll down to find a link to a form where you can object.
There’s a box to fill out with additional information to support your request, but no details about what you should say. I wrote that I was exercising my right as a U.K. resident to withdraw consent for my personal information to be used for AI training. I received an email almost instantly saying Meta had reviewed the request and would honor my objection. “This means your request will be applied going forward,” it said.
People in the United States and other countries without national data privacy laws don’t have this option.
Meta’s privacy hub does link to a form where users can request that their data scraped by third parties not be used to “develop and improve AI at Meta.” But the company warns it won’t automatically fulfill requests and will review them based on local laws. The process itself is cumbersome, requiring users to provide the chatbot request that produced a response with their personal info and a screenshot of it.
Microsoft Copilot
There’s no way for personal users to opt out. The best you can do is delete your interactions with the Copilot chatbot by going to your Microsoft account’s settings and privacy page. Look for a drop-down menu labeled Copilot interaction history or Copilot activity history to find the delete button.
OpenAI’s ChatGPT
If you’ve got an OpenAI account, go to the settings menu in your web browser, then to the Data controls section, where you can disable a setting labeled “Improve the model for everyone.” If you don’t have an account, click the small question mark at the bottom right of the web page, then click settings, and you’ll get the same option to opt out of AI training. Mobile users can make the same choice in the ChatGPT Android and iOS apps.
OpenAI says on its data controls help page that when users opt out, their conversations will still appear in the chat history but won’t be used for training. These temporary chats will be kept for 30 days and reviewed only if needed to monitor for abuse.
Grok
Elon Musk’s X quietly activated a setting that allows the billionaire Tesla CEO’s AI chatbot Grok to be trained on data from the social media platform. This setting has been turned on by default and allows Grok to use data including your posts, “interactions, inputs, and results” for training and “fine-tuning.”
The change wasn’t publicized and only came to light after X users spotted it in July. To opt out, you need to go to settings on X’s desktop browser version, then click “Privacy and safety,” scroll down to “Grok” and then untick the box. You can also delete your conversation history with Grok if you have any. There’s no way to do it from the X mobile app, unfortunately.
Claude
Anthropic says its chatbot Claude is not trained on personal data. Nor does it by default use questions or requests to train its AI models. However, users can give “explicit permission” for a specific response to be used in training by giving it a thumbs up or thumbs down, or by emailing the company. Conversations flagged for a safety review could also be used to train the company’s systems to better enforce its rules.