Protect Your Privacy: How to Opt Out of AI Chatbot Training with Your Data

Be cautious when sharing sensitive information with chatbots; it may be used to train and improve AI systems without your consent.

Be cautious when talking to chatbots, as your conversations may be used to improve the underlying artificial intelligence (AI) systems. If you ask ChatGPT for advice about a private medical issue, for example, the information you share could be fed back into OpenAI's models. Likewise, uploading a confidential company report to Google's Gemini for summarization could expose that report to the same kind of model training.

Implications of AI Models' Training Methods

The AI models that power popular chatbots were trained on vast amounts of data scraped from sources such as blog posts, news articles, and social media comments. This training was often conducted without explicit consent, raising copyright and data privacy concerns. Experts suggest that, given the opaque nature of AI models, it may already be too late to remove any of your data that was used in training.

What users can do is restrict how their future chatbot interactions are used for AI training. The option is not universal, but some companies offer it. Google, for example, retains conversations with its Gemini chatbot for 18 months by default for users aged 18 and older. Users can adjust this on the Gemini website under the Activity tab, where they can stop future chats from being recorded and delete previous conversations. Note, however, that conversations already selected for human review are stored separately and will not be deleted.

Data Handling Practices of Different Companies

Google retains all Gemini chats for up to 72 hours to provide the service and process feedback, and it warns users not to share confidential information or personal data they would not want a human reviewer to see. Meta's AI chatbot, which operates across its platforms including Facebook, WhatsApp, and Instagram, is trained on publicly available data as well as some user-generated content; private messages between users are excluded from this training.

Users in the European Union and the United Kingdom, where strict data privacy regulations apply, have the right to object to Meta using their information for AI training, and the company offers a form on its privacy page for exercising that right. People in the United States and other countries without national data privacy laws have no such option. They can only request that their third-party data scraped by Meta not be used for AI development, and such requests are not automatically granted.

Options for Opting Out of AI Training

Users of chatbots from Microsoft, OpenAI, and Anthropic also have varying degrees of control. Microsoft lets users delete their Copilot conversation history, while OpenAI lets ChatGPT users turn off the "Improve the model for everyone" setting. Anthropic's chatbot, Claude, does not use conversations for training by default, though users can explicitly allow specific responses to be included.

Elon Musk's AI chatbot, Grok, on the X platform, enrolls users automatically in a program that lets the AI train on data from the social network. Opting out requires the settings menu in X's desktop browser version; the option is not available in the mobile app.

Copyright ©2024 All rights reserved | PrimeAi News
