Google recently revealed that conversations with its Gemini chatbot apps are retained for up to three years by default, including conversations that are “disconnected” from Google Accounts. The company also collects related data, such as the languages and devices used and the user’s location. Human annotators read, label, and process these conversations to improve the service.
Key Takeaway
Google’s Gemini chatbot apps retain conversations for up to three years by default, raising privacy concerns and regulatory scrutiny. As GenAI tools become more widespread, organizations are taking steps to mitigate privacy risks associated with these technologies.
Control Over Data Retention
Users have some control over data retention in Gemini. They can switch off Gemini Apps Activity in Google’s My Activity dashboard to prevent future conversations from being saved to a Google Account for review, and they can delete individual prompts and conversations from the Gemini Apps Activity screen. However, even with Gemini Apps Activity turned off, conversations are still saved for up to 72 hours to maintain safety and security and to improve the apps.
Privacy Concerns and Regulatory Scrutiny
Google’s data collection and retention policies for its GenAI tools are broadly in line with those of its competitors, but they have nonetheless raised concerns about privacy and data protection. Regulatory bodies have scrutinized the practices of companies such as OpenAI, requesting detailed information about how data is vetted and protected. Italy’s data privacy regulator, for instance, has criticized OpenAI for the mass collection and storage of personal data to train its GenAI models.
Privacy Risks and Organizational Response
As GenAI tools become more prevalent, organizations are growing increasingly cautious about the privacy risks they pose. A Cisco survey found that a significant share of companies have placed limits on the data that can be entered into GenAI tools, and some have banned their use altogether. Employees have also been found entering problematic data into these tools, including sensitive information about their employers.