AI data & privacy


“As artificial intelligence evolves, it magnifies the ability to use personal information in ways that can intrude on privacy interests by raising analysis of personal information to new levels of power and speed.” — Protecting privacy in an AI-driven world

Generative AI makes an already challenging privacy environment more complex. Large language models benefit from being trained on as much data as possible, which drives large-scale harvesting of information from the Internet and increased data tracking in other spaces. Once data has been absorbed into a model, it may be impossible to remove. Even where removal is possible, it may place an unreasonable burden on the individual, and the options available often depend on where that individual lives.

The AI model itself may allow unexpected and problematic interactions with existing data. The same emergent capabilities that make LLMs so interesting also pose unknown risks for individuals. We may need to develop new privacy technologies to balance the advantages of LLMs against the need to maintain privacy.

 

An AI image created using Bing’s image creator, showing a watercolor painting of a circuit board.

Protecting your data while using AI tools

Using these tools may also result in the data you enter, and the interactions themselves, becoming part of the training data. Samsung banned the use of AI tools after an employee leaked internal data through ChatGPT. The conversational (chat) format may obscure these risks even from savvy technology users.

Protecting your data from AI tools

At this time, the options to prevent the harvesting and use of your data are service-specific. If you want to avoid having your data end up in LLMs, consider carefully what you publish on the open Internet and review the terms of use for each tool before you use it.
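For data you publish on your own website, one commonly documented option is to ask AI crawlers not to collect it via a robots.txt file. This is a sketch, not a guarantee: robots.txt is voluntary, and the rules below only cover crawlers that publicly document their user agents (OpenAI’s GPTBot, Common Crawl’s CCBot, and Google’s Google-Extended token for AI training).

```
# robots.txt at the root of your site
# Ask OpenAI's crawler not to collect pages for training
User-agent: GPTBot
Disallow: /

# Ask Common Crawl's crawler (a common LLM training source) not to collect
User-agent: CCBot
Disallow: /

# Opt out of Google's AI training uses (does not affect Search indexing)
User-agent: Google-Extended
Disallow: /
```

Note that these directives only apply to compliant crawlers going forward; they do not remove data that has already been collected or that appears in copies of your content hosted elsewhere.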

Correcting or removing data from AI tools

The removal and correction of data in LLMs is complex. At the moment, major LLM services warn that their output may be incorrect and should be verified, but there is no established path to correct or remove data already embedded in a model.

Larger concerns

AI software carries risks that differ from those of traditional software, and it is already being used in ways that negatively affect large populations around the world.