HACKER Q&A
📣 sjw987

Is it likely AI models could start training on personal files?


I've been sorting through my content on Google recently. Backing up and moving off of Gmail and Google Drive was relatively simple, but Google Photos is more daunting. Google Takeout has delivered almost 500 zip archives of 2 GB each, with the metadata scattered across supplemental sidecar files, so sorting it all out is going to take a while. It's my own fault for sticking with one platform for so long; I got hooked during the "unlimited storage" days of the early Google Pixel phones.
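
In case it helps anyone in the same situation, below is the kind of minimal Python sketch I've been using to push the "photo taken" timestamp from each sidecar JSON back onto the matching file. The IMG_1234.jpg.json naming and the photoTakenTime field are just what my export looks like (newer exports apparently use a .supplemental-metadata.json suffix), so check it against your own Takeout before trusting it:

    # Sketch: restore file timestamps on extracted Takeout photos from
    # their JSON sidecars. The naming and fields below match my export
    # (e.g. IMG_1234.jpg + IMG_1234.jpg.json); verify against yours.
    import json
    import os
    from pathlib import Path

    TAKEOUT_DIR = Path("Takeout/Google Photos")  # wherever the zips were extracted

    for sidecar in TAKEOUT_DIR.rglob("*.json"):
        media = sidecar.with_suffix("")  # "IMG_1234.jpg.json" -> "IMG_1234.jpg"
        if not media.exists():
            continue  # album-level metadata.json etc. have no matching file
        meta = json.loads(sidecar.read_text(encoding="utf-8"))
        taken = meta.get("photoTakenTime", {}).get("timestamp")
        if taken:
            ts = int(taken)
            os.utime(media, (ts, ts))  # set access + modification time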

The reason I've begun downloading and removing stored files is that I'm concerned (maybe justifiably, maybe not) about my personal photos being used to train AI models. It worries me that some diffusion model might end up recreating a heavily biased likeness of my wife, family, friends, or myself, or drawing on my files and documents, and that I'd have no say in how any of that gets used (commercially or otherwise).

Google is the only place I've ever put my personal photos. I've never bothered with anything public-facing, and I trusted that a private cloud storage service would always stay private. So in my case, Google is the one service I need to leave to regain data sovereignty.

Does anybody believe Google (and other companies) might soon start scanning the personal files we keep on their storage services? Is that a legal possibility for them?

It seems to me that it's a huge pool of fresh training data that they would inevitably want to get their hands on. And given how much they have already trained on, it seems the next logical step from a business standpoint.

Clearly they would need to change their privacy policies and terms of service and inform users of those changes. Is it possible they could slip this sort of change in without much notice?

I was also wondering if anybody has pointers on the best strategy for secure offline backups. I don't want to just shift my family photos from one company to another where the execs are training their own model. Anybody else handled this recently?


  👤 incomingpain Accepted Answer ✓
>I've been sorting through my content on Google recently.

There are allegations that Gemini has already been trained on this data.

>Does anybody believe Google (and other companies) might soon start scanning the personal files we keep on their storage services? Is that a legal possibility for them?

Anyone on a free account has already agreed to terms that allow their data to be used.

>It seems to me that it's a huge pool of fresh training data that they would inevitably want to get their hands on. And given how much they have already trained on, it seems the next logical step from a business standpoint.

I'm actually not so sure they have, or ever will. The problem isn't quantity, it's quality. Sure, a model could train on the piles of junk in people's accounts, but then at inference time it'll produce junk.

>Clearly they would need to change their privacy policies and terms of service and inform users of those changes. Is it possible they could slip this sort of change in without much notice?

You've been agreeing for a very long time to let them read the contents of your files for antivirus and antispam purposes. To start doing it for AI requires no change.

>I was also wondering if anybody has pointers on the best strategy for secure offline backups. I don't want to just shift my family photos from one company to another where the execs are training their own model. Anybody else handled this recently?

One of the useful apps I found was FolderSync, which makes backing up to CIFS (SMB) shares possible.
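
Whatever tool does the copying, I'd also verify the offline copy independently rather than trusting the sync tool. A rough sketch (the paths are placeholders, adapt to your own layout) that builds SHA-256 manifests of both sides and flags anything missing or changed:

    # Sketch: compare a photo archive against its offline copy by
    # SHA-256 digest. Paths below are placeholders for your own setup.
    import hashlib
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        h = hashlib.sha256()
        with path.open("rb") as fh:
            for chunk in iter(lambda: fh.read(1 << 20), b""):  # 1 MiB chunks
                h.update(chunk)
        return h.hexdigest()

    def manifest(root: Path) -> dict:
        # Map each file's path (relative to root) to its digest.
        return {str(f.relative_to(root)): sha256_of(f)
                for f in root.rglob("*") if f.is_file()}

    source = manifest(Path("/photos/master"))
    backup = manifest(Path("/mnt/nas/photos"))  # e.g. a mounted CIFS share

    for rel, digest in source.items():
        if backup.get(rel) != digest:
            print("missing or mismatched:", rel)

Hashing both sides catches silent corruption and half-finished syncs that a plain file-count or size comparison would miss.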