Any data, prompts, or requests you share help teach the algorithm—and personalized information helps fine-tune it further, says Jake Moore, global cybersecurity adviser at security outfit ESET, who created his own action figure to demonstrate the privacy risks of the trend on LinkedIn.
Uncanny Likeness
In some markets, your photos are protected by regulation. In the UK and EU, data-protection laws including the GDPR offer strong protections, among them the right to access or delete your data. At the same time, the use of biometric data requires explicit consent.
However, photographs become biometric data only when processed through a specific technical means allowing the unique identification of a specific individual, says Melissa Hall, senior associate at law firm MFMac. Processing an image to create a cartoon version of the subject in the original photograph is “unlikely to meet this definition,” she says.
Meanwhile, in the US, privacy protections vary. “California and Illinois are leading with stronger data protection laws, but there is no standard position across all US states,” says Annalisa Checchi, a partner at IP law firm Ionic Legal. And OpenAI’s privacy policy doesn’t contain an explicit carve-out for likeness or biometric data, which “creates a grey area for stylized facial uploads,” Checchi says.
The risks include your image or likeness being retained, potentially used to train future models, or combined with other data for profiling, says Checchi. “While these platforms often prioritize safety, the long-term use of your likeness is still poorly understood—and hard to retract once uploaded.”
OpenAI says its users’ privacy and security are a top priority. The firm wants its AI models to learn about the world, not private individuals, and it actively minimizes the collection of personal information, an OpenAI spokesperson tells WIRED.
Users also have control over how their data is used, with self-service tools to access, export, or delete personal information. You can opt out of having content used to improve models, according to OpenAI.
ChatGPT Free, Plus, and Pro users can control whether they contribute to future model improvements in their data controls settings. OpenAI does not train on ChatGPT Team, Enterprise, and Edu customer data by default, according to the company.
Trending Topics
The next time you are tempted to jump on a ChatGPT-led trend such as the action figure or Studio Ghibli–style images, it’s wise to consider the privacy trade-off. The risks apply not only to ChatGPT but also to many other AI image-editing and generation tools, so it’s important to read a service’s privacy policy before uploading your photos.
There are also steps you can take to protect your data. In ChatGPT, the most effective step is to turn off chat history, which helps ensure your data is not used for training, says Vazdar. You can also upload anonymized or modified images, for example, using a filter or generating a digital avatar rather than an actual photo, he says.
It’s also worth stripping out metadata from image files before uploading, which is possible using photo-editing tools; EXIF metadata can include GPS coordinates, timestamps, and device details. “Users should avoid prompts that include sensitive personal information and refrain from uploading group photos or anything with identifiable background features,” says Vazdar.
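For those who want to automate that step, here is a minimal sketch of what metadata stripping could look like using Pillow, the open-source Python imaging library; the filenames are placeholders, and dedicated utilities such as ExifTool achieve the same result.

```python
# Minimal sketch: re-save an image with pixel data only, dropping
# EXIF and other embedded metadata (GPS location, device model,
# timestamps). Requires Pillow: pip install Pillow
from PIL import Image

def strip_metadata(src_path: str, dst_path: str) -> None:
    with Image.open(src_path) as img:
        # Copy only the raw pixels into a fresh image; metadata
        # attached to the original file is left behind.
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))
        clean.save(dst_path)

# Placeholder filenames -- point these at your own files.
strip_metadata("photo.jpg", "photo_clean.jpg")
```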
Double-check your OpenAI account settings, especially those related to data use for training, Hall adds. “Be mindful of whether any third-party tools are involved, and never upload someone else’s photo without their consent. OpenAI’s terms make it clear that you’re responsible for what you upload, so awareness is key.”
Checchi recommends disabling model training in OpenAI’s settings, avoiding location-tagged prompts, and steering clear of linking content to social profiles. “Privacy and creativity aren’t mutually exclusive—you just need to be a bit more intentional.”