According to cybersecurity experts, images uploaded to AI chatbots could be retained for an unlimited period of time, potentially leading to impersonation scams if they fall into the wrong hands. When users upload a photo of themselves featuring a company logo and details about their role and ask OpenAI’s ChatGPT for a caricature, the chatbot in turn gains specific knowledge about them.
When a user uploads an image to an AI chatbot, the system processes it to extract data such as the person’s emotional state, their surroundings and details that could disclose their location. A data breach at a company like OpenAI could expose sensitive material such as images and personal information collected by the chatbot. If these fall into the wrong hands, a single high-resolution image could be used to create fake social media accounts or realistic AI deepfakes to carry out scams, according to Charlotte Wilson, Head of Enterprise at Check Point.
Users should avoid uploading images that reveal any identifying information. It is better to keep the background plain and avoid location clues that tie you to your employer. Personal information such as job titles, city or company names should be excluded from prompts. Users can also opt out of having their conversations used for training by turning off the “improve the model for everyone” setting in ChatGPT. In addition, EU law allows users to request the deletion of personal data collected by companies. However, OpenAI says that it may retain certain information even after deletion to prevent fraud, in light of rising security concerns.