How AI can leak your sensitive corporate data: what you need to know and how OTYS protects you
The rapid rise of AI technologies such as ChatGPT offers organizations unprecedented opportunities to accelerate processes and work more efficiently. From customer support to automated reporting, AI is playing an increasing role in improving business processes. However, with these benefits also come new risks, especially when sensitive information is accidentally shared through these AI systems. A recent NOS (Dutch Broadcasting Foundation) investigation, for example, found that even general practitioners are entering sensitive patient data into AI systems, which can lead to serious data breaches. This trend therefore raises important questions about how we handle data and privacy in the modern digital world.
Every company wants to avoid a data breach when using AI. In this blog, you can read more about the dangers and get practical tips.
AI and data breaches: a growing challenge
The risks are real. If company data, customer information or other sensitive data is shared through AI systems, it can end up outside your company's secure environment. AI models such as ChatGPT are trained on massive amounts of data, and although they do not literally remember everything you enter, traces of confidential information may still be left behind. That information can then be used to generate future responses, which means your sensitive data may be shared unintentionally.
A Gartner study predicted that by 2025, nearly 60% of all companies will have experienced at least one incident in which sensitive information was leaked through the use of AI. This makes it all the more important to take preventive measures. Awareness of these challenges is the first step toward using AI technologies securely.
The role of awareness and regulation
OTYS Data Protection Officer Bastiaan Brans underscores the importance of handling data consciously: “Nothing in the world is free. If you don't pay in money, you pay in data. In 2005, for example, Google acquired one of the best web analytics tools of the time (which cost hundreds of dollars per month), greatly improved the product and then made it available 'for free' as 'Google Analytics'. Risks are always part of life; you want to minimize them. The moment you provide data to an AI algorithm, do it through European servers.”
As Bastiaan pointed out, risks are always there; sometimes people are simply not aware of them. “An example is Suno, a tool for AI-generated songs. You can quite innocently enter rather emotional and sensitive information there, but you don't know what happens to the information you provide to Suno.” As a recruiter, you should therefore never blindly paste information about candidates into ChatGPT either; you never know what happens to it afterwards.
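To make this concrete: below is a minimal, purely illustrative sketch of the kind of precaution you could take before any candidate text leaves your own environment. It is not part of the OTYS platform; the redaction patterns, the example note and the candidate name are assumptions, and real personal data is much harder to detect reliably than this.

```python
import re

def redact_candidate_text(text: str, known_names: list[str]) -> str:
    """Strip obvious personal identifiers before text is sent to any external AI service."""
    # Mask e-mail addresses
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[email]", text)
    # Mask Dutch-style phone numbers (very rough pattern, illustration only)
    text = re.sub(r"(\+31|0)[\s-]?\d([\s-]?\d){8}", "[phone]", text)
    # Replace names you already know about with a neutral placeholder
    for name in known_names:
        text = re.sub(re.escape(name), "[candidate]", text, flags=re.IGNORECASE)
    return text

if __name__ == "__main__":
    note = (
        "Jan Jansen (jan.jansen@example.com, 06-12345678) is open to a new "
        "role in Utrecht and currently earns above market rate."
    )
    print(redact_candidate_text(note, known_names=["Jan Jansen"]))
    # -> "[candidate] ([email], [phone]) is open to a new role in Utrecht ..."
```

The point is not these specific patterns, but the principle: remove or replace identifying details before any text leaves your own secure environment.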
Moreover, privacy laws such as the GDPR (General Data Protection Regulation, known in the Netherlands as the AVG) are clear: companies are responsible for the security of personal data, regardless of which technologies they deploy. This means the onus is on companies to ensure that AI systems are used securely and that sensitive data is not shared casually.
How OTYS meets the challenge
At OTYS, we understand these risks and take them very seriously. That is why we have not only implemented technical safeguards, but also invested heavily in raising awareness among our users. Bastiaan Brans explains how we approach this: “Training, repetition, mentioning it often; we do the usual things you do for awareness. But what is actually much more important is offering people an alternative. Many people know that flying to Bali is not exactly good for the environment, but if you think about it, there are very few good alternatives if you really want to go to Bali. So awareness is step one, but simply 'banning' ChatGPT is not going to work, in our opinion. The OTYS AI Assistant is a good alternative to using ChatGPT.”
OTYS works continually to improve its systems and processes so that we protect our clients as well as possible against potential risks. This ranges from real-time monitoring to secure data storage; our clients' data security always comes first. Using the AI Assistant is therefore a sensible way to prevent a data breach when working with AI.
Innovations for the future
Our focus is not only on the present but also on the future. We are currently working on advanced innovations, including new data encryption techniques and AI tools that actively help companies prevent data breaches.
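To give a general sense of what encrypting stored data involves, here is a generic sketch; it is not a description of OTYS's own implementation, and it assumes the open-source `cryptography` package plus an invented example note.

```python
# Generic illustration of field-level encryption at rest, using the
# open-source `cryptography` package (pip install cryptography).
# This only shows the general idea of encrypting sensitive data before storage.
from cryptography.fernet import Fernet

# In practice the key would come from a key-management system and would
# never be generated ad hoc or stored next to the data it protects.
key = Fernet.generate_key()
fernet = Fernet(key)

candidate_note = "Confidential: candidate is negotiating with a competitor."

# Encrypt before writing to the database or file storage ...
stored_value = fernet.encrypt(candidate_note.encode("utf-8"))

# ... and decrypt only when an authorized process needs the plaintext.
original = fernet.decrypt(stored_value).decode("utf-8")
assert original == candidate_note
```

The essential design choice is that plaintext only exists inside your own environment, while anything written to storage is unreadable without the key.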
Conclusion
The inadvertent sharing of sensitive information through AI systems such as ChatGPT is a growing problem affecting businesses worldwide. At OTYS, we take these challenges extremely seriously. With our innovative solutions and focus on data security, we ensure that our clients' data remains protected, now and in the future.