“Proper control beats trust”: How to protect your business from shadow AI

It’s tempting to use free generative AI tools such as OpenAI’s ChatGPT or Google’s Bard for work. The output still needs editing, but these tools offer a much faster way to write emails, edit text, make lists, prepare presentations, or generate code.

However, the unauthorized use of such AI services, which are aimed primarily at individual users, in a work environment creates a range of problems.


The most dangerous problem: data leaks

Many generative AI platforms train their models on data submitted by users, which puts sensitive or copyrighted material, source code, and the like at risk of external exposure. Samsung, for example, banned its employees from using ChatGPT in 2023 after sensitive data, including confidential source code, had accidentally been submitted to the platform on three separate occasions. Many large companies have since followed suit.

As a Google research team revealed in a report in late 2023, ChatGPT can be made to disclose private user data with just a few prompts. By feeding ChatGPT absurd instructions, such as asking it to repeat a single word endlessly, the researchers caused it to malfunction and regurgitate memorized training data, including the names, phone numbers, and addresses of individuals and companies.

According to security researcher Johann Rehberger, who first discovered and described the vulnerability, OpenAI quickly patched the bug, but the underlying security weakness remains. And because AI chatbots are trained on open-source (or stolen) data, there is a possibility that code generated by ChatGPT contains malicious code planted by hackers.

AI hallucinations with ripple effects

It is now well known that generative AI does not always deliver the expected results. It can summarize long texts relatively well, for example, but it struggles to generate original content of its own. AI tools frequently hallucinate, cite fictitious sources, and are particularly weak at mathematics.

There is therefore a high risk that users, impressed by the lengthy explanations or lightning-fast code that generative AI produces, will accept the results without checking them. Used internally, this merely damages the user’s own reputation; but a blatant error in external communication can tarnish the image of the entire company.

Proving cost-effectiveness

As with other types of shadow IT, there is a positive side to the unsanctioned professional use of generative AI tools: it demonstrates that users need specific tools to do their jobs more easily, quickly, and efficiently.

If your organization permits the use of generative AI under clear data protection guidelines, IT leaders can respond to the risk of data leaks relatively quickly with enterprise solutions, for example by purchasing licenses for generative AI tools such as Copilot for Microsoft 365, ChatGPT Team, or ChatGPT Enterprise.

However, with license fees of more than 20 euros per user per month, approval is hard to obtain until real productivity gains or cost savings can be demonstrated, not to mention the training required to achieve genuinely useful results.

Systematic approach and education

In 2024, practically every business with more than two computers is considering AI in its operations. That means weighing how much, in what form, for what purpose, and, if budgets are tight, who gets to use the technology.

Based on this, appropriate guidelines should be defined so that employees (and management) can use generative AI safely and sensibly. The following key elements should be defined (a minimal policy sketch in code follows the list):

  • The employees and departments authorized to use generative AI models in their work
  • The business processes that can be automated or improved with generative AI
  • The internal applications and data these models may access, and how
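To make this concrete, here is a minimal sketch of how such guidelines might be captured in machine-readable form, written in Python. Every department name, tool name, and data class in it is a hypothetical example, not a recommendation:

```python
# Hypothetical usage policy: which departments may use which generative AI
# tools, and up to which data sensitivity level. All names are illustrative.

# Data classes ordered from least to most sensitive.
DATA_CLASSES = ["public", "internal", "confidential"]

POLICY = {
    "marketing":   {"tools": {"copilot"}, "max_class": "public"},
    "engineering": {"tools": {"copilot", "chatgpt_team"}, "max_class": "internal"},
}

def is_allowed(department: str, tool: str, data_class: str) -> bool:
    """Return True if the department may send data of this class to the tool."""
    rule = POLICY.get(department)
    if rule is None or tool not in rule["tools"]:
        return False
    return DATA_CLASSES.index(data_class) <= DATA_CLASSES.index(rule["max_class"])

print(is_allowed("engineering", "chatgpt_team", "internal"))  # True
print(is_allowed("marketing", "copilot", "confidential"))     # False
print(is_allowed("finance", "chatgpt_team", "public"))        # False: not listed
```

The point of such a sketch is not the code itself but the discipline it forces: every department, tool, and data class must be named explicitly before anyone can claim their usage is sanctioned.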

The next step is to train employees to use the models safely and effectively. And because generative AI is a highly dynamic market, it is worth reviewing your policies regularly, even once you have settled on a platform that suits your business.

Proper control is better than trust

Nonetheless, IT leaders need to watch for unauthorized use of generative AI and, more importantly, take steps to prevent leaks of sensitive data. Fortunately, when it comes to protecting sensitive data, generative AI platforms are not much different from any other destination on the Internet.

Because they are accessed through the browser, they are not as easy to detect as classic shadow IT, such as free tools or SaaS applications like Salesforce procured past the IT department on a department head’s credit card. With the right tools, however, you can block access to these platforms (URL filtering) or prevent user actions such as uploading and transmitting sensitive data to them (content filtering).
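As a rough illustration of the content-filtering idea, the sketch below screens an outgoing prompt against a few sensitive-data patterns before it may be forwarded to an external AI service. The patterns are hypothetical placeholders; a real deployment would rely on a DLP product’s rule sets rather than hand-written regexes:

```python
import re

# Hypothetical content filter: patterns that should never leave the company.
# These regexes are illustrative; real DLP tools ship far richer rule sets.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b"),  # card-like numbers
    re.compile(r"(?i)\bconfidential\b"),                      # classification markers
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),        # leaked key material
]

def safe_to_send(prompt: str) -> bool:
    """Return True only if no sensitive pattern occurs in the outgoing prompt."""
    return not any(p.search(prompt) for p in SENSITIVE_PATTERNS)

print(safe_to_send("Draft a friendly reminder email to a supplier"))  # True
print(safe_to_send("Summarize this CONFIDENTIAL merger memo"))        # False
```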

In this context, it also makes sense to classify your data. If you do not want to ban employees from using generative AI outright, this step lets you determine which data is appropriate for a particular use case and keep other information out of your AI systems.
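Continuing the sketch: once records carry a sensitivity label, excluding restricted material from an AI system becomes a simple filter. The labels and records below are again purely hypothetical; in practice the labels would come from a classification tool or document metadata:

```python
# Hypothetical labeled records; real labels would come from a classification
# tool or document metadata, not be hard-coded like this.
documents = [
    {"id": 1, "label": "public",       "text": "Press release draft"},
    {"id": 2, "label": "confidential", "text": "M&A negotiation notes"},
    {"id": 3, "label": "internal",     "text": "Team meeting summary"},
]

AI_ALLOWED_LABELS = {"public", "internal"}  # assumption: confidential stays out

# Only records whose label is cleared for AI use pass through the filter.
ai_safe = [doc for doc in documents if doc["label"] in AI_ALLOWED_LABELS]
print([doc["id"] for doc in ai_safe])  # [1, 3]
```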

These steps will not only minimize the risks inherent in using generative AI, but also ensure that companies and employees do not miss out on the technology’s opportunities because of overly strict rules. Beyond the current hype, it is now clear that generative AI is not a short-lived trend but a technology with enormous disruptive potential.
editor@itworld.co.kr

Source: www.itworld.co.kr