Your use of AI in the office may be putting your business at risk, experts warn

The proliferation of artificial intelligence-based tools in offices has opened up new vulnerabilities in business networks.

Is your AI-powered chatbot endangering your office? The adoption of numerous artificial intelligence-based tools in businesses and government has created new risks for network security. Companies like Samsung now prohibit the internal use of ChatGPT, and with good reason: engineers in its semiconductor division had fed the conversational agent confidential source code that they wanted to improve. The French city of Montpellier applied similar instructions, blocking the tool on municipal employees' workstations.

Many large groups have therefore favored a different approach: developing their own internal artificial intelligence, built on the enterprise versions of ChatGPT or Claude. With data sometimes hosted locally and employees refining the internal tool, everything seems sound in theory. Except that this tool, like any other, can be hacked. And the payoff is all the more attractive once a cybercriminal has stolen access to a program offered to all employees.

Concretely, what are the risks? "If a hacker gains access to an employee's account on a popular chatbot, they will be able to find the sensitive documents that were transferred to it," explains Adrien Merveille, cybersecurity expert at Check Point. "An internally developed program is not immune either. The cybercriminal can attempt code poisoning to modify the results produced by an AI and cause harm to the company," the specialist adds. A company that analyzes camera footage could, for example, be misled by the analysis its program provides.
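To make the idea concrete, here is a minimal, hypothetical sketch of training-data poisoning, using a toy nearest-centroid classifier in place of a real image-analysis model. The feature values, labels, and attack are illustrative assumptions, not drawn from any real incident or product.

```python
# Toy illustration of training-data poisoning (hypothetical example).
# A nearest-centroid classifier stands in for a real vision model.

def centroid(points):
    """Average of a list of feature vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def train(data):
    """Compute one centroid per label from (features, label) pairs."""
    by_label = {}
    for features, label in data:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(points) for label, points in by_label.items()}

def classify(sample, centroids):
    """Return the label whose centroid is closest to the sample."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(sample, centroids[label]))

# Clean training data: "intruder" frames have high motion features, "empty" low.
training = [([0.9, 0.8], "intruder"), ([0.8, 0.9], "intruder"),
            ([0.1, 0.2], "empty"), ([0.2, 0.1], "empty")]

# An attacker with write access injects mislabeled high-motion samples
# so that frames like these are learned as "empty".
poisoned = training + [([0.9, 0.9], "empty")] * 50

suspicious_frame = [0.85, 0.9]
print(classify(suspicious_frame, train(training)))  # -> "intruder"
print(classify(suspicious_frame, train(poisoned)))  # -> "empty" (attack succeeds)
```

The point is only that mislabeled samples slipped into a training set can silently shift what the model learns, without any change to the program's code.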

Modules to block requests to ChatGPT

Solutions are starting to emerge to secure these new internal tools. Check Point now offers modules for search engines that block an employee's requests to ChatGPT when they would disclose private information.

If you ask, for example, for a summary based on the results of the company's different branches, the request will be blocked because it contains information considered sensitive.

An example of a blocked request where the employee would provide financial information to the chatbot. // Source: Check Point
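As a rough illustration of what such a filter might do, here is a minimal sketch that scans an outgoing prompt for sensitive patterns before it reaches the chatbot. The patterns and the block message are illustrative assumptions, not Check Point's actual rules.

```python
# Hypothetical prompt filter: block requests that match sensitive patterns.
import re

SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}\b"),  # card-like numbers
    re.compile(r"\bFR\d{2}[A-Z0-9]{23}\b"),                  # French IBAN
    # Illustrative keyword list; a real product would use far richer rules.
    re.compile(r"(?i)\b(confidential|internal only|revenue|quarterly results)\b"),
]

def allow_prompt(prompt: str) -> bool:
    """Return False if the prompt matches any sensitive pattern."""
    return not any(p.search(prompt) for p in SENSITIVE_PATTERNS)

prompt = "Summarize the quarterly results of our three branches."
if allow_prompt(prompt):
    print("forwarded to the chatbot")
else:
    print("blocked: the request contains information considered sensitive")
```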

ANSSI, the sentinel responsible for the digital protection of the French administration, has also looked into the subject and last spring published an online guide to securing the use of AI in business.

"The most basic error is to believe that artificial intelligence is a separate subject, to the point of forgetting that security rules still apply to its use," warns Vincent Strubel, director general of ANSSI. "We see some errors in practice: AI that updates itself online, AI that is not monitored… Before letting the program make a decision, we recommend mastering it well," he tells us.

Vincent Strubel, Director General of ANSSI. // Source: Patrick Gaillardin

To move forward, ANSSI is considering labels certifying secure artificial intelligence programs. Such national certifications already exist for cybersecurity solutions, but the criteria for safe AI have yet to be defined.

In the meantime, you can always disable chat history in ChatGPT so that your conversations are not used to feed its AI.


Source: www.numerama.com