3 factors to consider when evaluating generative AI solutions for cybersecurity

Earlier this year, I predicted that 2024 would be the year cybersecurity practitioners overcome the cyber gap: the use of AI in cybersecurity would grow, with technology providers incorporating generative AI into their security products and services and turning AI into a countermeasure against attacks such as malicious simulations and deepfakes.


Now, as we enter the final quarter of the year, cybersecurity teams at many companies are beginning to explore how to apply generative AI. In most cases, the goal is to speed up the detection, containment, and eradication steps of incident management and to improve risk assessment. Generative AI can be a great help in identifying the root cause of an incident and accelerating those containment and eradication steps.

However, some vendors are trying to ride the generative AI wave and capture the market without putting appropriate safeguards in place. They rush products to launch and often push fixes onto future roadmaps rather than resolving potential problems before release.

Meanwhile, I believe generative AI has passed the ‘Peak of Inflated Expectations’ on the Gartner Hype Cycle and is entering the ‘Trough of Disillusionment’. This is the stage where end users find the sweet spot between expectations and reality.

Here, based on my experience testing and evaluating several generative AI solutions for cybersecurity, I have outlined three key factors users should consider. They should be helpful to cybersecurity teams looking to start adopting generative AI.

1. Confidence in use

Confidence in use refers to how much you can trust the output produced by a prompt or prompt book. Because generative AI runs the risk of ‘hallucination’, many solution providers include disclaimers requiring users to verify the output.

In my experience, some vendors are not confident enough to stand behind specific prompts or prompt books. Nonetheless, these same vendors argue that generative AI helps companies solve problems at ‘machine speed’. When evaluating generative AI solutions for cybersecurity, it is therefore important to identify clearly which outputs can be trusted and which require further verification.

Ultimately, under an assumed-breach approach, the most important thing in incident management is the ability to respond quickly. If the output is unreliable, false positives and missed detections follow, and the real problem goes unresolved.

Security teams also depend on the accuracy and completeness of the summaries the solution generates. It is therefore essential to understand clearly how accurate and reliable a vendor’s generative AI solution actually is.
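To make this concrete, here is a minimal sketch of the kind of verification gate a team could put in front of AI-generated findings. It is written in Python with hypothetical field names and thresholds and does not reflect any particular vendor’s API: output that lacks cited log sources or a sufficient confidence score is routed to an analyst rather than acted on automatically.

```python
from dataclasses import dataclass, field

@dataclass
class AiFinding:
    """One piece of generative-AI output attached to an incident (illustrative fields)."""
    summary: str
    cited_log_sources: list = field(default_factory=list)  # sources the model claims it used
    model_confidence: float = 0.0                           # 0.0-1.0, if the vendor exposes one

def needs_human_review(finding: AiFinding,
                       min_confidence: float = 0.8,
                       required_sources: int = 1) -> bool:
    """Flag output that should not be acted on without analyst verification."""
    if finding.model_confidence < min_confidence:
        return True
    if len(finding.cited_log_sources) < required_sources:
        return True
    return False

# Example: a confident-sounding summary with no cited log sources still goes to an analyst.
finding = AiFinding(summary="Host A likely compromised via phishing.",
                    cited_log_sources=[], model_confidence=0.9)
queue = "analyst_review" if needs_human_review(finding) else "auto_containment"
print(queue)  # -> analyst_review
```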

2. Usage friction

Writing a good prompt is still more art than science; multiple rounds of adjustment and iteration are needed to get the desired output. In addition, some generative AI solutions are weak at ad-hoc, open-ended security queries. The result runs contrary to the expectation of solving problems at ‘machine speed’.

Some generative AI solutions also do not yet integrate a sufficiently broad set of log sources. This reduces the completeness and accuracy of the output and makes users reluctant to rely on the solution. Usage friction is compounded further when prompts are billed on a metered, utility-style model: users hesitate to run a prompt when they are accountable for every invocation.

Therefore, to effectively adopt generative AI solutions, it is important to recognize and address factors that cause usage friction.
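One practical way to reduce this friction is to standardize recurring questions into a reusable, parameterized prompt template and batch related questions into a single request, so a metered billing model is not charged once per ad-hoc question. The sketch below is illustrative only; the template text, field names, and workflow are assumptions, not any vendor’s prompt book format.

```python
# A rough sketch: a reusable triage template plus batching of related questions,
# so a metered model is invoked once per incident rather than once per question.
TRIAGE_TEMPLATE = """You are assisting a SOC analyst.
Incident ID: {incident_id}
Answer each numbered question using only these log sources: {log_sources}.
Cite the log source for every claim.
Questions:
{questions}"""

def build_triage_prompt(incident_id: str, log_sources: list, questions: list) -> str:
    numbered = "\n".join(f"{i}. {q}" for i, q in enumerate(questions, start=1))
    return TRIAGE_TEMPLATE.format(incident_id=incident_id,
                                  log_sources=", ".join(log_sources),
                                  questions=numbered)

prompt = build_triage_prompt(
    incident_id="IR-2024-0187",
    log_sources=["EDR telemetry", "VPN logs", "email gateway"],
    questions=["What is the likely initial access vector?",
               "Which hosts show related indicators?",
               "What containment steps are recommended?"])
print(prompt)  # one metered call instead of three separate ad-hoc prompts
```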

3. Usage governance

Lastly, some vendors start charging usage fees as soon as generative AI features are activated. Like a faucet left running that keeps driving up the water bill, costs can accumulate whether or not the features are actually used. A governance structure is needed to prevent this kind of waste; at a minimum, safeguards such as role-based access control and an appropriate accounting (chargeback) system should be in place.

That said, however the governance structure is designed, a utility billing model can still provide elasticity, just as in cloud computing. That is precisely why end users must check how mature a solution’s governance structure is, so that abuse and waste can be prevented.
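As a rough illustration of these minimal safeguards, the sketch below combines a role check with a per-team prompt quota that also feeds a chargeback ledger. The role names, quota value, and ledger structure are hypothetical assumptions, not features of any specific product.

```python
from collections import defaultdict

ALLOWED_ROLES = {"soc_analyst", "incident_responder"}  # roles permitted to invoke AI prompts
MONTHLY_PROMPT_QUOTA = 500                             # per-team cap to contain metered costs

usage_ledger = defaultdict(int)                        # team -> prompts used this month

def authorize_prompt(user_role: str, team: str) -> bool:
    """Allow a prompt only for permitted roles and while the team is under quota."""
    if user_role not in ALLOWED_ROLES:
        return False
    if usage_ledger[team] >= MONTHLY_PROMPT_QUOTA:
        return False
    usage_ledger[team] += 1                            # record usage for chargeback reporting
    return True

print(authorize_prompt("soc_analyst", "team-emea"))  # True: role allowed, quota available
print(authorize_prompt("marketing", "team-emea"))    # False: role not permitted to run prompts
```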

In short, amid the flood of generative AI solutions, technology buyers should weigh these three points. As with cloud computing, as the adoption of generative AI in cybersecurity matures, it will move beyond the Trough of Disillusionment, onto the ‘Slope of Enlightenment’, and eventually reach the ‘Plateau of Productivity’.
editor@itworld.co.kr

Source: www.itworld.co.kr