Embrace Generative AI for Security, But Use Caution

There’s a lot of talk out there about the impact of generative AI on cybersecurity—good and bad.

On one side, you have the advocates convinced of its potential to help fend off bad actors; on the other, the skeptics who fear generative AI will dramatically accelerate the volume and severity of security incidents in the coming years.

We’re in the early innings of generative AI. But its potential has become hard to ignore.

It’s already proving its value as an accelerant to automation—which is an attractive proposition for any CISO looking to shift their team’s focus from tedious day-to-day tasks to more strategic projects.

We’re also getting a glimpse of the future. Security teams worldwide are already experimenting with large language models (LLMs) as a force multiplier to:

  • Scan large volumes of data for hidden attack patterns and vulnerabilities (a rough sketch of this follows the list);
  • Run simulated phishing tests;
  • Generate synthetic data sets to train models to identify threats.
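
To make the first of those concrete, here’s a rough Python sketch of the pattern: batch log lines, ask a model to flag anything that looks like part of an attack, and queue whatever it flags for human review. The ask_llm() helper is a placeholder for whichever model your team uses, not a specific vendor API, and the prompt, batch size and fallback behavior are illustrative assumptions.

```python
import json

def ask_llm(prompt: str) -> str:
    """Placeholder for whichever LLM your team uses (hosted API, local model, etc.).
    Returning an empty JSON list keeps the sketch runnable; swap in a real call."""
    return "[]"

def scan_logs_for_patterns(log_lines: list[str], batch_size: int = 50) -> list[dict]:
    """Ask the model to flag suspicious entries in batches, then queue them for human review."""
    flagged = []
    for start in range(0, len(log_lines), batch_size):
        batch = log_lines[start:start + batch_size]
        prompt = (
            "You are assisting a security analyst. Review the log lines below and return a "
            "JSON list of objects with 'line' and 'reason' for any entry that looks like part "
            "of an attack pattern (brute force, lateral movement, exfiltration). Return [] if "
            "nothing stands out.\n\n" + "\n".join(batch)
        )
        try:
            flagged.extend(json.loads(ask_llm(prompt)))
        except json.JSONDecodeError:
            # Hallucinated or malformed output happens; never drop data silently.
            # Mark the whole batch for manual review instead.
            flagged.append({"line": f"batch starting at line {start}", "reason": "manual review"})
    return flagged  # a human analyst still triages everything the model flags

print(scan_logs_for_patterns(["Failed password for root from 203.0.113.7 port 22"]))
```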

I personally believe generative AI will be a net positive for security, but with a large caveat: It could make security teams dangerously complacent.

Simply put, an overreliance on AI could lead to a lack of supervision in an organization’s security operations, which could easily create gaps in the attack surface.

Look, Mom, No Hands!

There’s a general belief that if AI becomes smart enough, it will require less human oversight. In a practical sense, this would result in less manual work. It sounds great in theory, but in reality, it’s a slippery slope.

False positives and negatives are already a big problem in cybersecurity. Ceding more control to AI would only make things worse.

To break it down, LLMs are built on statistical, sequence-based analysis of text and don’t understand context. They predict likely sequences of words, not meaning, which leads to hallucinations that can be hard to detect even under close inspection.

For example, if a security pro asks an LLM for guidance on remediating a vulnerability related to the remote desktop protocol (RDP), the model is likely to recommend the most common remediation method rather than the one that actually fits the environment. The guidance might be 100% wrong yet appear plausible.

The LLM has no understanding of the vulnerability or what the remediation process means. It relies on a statistical analysis of typical remediation processes for that class of vulnerabilities.

The Accuracy and Inconsistency Conundrum

The Achilles’ heel of LLMs lies in the inconsistency and inaccuracy of their outputs.

Tom Le, Mattel’s chief information security officer, knows this all too well. He and his team have been applying generative AI to amplify defenses but are finding that, more often than not, the models “hallucinate.”

According to Tom, “Generative AI hasn’t reached a ‘leap of faith’ moment yet where companies could rely on it without employees overseeing the outcome.”

His sentiment reinforces my point that generative AI poses a threat by way of human complacency.

You Can’t Take the Security Pro out of Security

Contrary to what the doomers may think, generative AI will not replace humans, at least not in our lifetime. Human intuition is still unbeatable at detecting certain security threats.

For example, in application security, SQL injection and other vulnerabilities can create huge cybersecurity risks detectable only when humans run reverse engineering and fuzzing on the application.
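
Here’s a minimal, self-contained illustration of that class of flaw, using Python’s built-in sqlite3 module (the table and inputs are made up for the example): a query built by string concatenation, which a fuzzer or a careful human reviewer would catch, next to the parameterized version that closes the hole.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user_vulnerable(name: str):
    # Vulnerable: user input is concatenated straight into the SQL string.
    # A fuzzer feeding inputs like "' OR '1'='1" returns every row, which is
    # exactly the kind of behavior a human reviewer learns to spot.
    query = "SELECT * FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def find_user_safe(name: str):
    # Parameterized query: the driver treats the input as data, not as SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_vulnerable("' OR '1'='1"))  # leaks both rows
print(find_user_safe("' OR '1'='1"))        # returns []
```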

Code written by humans is also much easier for other humans to read, parse and understand. In AI-generated code, vulnerabilities can be far more difficult to detect because no human developer is familiar with the application’s code. Security teams that use AI-generated code will need to spend more time getting familiar with the AI’s output and identifying issues before they become exploits.

Looking to generative AI for fast code should not cause security teams to lower their guard; it may actually mean spending more time ensuring the code is safe.

But AI Is Not All Bad!

Despite both the positive and negative sentiment today, generative AI has the potential to augment our capabilities. It just has to be applied judiciously.

For instance, deploying generative AI in conjunction with Bayesian machine learning (ML) models can be a safer way to automate cybersecurity. The Bayesian models make training, assessment and measurement of output easier, and the combination is easier to inspect and debug when inaccuracies occur. It can be used either to create new insights from data or to validate the output of a generative AI model.
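
As a rough sketch of the validation pattern, with scikit-learn’s Multinomial Naive Bayes standing in for the Bayesian model and made-up alert text and threshold: train the Bayesian classifier on alerts your analysts have already labeled, then use its probability estimate as a sanity check on whatever verdict the generative model produces. Disagreements and low-confidence calls get routed to a human.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set: alert text your analysts have already labeled.
alerts = [
    "multiple failed ssh logins from single external ip",
    "powershell spawned by office macro writing to temp directory",
    "scheduled backup job completed successfully",
    "routine windows update installed on workstation",
]
labels = ["malicious", "malicious", "benign", "benign"]

# Bayesian stand-in: TF-IDF features feeding a Multinomial Naive Bayes classifier.
validator = make_pipeline(TfidfVectorizer(), MultinomialNB())
validator.fit(alerts, labels)

def validate_llm_verdict(alert_text: str, llm_verdict: str, threshold: float = 0.7) -> str:
    """Use the Bayesian model's class probability as a cross-check on the LLM's verdict.
    Anything the two disagree on, or that the model is unsure about, goes to a human."""
    proba = validator.predict_proba([alert_text])[0]
    label = validator.classes_[proba.argmax()]
    if label == llm_verdict and proba.max() >= threshold:
        return f"agree: {label} (confidence {proba.max():.2f})"
    return "disagreement or low confidence; route to a human analyst"

# The generative model called this alert benign; does the Bayesian model agree?
print(validate_llm_verdict("failed ssh logins from an unknown external ip", "benign"))
```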

Alas, cybersecurity pros are people, and people are not perfect. We may be slow, exhausted after long workdays and error-prone, but we have something AI does not: Judgment and nuance. We have the ability to understand and synthesize context; machines don’t.

Handing security tasks entirely to generative AI with no human oversight and judgment could result in short-term convenience and long-term security gaps.

Instead, use generative AI to surgically augment your security talent. Experiment. Ultimately, the work you put forth up front will save your organization unnecessary headaches later.


Rob Gurzeev

Rob Gurzeev, CEO and Co-Founder of CyCognito, has led the development of offensive security solutions for both the private sector and intelligence agencies.
