
The Double-Edged Sword of Generative AI in Cybersecurity

Generative AI Takes Center Stage as Businesses’ Personal Security Experts

While there is much talk about the security risks introduced by generative AI, real and beneficial applications are already in use today that often go unmentioned. As AI tools become more versatile and accurate, security assistants will become a significant part of the Security Operations Center (SOC), easing the perennial staffing shortage. The benefit of AI will be to summarize incidents at a higher level: rather than an alert that forces analysts to comb through all the logs to connect the dots, they'll get a high-level summary that makes sense to a human and is actionable.

The Context is Crucial

Of course, we must keep in mind that these opportunities exist within a very tight context and scope. These AI tools must be trained on an organization's policies, standards, and certifications. When trained appropriately, they can be highly effective in helping security teams with routine tasks. If organizations haven't taken note of this already, they'll be hearing it from their security teams soon enough, as those teams look to alleviate workloads for understaffed departments.

AI Models as the Next Focus of AI-Centered Attacks

Last year, there was a lot of talk about cybersecurity attacks at the container layer, the less-secured developer playgrounds. Now, attackers are moving up a layer to the machine learning infrastructure. I predict we'll start seeing attackers inject themselves into different parts of the pipeline so that AI models provide incorrect answers or, even worse, reveal the data on which they were trained. There are real concerns in cybersecurity about threat actors poisoning large language models with vulnerabilities that can later be exploited.

The Cybersecurity Field Will Rise to the Occasion

Although AI will bring new attack vectors and defensive techniques, the cybersecurity field will rise to the occasion, as it always does. Organizations must establish a rigorous, formal approach to how advanced AI is operationalized. The tech may be new, but the basic concerns — data loss, reputational risk, and legal liability — are well understood and the risks will be addressed.

Concerns About Data Exposure Through AI are Overblown

A person putting proprietary data into a large language model to answer a question or help compose an email poses no greater risk than someone using Google or filling out a support form. From a data loss perspective, harnessing AI isn't a fundamentally new or differentiated threat. At the end of the day, it's a risk created by human users taking data not meant for public consumption and putting it into public tools. That doesn't mean organizations shouldn't be concerned: it is increasingly a shadow IT issue, and organizations will need to ratchet up monitoring for unapproved use of generative AI technology to protect against leakage.

Disclaimer

The views expressed are solely those of the author, and ETCISO does not necessarily subscribe to them. ETCISO shall not be responsible for any damage caused to any person or organization, directly or indirectly.

Published On Jan 9, 2025 at 11:02 AM IST

