Skip to main content

Cisco Unveils AI Defense Solution to Address AI Security Challenges

Cisco is taking a radical approach to AI security in its new AI Defense solution. In an exclusive interview with Rowan Cheung of The Rundown AI, Cisco Executive Vice President and CPO Jeetu Patel stated that AI Defense is "taking a radical approach to address the challenges that existing security solutions are not equipped to handle."

Addressing AI Security Risks

AI Defense, announced last week, aims to address risks in developing and deploying AI applications, as well as identifying where AI is used in an organization. The solution can protect AI systems from attacks and safeguard model behavior across platforms with features such as:

  • Detection of shadow and sanctioned AI applications across public and private clouds
  • Automated testing of AI models for hundreds of potential safety and security issues
  • Continuous validation safeguards against potential safety and security threats, such as prompt injection, denial of service, and sensitive data leakage

Enhancing Data Protection and Compliance

The solution also allows security teams to better protect their organizations’ data by providing a comprehensive view of AI apps used by employees, create policies that restrict access to unsanctioned AI tools, and implement safeguards against threats and confidential data loss while ensuring compliance.

Expert Insights on AI Security

"The adoption of AI exposes companies to new risks that traditional cybersecurity solutions don’t address," Kent Noyes, global head of AI and cyber innovation at World Wide Technology, said in a statement. "Cisco AI Defense represents a significant leap forward in AI security, ensuring that AGI evolves responsibly, minimizing risks like rogue decision-making or unintended consequences."

The Importance of AI Security

"AI security isn’t just a ‘nice-to-have’ or something to think about in the years to come," Noyes added. "It’s critical as we move toward AGI."

Existential Doom?

While AI Defense is a step in the right direction, its adoption across organizations and major AI labs remains to be seen. Interestingly, the OpenAI CEO Sam Altman acknowledges the technology’s threat to humanity but believes AI will be smart enough to prevent AI from causing existential doom.

"I see some optimism about AI’s ability to self-regulate and prevent catastrophic outcomes, but I also notice in the adoption that aligning advanced AI systems with human values is still an afterthought rather than an imperative," Adam Ennamli, chief risk and security officer at the General Bank of Canada, told TechNewsWorld.

The Risks of Unchecked AI

"The notion that AI will solve its own existential risks is dangerously optimistic, as demonstrated by current AI systems that can already be manipulated to create harmful content and bypass security controls," added Stephen Kowski, field CTO at SlashNext, a computer and network security company.

Human Oversight and Technical Safeguards

"Technical safeguards and human oversight remain essential since AI systems are fundamentally driven by their training data and programmed objectives, not an inherent desire for human well-being," Kowski told TechNewsWorld.

A Positive Outlook

"Human beings are pretty creative," Gold added. "I don’t buy into this whole doomsday nonsense. We’ll figure out a way to make AI work for us and do it safely. That’s not to say there won’t be issues along the way, but we’re not all going to end up in ‘The Matrix’."


Source Link