
Cyber Security Advice: Be Cautious When Using AI Apps

The Indian Computer Emergency Response Team (CERT-In), the national agency responsible for safeguarding Indian cyberspace and responding to cyberattacks, has issued a warning about the potential risks associated with Artificial Intelligence (AI) applications. According to the advisory, not all AI apps are safe, and users should exercise caution when signing up for them.

Vulnerabilities in AI Design

The advisory highlights several technical weaknesses in AI design, training, and interaction mechanisms, including data poisoning, adversarial attacks, model inversion, prompt injection, and hallucination exploitation. Threat actors can also exploit the popularity of AI by distributing fake AI apps that trick users into installing malware capable of stealing sensitive information.

Risks Associated with AI Usage

As AI becomes more advanced and ubiquitous, so do the associated risks. The advisory warns that AI applications can be targeted by attacks that exploit flaws in data processing and machine learning models, posing significant threats to their security, reliability, and trustworthiness. It also cautions against relying on AI tools for critical decisions, especially in legal or medical contexts, as they can be misled by "bad data" or malicious hackers.

Precautions to Take

To minimize AI cybersecurity risks, users are advised to:

  • Avoid sharing personal and sensitive information with AI service providers
  • Use anonymous accounts when signing up for AI services
  • Avoid using generative AI tools for professional work involving sensitive information
  • Be cautious when downloading AI apps and practice due diligence to avoid installing malware
  • Use AI tools only for their intended purpose and not for making critical decisions

Potential Risks Linked to AI Usage

The advisory also highlights several potential risks linked to AI usage, including:

  • Data poisoning: manipulating training data to make the model learn incorrect patterns
  • Adversarial attacks: changing inputs to AI models to make them produce wrong predictions
  • Model inversion: reconstructing sensitive information about a model's training data from its outputs
  • Prompt injection: manipulating AI models to hijack their output and bypass safeguards
  • Backdoor attacks: implanting hidden triggers within an AI model during its training process
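The data-poisoning risk listed above can be illustrated with a small, self-contained sketch (not taken from the advisory; all data and labels here are invented for illustration). A toy nearest-centroid classifier is trained once on clean data and once on a training set where an attacker has flipped the labels of the "malicious" cluster, dragging the "benign" centroid toward it:

```python
# Toy illustration of data poisoning: flipping a few training labels
# shifts a simple nearest-centroid classifier's decision boundary.
# Synthetic data only; purely for illustration.

def centroid(points):
    """Return the component-wise mean of a list of 2-D points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def classify(x, centroids):
    """Return the label whose centroid is closest to x (squared distance)."""
    return min(centroids,
               key=lambda lbl: sum((a - b) ** 2 for a, b in zip(x, centroids[lbl])))

# Clean training set: "benign" samples cluster near (0, 0),
# "malicious" samples cluster near (10, 10).
clean = {
    "benign":    [(0, 0), (1, 1), (0, 1), (1, 0)],
    "malicious": [(10, 10), (9, 10), (10, 9), (9, 9)],
}

test_point = (9, 9)  # clearly inside the "malicious" cluster

clean_centroids = {lbl: centroid(pts) for lbl, pts in clean.items()}
print(classify(test_point, clean_centroids))     # -> malicious

# Poisoned training set: the attacker relabels the malicious samples as
# benign and plants a single far-away decoy as the only "malicious" example.
poisoned = {
    "benign":    clean["benign"] + clean["malicious"],  # labels flipped
    "malicious": [(20, 20)],                            # decoy sample
}

poisoned_centroids = {lbl: centroid(pts) for lbl, pts in poisoned.items()}
print(classify(test_point, poisoned_centroids))  # -> benign
```

The model itself is unchanged; only the training data was manipulated, yet the same obviously malicious input is now waved through as benign. This is why the advisory stresses the integrity of training data alongside the security of the model.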

Conclusion

In conclusion, while AI has revolutionized various industries and has become a hallmark of innovation, it is essential to be aware of the potential risks associated with its usage. By taking precautions and being cautious when using AI apps, users can minimize the risks and ensure a safe and secure experience.

Published On Mar 29, 2025 at 08:57 AM IST
