Many organizations are in the process of adopting GenAI solutions or evaluating strategies for integrating these tools into their business plans. However, to make informed decisions and develop effective plans, it is crucial to have access to reliable data. Unfortunately, such data is still relatively scarce.
The “Enterprise GenAI Data Security Report 2025” published by LayerX provides unparalleled insights into the practical application of AI tools in the workplace, highlighting critical vulnerabilities in the process. By leveraging real-world telemetry from LayerX’s enterprise clients, this report offers one of the few trustworthy sources that detail actual employee usage of GenAI.
One of the key findings of the report reveals that approximately 90% of enterprise AI usage occurs outside the visibility of IT, leaving organizations vulnerable to significant risks such as data leakage and unauthorized access.
Below, we summarize some of the report’s key findings. To refine and enhance your security strategies, utilize data-driven decision-making for risk management, and advocate for resources to bolster GenAI data protection measures, we recommend reading the full report.
To register for a webinar that will discuss the key findings in this report, click here.
GenAI Usage in the Enterprise: A Mixed Bag
Although the hype surrounding GenAI might suggest that the entire workforce has transitioned to using these tools, LayerX’s findings indicate a more lukewarm adoption. Approximately 15% of users access GenAI tools on a daily basis, which, while notable, does not represent the majority.
However, we at The New Stack concur with LayerX’s analysis predicting that this trend will accelerate rapidly, particularly given that 50% of users currently use GenAI at least every other week.
Furthermore, the report finds that 39% of regular GenAI tool users are software developers. This means the greatest potential for data leakage through GenAI involves proprietary source code, along with the risk of incorporating risky AI-generated code into your codebase.
The Mystery of GenAI Usage: What’s Going On?
Given LayerX’s position in the browser, the tool has visibility into the use of shadow SaaS. This enables it to monitor employees using tools that have not been approved by the organization’s IT department, or that are accessed through non-corporate accounts.
While GenAI tools like ChatGPT are used for work purposes, nearly 72% of employees access them through their personal accounts. Even among employees who do use corporate accounts, only about 12% sign in with Single Sign-On (SSO). As a result, nearly 90% of GenAI usage remains invisible to the organization, leaving it in the dark about ‘shadow AI’ applications and the unauthorized sharing of corporate information with AI tools.
Corporate Data in GenAI: A Concerning Trend
Remember the Pareto principle? It applies here as well: while not all users interact with GenAI daily, the ones who do paste data into GenAI applications frequently, and that data often includes confidential corporate information.
LayerX discovered that the pasting of corporate data occurs almost four times a day, on average, among users who submit data to GenAI tools. This could encompass business information, customer data, financial plans, source code, and more.
Planning for GenAI Usage: A Call to Action for Enterprises
The findings in the report underscore the urgent need for new security strategies to manage GenAI risk. Traditional security tools fall short in the modern AI-driven workplace because they cannot detect, control, and secure AI interactions at their source: the browser, where most of these applications run.
Browser-based security provides visibility into access to AI SaaS applications, unknown AI applications beyond ChatGPT, AI-enabled browser extensions, and more. This visibility can be leveraged to employ Data Loss Prevention (DLP) solutions for GenAI, enabling enterprises to safely incorporate GenAI into their plans and future-proof their business.
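To illustrate the kind of check a browser-layer DLP capability might apply to text pasted into a GenAI prompt, here is a minimal sketch. The patterns, names, and structure are our own assumptions for illustration only, not LayerX’s implementation; a real DLP engine would use far richer detectors (ML classifiers, exact-data matching, document fingerprints).

```python
import re

# Illustrative patterns for a few common sensitive-data types.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{20,}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_paste(text: str) -> list[str]:
    """Return the names of sensitive-data types detected in pasted text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

# Example: a paste containing a customer email and an API-style key.
hits = scan_paste("Contact jane.doe@example.com, key sk-abc123def456ghi789jkl")
print(hits)  # ['api_key', 'email']
```

In practice, a browser-based control would run a check like this at paste time and block, redact, or log the event before the data reaches the GenAI application.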
To access more data on GenAI usage, read the full report.