A survey conducted by Cybernews has found that a majority of U.S. employees are using AI tools in the workplace without formal approval from their employers, with many sharing potentially sensitive data through these platforms. The findings highlight ongoing gaps in corporate AI governance and suggest that policies, where they exist, may be insufficiently enforced or communicated.
According to the study, 59% of employees reported using AI tools that were not sanctioned by their organizations. Among these, 75% admitted to inputting potentially sensitive information, such as internal documents, customer data, or employee details. Despite these risks, the use of unapproved tools appears to be tolerated or overlooked by many managers: 57% of respondents using unsanctioned AI tools said their direct managers supported the practice, and 16% said their managers were indifferent.
The research also indicates that the use of unapproved AI tools—often referred to as “shadow AI”—is more prevalent among executives and senior managers. A reported 93% of individuals in these roles acknowledged using such tools at work. This trend appears to run counter to expectations that leadership would model compliance with security protocols and company policy.
While 89% of employees stated they are aware of the potential risks of AI tools, including data breaches and loss of control over proprietary information, this awareness has not necessarily translated into more cautious behavior. IBM has estimated that the use of shadow AI can increase the average cost of a data breach by approximately $670,000.
The absence of clear policies may be contributing to this environment. Cybernews found that 23% of employers do not have any official guidelines regarding the use of AI tools at work. Where policies are in place, they are not always accompanied by adequate resources: only one-third of employees using approved tools felt those tools fully met their needs.
The report includes commentary from security professionals who emphasized the importance of formal governance around AI use. Mantas Sabeckis, a security researcher at Cybernews, noted that without oversight, companies cannot track what data is being shared or where it may end up. Žilvinas Girėnas, head of product at nexos.ai, stated that once data enters an unregulated AI system, it may be stored, reused, or exposed without the organization’s knowledge.
