6 February 2024
The integration of artificial intelligence (AI) into the workplace has become both a boon and a potential minefield. One emerging challenge is the phenomenon known as shadow AI.
In this article, we take a closer look at shadow AI and how it differs from the longer-standing issue of shadow IT.
Shadow AI refers to the use of AI tools in a workplace without the explicit awareness or consent of the IT department.
The excitement around AI’s potential to boost workplace performance has led to a rapid uptake by employees. But in their hurry to enjoy the perks, many workers begin using these tools without looping in their IT department. As a result, we’re seeing a rise in shadow AI across businesses worldwide.
Large language models (LLMs), such as ChatGPT, are popular AI tools that can produce human-like text, images, and other content almost instantaneously.
An employee using an LLM for content creation is a common example of shadow AI. For instance, someone working to a tight deadline might enlist the help of ChatGPT to write a series of emails. If the company’s IT department has not approved the use of ChatGPT, and the employee uses it without telling them, this qualifies as shadow AI.
Generative AI tools can also be used to scale and improve customer support. For example, AI-powered chatbots can handle basic, repetitive tasks and improve the speed and accuracy of human support agents. However, deploying this technology without involving the IT department, as in the sketch below, is another example of shadow AI.
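To make this concrete, here is a minimal sketch of the kind of LLM-backed support script an employee might stand up on their own, using the official OpenAI Python client. The model name, prompts, and example question are illustrative assumptions, not recommendations.

```python
# Minimal sketch of an LLM-backed support assistant. Assumes the
# official OpenAI Python client (openai>=1.0) and an OPENAI_API_KEY
# environment variable; model and prompts are illustrative only.
from openai import OpenAI

client = OpenAI()

def answer_support_query(question: str) -> str:
    """Draft a first-line reply to a customer support question."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[
            {"role": "system",
             "content": "You are a polite first-line support agent. "
                        "Escalate anything you cannot answer."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(answer_support_query("How do I reset my account password?"))
```

Run without IT’s knowledge, a script like this quietly routes customer messages through an external provider, which is precisely the shadow AI scenario described above.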
Shadow IT refers to any software, hardware, or IT resource used on an organization’s network without the awareness or consent of the IT department.
While shadow IT has been around for some time, it has become more prevalent following the widespread adoption of public cloud services.
The availability of a diverse range of cloud-based apps has undoubtedly enhanced employee productivity. However, many employees, eager to avoid bureaucratic obstacles, opt to sidestep their IT teams when using these applications. According to Gartner, 41% of employees downloaded and used such applications without the knowledge of their IT departments in 2022.
Accommodating external stakeholders is one of many reasons why an employee might introduce shadow IT. For example, an employee who has been authorized to use Microsoft Teams might download Zoom, a non-sanctioned app, to share information with an important client.
Employees can also introduce shadow IT in the form of hardware, for instance, by using a personal laptop that their IT department has not approved for work purposes.
The use of shadow AI poses considerable risks for businesses. Without proper oversight and established frameworks, the potential for sensitive business information to make its way into a third-party AI tool is very real.
In fact, a June 2023 report revealed that 15% of employees regularly post company data into ChatGPT, and over a quarter of that data is sensitive.
According to the UK’s National Cyber Security Centre (NCSC), all queries made through LLMs are visible to the organizations that own them. While this alone is a significant worry, it merely scratches the surface of the potential issues.
The NCSC also emphasizes that queries employees input into an LLM will almost certainly be used to train it at some stage. As a result, any information entered into an LLM could conceivably reappear as output when another user submits a related query. The ramifications of company-sensitive information suddenly appearing before millions of users are potentially disastrous.
There is also the threat of company information stored by LLMs being exposed in a data breach. In March 2023, questions were raised about ChatGPT’s security after a bug allowed some users to see query titles from other users’ chat histories.
Lastly, when businesses feed sensitive data into an AI model, they also expose themselves to the risk of violating data protection regulations and industry standards. The fines associated with non-compliance can be substantial. For example, violating GDPR requirements can cost businesses up to 20 million euros or 4% of their annual global turnover, whichever is higher.
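That “whichever is higher” clause is worth working through. Here is a quick sketch of the arithmetic, using an assumed turnover figure purely for illustration:

```python
def gdpr_max_fine(annual_turnover_eur: float) -> float:
    """Upper bound of a GDPR fine: EUR 20 million or 4% of annual
    global turnover, whichever is higher."""
    return max(20_000_000, 0.04 * annual_turnover_eur)

# Assumed example: a business with EUR 600M turnover. 4% is EUR 24M,
# which exceeds the EUR 20M floor, so the cap is EUR 24M.
print(f"EUR {gdpr_max_fine(600_000_000):,.0f}")  # EUR 24,000,000
```

For any business with annual turnover above 500 million euros, the percentage-based figure is the one that applies.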
The risks associated with shadow AI have elicited a significant response from enterprises and governments globally. Samsung introduced a ban on employees using generative AI tools on company-owned devices after sensitive data was accidentally leaked to ChatGPT. Apple, Amazon, Deutsche Bank, Goldman Sachs, and JPMorgan Chase are among others that have banned or limited employees’ use of generative AI tools.
Unlike an authorized asset, shadow IT that remains out of sight is not covered by the protection measures put in place by the IT team. For example, an unapproved cloud-based application will not be protected by the organization’s user-based security processes.
Unvetted shadow IT assets are more likely to fall victim to brute force attacks, phishing attacks, and malware injections. Since many shadow IT applications have features for file sharing and file storage, the risk of sensitive data leaks is a major concern.
Shadow IT can also cause serious compliance issues. For example, apps hiding in the shadows might not adhere to regulatory standards concerning data privacy, such as GDPR, or other industry-specific security requirements.
Lastly, shadow IT contributes to unnecessary costs. For instance, when an employee procures an application without following proper channels, they may remain unaware that a similar application is already in use elsewhere. This can result in a convoluted network of applications across the organization, all serving the same purpose.
The risks of shadow AI are significant, particularly when you consider that every employee in an organization can become a source of data exposure. Minimizing these risks requires a consistent effort from all employees to make security-conscious decisions whenever they engage with third-party AI platforms.
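What might a security-conscious habit look like in practice? One simple safeguard is to redact obvious identifiers before anything reaches an external model. The sketch below is a minimal, assumed starting point using regex-based patterns, not a substitute for a proper data loss prevention tool.

```python
import re

# Minimal sketch: redact obvious identifiers before a prompt leaves
# the organization. The patterns are illustrative assumptions; real
# deployments would rely on a dedicated data loss prevention tool.
REDACTIONS = {
    r"[\w.+-]+@[\w-]+\.[\w.]+": "[EMAIL]",        # email addresses
    r"\b(?:\d[ -]?){13,16}\b": "[CARD_NUMBER]",   # card-like digit runs
}

def sanitize_prompt(text: str) -> str:
    """Replace likely sensitive tokens before text is sent to a
    third-party AI platform."""
    for pattern, placeholder in REDACTIONS.items():
        text = re.sub(pattern, placeholder, text)
    return text

print(sanitize_prompt("Refund jane.doe@example.com, card 4111 1111 1111 1111"))
# Refund [EMAIL], card [CARD_NUMBER]
```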
However, this need for caution clashes with the significant productivity gains that AI tools offer. And, if we have learned anything from shadow IT, it’s that employees are inclined to use any tools available to optimize their work.
Moreover, the emergence of shadow AI as a relatively new phenomenon has caught many businesses off guard. According to a BCG report, 20% of organizations that use third-party AI tools fail to undertake any sort of risk assessment.
Of course, the risks of shadow IT are also substantial. However, innovative new technologies are providing avenues for managing and reducing these risks. Asset management and discovery solutions, for instance, provide a comprehensive view of all assets within a technology estate. This empowers IT teams to identify and oversee shadow IT with ease.
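At its core, the discovery step is easy to illustrate: compare what is actually observed on the network against the sanctioned inventory and flag the difference. The asset names below are hypothetical placeholders.

```python
# Minimal sketch of shadow IT discovery as a set difference: anything
# observed in use but absent from the sanctioned inventory is flagged
# for review. All asset names here are hypothetical.
SANCTIONED = {"microsoft-teams", "salesforce", "jira"}

def find_shadow_it(observed_assets: set[str]) -> set[str]:
    """Return assets in use that IT has not approved."""
    return observed_assets - SANCTIONED

observed = {"microsoft-teams", "zoom", "dropbox", "jira"}
print(find_shadow_it(observed))  # e.g. {'zoom', 'dropbox'}
```

Commercial discovery tools add continuous scanning, network telemetry, and enrichment on top of this, but the underlying comparison is the same.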
These advanced solutions have allowed businesses to support the adoption of IT from outside the IT department, referred to as ‘business-led IT’. This approach allows teams swift access to external technologies, enhancing efficiency without compromising security.
In summary, while both shadow AI and shadow IT pose considerable threats to businesses, shadow AI presents the more imminent danger, owing to its recent emergence and the fact that many businesses are still scrambling to address the associated risks.
To counteract shadow AI risks, companies should, at the very least, implement fundamental governance and policies for AI, and communicate these to all employees. For businesses seeking to manage the risks associated with shadow IT, an asset management and discovery solution is pivotal.
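On the AI side, even a basic policy can be made concrete. As one assumed illustration, an allowlist of approved AI tools could be enforced at a web proxy or endpoint agent; the domains below are placeholders, not a vetted list.

```python
from urllib.parse import urlparse

# Minimal sketch of an AI-tool allowlist check, e.g. evaluated by a
# web proxy or browser extension. All domains are hypothetical
# placeholders for whatever the organization actually approves.
APPROVED_AI_DOMAINS = {"ai-assistant.internal.example.com"}
KNOWN_AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "gemini.google.com"}

def classify_request(url: str) -> str:
    """Label an outbound request against the company's AI policy."""
    host = urlparse(url).hostname or ""
    if host in APPROVED_AI_DOMAINS:
        return "allowed"
    if host in KNOWN_AI_DOMAINS:
        return "blocked: unapproved AI tool"
    return "not a known AI tool"

print(classify_request("https://chatgpt.com/"))  # blocked: unapproved AI tool
```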
For expert advice on implementing a comprehensive asset management and discovery solution, reach out to a member of our team.