
Shadow AI: The problem you’ve already got and what to do about it.

Modern AI promises to revolutionise how we work, automating tedious tasks like data entry, proofreading, and spreadsheet management. While it’s an excellent time to invest in workplace AI, new solutions bring familiar challenges around data protection and third-party handling of sensitive information. If you’re a risk owner or senior decision maker, it’s worth considering that AI is likely already in use at your organisation – whether you know it or not.

Our piece today focuses on how to safely and properly integrate AI into your organisation, so that you can leverage the new capabilities of large language models without compromising on your responsibilities as a data handler. We’re also tackling some other areas of interest that are bound to come up in any internal conversation about the pros and cons of using AI.

Define AI…

We’ll spare the deep dive for another blog, as we’re mostly focused on data protection and maintaining integrity at work when using AI; but just so we’re on the same page…

Artificial Intelligence, or AI, is the general term for the capability of a computer system to exhibit traits traditionally associated with human intelligence, such as reasoning, decision-making, problem-solving, or ‘learning’. It’s not a new term; it has long described the field of computer problem-solving and the development of decision-making in computer systems. In recent years, though, we’ve gained affordable access to Large Language Models (such as Gemini, ChatGPT, Claude, and all your favourite assistants).

Large Language Models (or LLMs) are computational systems that use a mathematical process to transform input text into output text, often through a chat-based interface. The scale of these processes is mind-boggling: they are trained on large datasets of ‘training data’ drawn from the internet, books, and other sources of human text, then reinforced by human-directed learning. The result is a system that can reliably generate text that ‘feels right’, based on the patterns in its training data.

In short, an LLM is a modern type of AI that can conduct ‘inference’: it can infer what is likely to come next. This opens up all the wonderful possibilities we see in day-to-day use. It’s not just text generation, of course. LLMs can also be given ‘tooling’, such as access to a web browser, or the ability to produce not only text but imagery, code, spreadsheets, and tables!
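To make ‘inference’ a little more concrete, here’s a minimal sketch in Python using the open-source transformers library and the small GPT-2 model. The model, prompt, and settings are purely illustrative; the commercial assistants named above work on the same next-word principle, just at a vastly larger scale.

```python
# A minimal, illustrative sketch of an LLM inferring what comes next.
# GPT-2 is a small, freely available model, used here purely to
# demonstrate the principle rather than as a recommendation.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The quarterly report shows that revenue"
result = generator(prompt, max_new_tokens=20, num_return_sequences=1)

# The model continues the text with whatever is statistically likely to
# follow, based on the patterns in its training data.
print(result[0]["generated_text"])
```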

What then is Shadow AI?

Shadow AI is the catch-all term for AI or LLM technology being used in an unsanctioned way by users of a network or workplace, without approval or oversight. This type of use can return short-term gains to employees but contribute to compounding risks over time. There’s a large problem-book of issues that can arise from this type of usage.

So what are the problems?

Here are some of the problems we’ve been helping people unpick, and which you may wish to think about proactively.

The big one is data handling. When employees use unauthorised AI at work, they are putting data through a third-party handler and processor, which is the same violation as handing data over to any other unvetted third party. As we saw last week, just because a company is well funded doesn’t mean there aren’t avenues for exact chat histories to leak. Equally, if employees get comfortable using third-party tooling and websites, they’re more likely to fall foul of fake or malicious AI tools, or spam and phishing sites.

The integrity of hiring, and confidence in employee skill, is challenged by AI tools that offer undetectable overlays or quick access to relevant-sounding text. A candidate or employee under scrutiny can regurgitate that text to convince decision makers of a subject matter expertise they don’t actually hold.

Over-reliance on AI websites or tools that aren’t formally understood or brought into the business as part of a spending budget leaves you exposed to spasmodic or reactive changes made by the companies providing them. Personal-use tools are typically offered ‘as is’, with no obligations around uptime or availability, so there’s no recourse if work is lost or made inaccessible, and downtime for an AI tool can translate directly into downtime for your organisation.

A lack of relevant usage or sensitivity training means there may not be suitable awareness of the shortcomings of LLMs, such as hallucination, bias in training data, or the propensity of AI tools to agree where a mentor would challenge. Embedding AI-enabled processes into work without the groundwork for context and understanding risks embedding those biases and hallucinations without formal accountability.

Solutions?

The reality is that AI is going to happen to your organisation whether you like it or not, as people get more comfortable with AI-assisted computing. Employees are already using these tools without declaration, bringing consumer AI services into their daily work routines – for better and for worse. Rather than fighting this inevitable adoption, the smart approach is to accommodate and properly ingest AI into your business framework. Here are some areas we’ve found that bring particular clarity and value to the problems we’ve outlined.

Establishing clear governance policies is your first line of defence. Create a concise and accessible AI policy that defines acceptable use cases and maintains a whitelist of approved tools that have undergone security assessments. If you use certain office software that has AI tooling, you can appropriately license your workers to ensure that they keep their AI-assisted work strictly under the SLA you have with these providers.
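To show what an approved-tool whitelist can look like in practice, here’s a minimal Python sketch. The tool names, data classifications, and the is_tool_approved helper are hypothetical examples rather than a real policy; in practice a rule like this is more likely to live in your proxy, MDM, or identity tooling than in a script.

```python
# A hypothetical allowlist of AI tools, each cleared up to a given data
# sensitivity level and with a signed data processing agreement (DPA).
APPROVED_AI_TOOLS = {
    "copilot-enterprise": {"max_classification": "internal", "dpa_signed": True},
    "internal-llm": {"max_classification": "confidential", "dpa_signed": True},
}

# Sensitivity levels, from least to most sensitive.
LEVELS = ["public", "internal", "confidential"]

def is_tool_approved(tool_name: str, data_classification: str) -> bool:
    """Return True only if the tool is on the allowlist, has a DPA in
    place, and is cleared for the sensitivity of the data involved."""
    tool = APPROVED_AI_TOOLS.get(tool_name)
    if tool is None or not tool["dpa_signed"]:
        return False
    return LEVELS.index(data_classification) <= LEVELS.index(tool["max_classification"])

print(is_tool_approved("copilot-enterprise", "confidential"))  # False: only cleared for internal data
print(is_tool_approved("internal-llm", "confidential"))        # True
```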

Integrating AI risks into existing management frameworks makes practical sense rather than reinventing the wheel. If your organisation already operates under ISO 27001 or holds IASME Cyber Assurance certification, you have robust machinery for identifying, assessing, and treating AI-related risks. These frameworks provide structured approaches for classifying data by sensitivity levels, implementing data loss prevention tools, and ensuring GDPR compliance. Simple controls can be used to modify or restrict the way employees are allowed to use AI tooling.
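As a flavour of the kind of control these frameworks point you towards, here’s a simplified Python sketch of a data loss prevention check run before a prompt is forwarded to an external AI service. The regex patterns and the block_if_sensitive helper are hypothetical and deliberately crude; a commercial DLP product does this far more thoroughly.

```python
import re

# Hypothetical patterns for data that should never leave the organisation
# inside a prompt to an external AI tool.
SENSITIVE_PATTERNS = {
    "uk_national_insurance": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.IGNORECASE),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def block_if_sensitive(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found; an empty list
    means the prompt can be forwarded to the approved AI service."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

findings = block_if_sensitive("Please summarise the complaint from jane.doe@example.com")
if findings:
    print("Blocked, prompt contains:", ", ".join(findings))
```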

Use your existing risk management processes to evaluate AI tools and determine appropriate risk treatment options – whether that’s accepting, mitigating, transferring, or avoiding specific AI applications or ways of working. While the tooling is new, these problems sit on old concepts, and asking fundamentals like “Where does my data sit at rest?” still gets you good mileage when it comes to classifying and reacting to the AI landscape.

AI literacy training transforms potential risks into opportunities. Employees need to understand AI capabilities and limitations, including hallucination risks and training data bias. Role-specific training programmes address different organisational needs – what works for HR differs from what finance requires. By establishing guidelines for prompt engineering, result verification with ‘human in the loop’ checks, and critical thinking around AI-generated content, you create a workforce capable of using AI responsibly. Bringing this conversation out of the shadows and into a collaborative environment will drastically improve the way that folks relate to the tooling, and help identify blind spots.
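As a small illustration of a ‘human in the loop’ check, here’s a hypothetical Python sketch in which AI-generated drafts can’t be published until a named reviewer has signed them off. The AIDraft structure and the approve and publish functions are illustrative assumptions, not a real product or workflow.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIDraft:
    content: str                       # the AI-generated text
    generated_by: str                  # which approved tool produced it
    reviewed_by: Optional[str] = None  # set only after a human check
    approved: bool = False

def approve(draft: AIDraft, reviewer: str) -> AIDraft:
    """Record that a named human has verified the draft."""
    draft.reviewed_by = reviewer
    draft.approved = True
    return draft

def publish(draft: AIDraft) -> None:
    """Refuse to publish anything that has not been human-reviewed."""
    if not draft.approved:
        raise PermissionError("AI-generated content requires human review before publication")
    print(f"Published content reviewed by {draft.reviewed_by}")

draft = AIDraft(content="Summary of Q3 supplier contracts...", generated_by="internal-llm")
publish(approve(draft, reviewer="j.smith"))
```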

Building approved AI infrastructure provides better alternatives to shadow IT. Deploy enterprise-grade AI solutions with proper security controls and service level agreements relating to data protection and uptime. For highly sensitive applications or circumstances, consider developing internal AI capabilities or on-premises solutions that keep your most valuable data within your control.
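For a sense of what an on-premises deployment looks like from the user’s side, here’s a minimal Python sketch that queries a self-hosted model over an OpenAI-compatible API, the style of interface exposed by common local serving tools such as vLLM or Ollama. The hostname, port, and model name are assumptions for illustration; adjust them to whatever your internal deployment exposes.

```python
import requests

# Hypothetical internal endpoint: an OpenAI-compatible chat completions
# API served from infrastructure inside the corporate network.
LOCAL_ENDPOINT = "http://ai.internal.example:8000/v1/chat/completions"

response = requests.post(
    LOCAL_ENDPOINT,
    json={
        "model": "internal-llm",
        "messages": [{"role": "user", "content": "Summarise this policy document..."}],
    },
    timeout=60,
)
response.raise_for_status()

# Both the prompt and the reply stay on infrastructure you control.
print(response.json()["choices"][0]["message"]["content"])
```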

The goal isn’t to eliminate AI usage but to channel it productively whilst maintaining security, compliance, and business integrity. By providing clear guidance, proper tools, and adequate training, organisations can harness AI’s transformative potential whilst mitigating the risks that come with unmanaged adoption.

There’s an opportunity here.

This is a largely bottom-up innovation space, where new updates or feature sets creep into use at work through employees who have a genuine interest in upskilling themselves and enhancing their performance. As with all data-focused controls and risk assessments, it can feel like plugging holes in a sinking ship; taken properly in hand and given the time it needs, though, this shift in how we work is an awesome opportunity to make more effective use of the limited time we have, and leverage the expertise that AI gives us access to.

If you’re trying to juggle AI-enabled workflows with complex stakeholder and data protection requirements, you should get in touch. We’ve supported organisations of all shapes and sizes to mature their AI posture, determine the right fit for infrastructural requirements, and develop policy and training to ensure value in the long run.
