In many of the law firms and legal departments we work with, a recurring but still rarely discussed theme is emerging: the growing use of personal artificial-intelligence tools by staff, associates, or collaborators, even when the organisation already provides approved systems. This practice is increasingly known as shadow AI.
Think of a junior associate drafting a memo with a generative-AI tool downloaded onto a personal laptop. Of a paralegal summarising discovery documents through the free version of an external chatbot. Of an employee asking a virtual-assistant app at home to support a client briefing, even though the firm already provides a licensed internal platform.
On the surface, nothing seems problematic. Everyone involved is simply trying to be efficient, curious, and proactive.
Under the surface, however, risks accumulate.
Why it matters
Before moving forward, we suggest pausing to consider three key risk domains that typically emerge when shadow AI becomes part of daily work.
- Malware and phishing exposure. When AI tools are adopted outside standard IT governance, the protections we rely on may not apply. Updates, security patches, malware scanning, and network segmentation are not guaranteed. A personal tool may contain unverified code, request broad permissions, or store data off-site and unencrypted. The next phishing attempt or zero-day vulnerability might exploit exactly that unauthorised system.
- Data anonymisation and confidentiality issues. Law firms and corporate legal departments handle privileged, sensitive, and often regulated information. If someone uploads client documents into an external AI service with unclear data-usage terms, we may inadvertently breach confidentiality. Even worse, we might rely on the tool to anonymise data without verifying the process. We should always ask: Who retains the input? Who controls the output? Does the service train on that data? These questions rarely emerge when the tool is used off the radar.
- Internal policy non-compliance and audit-trail gaps. Activities performed outside approved workflows tend to escape oversight. How do we track which tools were used, how outputs were validated, or which document versions were generated? In a compliance review, the shadow-AI route may leave no trace. Accountability weakens, and risk management suffers.
What to do
We propose a three-step approach to address the phenomenon without stifling innovation.
- Step 1. Map and acknowledge the phenomenon. Begin with a simple question: “Are our people using AI tools outside approved channels, and why?” This stage is about bringing behaviours to the surface, not judging them. An anonymous survey can help map which tools are used, how frequently, and for what purpose. Understanding must come before any form of evaluation or decision.
- Step 2. Define clear policy and safe pathways. Instead of banning all personal AI tools, craft a policy that recognises why people use them and brings their use into governance. For example: an approved AI-services catalogue; rules for data upload, anonymisation, and retention; and training and certification for staff (see the illustrative sketch after this list).
- Step 3. Provide sanctioned alternatives and integrate them. If the goal is to discourage personal tools, offer something better. Deploy a firm-approved AI platform, integrate it with the document-management system, and ensure it meets confidentiality and audit requirements. Promote its use by showing how it fits the existing workflow, not as an external add-on.
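To make Step 2 more concrete, here is a minimal, hypothetical sketch of what one entry in an approved AI-services catalogue might look like if kept in machine-readable form. Every tool name, field name, value, and the data-classification ordering below is an illustrative assumption, not a prescribed standard or an actual firm's policy.

```python
# Hypothetical sketch of an approved AI-services catalogue.
# All tool names, fields, and values are illustrative assumptions.

APPROVED_AI_CATALOGUE = {
    "firm-chat-platform": {
        "approved_uses": ["drafting", "summarisation", "research"],
        "max_data_classification": "confidential",  # assumed scale: public < internal < confidential < privileged
        "client_data_allowed": True,        # only because the contract forbids training on inputs
        "provider_trains_on_inputs": False,
        "retention_days": 30,               # provider-side retention agreed by contract
        "requires_anonymisation": False,
    },
    "personal-free-chatbot": {
        "approved_uses": [],                # not approved for any firm work
        "max_data_classification": "public",
        "client_data_allowed": False,
        "provider_trains_on_inputs": True,
        "retention_days": None,             # unknown / uncontrolled
        "requires_anonymisation": True,
    },
}

def is_use_permitted(tool: str, use: str, data_classification: str) -> bool:
    """Check a proposed use against the catalogue (illustrative logic only)."""
    order = ["public", "internal", "confidential", "privileged"]
    entry = APPROVED_AI_CATALOGUE.get(tool)
    if entry is None or use not in entry["approved_uses"]:
        return False
    return order.index(data_classification) <= order.index(entry["max_data_classification"])

print(is_use_permitted("firm-chat-platform", "summarisation", "confidential"))    # True
print(is_use_permitted("personal-free-chatbot", "summarisation", "confidential")) # False
```

In practice, such a catalogue would more likely live in the firm's governance documentation than in code; the point of the sketch is that each tool's data-handling terms become explicit and checkable rather than implicit and invisible.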
The path ahead
In a profession where trust, confidentiality, and precision are essential, the rise of generative AI cannot be approached only as a technical project. It is a behavioural, organisational, and cultural shift. Shadow AI is one expression of this shift: individuals taking initiative and seeking efficiency, but doing so outside the guardrails.
We suggest focusing on behaviours, motivations, and governance rather than merely chasing the technology.
Ignoring shadow AI means accepting that data, reputation, and compliance might move into spaces we cannot fully monitor.
Engaging with it, instead, turns risk into opportunity. Staff feel supported, workflows improve, and the organisation retains control.
Would you like help developing an AI policy or addressing shadow AI in your law firm or legal department? Write to us at talk@betteripsum.net