Arrochar Consulting

Shadow AI in government: the risk hiding in plain sight

Arrochar Consulting · January 2025 · 6 min read

What is shadow AI?

Shadow AI refers to the use of artificial intelligence tools by employees without formal organisational approval, oversight, or integration into governance frameworks. It is the AI equivalent of shadow IT, and just as shadow IT created security and compliance nightmares in the 2010s, shadow AI is creating the same problems today, only faster and with higher stakes.

In government, shadow AI takes many forms: a policy officer pasting a sensitive ministerial brief into ChatGPT to get a quick summary; a data analyst using an AI coding assistant that sends code snippets to a US cloud provider; a procurement officer using an AI tool to draft a contract without disclosing it to legal. None of these individuals are acting with malicious intent. They are trying to do their jobs more efficiently. But each action potentially breaches data classification rules, privacy legislation, and procurement policies.

How widespread is it?

Surveys of knowledge workers across government and enterprise consistently find that 40 to 60 percent of employees are using AI tools that have not been approved by their organisation. In many cases, line managers are aware and quietly encourage it because the productivity gains are visible and the risks feel abstract.

The gap between what employees are doing with AI and what CIOs and CISOs know about it is often vast. By the time an agency discovers a significant shadow AI exposure (through an audit, a privacy breach notification, or a media story), the data has already left controlled environments and remediation is costly and complex.

Why government agencies are particularly exposed

Government agencies handle data that is uniquely sensitive: personal information about citizens, Cabinet-in-Confidence documents, law enforcement intelligence, financial records, and health data. Many employees work with multiple data classifications simultaneously. The consequences of that data being processed by an unapproved offshore AI service are not limited to reputational risk: they include potential breaches of the Privacy Act, the Protective Security Policy Framework, and sector-specific legislation.

At the same time, government agencies are often slower to approve and deploy sanctioned AI tools than private sector employers. When employees see a clear productivity benefit from an AI tool and no approved alternative is available, shadow usage is a predictable response.

The governance response: detection, policy, and approved alternatives

Addressing shadow AI requires three parallel workstreams. First, detection: understanding what AI tools are currently in use through a combination of network traffic analysis, software inventory review, and employee surveys. This gives you an accurate baseline rather than an assumed one.
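As a rough illustration of the network-traffic side of detection, the sketch below flags outbound requests to known AI services in a web proxy log. The domain list and the log format are illustrative assumptions only; a real programme would use the agency's actual proxy log schema, a maintained list of AI service domains, and combine the results with software inventory review and employee surveys.

```python
# Minimal sketch: flag proxy-log requests to known AI services.
# AI_DOMAINS and the log format are assumptions for illustration,
# not a complete or authoritative inventory.

AI_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "claude.ai",
    "gemini.google.com",
}

def flag_ai_requests(log_lines):
    """Return (user, domain) pairs for requests to known AI domains.

    Assumes a simple space-separated log format:
        timestamp user domain path
    """
    hits = []
    for line in log_lines:
        parts = line.split()
        if len(parts) < 3:
            continue  # skip malformed lines
        user, domain = parts[1], parts[2]
        if domain in AI_DOMAINS:
            hits.append((user, domain))
    return hits

# Hypothetical sample log lines
sample = [
    "2025-01-10T09:14:02 jsmith chat.openai.com /backend-api/conversation",
    "2025-01-10T09:15:40 akhan intranet.example.gov.au /home",
    "2025-01-10T09:16:05 jsmith claude.ai /api/chat",
]
print(flag_ai_requests(sample))
# prints [('jsmith', 'chat.openai.com'), ('jsmith', 'claude.ai')]
```

Even a crude filter like this tends to surface far more AI usage than the approved-tools register suggests, which is exactly the baseline gap the detection workstream is meant to close.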

Second, policy: updating acceptable use policies, data handling guidelines, and procurement frameworks to explicitly address AI tools. Most government agencies' existing policies do not contemplate the specific risks of AI; they treat it like any other SaaS tool, which it is not.

Third, approved alternatives: the most effective way to reduce shadow AI is to give employees access to approved AI tools that meet their actual needs. If people use ChatGPT because it helps them write faster, the solution is not just to ban ChatGPT; it is to provide a sanctioned AI writing tool with appropriate data controls.

How Arrochar Consulting can help

We conduct AI inventory assessments that give agencies an accurate picture of their shadow AI exposure, and we design AI governance frameworks that are practical enough for employees to follow and robust enough to satisfy auditors. Book a free consultation to discuss your agency's situation.

Ready to build the foundations that make AI actually work?

Book a free consultation. We'll map your current AI readiness, identify your biggest gaps, and give you a clear picture of where to start.

The 'No Pitch' Promise

This is a 30-minute diagnostic call, not a disguised sales pitch. If at the end of the 30 minutes you feel we wasted your time with fluff or aggressive selling, tell me and I'll immediately send $100 to the charity of your choice.

Actionable Blueprint Guarantee

By the end of our 30-minute consultation, you will have a minimum of three actionable steps to reduce your shadow AI risk and formalise data governance, whether you ever work with us or not.