ASD Essential Eight and AI systems: what you need to know
The Essential Eight was not designed with AI in mind
The Australian Signals Directorate's Essential Eight Maturity Model remains the baseline cyber security framework for Commonwealth entities and is widely adopted across state government and critical infrastructure. It covers patch management, application control, multi-factor authentication, daily backups, and four other mitigation strategies. It is a well-tested framework for hardening traditional software environments.
The problem is that AI systems introduce attack surfaces, data flows, and failure modes that the Essential Eight does not directly address. As agencies accelerate AI adoption, there is a growing gap between what the Essential Eight requires and what responsible AI operation actually demands.
Where the Essential Eight applies to AI
Several Essential Eight controls apply directly and are frequently overlooked in AI deployments:
- Application control: AI inference software, model serving frameworks (such as Ollama, vLLM, or TorchServe), and API clients all need to be included in your application allowlist. Many agencies deploy AI tooling outside their standard software deployment processes, creating gaps.
- Patching applications: AI frameworks and libraries (PyTorch, Transformers, LangChain) update frequently, and many releases address security vulnerabilities. The same 30-day patch cycle that applies to your other software applies here.
- Restricting administrative privileges: Access to AI APIs, model weights, training data, and inference infrastructure should follow least-privilege principles. This is often not the case in early AI deployments where broad access is granted for speed of development.
- Multi-factor authentication: AI API keys and service accounts need MFA-protected management interfaces. API keys stored in plaintext in code repositories are a common and serious exposure.
Where the Essential Eight falls short for AI
The framework does not address several AI-specific risks that government agencies need to manage:
Prompt injection attacks occur when malicious content in user input or external data sources manipulates the AI's behaviour in unintended ways. This is analogous to SQL injection for traditional databases, but the Essential Eight has no specific control for it.
Model data exfiltration is a risk when AI systems have access to sensitive data through RAG pipelines or tool use. A compromised AI agent could be directed to exfiltrate data in ways that bypass traditional DLP controls.
Supply chain risks in AI models go beyond software libraries. When you download a model from a public repository, you are trusting the provenance of that model's weights. Malicious actors have demonstrated the ability to embed backdoors in publicly available models.
Practical recommendations for ASD-aligned AI deployments
Until ASD updates the Essential Eight to specifically address AI, agencies should supplement their Essential Eight compliance with additional controls specifically for AI systems. This includes model provenance verification, prompt injection testing as part of pre-deployment security reviews, network segmentation for AI inference infrastructure, and monitoring for anomalous AI output patterns that may indicate model compromise.
Arrochar Consulting helps agencies design AI security architectures that satisfy both the Essential Eight and the additional controls required for responsible AI operation in government environments. Talk to us about your AI security posture.
Ready to build the foundations that make AI actually work?
Book a free consultation. We'll map your current AI readiness, identify your biggest gaps, and give you a clear picture of where to start.
The 'No Pitch' Promise
This is a 30-minute diagnostic call, not a disguised sales pitch. If at the end of the 30 minutes you feel we wasted your time with fluff or aggressive selling, tell me and I'll immediately send $100 to the charity of your choice.
Actionable Blueprint Guarantee
By the end of our 30-minute consultation, you will have a minimum of 3 actionable steps to reduce your shadow AI risk and formalize data governance - whether you ever work with us or not.