Adding AI to business workflows increases your attack surface. Every new tool means new credentials, new API connections, new data flows moving between systems, and new ways for information to end up somewhere it should not be. Most of this risk is manageable, but only if you treat AI security as part of your workflow design from the start, rather than a concern you address after something goes wrong.
The NCSC's guidance on secure AI system development is clear that AI deployments introduce specific risks that standard cyber hygiene does not fully address. Prompt injection, data leakage through model context, over-privileged integrations, and inadequate audit trails are all issues that require deliberate controls. None of them are exotic or difficult to mitigate. They just need to be on the checklist before you connect a tool to live systems.
This guide covers the baseline controls for an SME deploying AI in operational workflows. It is not exhaustive, but it covers the areas where most small businesses have gaps.
Security baseline for SME AI deployments
These controls are not specific to AI, but they matter more when AI tools are in the picture. An AI tool connected to your CRM, document store, or customer data has broader system access than most of the software your team uses. A compromised account does not just expose one person's files. It potentially exposes everything the AI tool can reach.
- Single sign-on and multi-factor authentication for all AI tools. If a staff member's credentials are stolen, MFA blocks most attempts to use them. If you have SSO across your tools, you can revoke access to everything in one action when someone leaves. Neither control is difficult to implement, both are widely skipped in small businesses, and both matter more as your AI footprint grows.
- Role-based access with least privilege permissions. The AI tool that drafts client emails does not need access to your payroll data. The tool that processes invoices does not need write access to your CRM. Limit each tool to the minimum access it needs to do its job. This reduces blast radius when something goes wrong.
- Central logging for prompt activity and API usage. If an AI tool starts behaving unexpectedly, you need to be able to see what happened. Logs also help you spot unusual usage patterns, which is often the first sign that credentials have been compromised or that a workflow is being used for something it was not designed for.
- Secrets management for API keys: no keys in code repositories. API keys hardcoded in source files get committed to version control and end up in places they were never meant to be. Use a proper secrets manager or environment variables (a minimal pattern, combined with call logging, is sketched after this list). This is one of the most common sources of AI tool credential exposure and one of the most preventable.
- Patch and dependency update schedule for AI-adjacent systems. AI tools often connect to other software via integrations or APIs. Those connection points need to be maintained. An unpatched dependency in a workflow integration is an attack surface that grows over time if nobody is responsible for keeping it current.
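To make the logging and secrets controls concrete, here is a minimal Python sketch of a wrapper that reads its key from an environment variable and writes an audit record for every model call. The variable name `AI_TOOL_API_KEY`, the log file, and the `send` callable are all placeholders for whatever your tooling actually uses; treat this as a pattern, not a drop-in implementation.

```python
import hashlib
import json
import logging
import os
from datetime import datetime, timezone

# Structured audit logger. In production, point the handler at your
# central log store rather than a local file.
audit = logging.getLogger("ai_audit")
audit.setLevel(logging.INFO)
audit.addHandler(logging.FileHandler("ai_audit.log"))


def get_api_key() -> str:
    """Read the key from the environment instead of hardcoding it.
    AI_TOOL_API_KEY is a placeholder name; a secrets manager that
    injects environment variables at runtime works the same way."""
    key = os.environ.get("AI_TOOL_API_KEY")
    if not key:
        raise RuntimeError("AI_TOOL_API_KEY is not set; refusing to start")
    return key


def call_model(prompt: str, user: str, send) -> str:
    """Wrap whatever client call you use (`send` is a stand-in) so
    every request leaves a record in the audit trail."""
    response = send(prompt, api_key=get_api_key())
    audit.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        # Hash rather than store the prompt, so the trail is useful
        # without becoming a second copy of sensitive text.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "prompt_chars": len(prompt),
        "response_chars": len(response),
    }))
    return response
```

The same pattern works regardless of vendor: the wrapper becomes the single choke point where your access and logging policy lives, which is exactly what you want when credentials need rotating or usage needs reviewing.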
Prompt injection and data leakage: what they actually mean
These terms come up a lot in AI security discussions. They are not exotic attack vectors. They are practical risks that apply to any business using AI in workflows that handle real data or take real actions.
Prompt injection means an attacker, or a piece of untrusted content processed by your AI tool, causes the tool to behave in a way you did not intend. A simple example: a customer submits a support request that includes text designed to override the AI's instructions and extract information it should not share. If your AI customer support tool does not sanitise input before processing it, that kind of attack can work.

Data leakage is the companion risk: sensitive information the model can see, whether in retrieved documents, the system prompt, or conversation history, ends up in an output shown to someone who should not see it. The controls for both are not complex, but they need to be designed in, not added after.
Practical controls
- Sanitise untrusted input before it reaches downstream tools
- Block outbound actions unless policy checks pass
- Require human approval before high-impact actions
- Use retrieval scopes so the model sees only the data it needs for the specific task
Security for AI is workflow security. The questions to ask are: what can this tool reach, what can it do, and who can see the output?
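As a rough illustration of how the first three controls fit together, the sketch below screens untrusted input against a naive deny-list and refuses high-impact actions without a named human approver. The patterns and action names (`send_email`, `issue_refund`) are invented for the example, and pattern matching alone will not stop a determined attacker; it is one layer in the design, not a complete defence.

```python
import re

# Naive deny-list of common instruction-override phrasings. Illustrative
# only: real injection attempts vary endlessly, so treat this as one
# layer alongside retrieval scoping and human approval.
SUSPICIOUS_PATTERNS = [
    r"ignore (all|any|previous|prior) instructions",
    r"disregard (the|your) (system )?prompt",
    r"reveal (the|your) (system prompt|instructions|credentials)",
]

# Hypothetical action names; use whatever your integrations expose.
HIGH_IMPACT_ACTIONS = {"send_email", "update_crm", "issue_refund"}


def looks_like_injection(text: str) -> bool:
    """Flag untrusted content that resembles an override attempt."""
    lowered = text.lower()
    return any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)


def execute_action(action: str, payload: dict, approved_by: str | None = None) -> dict:
    """Policy gate: high-impact actions never run without a named approver."""
    if action in HIGH_IMPACT_ACTIONS and approved_by is None:
        raise PermissionError(f"'{action}' requires human approval before it runs")
    # Hand off to the real integration here; the sketch just records intent.
    return {"action": action, "payload": payload, "approved_by": approved_by}
```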
AI-specific risks that standard cyber hygiene misses
Standard Cyber Essentials controls cover the basics: access control, patch management, firewalls, malware protection, secure configuration. These matter and should be in place. But AI deployments introduce additional risk categories that Cyber Essentials does not address directly:
- Prompt injection. An attacker can craft input that causes an AI system to behave in unintended ways, including bypassing safety controls, exfiltrating context data, or taking unauthorised actions in connected systems. This is particularly relevant for AI agents with tool access or document processing workflows that accept external input.
- Context window leakage. If an AI model has access to sensitive data in its context window and a user can craft prompts to extract it, that data can leak. This applies especially to retrieval-augmented generation setups where the model can access internal documents or records (a scoped-retrieval sketch follows this list).
- Over-privileged integrations. AI tools that connect to your systems should have the minimum permissions required to do their job. A tool that can read your CRM data to draft responses does not need write access. Review every permission granted to every integration.
- Shadow AI. Staff will use AI tools you have not reviewed if the sanctioned options do not cover their needs. This creates uncontrolled data flows outside your security perimeter. The response is not to ban AI but to make the sanctioned options good enough that shadow use becomes unnecessary.
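The context leakage and over-privilege points share one mitigation: narrow what the model can see for each task. Below is a minimal sketch, assuming each document carries a scope tag; the tags and the keyword-overlap ranking are stand-ins for whatever metadata and vector search a real deployment would use on the filtered subset.

```python
from dataclasses import dataclass


@dataclass
class Doc:
    text: str
    scope: str  # illustrative tags, e.g. "support", "invoicing", "hr"


DOCS = [
    Doc("Refund policy: refunds within 30 days of purchase.", scope="support"),
    Doc("Q3 payroll summary for all staff.", scope="hr"),
]


def retrieve(query: str, task_scope: str, limit: int = 3) -> list[str]:
    """Filter by scope *before* relevance ranking, so out-of-scope
    records can never reach the model's context window."""
    in_scope = [d for d in DOCS if d.scope == task_scope]
    # Stand-in ranking by keyword overlap; a real deployment would run
    # its existing vector search over the in-scope subset only.
    terms = set(query.lower().split())
    ranked = sorted(in_scope, key=lambda d: -len(terms & set(d.text.lower().split())))
    return [d.text for d in ranked[:limit]]
```

Because the filter runs before retrieval rather than after generation, a prompt crafted to extract HR records through the support workflow has nothing to extract.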
Incident response essentials
Define in writing, before an incident occurs:
- how you detect abuse or anomalous AI behaviour
- who is alerted, and in what timeframe
- how credentials are rotated when a tool is compromised
- how you communicate impact to customers or partners if their data was involved
Test this process with a tabletop exercise at least quarterly. The point is not to pass an audit. It is to ensure that the people who would need to respond in a real incident know what they are supposed to do before they are under pressure. A brief written runbook, tested on that schedule, is sufficient for most SMEs.
If you are reviewing your AI security position ahead of a new deployment and want to check what is in place before you go live, the Operational Efficiency Audit covers your current tool integrations, access controls, and data flows as part of the process mapping. Security gaps in AI workflows are easier to close before the workflows are live than after they are running in production.