If your team enters personal data into AI tools, you are handling UK GDPR risk whether you call it a pilot or not. The ICO has been clear that the same data protection rules that apply to any other processing activity also apply to AI. The size of your business does not change your obligations, though it can inform how you document and manage them.

This checklist is designed for the owner or operations lead of a small business who needs real controls without spending weeks on legal process. It covers the five areas the ICO expects you to have addressed if you are using AI in any workflow that touches personal data.

  • 35% of UK SMEs now actively use AI, up from 25% in 2024
  • 65% report improved employee performance when using generative AI
  • 45% report measurable cost savings from AI adoption

Sources: British Chambers of Commerce, September 2025; OECD, Generative AI and the SME Workforce, 2025

The starting point most small businesses miss: you do not have to be doing something sophisticated to have a data protection obligation. If someone on your team pastes a client's name, email, or complaint into a chat-based AI tool, that is personal data processing under UK GDPR. The fact that it feels like ordinary work does not change the legal position. The ICO does not ask whether you called it a pilot. It asks whether you identified the lawful basis and managed the risk.

Step 1: classify your AI use cases

Before you apply any controls, you need to know what you are actually doing with personal data. Most SMEs have a mix of genuinely low-risk AI uses and higher-risk ones sitting side by side, often without anyone having drawn a distinction.

Low-risk examples

  • Drafting internal notes with no personal data
  • Summarising public documents
  • Generating marketing concepts without customer records

Higher-risk examples

  • Processing CVs, health data, customer complaints, or payment records
  • Automated decisions that affect employment or service outcomes
  • Sending personal data to tools without clear processor terms

The ICO's approach to enforcement is risk-proportionate. Low-risk uses attract minimal scrutiny. High-risk uses require formal documentation, and in some cases a Data Protection Impact Assessment before you go live. The gap between the two is not about which AI tool you use. It is about what data goes in and what decisions come out the other side.
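
If it helps to keep the register somewhere inspectable, here is a minimal sketch in Python of the same triage logic. The field names and the two-tier rules are illustrative choices for this example, not an ICO-defined taxonomy.

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    """One AI workflow in your register. Field names are illustrative."""
    name: str
    handles_personal_data: bool   # names, emails, complaints, CVs, payments
    special_category_data: bool   # health, ethnicity, and similar
    automated_decisions: bool     # affects employment or service outcomes
    has_processor_terms: bool     # written DPA in place with the vendor

def risk_tier(uc: AIUseCase) -> str:
    """Crude two-tier triage mirroring the low/higher-risk split above."""
    if uc.special_category_data or uc.automated_decisions:
        return "higher risk: document formally and consider a DPIA"
    if uc.handles_personal_data and not uc.has_processor_terms:
        return "higher risk: personal data sent without processor terms"
    if uc.handles_personal_data:
        return "review: personal data present, record the lawful basis"
    return "low risk: no personal data involved"

register = [
    AIUseCase("Draft internal notes", False, False, False, True),
    AIUseCase("CV screening assistant", True, True, True, False),
]
for uc in register:
    print(f"{uc.name} -> {risk_tier(uc)}")
```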

Step 2: document lawful basis and purpose

For each AI workflow that touches personal data, you need to be able to answer: what are we doing with this data, why are we doing it, and what legal basis are we relying on? That does not mean a lengthy legal document. It means a written note that covers those questions, kept somewhere you could find it if asked.

For most SME AI workflows the lawful basis is either legitimate interest or performance of a contract. Legitimate interest applies when the processing is necessary, proportionate, and not overridden by the individual's own interests, rights, and freedoms. Contract necessity applies when the processing is needed to deliver something the person has actually asked you to provide. Neither requires consent in most cases, but both require you to have made the decision consciously and recorded it. Choosing a lawful basis after something goes wrong is not acceptable to the ICO. The decision has to be made before the processing starts.
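
As a sketch of how little this needs to be, the note for one workflow could be kept as a structured record like the one below. The keys and values are examples, not a statutory format.

```python
# Illustrative written note for one AI workflow, recorded before the
# processing starts. Every value here is an example, not a legal template.
processing_record = {
    "workflow": "Summarise customer complaints for weekly review",
    "data_in": ["customer name", "email address", "complaint text"],
    "purpose": "Spot recurring service issues and respond faster",
    "lawful_basis": "legitimate interest",
    "balancing_note": "Necessary and proportionate; customers would "
                      "reasonably expect complaints to be reviewed",
    "decided_by": "Operations lead",
    "decided_on": "2025-09-01",  # dated before the first use, not after
}
```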

Step 3: run a DPIA where risk is material

A Data Protection Impact Assessment sounds more complicated than it is. For an SME, it is a structured exercise where you describe the processing, identify the risks, assess how serious they are, and document what you are doing to mitigate them. For most standard business AI uses, the whole thing can be done in an hour with a clear template.

A DPIA is genuinely required when you are making or contributing to significant automated decisions about individuals, processing special category data at scale, or using a technology in a way that could produce unexpected outcomes for the people involved. For most document processing, draft communication, or reporting workflows, it is not required. A brief written risk note is sufficient, and it demonstrates you thought about it rather than treating it as someone else's problem.
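
One way to make that trigger test explicit is a screening helper whose three flags correspond to the triggers above. This is a planning aid sketched against the criteria as described here, not legal advice; check the ICO's own screening checklist for borderline cases.

```python
def dpia_required(significant_automated_decisions: bool,
                  special_category_at_scale: bool,
                  novel_tech_unexpected_outcomes: bool) -> bool:
    """Screening test for the three DPIA triggers described above.
    If any flag is true, do the full assessment before going live;
    if none are, a brief written risk note is the fallback."""
    return any([significant_automated_decisions,
                special_category_at_scale,
                novel_tech_unexpected_outcomes])

# A document-summarisation workflow with none of the triggers:
print(dpia_required(False, False, False))  # False -> risk note is enough
```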

AI output quality is not a substitute for legal accountability. You still own the processing decision, whichever tool produced the output.

Step 4: tighten vendor controls

The vendor relationship is where most SME data protection failures originate. Standard AI vendor terms often allow the vendor to use your inputs to train their models, retain data longer than you need, store it outside the UK, and limit their incident response commitments to whatever their legal team thought was acceptable. None of this is disclosed prominently. It is in the terms and conditions most people click through.

Under UK GDPR, if you send personal data to a third-party AI tool, you need written Data Processing Agreements in place. This is not optional, and the ICO expects you to be able to produce them. The question is not whether you trust the vendor. It is whether their terms are actually compliant, and whether you can demonstrate that if asked.

  • Confirm whether prompts and outputs are used for provider model training, and whether this can be disabled
  • Confirm data residency and subprocessors: where is your data stored, and who else has access to it
  • Review contractual processor terms and confirm they meet UK GDPR Article 28 requirements
  • Set role-based access for staff and prohibit personal accounts for any workflow involving company or client data
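
A hedged sketch of how you might track those four checks per supplier so the gaps are visible at a glance. The keys are made up for illustration; only mark a check as satisfied once you have verified it in writing.

```python
# The four vendor checks above, held as a per-supplier record.
REQUIRED_CHECKS = {
    "training_on_inputs_disabled": "prompts/outputs not used for model training",
    "residency_and_subprocessors_confirmed": "storage location and access known",
    "article_28_dpa_signed": "written processor terms meeting UK GDPR Article 28",
    "personal_accounts_blocked": "role-based access only, no personal accounts",
}

def open_gaps(vendor: dict) -> list[str]:
    """Return the checks not yet confirmed for this vendor."""
    return [desc for key, desc in REQUIRED_CHECKS.items() if not vendor.get(key)]

# Hypothetical supplier with only the DPA in place:
print(open_gaps({"article_28_dpa_signed": True}))  # three gaps still open
```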

Step 5: build an operational policy

A policy does not need to be long. For most SMEs, a single page that everyone in the team has seen and understood is more valuable than a 20-page document that nobody reads.

Minimum policy sections

  • Approved use cases and explicitly prohibited data inputs (names, NI numbers, health data, financial account details unless explicitly in scope)
  • Mandatory human review points before AI output is used to make a decision affecting a person
  • Incident reporting process: who is told, when, and what happens next
  • Versioned prompt and workflow controls so changes are tracked
  • Named quarterly audit owner and review date
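
The prohibited-inputs rule is also the easiest one to back with a technical guardrail. A minimal sketch: screen text for obvious identifiers before it leaves the business. The patterns below are deliberately simple and will miss edge cases; treat this as a mistake-catcher, not a complete personal-data detector.

```python
import re

# Simple pre-send screen for the prohibited inputs named in the policy.
# Patterns are illustrative and intentionally loose: they catch obvious
# slips (an email address, an NI number) rather than all personal data.
PROHIBITED = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "NI number": re.compile(r"\b[A-Za-z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-Da-d]\b"),
    "UK sort code": re.compile(r"\b\d{2}-\d{2}-\d{2}\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return the prohibited data types found in the text, if any."""
    return [label for label, pattern in PROHIBITED.items() if pattern.search(text)]

hits = screen_prompt("Customer QQ 12 34 56 C emailed from jo@example.com")
if hits:
    print("Blocked. Remove before sending:", hits)  # both patterns match here
```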

What the ICO expects if you are investigated

The ICO's AI and data protection guidance makes clear that organisations are expected to demonstrate accountability. In practical terms for an SME, this means being able to show:

  • You identified the lawful basis for each AI processing activity before it started
  • You assessed the risks and documented them, even if informally, before going live
  • You have written processor terms with any AI supplier that handles personal data on your behalf
  • Staff who use AI tools with personal data have received basic training on what is and is not permitted
  • You have a process for handling data subject access requests that includes AI-processed records

None of this requires a legal team. It requires discipline and a paper trail. The businesses that face the most difficulty in an ICO inquiry are not the ones with the most complex AI projects. They are the ones who started without any documentation and have nothing to show when questions are asked.

If you want to map your current AI workflows against these checks before committing further, the Operational Efficiency Audit covers data handling and vendor controls as part of the process review. It is a practical exercise, not a compliance audit, and it gives you a clear picture of where the gaps are and what needs addressing first.

Sources

  1. Information Commissioner's Office, Artificial intelligence and data protection guidance
  2. Information Commissioner's Office, Accountability and governance
  3. UK Government, AI Playbook for the UK Government
  4. National Cyber Security Centre, Guidelines for secure AI system development
  5. UK Government, Cyber Essentials guidance