There is a meaningful gap between what AI can do in a controlled demo and what is genuinely worth automating in a real small business. Vendors demo their best case. Your business is not a best case. It has legacy data, inconsistent processes, staff with varying levels of interest in new technology, and constraints that do not feature in any marketing video.

This is not an argument against AI. It is an argument for being precise about which tasks you target and why. The businesses that get clear returns from AI are the ones that identify specific, bounded tasks and test them properly. The ones that do not are usually the ones that started with a tool and worked backwards.

  • 13 hours a week: the time the average UK worker spends on low-value repetitive tasks
  • 65% of SMEs using generative AI report increased employee performance
  • 45% report measurable cost savings

Sources: DocuSign, Digital Maturity Report, 2024; OECD, Generative AI and the SME Workforce, 2025

What works well

The tasks that AI handles reliably in SME contexts tend to share a common characteristic: they involve applying a consistent rule to variable input. The rule is stable, the input changes, and a person currently applies the rule manually each time. To identify where these tasks sit in your own business, it helps to map your operations across the six key performance areas before deciding what to automate.

  • Document processing. Reading invoices, job sheets, delivery notes, planning applications, tender documents, and pulling out specific fields. If your team opens a document and types information from it into another system, this is highly automatable. The accuracy depends on document quality, but for clean, typed documents it is consistently good.
  • First-draft communications. Quotes, follow-up emails, site survey summaries, maintenance reports, responses to standard enquiries. AI produces a workable first draft; a person reviews and approves. The time saving is substantial because drafting from scratch is the slow part.
  • Data extraction and summarisation. Reading a contract and pulling out key dates, obligations, and payment terms. Summarising a long email thread into three bullet points. Converting meeting notes into action items. These are all tasks where a person reads something in order to produce a shorter or more structured version of it.
  • Scheduling and allocation workflows. Matching jobs to available resources based on defined criteria, flagging scheduling conflicts, identifying anomalies in datasets. Works well where the rules are explicit and the edge cases are manageable.
  • Classification and routing. Categorising inbound enquiries, sorting documents into folders, routing tasks to the right person based on content. Reliable where the categories are well-defined and the inputs are reasonably consistent.
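The "consistent rule, variable input" pattern behind the last two items can be sketched in a few lines. This is a hypothetical illustration, not a real client configuration: the categories and keywords are invented, and a production system would use richer matching, but the shape is the same, with explicit rules applied to whatever text arrives, and anything unmatched falling through to a person.

```python
# Hypothetical keyword rules for routing inbound enquiries.
# The rule set is stable; only the incoming text varies.
ROUTING_RULES = {
    "quote": ["quote", "estimate", "price", "cost"],
    "support": ["fault", "broken", "not working", "repair"],
    "accounts": ["invoice", "payment", "statement", "remittance"],
}

def route_enquiry(text: str, default: str = "general") -> str:
    """Return the first category whose keywords appear in the enquiry."""
    lowered = text.lower()
    for category, keywords in ROUTING_RULES.items():
        if any(keyword in lowered for keyword in keywords):
            return category
    return default  # anything unmatched stays with a person

print(route_enquiry("Could you send a quote for ten doors?"))  # quote
print(route_enquiry("I have a complaint about your driver"))   # general
```

The point of the sketch is the structure, not the keywords: because every rule is written down, the task can be automated and, just as importantly, audited when it gets something wrong.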

What does not work as well as people expect

Customer service beyond the simplest tier is harder than most vendors will admit. AI can handle "what are your opening hours" and "where is my delivery." It cannot reliably handle a frustrated customer who wants to feel heard. The problem is not vocabulary; it is the absence of genuine acknowledgment. People detect it, and it makes the situation worse.

Sales conversations are similar. The parts of a sales conversation that close deals involve reading the room, adjusting tone, and responding to subtle signals that change from person to person. These are not pattern-based tasks.

Anything that needs to feel personal does not automate well. A well-crafted client update letter, a condolence response, a difficult conversation about a project going wrong. AI can produce words for these, but if the recipient can tell the words were generated rather than considered, the value is zero or negative.

Complex decisions requiring judgment are also poor candidates. Deciding whether to take on a contract, working out how to handle a difficult supplier, choosing between two approaches to a job with unusual constraints. These require knowledge that lives in people's heads and cannot currently be captured reliably enough for AI to replace the decision.

The test for whether a task is automatable

The most useful test I know is this: if you wrote the rules down, could someone follow them without asking questions?

If yes, the task is likely automatable. The key word is "rules." Not guidelines. Not judgment calls. Rules. Step one is this, step two is this, if X then do Y, if Z then do W.

If the answer is "it depends on the context," that is a signal that judgment is involved. AI can assist with those tasks, but it cannot replace the judgment reliably. You would need a human in the loop reviewing its output, and the question becomes whether that is still faster than just doing it manually.

If you wrote the rules down, could someone follow them without asking questions? If yes, the task is likely automatable. If it depends on context, you need a human in the loop.
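A task that passes the test can literally be written as a function of explicit checks, with no "it depends" anywhere. As a hypothetical example, take checking a job sheet for completeness before processing; the field names here are invented for illustration:

```python
# Hypothetical completeness check for an incoming job sheet.
# Every rule is explicit, so someone (or something) could follow
# it without asking questions -- the test described above.
REQUIRED_FIELDS = ["customer_name", "site_address", "job_date", "po_number"]

def completeness_issues(job_sheet: dict) -> list[str]:
    """Return a list of problems; an empty list means ready to process."""
    return [
        f"missing {field}"
        for field in REQUIRED_FIELDS
        if not job_sheet.get(field)
    ]

sheet = {"customer_name": "Acme Ltd", "site_address": "", "job_date": "2025-03-01"}
print(completeness_issues(sheet))  # ['missing site_address', 'missing po_number']
```

The moment a rule becomes "flag it if the job looks unusual," the function can no longer be written, and that is exactly where the human in the loop belongs.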

What this looks like in practice

At Vanda Coatings, the clearest automation wins came from processes where the same information was being handled multiple times in slightly different forms. Quoting stages, job tracking, cost data: all of it existed somewhere but needed pulling into one place before it was useful. Adding AI features to the ERP system we built reduced the manual work at specific steps in those workflows. The tasks that automated well were the ones where the rules were explicit: this information goes here, this status triggers this action, this figure appears on this report. The tasks that required judgment stayed with the people doing them.

The failure worth including: there was a process I wanted to automate end to end. No human in the loop, nothing left manual. I spent time trying to make it work completely, and it did not. The tool handled roughly 80% of the process reliably and fell apart on the edge cases that required judgment. My instinct was to keep pushing until it worked for the remaining 20%. That was the wrong call. Eighty percent of a repetitive process automated reliably is still 80% of the time saved, and the 20% that needed a person was the 20% that genuinely warranted one. The useful conclusion: AI works best as an aid to a process, not a wholesale replacement for it. If you go in expecting 100% automation, you will walk away from tools that would have saved you real time.

Two client examples cover different ends of the same principle. A manufacturing business with a large documentation system needed staff to find the right document without knowing exactly where it was stored or spending time searching manually through folders. AI-based document retrieval handled natural language queries and returned the relevant files. The value was not in replacing a complex process. It was in making a simple but time-consuming task consistently faster for every person in the business, across a library too large to navigate efficiently by hand.

A financial advisory firm was sending and collecting client feedback manually, which meant it happened inconsistently and depended on whoever remembered to follow up. Automating the scheduling, sending, and collection of responses moved the adviser's time from managing the logistics of getting feedback to actually reading it and acting on it. That shift is the most common form of value AI delivers in smaller businesses: not replacing the person, but removing the low-value part of their job so they can spend more time on the part that matters.

80% of a repetitive process automated reliably still means 80% of the time saved. Chasing 100% automation often means walking away from a tool that would have made a real difference. Practitioner observation, Anthony Jones / Vanda Coatings

A checklist of automatable task types

If any of the following happen regularly in your business, they are worth looking at:

  • Manually copying data between systems or documents
  • Writing the same type of email or letter with different names and details each time
  • Reading documents and extracting specific fields into a spreadsheet or database
  • Categorising inbound requests and routing them to the right person
  • Producing weekly or monthly summaries from operational data
  • Checking documents for completeness before processing
  • Scheduling or allocating resources based on defined rules
  • Generating standard reports from data you already hold
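The last item on the checklist is often the quickest win, because the data already exists. A minimal sketch, using invented in-memory records where a real version would read a CSV export or database:

```python
# Hypothetical weekly summary from job records you already hold.
# The data below is invented for illustration.
from collections import Counter

jobs = [
    {"status": "completed", "value": 1200.0},
    {"status": "completed", "value": 800.0},
    {"status": "cancelled", "value": 450.0},
    {"status": "in_progress", "value": 2000.0},
]

counts = Counter(job["status"] for job in jobs)
completed_value = sum(j["value"] for j in jobs if j["status"] == "completed")

print(f"Jobs this week: {len(jobs)}")
print(f"Completed: {counts['completed']} (£{completed_value:,.0f})")
print(f"In progress: {counts['in_progress']}, cancelled: {counts['cancelled']}")
```

A report like this takes someone half an hour every Friday; automated, it runs in under a second and never gets forgotten.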

If you want to work through this for your own business, see how the consulting engagement is structured or book a free call. A short conversation is usually enough to identify where the real opportunities are.

Sources

  1. DocuSign Digital Maturity Report, 2024
  2. OECD, "Generative AI and the SME Workforce", 2025
  3. ONS, "Management practices and AI in UK firms", March 2025