Most business owners who have been through a failed technology project do not need convincing that the failure rate is high. They lived it: bought the software, paid for the implementation, watched their team struggle with it for six months, and then quietly went back to spreadsheets. If that sounds familiar, you are not alone. Most AI and automation projects in small and medium businesses produce no lasting value. It is a pattern that matches the broader history of technology project failure in UK businesses, and one I have seen play out enough times to know the causes well.

That is not an argument for not trying. It is an argument for understanding why the failures happen before you start. The cause is almost never the technology. The three most common reasons sit in the process thinking that should happen before any tool gets selected: specifically, the process thinking that usually does not.

  • 35% of UK SMEs now actively use AI, up from 25% in 2024
  • 33% still have no plans to adopt AI, down from 43% in 2023
  • 65% of those using generative AI report increased employee performance

Sources: British Chambers of Commerce, September 2025; OECD, Generative AI and the SME Workforce, 2025

The first reason: starting with the technology

Someone reads about a tool, watches a demo, or hears what a competitor is supposedly using. They pick a tool that looks impressive, then spend the next few weeks trying to find a use for it inside their business. This produces a lot of activity and very little return.

The decision sequence matters. Technology is the last decision, not the first. The first decision is: what is the specific task we want to improve, how long does it take each week, and what would it mean to the business if we cut that time by half? Answer that precisely, and the tool selection becomes straightforward. Skip it, and you are evaluating tools without any basis for comparison.

At Vanda Coatings, I sat down and mapped out every manual task across the business before I looked at a single piece of software. When I measured the duplication properly, I found around 30,000 points of duplicated data across our processes. That number made the cost of doing nothing concrete rather than abstract. It also told me exactly what any solution needed to address. If I had started by looking at ERP systems first, I would have been evaluating them against a vague sense that things could be better, which is not a useful evaluation criterion.

The technology is the last decision, not the first. If you cannot describe the specific task and the time cost before you open a browser, you are not ready to buy anything yet.

The second reason: automating a broken process

AI and automation work on pattern-based tasks: situations where someone takes a piece of information and applies a known rule to it. Invoice received, check supplier is on the approved list, code it to the right account, post it. The same rule applied to the same type of input, repeatedly. That kind of task is automatable.
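To make "pattern-based" concrete, here is what that invoice rule looks like written down as code. This is a minimal sketch: the supplier names, account codes, and function are illustrative stand-ins for whatever your accounts system actually holds, not taken from any real business.

```python
# Minimal sketch of the invoice rule described above.
# APPROVED_SUPPLIERS and ACCOUNT_CODES are illustrative examples.

APPROVED_SUPPLIERS = {"Acme Ltd", "Northside Supplies"}
ACCOUNT_CODES = {
    "Acme Ltd": "5001-materials",
    "Northside Supplies": "5002-consumables",
}

def process_invoice(supplier: str, amount: float) -> str:
    """Apply the same rule to every invoice: approved supplier -> code -> post."""
    if supplier not in APPROVED_SUPPLIERS:
        return f"REJECT: {supplier} is not on the approved list"
    account = ACCOUNT_CODES[supplier]
    return f"POST: £{amount:.2f} to {account}"

print(process_invoice("Acme Ltd", 250.00))   # POST: £250.00 to 5001-materials
print(process_invoice("Unknown Co", 99.00))  # REJECT: not on the approved list
```

The point is not the code itself. It is that the rule can be written down at all. If no one in the business could fill in that function without arguing about it, the task is not ready to automate.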

The problem is that most SME processes are not documented. Different people handle the same task differently. One person does the check one way, another person shortcuts it, a third person has invented a workaround that everyone else finds confusing. When a process runs differently depending on who is doing it, there is no consistent rule for AI to apply. There is just a collection of individual habits.

Automating that produces exactly what you would expect: the inconsistency gets automated, not the process. The errors that used to happen occasionally now happen at speed. The variation that was manageable when one person was doing the task becomes a structural problem when a system is doing it.

Before any automation project, the question is not "can we automate this?" but "do we agree on how this task should be done?" If the answer is no, or if no one can write down the steps without a disagreement, the automation project needs to start with a process documentation exercise. That is unglamorous work, but it is what separates a project that sticks from one that gets abandoned.

If different people in your business handle the same task differently, automating it will make the inconsistency faster. It will not fix it.

The third reason: the data is not in good enough shape

AI learns from patterns in data. If the data is incomplete, inconsistently formatted, or spread across systems that do not talk to each other, there are no clean patterns to learn from. The model will do its best and produce outputs that look plausible but are wrong often enough to need constant checking. At that point, you have created a new task rather than removed one.

This is the failure mode that catches the most technically ambitious projects. A business decides it wants to build something using its own historical data, gets halfway through, and discovers that the data quality is not sufficient to produce reliable outputs. The project either gets shelved or the team spends months cleaning data before building anything.

The data readiness question needs to be answered at the start, not discovered midway through. Which data does this project depend on? Where does it live? Is it consistent across records? Are there gaps, duplicates, or fields that different people interpret differently? If the answers are uncertain, a data audit is the first deliverable, not an optional extra.

The three questions that cut through it

Before any AI or automation project, these three questions will tell you whether it is worth starting.

What is the exact task, and what does it cost in time each week? Name the task precisely. Not "we have a lot of manual admin" but "we spend around 4 hours a week copying order details from email into our job management system." If you cannot be specific, you are not ready to start. The time cost also tells you whether the project is worth the effort. A task that takes 30 minutes a week is not a priority. One that takes 6 hours across three people probably is.
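The arithmetic behind that judgement is worth making explicit. A rough sketch, with illustrative numbers rather than figures from any real project:

```python
# Rough annual cost of a manual task. All numbers here are
# illustrative assumptions, not measurements from a real business.

hours_per_week = 2.0   # per person, measured rather than guessed
people = 3             # how many people do the task
hourly_cost = 25.0     # fully loaded £/hour for those people
working_weeks = 46     # allow for holidays

annual_cost = hours_per_week * people * hourly_cost * working_weeks
print(f"£{annual_cost:,.0f} per year")  # £6,900 per year
```

A four-figure annual cost justifies a modest project. A three-figure one usually does not. Run the numbers before you run the trial.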

Does everyone handle this task the same way? Ask two or three people who do the task to walk you through how they do it. If the descriptions differ in any meaningful way, document the correct process first and get everyone aligned on it. The automation comes after that, not before. This conversation takes an afternoon. Rebuilding a poorly specified automation project takes months.

Where does the data come from, and is it consistent? Pull a sample of the data the project will depend on and look at it. Are the fields consistently filled in? Are the formats standard? Are there records that look different from others for no clear reason? If the data is messy, the project scope needs to include cleaning it. That changes both the timeline and the cost estimate, and it is better to know that before you start than three months in.
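If the data lives in a spreadsheet or can be exported to CSV, this first pass takes minutes rather than days. A minimal sketch using Python and pandas, where the file name and column names are assumptions standing in for whatever your project actually depends on:

```python
# First-pass data audit: gaps, duplicates, and inconsistent formats.
# "orders.csv" and its column names are hypothetical examples.
import pandas as pd

df = pd.read_csv("orders.csv")

# Gaps: how many missing values per field?
print(df.isna().sum())

# Duplicates: how many records repeat on what should be a unique key?
print(df.duplicated(subset=["order_id"]).sum())

# Inconsistent labels: "Done", "done", "complete" counted separately.
print(df["status"].value_counts())

# Dates that fail to parse into a standard format.
print(pd.to_datetime(df["order_date"], errors="coerce").isna().sum())
```

Four lines of checks will not clean anything, but they will tell you honestly whether a cleaning phase belongs in the project scope before anyone commits to a timeline.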

  • Name the specific task and measure the actual weekly time cost
  • Confirm the process is consistent and documented before touching any tool
  • Audit the source data before committing to a build
  • Set a clear measure of success so you can tell whether the project worked
  • Involve the people doing the task from the start, not after the build

None of this is technically complicated. It is discipline. The businesses that get lasting value from AI are the ones that do this groundwork before they open a product trial. The ones that skip it are the ones that end up adding a failed technology project to their list.

At Vanda, the bespoke ERP build that came out of this kind of process analysis has saved at least 5 hours a week in administration across the business. That time has gone back into client work and growth activity, not into doing the same admin more carefully. The improvement cycle is still running because the people using the system keep finding things that could work better, which is what happens when you build on a solid foundation rather than on a wish.

If you want to work through this for your own business, see how the consulting works or book a free call.

Sources

  1. British Chambers of Commerce, "The Turning Point for SMEs", September 2025
  2. OECD, "Generative AI and the SME Workforce", 2025
  3. ONS, "Management practices and AI in UK firms", March 2025