Most businesses approach automation with a list of things they want to fix and no method for choosing between them. Every problem on the list feels urgent to someone. Without a structured way to prioritise, the tendency is either to chase the most vocal sponsor's pet project or to start with whatever is easiest to demo rather than whatever produces the most value.
A priority matrix forces tradeoffs. It makes the selection criteria explicit so the decision is based on evidence rather than enthusiasm. It also gives you a record of why you chose what you chose, which matters when you are reviewing what went well six months later.
The two axes that matter
Axis one: business impact
Impact is a composite of four factors. Annual hours saved is the most reliable single measure, because it is objective and calculable from a simple time audit. Supplement it with cash acceleration (does this make the business faster at getting paid or responding to opportunities?), error reduction value (what does the current error rate cost in rework, customer remediation, or compliance exposure?), and capacity gain (does this extend the business's ability to grow without proportional headcount increase?).
Score each candidate from 1 to 5 on this axis. A task consuming 300+ hours per year across your team is a 5. A task consuming 20 hours per year is a 1 or 2.
Axis two: implementation complexity
Complexity is also composite. Data quality is the most important factor: if the inputs are inconsistent, hard to access, or held in formats that vary between instances, the automation will require significant pre-work before it functions reliably. Integration effort (how many systems does this touch?), compliance burden (does this involve personal data, regulated decisions, or customer-facing outputs that require audit trails?), and change management load (how much will this disrupt current team workflows?) all add to complexity score.
Score each candidate from 1 to 5. A task with clean, structured inputs, a single system integration, and no personal data is a 1. A task requiring access to multiple inconsistent data sources, touching customer records, and changing how three different people work is a 4 or 5.
Reading the matrix
With impact on the vertical axis and complexity on the horizontal, the four quadrants tell you different things:
- High impact, low complexity: automate first. These are your quick wins. They will produce visible results fast, build organisational confidence, and give you real data on what AI can do in your specific environment.
- High impact, high complexity: plan carefully, pilot in stages. These are worth pursuing, but they need proper scoping. Do not start without a baseline, a defined pilot, and clear go/no-go criteria at each stage.
- Low impact, low complexity: optional. Fine to do if resources allow, but do not let these crowd out the high-impact work. The ROI is limited and they can become a distraction.
- Low impact, high complexity: do not do this. These are the projects that consume budgets and produce nothing visible. They are often justified on the grounds that their complexity sounds impressive. The results rarely are.
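The quadrant logic above can be sketched as a small function. This is a minimal illustration, assuming 1 to 5 scores on each axis; the midpoint threshold of 3 is an assumption for the sketch, not a rule stated in the text.

```python
def quadrant(impact: int, complexity: int) -> str:
    """Classify a candidate on the priority matrix.

    Assumes 1-5 scores on each axis; treating 3+ as "high"
    is an illustrative cut-off, not a fixed rule.
    """
    high_impact = impact >= 3
    high_complexity = complexity >= 3
    if high_impact and not high_complexity:
        return "automate first"        # quick wins
    if high_impact and high_complexity:
        return "plan carefully, pilot in stages"
    if not high_impact and not high_complexity:
        return "optional"              # fine if resources allow
    return "do not do this"            # budget sink, nothing visible
```

For example, a task scored impact 5 and complexity 1 lands in "automate first", while impact 1 and complexity 5 lands in "do not do this".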
What good automation candidates look like in practice
In most UK SMEs, the highest-scoring items on the matrix tend to be:
- Document data extraction: invoices, purchase orders, delivery notes, job sheets with consistent structure
- First-draft communications: quotes, follow-up emails, site survey summaries, maintenance reports
- Internal routing: categorising inbound enquiries and assigning them to the right person or queue
- Operational summaries: converting meeting notes, call logs, or visit reports into structured action lists
- Compliance checking: verifying that documents or forms meet required standards before they proceed
Automate clean processes first. If your current process is inconsistent, AI will automate the inconsistency. Fix the process, then automate it.
A practical scoring model
Score each candidate from 1 to 5 on each dimension:
- Weekly volume: how many times does this task occur?
- Manual handling time per occurrence: minutes of staff time each time
- Error and rework cost: estimated annual cost of current mistakes
- Data quality: how clean and consistent are the inputs? (cleaner inputs mean a lower complexity score)
- Integration dependency: how many systems does this touch? (more systems mean a higher complexity score)
- Compliance sensitivity: does this involve personal data or regulated outputs? (more sensitivity means a higher complexity score)
Multiply your impact scores together (or sum them), then do the same for your complexity scores. Plot each candidate. The ones in the top-left quadrant of the resulting chart are where you start.
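The scoring model above can be sketched in a few lines. This uses the summing variant (averaged back onto a 1 to 5 scale); the candidate names and scores are invented purely to show the mechanics, and the complexity factors are assumed to be scored so that 1 means easy (clean data, one system, no personal data).

```python
# Illustrative sketch of the scoring model. Candidate names and
# scores are made up; impact factors are (weekly volume, handling
# time, error/rework cost) and complexity factors are (data quality,
# integration dependency, compliance sensitivity), each scored 1-5.

def composite(scores):
    """Average the 1-5 factor scores back onto a 1-5 axis score."""
    return round(sum(scores) / len(scores))

candidates = {
    "invoice data extraction":  {"impact": (5, 4, 3), "complexity": (2, 1, 2)},
    "multi-system CRM rebuild": {"impact": (1, 2, 2), "complexity": (5, 4, 4)},
}

for name, s in candidates.items():
    impact = composite(s["impact"])
    complexity = composite(s["complexity"])
    print(f"{name}: impact {impact}, complexity {complexity}")
```

Plotting each (complexity, impact) pair gives the matrix; in this invented example, invoice extraction lands top-left and the CRM rebuild lands bottom-right.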
Limit your first wave to two projects. Completing two well is worth more than starting five and finishing none.
The mistakes this process prevents
- Starting with the most technically interesting problem rather than the highest-value one
- Committing to a complex integration project before you have proved any AI can work in your environment
- Automating a process that was already broken, which just makes the brokenness harder to see
- Spending three months building something that saves four hours a year