Most AI buying mistakes are contract mistakes. The demo looks strong, the vendor promises an easy integration, and the price seems reasonable. Then, three months in, you discover that your customer data is being used to train their model, the support SLA is measured in business days rather than hours, and the only way to export your data is a CSV that breaks on special characters.
These problems are in the contract. They were there before you signed. The difference between businesses that avoid them and businesses that do not is whether they asked the right questions before committing.
This checklist covers the seven areas that matter most. Work through them before any AI supplier agreement. Some of these questions will feel awkward to ask. Ask them anyway. A vendor that gets defensive about data terms or cannot explain their incident process is telling you something important before you have handed over any money.
The seven checks before you sign
1. Data use terms
Many AI vendors reserve the right to use your inputs to improve their models. This is buried in the terms, not highlighted in the sales process. For internal uses like drafting notes from public information, this may be acceptable. For anything involving client data, employee records, or commercially sensitive information, it is not. Ask directly: can our prompts and outputs be used for training? Can this be disabled, and if so, what does that change about pricing or functionality? Get the answer in writing before you sign.
2. Security posture
SOC 2 Type II is the standard you want to see from any vendor processing business data. It means an independent auditor has tested that their controls actually operate over a sustained period, not just that they exist on paper. If a vendor cannot produce a SOC 2 report or equivalent, ask what they have instead. The answer will tell you whether they have thought seriously about security or whether "secure" is a marketing claim. Also confirm that data is encrypted in transit and at rest, and that they have a documented incident response process with a defined notification timeline.
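Encryption in transit is one of the few claims on this list you can spot-check yourself. A minimal sketch, assuming a Python environment; the hostname is a placeholder, not a real vendor endpoint. It only confirms the endpoint negotiates TLS 1.2 or later, and is no substitute for the audit report.

```python
import socket
import ssl

# Quick smoke test that a vendor endpoint negotiates modern TLS.
# "api.vendor.example" is a placeholder hostname, not a real service.
def check_tls(hostname: str, port: int = 443) -> None:
    context = ssl.create_default_context()
    context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse anything older
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            print(f"{hostname}: {tls.version()}, cipher {tls.cipher()[0]}")

if __name__ == "__main__":
    check_tls("api.vendor.example")
```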
3. Access controls
When a member of staff leaves, can you immediately revoke their access? When someone changes roles, can you adjust their permissions without contacting the vendor? Role-based access control and single sign-on are standard in tools aimed at business users. If the product only supports individual accounts with a shared password, you have a security problem that will grow with your team. Audit logs matter too: if something goes wrong, you need to be able to see who did what and when.
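It helps to know what "who did what and when" means in practice before you ask a vendor about their audit logs. A hypothetical sketch of the minimum fields an entry needs; the names are illustrative, not any vendor's actual schema. If the vendor's log export cannot populate each of these, you cannot reconstruct an incident.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# The minimum an audit log entry needs to answer "who did what, and when".
# Field names are illustrative; check what the vendor's export actually contains.
@dataclass
class AuditEvent:
    timestamp: datetime  # when, in UTC
    actor: str           # who: an individual identity, not a shared account
    action: str          # what: e.g. "prompt.submitted", "export.downloaded"
    resource: str        # what it touched: document, dataset, workspace
    source_ip: str       # where from, for incident reconstruction

event = AuditEvent(
    timestamp=datetime.now(timezone.utc),
    actor="j.smith@example.com",
    action="export.downloaded",
    resource="workspace/client-reports",
    source_ip="203.0.113.42",
)
```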
4. Exit path
The exit terms in a contract tell you how much leverage a vendor thinks they have over you once you are embedded. Look for: data export in a usable format (not a proprietary format that can only be read with their own tools), a clear deletion timeline after you cancel, and a transition period if you need support moving to another platform. Vendors who make exit difficult are not being malicious. They are being rational. Your job is to make sure the contract does not make switching prohibitively expensive before you have discovered whether the tool is actually the right one.
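One way to test the export claim during a trial rather than take it on faith: seed the account with records containing the characters that usually break exports, then request an export and check they survive. A minimal sketch, assuming a CSV export with a "client" column; the filename and column name are placeholders.

```python
import csv

# During the trial, seed the account with records containing the characters
# that typically break naive exports: commas, quotes, newlines, accents.
# Then request an export and check the seeded records survive intact.
# "sample_export.csv" and the "client" column are placeholders.
seeded = {"O'Brien, Ltd", 'Müller & Söhne "UK"'}

with open("sample_export.csv", newline="", encoding="utf-8") as f:
    exported = {row["client"] for row in csv.DictReader(f)}

missing = seeded - exported
print(f"Mangled or missing: {missing}" if missing else "All seeded records survived")
```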
5. Performance evidence
Vendor case studies are marketing materials. What you want is evidence from businesses of similar size and sector using the tool for a similar purpose. Ask for references you can actually call. If they cannot provide any, that should inform your confidence in the claimed results. Before committing to a full licence, test the tool on a representative sample of your own real work. AI tools often perform well on clean, idealised inputs and less well on the messy, inconsistent data most businesses actually have. Test with the messy version before you sign.
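A cheap way to build that messy test set is to take a clean sample you trust and inject the defects your real records actually contain. A rough sketch; the perturbations here are illustrative and should be swapped for the problems you genuinely see in your own data.

```python
import random

random.seed(0)  # reproducible runs

# Derive a "messy" evaluation set from a clean sample you already trust.
# The perturbations are illustrative; mirror the defects in your own data.
def make_messy(record: dict) -> dict:
    messy = dict(record)
    for key, value in messy.items():
        if not isinstance(value, str):
            continue
        if random.random() < 0.3:
            value = f"  {value.upper()} "  # stray whitespace, casing noise
        if random.random() < 0.1:
            value = ""                     # missing field
        messy[key] = value
    return messy

clean_sample = [{"name": "Acme Ltd", "status": "active"}]
messy_sample = [make_messy(r) for r in clean_sample]
# Run both sets through the tool and compare accuracy on clean vs messy inputs.
```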
6. Support model
When a production AI workflow breaks at 9am on a Monday, what happens? Many AI vendors offer support via a shared inbox with a response time measured in business days. That is fine for a tool you use occasionally. It is not fine for a tool your operations depend on. Before signing, confirm the support tier, the response SLA for critical issues, and whether there is a named contact or an anonymous ticket queue. For any workflow that is time-sensitive or customer-facing, inadequate support is a production risk, not just an inconvenience.
7. Total cost of ownership
The licence cost is the starting point, not the full picture. Add up: usage fees that scale with volume, integration costs if the tool does not connect directly to your systems, implementation time (which is often underestimated), staff training, and the ongoing supervision cost once it is live. AI tools are not set-and-forget. Someone needs to check that outputs are still reliable as the tool or your data changes. If you have not included that time in your cost model, the ROI calculation is not accurate.
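To make that concrete, here is a back-of-envelope first-year model. Every figure is an illustrative placeholder rather than a benchmark; the point is the structure, where the licence is one line among six.

```python
# Back-of-envelope first-year cost model. Every figure below is an
# illustrative placeholder; substitute your own quotes and estimates.
licence_per_seat = 40            # GBP/month, vendor quote
seats = 25
usage_fees = 150                 # GBP/month, volume-based, estimate from pilot
integration_one_off = 4_000     # GBP, connector or middleware work
implementation_days = 10         # internal time, often underestimated
training_days = 3
supervision_hours_per_month = 8  # ongoing output checking once live
day_rate = 350                   # GBP, loaded internal cost
hour_rate = day_rate / 7.5

first_year = (
    licence_per_seat * seats * 12
    + usage_fees * 12
    + integration_one_off
    + (implementation_days + training_days) * day_rate
    + supervision_hours_per_month * 12 * hour_rate
)
print(f"First-year total cost of ownership: £{first_year:,.0f}")
# Compare this figure, not the licence line alone, against the expected benefit.
```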
Questions to ask in the sales process
Before you agree a pilot or trial, get written answers to these:
- Are our prompts, inputs, or outputs used to train or improve your model? If yes, can this be disabled and what does that change about the service or pricing?
- Where is our data stored? Which subprocessors handle it? What are your data retention periods?
- What is your incident response process, and what is the contractual notification timeline?
- Who is our named support contact for critical issues, and what is your SLA for response and resolution?
- If we cancel, what is the data export process and what format does it use?
- Can you provide references from businesses of comparable size and sector?
A vendor that cannot answer these questions in writing before contract signature is a vendor with governance problems. That is not a risk worth taking on a live workflow.
Red flags
- No clear statement on data retention or model training use
- No named security owner or no incident response commitment
- Pricing linked to unclear token usage with no cap options
- No references in regulated or compliance-sensitive sectors
- Support only via a shared inbox with no SLA
- Exit terms that make data portability difficult or costly
If the vendor cannot explain their governance model in plain English, treat that as an implementation risk, not a complexity you can work around later.
If you are reviewing a shortlist and want an independent view on the vendor terms or the technical setup before you commit, the Operational Efficiency Audit covers vendor selection and integration risk as part of the process review. Or book a short call to talk through your specific shortlist. Either way, it is considerably cheaper than discovering a problem six months into a contract.