Why AI Gets Contract Review Wrong (Even When It Sounds Confident)

How contract complexity can fool AI, and what you can do about it

You've just received a job offer. The contract is 14 pages long.

You paste it into ChatGPT and get back a clean, confident summary. Everything sounds balanced and reasonable. You sign. Three months in, you discover the restrictive covenant clause stops you from working for any direct competitor in your sector for 12 months (and the termination clause you thought protected you actually works in reverse).

The AI wasn't lying to you. It was doing exactly what it's built to do: produce a helpful, fluent answer.

The problem is that "helpful and fluent" isn't the same as "legally accurate."

Most people asking about AI contract review focus on one question: Is it safe to upload my contract? That's important. We've written about confidentiality risk and how to anonymise your contract before using any AI tool.

But there's a second question that matters just as much:

Even if you use AI carefully, can the contract itself push the AI toward the wrong conclusion?

The answer is yes. And understanding how it happens puts you firmly back in control.

Who This Is For

This post is for UK employees reviewing a new job offer, freelancers checking a service agreement or NDA, gig workers evaluating a platform contract, or anyone who has been tempted to paste a legal document into ChatGPT and trust the summary they get back.

How AI Actually Reads a Contract

Large language models don't read the way a lawyer does.

A lawyer slows down when language gets complicated. They notice when a reassuring clause has a quiet exception buried two paragraphs later. They track how a defined term in clause 2 rewrites the apparent meaning of clause 14.

AI models work differently. They predict the most useful-sounding response based on patterns in the text. That works brilliantly for many tasks. But it means the model can be steered by the way a contract is written: not intentionally, not maliciously, but structurally.

In AI security, deliberate manipulation through embedded content is called prompt injection. You don't need to understand the technical details. The practical lesson is simpler:

The way a contract is worded, structured, and framed can influence what an AI model pays attention to, and what it quietly glosses over.

Even without any malicious intent, structure and repetition can steer what the model weighs as important. A contract doesn't need to be designed to trick AI. It only needs to be complex enough that a general-purpose model smooths over the wrong detail.

Five Ways AI Can Miss the Point in Your Contract

1. 🔍 The summary sounds right, but the legal effect is wrong

The AI might tell you:

"This clause limits liability for both parties."

That sounds reassuring. But what if the clause contains a carve-out that removes your protection in the exact scenario most likely to affect you? The summary isn't technically false. It's just incomplete in the one place that matters.

2. 📖 Defined terms quietly rewrite everything

Legal contracts often define ordinary words in very non-ordinary ways.

A term like "Gross Misconduct", "Confidential Information", "Services", or "Material Breach" may look familiar but carry a specific contractual meaning set out in a definitions clause. If the AI reads later clauses using the everyday meaning instead of the defined one, the entire analysis can drift. You'd have no reason to notice.

💡 Example: Your contract says you can be dismissed for "Gross Misconduct" without notice pay. Sounds standard. Then you check the definitions clause and find it includes "any breach of company policy," defined so broadly that it covers things like using personal email on a work device.

3. 🔗 The real risk is hidden across multiple clauses

Some of the most important problems in a contract don't sit in one dramatic paragraph. They only emerge when several clauses are read together.

Common combinations to watch:

  • 📌 Payment terms plus termination rights

  • 📌 Notice period plus garden leave provisions

  • 📌 IP ownership plus licence restrictions

  • 📌 Confidentiality obligations plus data processing language

  • 📌 Liability cap plus indemnity carve-outs

Generic AI is much better at summarising each clause individually than at reliably stitching them together into a legally meaningful whole.

4. 🎭 Reassuring language distracts from the sharp edges

Contracts often contain a lot of standard-sounding wording that makes the whole document feel balanced and professional. That tone influences AI summaries.

If most of the contract reads as reasonable, the model may produce an overall assessment that feels calm, even when one clause is aggressively one-sided. The document's general mood shapes the AI's output more than you'd expect.

5. 📏 Short clauses with big consequences get overlooked

AI models are sensitive to what's prominent: what's longest, clearest, or most repeated. A short line about non-compete obligations, assignment rights, governing law, automatic renewal, or termination for convenience can matter far more than a full page of operational detail.

Yet the short line is the easiest to miss.

📋 Common clauses AI tends to underweight: Non-compete and restrictive covenants / IP ownership / Termination for convenience / Liability caps and carve-outs / Governing law and jurisdiction / Garden leave / Payment trigger conditions

What This Means for You

โš–๏ธ AI can still be a useful first step. The mistake is treating a general chatbot like a reliable contract reviewer.

โš–๏ธ A polished, confident AI summary is not the same as a thorough legal assessment.

โš–๏ธ The clauses that matter most are often the least obvious ones.

Here's how to use AI more safely if you choose to:

A Safer Approach: How to Use AI on Your Contract

📌 1. Don't just ask for a summary

A summary is the easiest place for important detail to disappear. Instead, ask for structured extraction:

"Extract the following from this contract: parties, payment terms, notice period, termination rights, exclusivity, non-compete clauses, IP ownership, confidentiality obligations, liability cap, indemnities, dispute resolution, and governing law."

That forces the model to surface specifics rather than produce one polished paragraph.

📌 2. Ask what's unusual, not just what's there

A weak prompt: "Can you review this contract?"

A better prompt: "List any clauses that are unusually one-sided, commercially risky, or easy to overlook. Quote the relevant wording for each."

This pressures the model to be specific rather than reassuring.

📌 3. Always verify high-impact clauses yourself

Regardless of what the AI tells you, read the original wording of:

  • Termination and notice periods

  • Liability and indemnities

  • IP ownership

  • Restrictive covenants and non-competes

  • Automatic renewal

  • Payment triggers

  • Governing law and jurisdiction

These are exactly where a smooth AI summary can be most dangerous.

📌 4. Check every capitalised term

If a clause matters, look up whether any capitalised term inside it has a specific definition elsewhere in the contract. This one step catches a surprising amount of confusion.

💡 Pro tip: Search the document for the term in quotation marks. If it appears earlier in a definitions section, the definition (not the everyday meaning) is what the contract actually says.
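If you're comfortable with a little scripting, the quotation-mark search in the pro tip above can be automated. Below is a minimal Python sketch, not a real contract parser: the regexes assume common drafting conventions (defined terms introduced in quotes followed by "means", capitalised phrases used mid-sentence), which won't hold for every document.

```python
import re

# Assumption: definitions appear as "Term" means / shall mean / has the meaning ...
DEFINITION_RE = re.compile(r'"([A-Z][\w ]+?)"\s+(?:means|shall mean|has the meaning)')

def find_defined_terms(text):
    """Return the set of terms formally defined in the contract text."""
    return set(DEFINITION_RE.findall(text))

def undefined_capitalised_terms(clause, defined):
    """Capitalised mid-sentence phrases in a clause with no matching definition."""
    # The (?<=[a-z] ) lookbehind skips words that merely start a sentence.
    candidates = set(
        re.findall(r'(?<=[a-z] )([A-Z][a-z]+(?: [A-Z][a-z]+)*)', clause)
    )
    return candidates - defined
```

Anything this flags is a capitalised term whose everyday meaning you may be relying on; anything it doesn't flag still deserves a manual check, because real contracts define terms in plenty of other ways.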

📌 5. Test the AI before you trust it

Before relying on any AI summary, ask it a question you already know the answer to from the contract, for example, the notice period or start date. Ask it to quote the clause number and exact wording. If it gets that wrong, hedges, or paraphrases instead of quoting, treat the rest of the analysis with significant caution. It's a quick way to calibrate how well the model is actually reading your specific document.

💡 Skip the workarounds entirely. Ookulli is built specifically for UK employment contracts, freelancer agreements, NDAs, and more, with clause-by-clause analysis and your data kept private from the start. See how it works →

Why Purpose-Built Contract Review Is Different

This is exactly why general AI and contract review shouldn't be treated as the same thing.

A generic chatbot is built to respond helpfully across almost any topic. That breadth is its strength, and its limitation.

A purpose-built tool like Ookulli is built for something narrower and harder:

✅ Reviewing clause by clause, not as a single document dump

✅ UK-specific insights drawing on employment and contract law expertise

✅ Preserving legal structure rather than collapsing it into a summary

✅ Surfacing risks that only become visible when clauses are read together

✅ Plain-English explanations designed for employees, freelancers, and creatives, not lawyers

And critically: Ookulli is designed for confidentiality from the start. Your data isn't used for training and is handled in line with clear data protection terms. (See our Privacy Policy.)

That means you can upload your actual contract with confidence, skip the anonymisation workaround entirely, and get analysis built for the job.

The Bottom Line

The question to ask isn't just "Is it safe to share my contract with this AI?"

It's also: "Is this AI actually reading my contract in a way that preserves the legal risk?"

That's a much harder bar to clear. It's why AI contract review needs more than a general-purpose chatbot.

📢 At Ookulli, we review employment contracts, freelancer agreements, NDAs, and more: clause by clause, with built-in UK legal expertise and your data handled with care from start to finish. Try a review today.

This article is for informational purposes only and does not constitute legal advice. If you have specific concerns about your employment contract or legal rights, you should seek advice from a qualified solicitor.

FAQ

Can ChatGPT review a contract?

It can help summarise and extract information, but it shouldn't be treated as a reliable replacement for specialist review. Generic AI may miss defined terms, carve-outs, cross-references, and commercially important nuance that only emerges when clauses are read together.

Why does AI sound so confident even when it's wrong?

Large language models are trained to produce fluent, helpful-sounding responses. Confidence in tone doesn't reflect legal accuracy. A clean summary can hide the detail that matters most.

What is prompt injection and does it affect contract review?

Prompt injection is when content embedded in what an AI model reads steers the model, changing what it prioritises or how it responds. In contract review, the practical lesson is that wording, structure, and framing can steer a general-purpose model away from the clauses that carry the most legal risk.

Does anonymising my contract solve this problem?

Partly. Anonymising reduces privacy and confidentiality risk, but it doesn't solve the problem of generic AI misreading legal structure or missing your most important clause. The two risks are separate.

Is generic AI useless for contracts?

No. It can be useful for first-pass extraction, plain-English explanation, and generating questions to investigate further. The problem starts when users treat a polished answer as a dependable legal assessment.

What should I do instead of asking for a summary?

Ask for clause-by-clause extraction, request the exact wording for high-risk clauses, and manually verify anything involving liability, termination, payment, restrictive obligations, IP ownership, and dispute terms.


Ready to see through the legal fog?

Sign up now to join our waiting list and be the first to experience legal clarity like never before!

ookulli - designed with ♥️ worldwide
