🌱 Preventing AI Hallucinations

1 What Is AI Hallucination?

AI hallucination refers to confidently incorrect, fabricated, or illogical responses generated by the model.

2 Strategies to Prevent Hallucination

Each method below reduces the chance of misleading outputs.

2.1 Use Clear, Specific Prompts

Why: Prevents the AI from guessing at user intent.

Example: “List three cited statistics from UK retail trends published in 2023.”

2.2 Apply the Verifier Pattern

Why: Ensures the output is internally consistent and fact-based.

Example: “Check your previous answer and flag any unsupported claims.”
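
The verifier pattern can be scripted as a two-pass call. A minimal sketch in Python, assuming a hypothetical `llm` callable that wraps whatever model you use (the function name and prompt wording are illustrative, not a real API):

```python
from typing import Callable

def verify_answer(llm: Callable[[str], str], question: str) -> str:
    """Pass 1: draft an answer. Pass 2: ask the model to audit its own draft."""
    draft = llm(question)
    audit_prompt = (
        "Check the following answer for internal consistency and flag any "
        "unsupported claims.\n\n"
        f"Question: {question}\nAnswer: {draft}"
    )
    return llm(audit_prompt)
```

In practice `llm` would call your model provider's chat endpoint; the value of the pattern is that the second pass sees the draft as text to criticise rather than as its own output to defend.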

2.3 Provide Firm Instructions

Why: Discourages the AI from inventing facts.

Example: “Do not guess. Say ‘unknown’ if data is missing.”
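
Firm instructions work best when they sit in the system message, where they apply to every turn. A sketch using the common role/content message format; the exact wording of the instruction is an assumption, not a fixed recipe:

```python
def build_messages(user_prompt: str) -> list[dict]:
    """Prepend a firm system instruction so the model declines rather than invents."""
    system = (
        "Answer only from information you can support. "
        "If data is missing, say 'unknown'. Do not guess."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_prompt},
    ]
```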

2.4 Request and Validate Citations

Why: Identifies fabricated or unverifiable sources.

Example: “Provide real citations and then verify each one.”
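
Part of citation validation can be automated before any human review: pull the URLs out of a response and flag ones that are not even structurally plausible. A sketch using only the standard library; a real pipeline would also fetch each URL and confirm the page supports the claim:

```python
import re
from urllib.parse import urlparse

def extract_urls(text: str) -> list[str]:
    """Find http(s) URLs in a model response."""
    return re.findall(r"https?://[^\s)\]]+", text)

def suspicious_urls(text: str) -> list[str]:
    """Flag URLs whose host is missing or has no dot; these are often fabricated."""
    flagged = []
    for url in extract_urls(text):
        host = urlparse(url).netloc
        if not host or "." not in host:
            flagged.append(url)
    return flagged
```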

2.5 Use Retrieval-Augmented Generation (RAG)

Why: Anchors the AI to known documents.

Example: “Based only on this policy PDF, list three operational risks.”

(Works best in ChatGPT with file upload or Gemini with source documents.)
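
The same grounding can be built by hand: retrieve the most relevant passages, then instruct the model to answer only from them. A minimal sketch that ranks passages by word overlap, a stand-in for the embedding search a real RAG system would use (all names are illustrative):

```python
def retrieve(query: str, passages: list[str], k: int = 2) -> list[str]:
    """Rank passages by word overlap with the query; return the top k."""
    q_words = set(query.lower().split())
    ranked = sorted(
        passages,
        key=lambda p: len(q_words & set(p.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def grounded_prompt(query: str, passages: list[str]) -> str:
    """Build a prompt that confines the model to the retrieved context."""
    context = "\n".join(retrieve(query, passages))
    return (
        "Based only on the context below, answer the question. "
        "If the context does not contain the answer, say 'unknown'.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
```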

2.6 Use Tools or Plug-ins

Why: Enables external checks or calculations.

Example: “Search for the latest inflation figures from the ONS and summarise.”
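
Under the hood, tool use is routing: the model names a tool and an argument, and your code runs it and returns the result. A toy dispatcher, assuming a simple name/argument request shape (real frameworks define their own schemas):

```python
def calculator(expr: str) -> str:
    # eval with empty builtins: acceptable for a sketch, not for untrusted input.
    return str(eval(expr, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def dispatch(tool_call: dict) -> str:
    """Run the requested tool, or report an error the model can recover from."""
    name = tool_call.get("name")
    if name not in TOOLS:
        return f"error: unknown tool {name!r}"
    return TOOLS[name](tool_call["argument"])
```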

2.7 Ask for Confidence Levels

Why: Forces the AI to self-assess uncertainty.

Example: “Indicate your confidence in each claim using a 1–5 scale.”
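
If the model is told to end each claim with a fixed marker such as `(Confidence: N/5)`, the scores can be parsed out and low-confidence claims surfaced for review. A sketch assuming that exact marker; the format is a convention you impose in the prompt, not something models emit natively:

```python
import re

def parse_confidence(text: str) -> list[tuple[str, int]]:
    """Extract (claim, score) pairs from lines ending in '(Confidence: N/5)'."""
    pairs = []
    for line in text.splitlines():
        m = re.match(r"(.+?)\s*\(confidence:\s*([1-5])/5\)\s*$", line, re.I)
        if m:
            pairs.append((m.group(1).rstrip(" ."), int(m.group(2))))
    return pairs

def low_confidence(text: str, threshold: int = 3) -> list[str]:
    """Return claims scored below the threshold."""
    return [claim for claim, score in parse_confidence(text) if score < threshold]
```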

2.8 Keep to Known Domains

Why: Hallucinations are more common in poorly documented or niche areas.

Example: Use prompts tied to well-understood domains like project management.

2.9 Add a Fact-Checking Step

Why: Identifies and isolates any falsehoods.

Example: “Highlight and fact-check all claims in the previous answer.”
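
The fact-checking step scales better if the answer is first split into individual claims, each checked separately. A naive sketch that treats each sentence as one claim; real claim extraction is harder, so this is purely illustrative:

```python
import re

def split_claims(answer: str) -> list[str]:
    """Naive sentence split; each sentence becomes one checkable claim."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", answer) if s.strip()]

def fact_check_prompts(answer: str) -> list[str]:
    """One follow-up prompt per claim, to send back to the model or a checker."""
    return [f"Fact-check this claim and cite supporting evidence: {c}"
            for c in split_claims(answer)]
```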

2.10 Require Uncertainty Flagging

Why: Makes grey areas visible to the user.

Example: “Label speculative or unverified information as such.”
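
If the prompt asks the model to wrap unverified material in explicit tags such as `[UNVERIFIED] ... [/UNVERIFIED]`, those spans can then be pulled out programmatically. A sketch assuming that tag convention; the tags are whatever you specify in the prompt:

```python
import re

def flagged_spans(text: str) -> list[str]:
    """Return the spans the model marked as unverified."""
    return re.findall(r"\[UNVERIFIED\](.*?)\[/UNVERIFIED\]", text, re.S)
```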