Preventing AI Hallucinations
1 What Is AI Hallucination?
AI hallucination refers to confidently incorrect, fabricated, or illogical responses generated by the model.
2 Strategies to Prevent Hallucination
Each method below reduces the chance of misleading outputs.
2.1 Use Clear, Specific Prompts
Why: Prevents the AI from guessing at the user's intent.
Example: "List three cited statistics from UK retail trends published in 2023."
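A minimal sketch of the difference in practice, using the OpenAI Python SDK as one illustration; the client setup, model name, and prompt wording are assumptions, and any chat-style API follows the same pattern.

```python
from openai import OpenAI  # assumption: OpenAI Python SDK; any chat-style API works similarly

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A vague prompt invites the model to fill the gaps with guesses.
vague = "Tell me about UK retail."

# A specific prompt pins down scope, time period, quantity, and sourcing.
specific = (
    "List three statistics on UK retail trends published in 2023. "
    "Name the source for each, and say 'unknown' if you cannot identify one."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: substitute whichever model you actually use
    messages=[{"role": "user", "content": specific}],
)
print(response.choices[0].message.content)
```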
2.2 Apply the Verifier Pattern
Why: Ensures the output is internally consistent and fact-based.
Example: "Check your previous answer and flag any unsupported claims."
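A sketch of the verifier pattern as a two-pass chain: the first call drafts an answer, the second reviews that draft for unsupported claims. The call_model helper, model name, and prompt wording are assumptions; any prompt-in, text-out client will do.

```python
from openai import OpenAI  # assumption: same client as in the 2.1 sketch

client = OpenAI()

def call_model(prompt: str) -> str:
    # Hypothetical helper: one prompt in, one plain-text answer out.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Pass 1: draft the answer.
draft = call_model("Summarise the main causes of the 2008 financial crisis.")

# Pass 2: ask the model to act as a verifier over its own draft.
review = call_model(
    "Check the following answer and flag any unsupported or fabricated claims, "
    "quoting each flagged sentence:\n\n" + draft
)
print(review)
```

A second model, or a human reviewer, makes the check stronger, since a model can repeat its own mistakes when reviewing its own draft.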
2.3 Provide Firm Instructions
Why: Discourages the AI from inventing facts.
Example: "Do not guess; say 'unknown' if data is missing."
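One way to make such instructions firm is to place them in a system message so they apply to every turn rather than a single prompt. A hedged sketch, again assuming the OpenAI Python SDK; the instruction wording and the example question are illustrative.

```python
from openai import OpenAI

client = OpenAI()

# Assumption: a standing system message is one way to keep the rule in force
# across the whole conversation rather than repeating it in every prompt.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "If you do not know something, reply 'unknown'. "
                "Never invent facts, names, figures, or citations."
            ),
        },
        {"role": "user", "content": "What was Company X's 2022 revenue?"},
    ],
)
print(response.choices[0].message.content)
```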
2.4 Request and Validate Citations
Why: Identifies fabricated or unverifiable sources.
Example: "Provide real citations and then verify each one."
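One way to chain the request and the check is sketched below; call_model is assumed to be a prompt-in, text-out helper like the one in 2.2, and the prompt wording is illustrative.

```python
def request_and_check_citations(call_model, topic: str) -> str:
    # `call_model` is assumed to be the prompt-in, text-out helper sketched in 2.2.
    answer = call_model(
        f"Write a short briefing on {topic}. "
        "Support every factual claim with a numbered citation (author, title, year)."
    )
    check = call_model(
        "For each numbered citation in the text below, state whether you are "
        "confident it refers to a real publication or whether it may be fabricated. "
        "Do not invent new citations.\n\n" + answer
    )
    return check
```

Treat any "may be fabricated" flag as a cue to verify manually; the only reliable citation check is looking each reference up at its source.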
2.5 Use Retrieval-Augmented Generation (RAG)
Why: Anchors the AI to known documents.
Example: "Based only on this policy PDF, list three operational risks."
(Works best in ChatGPT with file upload or Gemini with source documents.)
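Where no file-upload feature is available, the simplest retrieval-augmented setup is to paste the source text into the prompt and restrict the model to it. A minimal sketch; ask_from_document and the document path are assumptions, and a production RAG pipeline would chunk and search documents rather than sending them whole.

```python
from pathlib import Path

def ask_from_document(call_model, question: str, doc_path: str) -> str:
    # `call_model` is the prompt-in, text-out helper sketched in 2.2;
    # `doc_path` is a placeholder for whichever document should ground the answer.
    document = Path(doc_path).read_text(encoding="utf-8")
    prompt = (
        "Answer using only the document below. If the document does not contain "
        "the answer, say 'not stated in the document'.\n\n"
        "=== DOCUMENT ===\n"
        f"{document}\n"
        "=== QUESTION ===\n"
        f"{question}"
    )
    return call_model(prompt)
```

For long documents, a full RAG pipeline splits the text into chunks, indexes them, and retrieves only the relevant passages, but the rule of answering only from the retrieved text stays the same.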
2.6 Use Tools or Plug-ins
Why: Enables external checks or calculations.
Example: "Search for the latest inflation figures from the ONS and summarise."
2.7 Ask for Confidence Levels
Why: Forces the AI to self-assess uncertainty.
Example: "Indicate your confidence in each claim using a 1–5 scale."
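This instruction is easy to apply uniformly by appending it to every prompt. A small sketch; the wrapper name and the wording of the scale are assumptions.

```python
CONFIDENCE_SUFFIX = (
    "\n\nAfter each claim, add a confidence rating from 1 (very unsure) "
    "to 5 (very confident), and briefly say why for anything rated 3 or below."
)

def with_confidence(prompt: str) -> str:
    # Appends the confidence instruction so every answer self-assesses its claims.
    return prompt + CONFIDENCE_SUFFIX

# Usage with the helper from 2.2:
# call_model(with_confidence("Summarise the UK's 2023 inflation trend."))
```

Self-reported confidence is not a calibrated probability, but low ratings are a useful signal of where to double-check.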
2.8 Keep to Known Domains
Why: Hallucinations are more common in poorly documented or niche areas.
Example: Use prompts tied to well-understood domains like project management.
2.9 Add a Fact-Checking Step
Why: Identifies and isolates any falsehoods.
Example: "Highlight and fact-check all claims in the previous answer."
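A sketch that splits the fact-check into claim extraction followed by a per-claim verdict, which makes individual falsehoods easier to isolate than a single "check this" pass; fact_check, the helper it takes, and the prompt wording are assumptions.

```python
def fact_check(call_model, answer: str) -> list[str]:
    # Step 1: pull out the individual factual claims, one per line.
    claims = call_model(
        "List every factual claim in the text below, one per line, "
        "with no commentary:\n\n" + answer
    ).splitlines()

    # Step 2: ask for a verdict on each claim separately.
    verdicts = []
    for claim in filter(None, (c.strip() for c in claims)):
        verdict = call_model(
            "Is the following claim well supported, unsupported, or uncertain? "
            "Answer in one word, then one sentence of justification.\n\n" + claim
        )
        verdicts.append(f"{claim} -> {verdict}")
    return verdicts
```

Checking claims one at a time also makes it easy to route only the "unsupported" or "uncertain" ones to a human reviewer.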
2.10 Require Uncertainty Flagging
Why: Makes grey areas visible to the user.
Example: "Label any speculative or unverified information as such in your answer."