Preventing AI Hallucinations
1 What Is AI Hallucination?
AI hallucination refers to confidently incorrect, fabricated, or illogical responses generated by the model.
2 Strategies to Prevent Hallucination
Each method below reduces the chance of misleading outputs.
2.1 Use Clear, Specific Prompts
Why: Prevents AI from guessing user intent.
Example: “List three cited statistics from UK retail trends published in 2023.”
2.2 Apply the Verifier Pattern
Why: Ensures the output is internally consistent and fact-based.
Example: “Check your previous answer and flag any unsupported claims.”
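The verifier pattern is simple to wire up in code as a two-pass call. A minimal sketch, where `ask_model` is a placeholder for whatever chat-completion call you actually use (it is not a real SDK function):

```python
# Verifier pattern sketch: draft an answer, then ask the model to audit it.
# `ask_model` is a hypothetical stand-in for your real LLM API call.
def ask_model(prompt: str) -> str:
    # Placeholder implementation; replace with an actual API call.
    return f"[model response to: {prompt}]"

def verified_answer(question: str) -> str:
    """Two-pass prompting: generate a draft, then have the model check it."""
    draft = ask_model(question)
    audit_prompt = (
        "Check the following answer and flag any unsupported claims:\n"
        f"{draft}"
    )
    return ask_model(audit_prompt)
```

The key design point is that the audit runs as a separate call, so the model reviews its draft as input text rather than defending it mid-generation.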
2.3 Provide Firm Instructions
Why: Inhibits the AI from inventing facts.
Example: “Do not guess; say ‘unknown’ if data is missing.”
2.4 Request and Validate Citations
Why: Identifies fabricated or unverifiable sources.
Example: “Provide real citations and then verify each one.”
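Citation checking can be partially automated by extracting citation-like tokens from the response so each one can be verified by hand or with a search tool. A small sketch (the regex covers only URLs and simple “(Author, Year)” forms, as an assumption about the citation style requested):

```python
import re

# Pull URL and (Author, Year) citations out of a model response so each
# one can be checked individually. An empty result means the answer
# offered no checkable sources at all.
CITATION_RE = re.compile(r"https?://\S+|\([A-Z][A-Za-z]+,\s*\d{4}\)")

def extract_citations(answer: str) -> list[str]:
    """Return every citation-like token found in the answer."""
    return CITATION_RE.findall(answer)
```

An answer that yields an empty list despite making factual claims is itself a red flag worth sending back to the model.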
2.5 Use Retrieval-Augmented Generation (RAG)
Why: Anchors the AI to known documents.
Example: “Based only on this policy PDF, list three operational risks.”
(Works best in ChatGPT with file upload or Gemini with source documents.)
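The core of RAG is retrieving relevant passages and constraining the prompt to them. A toy sketch, using keyword overlap in place of the embedding search a real system would use (all function names here are illustrative):

```python
import re

# Toy retrieval-augmented generation: rank document chunks by keyword
# overlap with the question, then build a prompt grounded in the best
# matches. Real systems use embeddings; overlap is a simple stand-in.
def tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(chunks: list[str], question: str, k: int = 2) -> list[str]:
    q = tokens(question)
    return sorted(chunks, key=lambda c: len(q & tokens(c)), reverse=True)[:k]

def build_prompt(chunks: list[str], question: str) -> str:
    context = "\n".join(retrieve(chunks, question))
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        "context, say 'unknown'.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
```

Note that the prompt combines two strategies from this list: grounding in retrieved text and a firm instruction to say “unknown” rather than guess.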
2.6 Use Tools or Plug-ins
Why: Enables external checks or calculations.
Example: “Search for the latest inflation figures from the ONS and summarise.”
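Tool use follows a round-trip pattern: the model emits a structured tool request, the harness executes it, and the result is fed back. A sketch with a safe arithmetic tool (the `CALL calculator:` convention is invented for illustration, not any real plug-in protocol):

```python
import ast
import operator

# Safe arithmetic "tool": evaluates basic expressions without eval().
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calculator(expr: str) -> float:
    """Evaluate a simple arithmetic expression via the AST, not eval()."""
    def ev(node):
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Constant):
            return node.value
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval").body)

def run_tool_call(model_output: str):
    """Dispatch a 'CALL calculator: <expr>' line to the tool, else None."""
    if model_output.startswith("CALL calculator:"):
        return calculator(model_output.split(":", 1)[1].strip())
    return None
```

Offloading calculation to a deterministic tool removes an entire class of hallucination: the model never has to produce the number itself.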
2.7 Ask for Confidence Levels
Why: Forces the AI to self-assess uncertainty.
Example: “Indicate your confidence in each claim using a 1–5 scale.”
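Confidence annotations are most useful when they are machine-readable, so low-confidence claims can be routed for human review. A sketch assuming the prompt asks for a “[confidence: n/5]” tag after each claim (that tag format is an assumption, not a model default):

```python
import re

# Parse "claim [confidence: n/5]" annotations out of a model response
# and collect the claims that fall below a review threshold.
CONF_RE = re.compile(r"(.+?)\s*\[confidence:\s*([1-5])/5\]")

def low_confidence_claims(answer: str, threshold: int = 3) -> list[str]:
    """Return claims whose self-reported confidence is below threshold."""
    return [claim.strip() for claim, score in CONF_RE.findall(answer)
            if int(score) < threshold]
```

Self-reported confidence is not calibrated probability, but it is a cheap filter for deciding which claims deserve a fact-checking pass.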
2.8 Keep to Known Domains
Why: Hallucinations are more common in poorly documented or niche areas.
Example: Use prompts tied to well-understood domains like project management.
2.9 Add a Fact-Checking Step
Why: Identifies and isolates any falsehoods.
Example: “Highlight and fact-check all claims in the previous answer.”
2.10 Require Uncertainty Flagging
Why: Makes grey areas visible to the user.
Example: “Label speculative or unverified information.”