**False confidence in AI**
The most obvious reason to apply critical thinking to AI-generated content is its current tendency to hallucinate or simply provide erroneous data. When we present AI content without critical consideration, we display false confidence in its outputs.
**Filtering information**
In a world already drowning in information, AI allows us to *generate* unlimited information in seconds. We must be able to assess what is important and filter it so that what remains is both relevant and valuable.
**Ethical considerations**
AI uses patterns found in its training data to formulate its output. It has no inherent values or social understanding. If its training data contains biases (or does not guard against them), its output may fall short of the ethical standards, fairness, and nuanced judgment that society requires.
**Complex judgments**
*True* understanding of ambiguity, politics, emotion, and ethics is still outside the core capabilities of AI.
**Collaborative decisions**
When we assess points given to us by other people, we can't know to what extent they've used AI to generate them, so we must remember that **AI output is only as good as the prompts and information given to it**. We are already seeing people generate their own plans, points, or arguments on topics of which they personally know very little, topics for which they previously would have had to consult a more experienced colleague or a specialist team. But do they know enough to formulate an effective prompt, or to assess the output? I consider this a key risk in the adoption of AI today.