🌱 Don't be misled by AI

This video claims that 'smart' / experienced people may be even more susceptible to being wrongly persuaded by AI output than novices are. I find its claims somewhat conceptual and would like the points to be more strongly argued.

If it is right, however, the argument runs as follows: whilst novices are more likely to ask whether something is true, experienced users tend to ask whether it makes sense. Confident, clear and well-structured information tends to give people a feeling of understanding - in other words, it feels as though it makes sense, and so they trust it, even when the content is false. Since AI output is typically confident, clear and well structured, experienced users may fall into the trap of mistakenly trusting it. Instead of asking 'is this true?', they ask 'does this align with my mental model?' or 'can I see how this would work?'.

The key vulnerability outlined by the video, therefore, is this shift from 'truth-testing' to 'coherence-testing'.

⏰ It might be interesting to revisit the coherence theory of truth in epistemology to see whether it sheds any light on these observations.