‘AI still makes mistakes’ – in my view one of the most annoying things to read about AI...
[This is the first in a mini-series of posts I intend to write about my main bugbears with common views about AI – things that annoy me when I read them, and why.]
So, what bugs me about ‘AI still makes mistakes’? Well, it suggests that AI is somehow deficient and that it must be ‘fixed’. And that it’s only a matter of time until it's fixed and flawless (after all, it’s AI, right?).
This thinking is quite prevalent in public discourse. Yet, it ignores the very nature of AI. Unlike traditional computing, which is precise and accurate, AI is probabilistic. It encodes patterns from data and then makes predictions, either in the form of classifications (like with image recognition), or in the form of content generation.
But predictions can, and always will to some degree, be ‘wrong’ – in principle!
No matter how much fine-tuning, retraining, or other computational improvement we throw at the technology, its nature as a probabilistic technology will not change. There will always be some misclassifications, inaccuracies, or ‘hallucinations’.
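To make this concrete, here is a minimal, purely illustrative sketch (not any particular model) of why a classifier can never be ‘certain’: its raw scores are converted into a probability distribution via softmax, which by construction always leaves some probability mass on the other labels. The labels and scores below are hypothetical.

```python
import math

def softmax(scores):
    """Convert raw model scores into a probability distribution summing to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores an image classifier might produce for one image
labels = ["cat", "dog", "fox"]
scores = [2.0, 1.0, 0.5]

probs = softmax(scores)
for label, p in zip(labels, probs):
    print(f"{label}: {p:.2f}")

# The model 'predicts' cat (about 0.63 probability), but the remaining
# probability mass on 'dog' and 'fox' never disappears entirely. Picking
# the top label is always a bet, never a guarantee.
```

The point of the sketch: even a well-trained model outputs a distribution, and turning that distribution into a single answer is where ‘mistakes’ become inevitable by design.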
But let’s not forget, it is the probabilistic nature that makes AI so powerful, with abilities that can exceed humans on a range of tasks. We just need to be careful where we employ it, and put safeguards in place when decisions have severe consequences, like keeping humans closely in the loop.
And there’s a final thing that bugs me – the word ‘still’ implies that it might be better to wait until the technology is perfect, until it’s somehow no longer ‘beta’. Such a waiting game risks falling behind while industries and competitors are already adopting AI for productivity and transformation.
AI will still make mistakes, regardless of how powerful it becomes. And that’s OK, as long as we understand its nature, how to manage it, and what structures to build around it. AI fluency is key.
#AI #generativeai #genai