Everyone talks about AI skills, but few talk about AI intuition - knowing when AI is right, when it’s wrong, and when it’s confidently wrong. Our Innovation Breakfast unpacked why this human skill is fast becoming essential.
One of the most unexpected and thought-provoking discussions at our recent Innovation Breakfast wasn’t about models, tools or roadmaps. It was about people.
And specifically, the skills people need in order to work safely and effectively with AI.
A phrase emerged that captured the problem perfectly: AI intuition.
Not AI literacy. Not prompt engineering. But intuition: the ability to recognise when an AI system is likely to be right, when it’s likely to be wrong, and when it’s confidently wrong.
It quickly became clear across the roundtable that this is one of the biggest gaps organisations face today.
Traditional “AI skills” training focuses on features, tools, capabilities and use cases. Useful, but incomplete.
Because the challenge isn’t just learning what AI can do. It’s learning what it shouldn’t do.
One participant summed it up well: AI has jagged capabilities. It’s brilliant at one task and terrible at something that looks very similar. Humans assume consistency, but AI doesn’t behave consistently.
This inconsistency makes trust calibration extremely difficult. A calculator behaves the same way every time. AI doesn’t. And this is where intuition becomes essential.
A concern shared in the room was around what happens when people lean on AI before they truly understand the underlying skill.
Experienced professionals are using AI to accelerate tasks they already know how to do. Because they understand the domain deeply, they can judge when the AI’s output is plausible and when it isn’t.
But people entering the workforce today won’t have that foundation. They’re learning with AI before they’ve learned without it.
This raises a critical question: If someone never learned the underlying skill, how can they recognise when the AI is confidently wrong?
In engineering, operations, public safety, research or regulated environments, that gap is more than an inconvenience; it’s a risk.
One of the clearest recommendations from the group was: Give people access. Let them use AI. Let them see where it works, and where it fails.
This is how intuition forms: not through PowerPoints or handbooks, but through lived experience of where AI succeeds and where it fails.
Several participants noted that most AI training today focuses on capability rather than judgement. AI intuition is fundamentally about the latter.

This is why AI literacy, important as it is, no longer goes far enough. People need to develop the judgement that allows them to use AI safely, confidently and responsibly.
A provocative moment came when discussing the newest generation entering the workforce:
Many have never completed academic or professional tasks without AI. And when the tools fail - whether through outages, connectivity issues or model limitations - some struggle to operate independently.
Leaders questioned whether we’re preparing people for a world with AI… or unintentionally creating dependency on it.
One of the strongest insights from the morning was this:
AI intuition is as much about restraint as it is about capability.
It’s the ability to recognise when AI shouldn’t be used at all. AI is a powerful accelerator, but not a substitute for judgement.
AI adoption is accelerating fast, but AI understanding is not.
To use AI effectively, organisations need people who can tell when to trust AI’s output and when to question it.
This is AI intuition and it is fast becoming a core skill for the modern workforce.