The rise of “AI intuition” and why skills development must change

Elia Corkery, Marketing Executive · 3 min read

Everyone talks about AI skills, but few talk about AI intuition: knowing when AI is right, when it’s wrong, and when it’s confidently wrong. Our Innovation Breakfast unpacked why this human skill is fast becoming essential.

One of the most unexpected and thought-provoking discussions at our recent Innovation Breakfast wasn’t about models, tools or roadmaps. It was about people: specifically, the skills people need to work safely and effectively with AI.

A phrase emerged that captured the problem perfectly: AI intuition.

Not AI literacy. Not prompt engineering. But intuition: the ability to recognise when an AI system is likely to be right, when it’s likely to be wrong, and when it’s confidently wrong.

It quickly became clear across the roundtable that this is one of the biggest gaps organisations face today.

Why AI intuition matters more than AI literacy

Traditional “AI skills” training focuses on features, tools, capabilities and use cases. Useful, but incomplete.

Because the challenge isn’t just learning what AI can do. It’s learning what it shouldn’t do.

One participant summed it up well: AI has jagged capabilities. It’s brilliant at one task and terrible at something that looks very similar. Humans assume consistency, but AI doesn’t offer it.

This inconsistency makes trust calibration extremely difficult. A calculator behaves the same way every time. AI doesn’t. And this is where intuition becomes essential.

The danger of skipped learning

A concern shared in the room was what happens when people lean on AI before they truly understand the underlying skill.

Experienced professionals are using AI to accelerate tasks they already know how to do:

  • reviewing contracts
  • prototyping ideas
  • analysing scenarios
  • writing reports

Because they understand the domain deeply, they can judge when the AI’s output is plausible and when it isn’t.

But people entering the workforce today won’t have that foundation. They’re learning with AI before they’ve learned without it.

This raises a critical question: If someone never learned the underlying skill, how can they recognise when the AI is confidently wrong?

In engineering, operations, public safety, research or regulated environments, that gap is more than an inconvenience; it’s a risk.

AI intuition isn’t built through theory. It’s built through exposure.

One of the clearest recommendations from the group was simple: give people access. Let them use AI. Let them see where it works and where it fails.

This is how intuition forms.

Through:

  • experimentation
  • trial and error
  • comparing outputs
  • challenging results
  • learning the patterns of capability
  • spotting the limits firsthand

Not through PowerPoints or handbooks, but through lived experience.

We need to train for judgement, not just tools

Several participants noted that most AI training today focuses on capability rather than judgement.

AI intuition is fundamentally about:

  • assessing risk
  • questioning outputs
  • recognising uncertainty
  • knowing when human oversight is non-negotiable
  • spotting edge cases
  • understanding when automation is inappropriate

This is why AI literacy, important as it is, no longer goes far enough. People need to develop the judgement that allows them to use AI safely, confidently and responsibly.

Are we building capability or dependency?

A provocative moment came when the conversation turned to the newest generation entering the workforce:

Many have never completed academic or professional tasks without AI. And when the tools fail - whether through outages, connectivity issues or model limitations - some struggle to operate independently.

Leaders questioned whether we’re preparing people for a world with AI… or unintentionally creating dependency on it.

The real skill is knowing when not to use AI

One of the strongest insights from the morning was this:

AI intuition is as much about restraint as it is about capability.

It’s the ability to recognise when:

  • the stakes are too high,
  • the data is unreliable,
  • the output is persuasive but suspect,
  • the scenario involves human nuance,
  • or the problem simply isn’t suited to automation.

AI is a powerful accelerator but not a substitute for judgement.

Why this matters now

AI adoption is accelerating fast, but AI understanding is not.

To use AI effectively, organisations need people who can:

  • evaluate risk
  • question outputs
  • recognise pattern failures
  • sense when something “doesn’t look right”
  • and maintain independent thinking

This is AI intuition, and it is fast becoming a core skill for the modern workforce.


Elia Corkery, Marketing Executive at New Icon
