AI governance in practice: managing shadow AI in UK organisations

Elia Corkery Marketing & Communications Manager

Insights from our AI governance roundtable, exploring shadow AI, data risk, and practical steps for organisations adopting AI responsibly.

Last week, we hosted our latest Minds in Motion roundtable in Bristol, bringing together senior leaders from across healthcare, finance, manufacturing, energy and technology to explore a challenge many organisations are facing today: AI governance.

The session saw a fantastic turnout, with the room at full capacity and demand exceeding the available spaces – a clear signal that organisations across the UK are actively working out how to manage AI adoption safely and effectively.

Led by our CEO, Dolo Miah, the session was designed as an open, discussion-led forum rather than a presentation. What followed was an honest, practical and at times uncomfortable conversation about how AI is really being used inside organisations today.

AI adoption is already happening – with or without governance

One of the strongest themes to emerge was that AI adoption is already widespread across organisations, regardless of whether it has been formally approved.

Across sectors, attendees shared that employees are:

  • using generative AI tools to accelerate day-to-day work
  • experimenting with automation and workflows
  • incorporating AI outputs into business processes and decisions

Often, this is happening without clear visibility, policy, or governance frameworks in place.

This is what is increasingly being referred to as “shadow AI” – the use of AI tools outside approved systems or oversight. And importantly, this is not a future risk; it is already embedded within many organisations.

You can’t stop AI, but you can govern how it’s used

A consistent view across the group was that restricting or banning AI tools is not an effective strategy. If organisations attempt to block usage entirely, it does not stop adoption – it simply pushes it outside controlled environments.

Instead, a more effective approach to AI governance includes:

  • providing approved AI tools and environments
  • making secure usage easier than insecure alternatives
  • encouraging open conversations about how AI is being used
  • positioning governance as an enabler of innovation, not a blocker

For many organisations, this represents a shift from control to visibility and guidance.

AI governance builds on existing data and security foundations

Another key takeaway was that AI governance is not a standalone discipline.

The risks associated with AI adoption are closely linked to areas organisations are already familiar with, including:

  • data governance
  • GDPR and data privacy
  • security and access control
  • intellectual property protection
  • compliance and auditability

Rather than starting from scratch, organisations should focus on extending existing governance frameworks to account for how AI tools interact with data and decision-making. In many cases, the foundations already exist; they just need to evolve.

The real risk is misplaced trust in AI outputs

While security and data exposure were key concerns, one of the most important themes discussed was trust.

AI-generated outputs are often highly articulate, contextually relevant, and delivered with confidence – even when they are incorrect.

Attendees shared real examples where AI had generated inaccurate or fabricated references, introduced errors into regulated or client-facing work, and influenced decisions without clear traceability. This highlights a critical risk: not just misuse, but over-reliance on outputs that appear credible yet have not been validated.

Human judgement remains critical in AI-driven organisations

Across industries – particularly in highly regulated sectors – there was strong agreement that AI should augment, not replace, human decision-making.

The concept of “human in the loop” was a recurring theme. AI can support organisations by accelerating analysis, identifying patterns in large datasets, and improving operational efficiency; however, accountability, interpretation and trust remain human responsibilities.

The most effective organisations will be those that successfully combine human expertise with AI capability, rather than relying on automation alone.

Practical first steps for AI governance

While there is no one-size-fits-all AI governance framework, several practical starting points emerged from the discussion:

  • understand where AI is already being used across the organisation
  • define what data can and cannot be shared with AI tools
  • provide secure, approved environments for experimentation
  • update existing governance policies to explicitly include AI usage
  • build awareness through real-world examples and training
  • encourage a culture of transparency rather than hidden usage

Ultimately, the goal is not to eliminate AI risk entirely, but to manage it in a way that enables safe, scalable adoption.

New Icon’s perspective on AI governance

At New Icon, we see AI governance as a critical part of modern digital transformation. It is not about restricting innovation, but about creating the right foundations for organisations to adopt AI with confidence.

That means building on strong data and security principles, enabling safe experimentation, and designing systems that are transparent, explainable and accountable. As AI continues to evolve, so too must the way organisations approach governance.

What comes next?

What became clear from this session is that AI governance is not a one-off initiative. It is an ongoing, evolving capability. And for many organisations, this conversation is only just beginning. We’ll be continuing the discussion in future Minds in Motion sessions as we explore how businesses can move from experimentation to responsible, enterprise-grade AI adoption.

