Insights from our AI governance roundtable, exploring shadow AI, data risk, and practical steps for organisations adopting AI responsibly.
Last week, we hosted our latest Minds in Motion roundtable in Bristol, bringing together senior leaders from across healthcare, finance, manufacturing, energy and technology to explore a challenge many organisations are facing today: AI governance.
The session saw a fantastic turnout, with the room at full capacity and demand exceeding the available spaces – a clear signal that organisations across the UK are actively trying to understand how to manage AI adoption safely and effectively.
Led by our CEO, Dolo Miah, the session was designed as an open, discussion-led forum rather than a presentation. What followed was an honest, practical and at times uncomfortable conversation about how AI is really being used inside organisations today.
One of the strongest themes to emerge was that AI adoption is already widespread across organisations, regardless of whether it has been formally approved.
Across sectors, attendees shared that employees are already using AI tools in their day-to-day work – often without clear visibility, policy, or governance frameworks in place.
This is what is increasingly being referred to as “shadow AI” – the use of AI tools outside approved systems or oversight. And importantly, this is not a future risk, it is already embedded within many organisations.
A consistent view across the group was that restricting or banning AI tools is not an effective strategy. If organisations attempt to block usage entirely, it does not stop adoption – it simply pushes it outside controlled environments.
Instead, the group favoured an approach built on understanding how AI is actually being used, providing approved tools, and setting clear expectations for safe use. For many organisations, this represents a shift from control to visibility and guidance.
Another key takeaway was that AI governance is not a standalone discipline.
The risks associated with AI adoption are closely linked to areas organisations already manage, such as data protection, information security and regulatory compliance. Rather than starting from scratch, organisations should focus on extending existing governance frameworks to account for how AI tools interact with data and decision-making. In many cases, the foundations already exist – they just need to evolve.
While security and data exposure were key concerns, one of the most important themes discussed was trust.
AI-generated outputs are often highly articulate, contextually relevant, and delivered with confidence – even when they are incorrect.
Attendees shared real examples where AI had generated inaccurate or fabricated references, introduced errors into regulated or client-facing work, and influenced decisions without clear traceability. These examples highlight a critical risk: not just misuse, but over-reliance on outputs that appear credible yet have not been validated.
Across industries – particularly in highly regulated sectors – there was strong agreement that AI should augment, not replace, human decision-making.
The concept of “human in the loop” was a recurring theme. AI can support organisations by accelerating analysis, identifying patterns in large datasets, and improving operational efficiency; however, accountability, interpretation and trust remain human responsibilities.
The most effective organisations will be those that successfully combine human expertise with AI capability, rather than relying on automation alone.
While there is no one-size-fits-all AI governance framework, several practical starting points emerged from the discussion – from gaining visibility of how AI is currently being used to extending existing policies to cover AI tools. Ultimately, the goal is not to eliminate AI risk entirely, but to manage it in a way that enables safe, scalable adoption.
At New Icon, we see AI governance as a critical part of modern digital transformation. It is not about restricting innovation, but about creating the right foundations for organisations to adopt AI with confidence.
That means building on strong data and security principles, enabling safe experimentation, and designing systems that are transparent, explainable and accountable. As AI continues to evolve, so too must the way organisations approach governance.
What became clear from this session is that AI governance is not a one-off initiative. It is an ongoing, evolving capability. And for many organisations, this conversation is only just beginning. We’ll be continuing the discussion in future Minds in Motion sessions as we explore how businesses can move from experimentation to responsible, enterprise-grade AI adoption.