AI adoption is stalled by one thing: data trust. Until organisations fix ownership, governance and quality, AI can’t scale.
Across every industry represented at our Innovation Breakfast, from scientific research and national agencies to engineering, public sector leadership and advanced tech, one topic kept resurfacing: trust. Not in AI itself, but in the data that feeds it.
While many organisations are experimenting with automation, private models and AI-powered workflows, very few feel confident that their data foundations are ready for real, operational change. And this isn’t a technical problem. It’s cultural, structural, and deeply human.
Here’s how the room unpacked it.
One of the strongest observations of the morning was that most organisations are drowning in data yet starved of insight.
Someone summed it up simply: “We collect so much data… but we don’t use most of it.”
Why? Because without clear ownership or purpose, data stops being an asset and becomes a liability: something that adds friction rather than clarity.
A key theme from the roundtable was the blurred line around data ownership. If data about customers, users or citizens exists across dozens of systems, platforms and vendors, who does it belong to? Who’s responsible for safeguarding it? Who can use it to train or optimise AI?
Several attendees pointed out that much of the mistrust surrounding AI comes from the uncertainty around how models acquire their training data in the first place. Scraped, aggregated and inferred data raise uncomfortable questions that many organisations don’t yet feel equipped to answer.
The consensus was clear:
Before people trust AI, they need to trust where their data goes, how it’s used and why it’s collected.
Participants working in the public sector and regulated environments highlighted something very real: the cost of getting it wrong is simply too high.
Data-sharing between departments is still approached cautiously. Automation projects move slowly because reputational and legal risks are significant. Even simple forms of AI adoption can stall if governance isn't fully aligned.
This isn’t reluctance. It’s responsible stewardship.
But it creates a bottleneck: organisations want the benefits of AI without the exposure that comes with data mismanagement, and until that tension is resolved, progress remains slow.
The conversation repeatedly touched on one insight: When people don’t trust the data, they don’t trust the outputs. And when they don’t trust the outputs, they don’t experiment. And when they don’t experiment, innovation stalls before it even begins.
This is why many AI initiatives never leave the experimentation phase. It’s not that the technology fails; it’s that confidence never reaches the point where organisations feel safe operationalising it.
A standout point was the need to move away from “collect everything just in case” towards collecting only what has purpose, value and clarity.
People trust what they can see and understand. When data ownership, access and usage rules are explicit, the anxiety dissolves.
Teams trust AI more when they can see how insights improve real processes. It’s not enough to implement models; they must be meaningfully tied to outcomes.
This is where the concept of “AI intuition” came in. People need to understand both the power and the boundaries of AI so they know when to trust the output and when to challenge it.
The strongest sentiment from the discussion was this: AI adoption isn’t stalled by technology.
It’s stalled by uncertainty around data.
Organisations don’t need more tools. They need clearer models for data ownership, ethical use, sharing, governance and accountability.
Until those foundations are solid, AI will remain a promising experiment rather than a fully embedded capability.