Why data trust is still the biggest barrier to AI adoption

Elia Corkery, Marketing Executive

AI adoption is stalled by one thing: data trust. Until organisations fix ownership, governance and quality, AI can’t scale.

Across every industry represented at our Innovation Breakfast, from scientific research and national agencies to engineering, public sector leadership and advanced tech, one topic kept resurfacing: trust. Not in AI itself, but in the data that feeds it.

While many organisations are experimenting with automation, private models and AI-powered workflows, very few feel confident that their data foundations are ready for real, operational change. And this isn’t a technical problem. It’s cultural, structural, and deeply human.

Here’s how the room unpacked it.

Organisations are collecting more data than ever, yet using very little of it

One of the strongest observations of the morning was that most organisations are drowning in data yet starved of insight.

Someone summed it up simply: “We collect so much data… but we don’t use most of it.”

Why?

  • It’s held in silos
  • It’s unclear who owns it
  • Quality is inconsistent
  • Governance models are outdated or unclear
  • Teams aren’t sure what’s actually useful
  • People lack confidence that data can be shared safely

Instead of being an asset, data becomes a liability: something that adds friction rather than clarity.

Trust isn’t a technology problem, it’s an ownership problem

A key theme from the roundtable was the blurred line around data ownership. If data about customers, users or citizens exists across dozens of systems, platforms and vendors, who does it belong to? Who’s responsible for safeguarding it? Who can use it to train or optimise AI?

Several attendees pointed out that much of the mistrust surrounding AI comes from the uncertainty around how models acquire their training data in the first place. Scraped, aggregated and inferred data raise uncomfortable questions that many organisations don’t yet feel equipped to answer.

The consensus was clear:

Before people trust AI, they need to trust where their data goes, how it’s used and why it’s collected.

Public sector teams face a unique challenge: risk aversion by necessity

Participants working in the public sector and regulated environments highlighted something very real: the cost of getting it wrong is simply too high.

Data-sharing between departments is still approached cautiously. Automation projects move slowly because reputational and legal risks are significant. Even simple forms of AI adoption can stall if governance isn't fully aligned.

This isn’t reluctance. It’s responsible stewardship.

But it creates a bottleneck: organisations want the benefits of AI without the exposure of data mismanagement, and until that tension is resolved, progress remains slow.

Without trust, teams avoid experimentation, and innovation flatlines

The conversation repeatedly touched on one insight: when people don’t trust the data, they don’t trust the outputs. And when they don’t trust the outputs, they don’t experiment. And when they don’t experiment, innovation stalls before it even begins.

This is why many AI initiatives never leave the experimentation phase. It’s not because the technology fails; it’s because confidence never reaches the point where organisations feel safe operationalising it.

So how do we actually build trust? The room offered four clear ideas

1. Collect less data, but collect it intentionally

A standout point was the need to move away from “collect everything just in case” towards collecting only what has purpose, value and clarity.

2. Build transparent governance models

People trust what they can see and understand. When data ownership, access and usage rules are explicit, the anxiety dissolves.

3. Make the benefits visible

Teams trust AI more when they can see how insights improve real processes. It’s not enough to implement models; they must be meaningfully tied to outcomes.

4. Educate teams on the limitations of AI, not just its possibilities

This is where the concept of “AI intuition” came in. People need to understand both the power and the boundaries of AI so they know when to trust the output and when to challenge it.

The takeaway: before AI can accelerate innovation, it must earn its licence to operate

The strongest sentiment from the discussion was this: AI adoption isn’t stalled by technology.
It’s stalled by uncertainty around data.

Organisations don’t need more tools. They need clearer models for data ownership, ethical use, sharing, governance and accountability.

Until those foundations are solid, AI will remain a promising experiment rather than a fully embedded capability.

Elia Corkery, Marketing Executive at New Icon
