AI adoption in business: real examples, risks and lessons from UK leaders

Dolo Miah, CEO & CTO

A practical look at AI adoption in business, covering real use cases, data challenges, AI governance risks and lessons from leaders across UK industries.

AI: The good, the bad, and the ugly

Over the last six months, I have had the privilege of sitting down - virtually and in person - with leaders from across UK industries: energy, utilities, construction, law, manufacturing, media, education, and beyond. Not to sell anything. Not to run a workshop. Just to have an honest conversation about where they are with AI adoption and implementation.

What I heard was revealing. Not because everyone was doing extraordinary things with AI (some were, some weren't), but because the honesty in those conversations cut through the noise that dominates most AI discourse right now.

So here is my attempt to distil what’s really going on with AI in businesses. No hype. No doom. Just the real picture - the good, the bad… and the ugly!

The good: Real value is being created

Let's start with the wins, because there are genuine examples of AI delivering real business value.

A senior leader in the education sector shared that AI has been a genuine win for content creation: cutting translation costs by 40% and reducing the time spent on translations by 50-60%, which in turn is getting products to market faster. That is not a marginal efficiency gain; that is a structural shift in their operating model, driven by AI-enabled processes. The same organisation is also using AI to augment tutors, helping students work through problems rather than simply handing them the answer. That is AI enhancing human potential - not replacing it.

A senior executive in the energy sector reframed this in a way that struck a chord with me. The shift to new energy - electric vehicles, flexible tariffs, mixed energy sources - is fundamentally a behaviour change problem, not a technology one. Early adopters of new energy will figure it out themselves. But the mass market will only follow if the experience is simpler than what they do today. Smart meter data tells you what a customer is doing, but to genuinely be able to influence behaviour, you need a 360-degree view across multiple dimensions. Get that right, and AI can personalise the energy experience in a way that makes the sustainable choice the path of least resistance.

The transition to clean energy will not be won by technology alone. It will be won by giving customers the easier option.

In the charity and mental health sector, one founder described becoming "time-rich" - using AI to handle administrative tasks like drafting contracts so that his team's energy goes where it matters most: people. He was refreshingly candid too, noting they are bringing in an expert to ensure they use it ethically. That kind of intentionality is exactly what good AI adoption looks like. 

A Head of AI in the media and publishing sector made a point that resonates well beyond his industry. His organisation has decades of high-quality, authoritative content - built long before the internet, let alone AI. The challenge was never the quality of what they had. It was the ability to find it, surface it, and put it in front of the right person at the right moment. AI is now making that possible. Not by changing the content but by making it work harder. The same team is also reporting that it is building applications much quicker and for a lower cost - meaning they can experiment, learn, and move on without the risk and expense that would previously have stopped them trying.

The common thread across successful AI use cases? The organisations seeing real value are not chasing the headline. They started with a clear problem, kept humans in the loop, and in many cases, found that AI's biggest win was unlocking value that was already there.

The bad: The gap between expectations and reality

Here is where AI implementation gets more complicated.

Several leaders I spoke to were candid about the gap between what AI was promised to deliver and what it has actually delivered so far.

A senior figure in media and journalism noted that the big productivity and cost savings that were supposed to materialise simply have not - at least not yet. The focus has quietly shifted from "AI will save us money" to "AI might improve our customer experience". That pivot is the right instinct, but it only means something if you actually know what your customer experience looks like today. Without a measurable baseline, you cannot answer the one question your board will eventually ask: what did we actually get for that investment?

A director at a construction and property law firm put it plainly: accuracy is a mixed bag, and expectations have had to come down. Even when working with closed, proprietary data, AI is not reliably picking up the nuances that matter in legal work. Her take? The more senior you are, the less satisfied you will be with the output - because you know exactly what good looks like. For junior tasks like drafting letters or pulling together a chronology, it saves a little time. For substantive legal work? A human must be in the loop, full stop. She also flagged something that should concern every leader in a compliance-heavy industry: the Building Safety Act is fast-moving, and the consequences of getting it wrong are severe. If AI is feeding inaccurate or outdated guidance into decisions about fire safety or structural compliance, the stakes are not abstract. And she is not alone in having to set client expectations as they arrive with AI-generated instructions in hand. 

A leader in a manufacturing consultancy made a point that every senior decision-maker needs to hear. AI can sound extraordinarily confident while being entirely wrong. Think of it as that colleague who delivers bad news with a beaming smile and zero self-awareness. It can conflate separate conversations - motorbikes and cars in the same estimate, for example - leading to outputs that look authoritative but are riddled with errors. It is excellent for generating initial ideas but can be dangerous when taken as the source of truth.

In the energy supply chain space, the ambition is there - a real-time, AI-powered control tower that gives full visibility across operations. The obstacle is not the AI. It is the data. With critical information locked inside legacy systems and reports pulled manually, you cannot build intelligent systems on top of fragmented, conflicting and shaky foundations. The AI may well be ready. But the infrastructure also has to be.

The bad is not really about AI failing. It is about organisations discovering that AI is only as good as the foundations beneath it - the data, the governance, the processes, and the realistic expectations set from the start.

The ugly: The risks that keep leaders up at night

Now for the uncomfortable bit.

A health and safety lead in the utilities sector and a senior operations leader in the water industry separately circled the same risk from different angles. One flagged that his organisation built an AI system to assess health and safety risks - solid work, well intentioned. But when they looked to roll it out internationally, a problem emerged. In certain management cultures, what the system says can quickly become what the workforce does, with no experienced voice in between to question it. If the manager directing that workforce has never worked in the operational field, they have no practical experience to sense-check what the AI is telling them. The other leader made the same point from a different angle: "Garbage in, garbage out". Operations teams are being asked to trust systems that are only as reliable as the data feeding them. Put those two problems together - poor data quality and inexperienced interpretation - and the consequences can be severe. AI does not know when it is wrong - hallucination is the clearest example. Knowing that is still a human's job.

A director in the design and construction industry - with a background spanning smart cities, data engineering and large-scale technology consulting - had a sharp observation. She has spent years watching engineers jump to solutions before the problem is understood, and AI is accelerating that instinct. Organisations are deploying complex capabilities before the architecture is mapped or the business case made. And the people who should be asking the hard questions - the "experts" - are too confident and not honest enough about how much is still being figured out. 

And to add my two pence - perhaps the most universal risk of all: the absence of AI governance. I have written about this at length in our AI governance framework, but the conversations I have had over these six months only reinforce it. Employees using unsanctioned tools, sensitive data entering public models, outputs used without verification. It is the digital equivalent of leaving your front door open and being surprised when things go missing. This is creating a new enterprise risk - 'Shadow AI' - which has the potential to be exponentially bigger than its predecessor, 'Shadow IT'.

The ugly truth is this: bad data, inexperienced interpretation, and ungoverned tools are not edge cases. For most organisations, they are the current reality - whether they know it or not.

The light at the end of the tunnel

Here is what I want to leave you with.

Every single person I spoke to - the sceptics, the enthusiasts, and everyone in between - is still in the game. Nobody is walking away from AI. What they are doing is getting smarter about it.

The leaders who are seeing real value share a few things in common: they started with a clear problem rather than a technology; they kept humans in the loop; they invested in their data before they invested in their models; and they measured outcomes rather than activity.

AI is not a magic wand for digital transformation. It is not going to change your business overnight simply because you bought a licence or ran a proof of concept. But applied thoughtfully as part of a structured AI strategy, with the right foundations, architecture, governance, and the right expectations, it is already improving how some of the UK's most complex organisations operate.

The question is not whether AI is right for your business. The question is whether you are approaching it in a way that will actually deliver value through the balanced combination of people, process and technology considerations.

At New Icon, we do not recommend AI for the sake of AI. We have those honest conversations - the ones about data readiness, realistic timelines, and what good actually looks like for your specific context. If you are at the start of your AI adoption journey, or somewhere in the middle wondering whether you are on the right track, we would love to talk.


Dolo Miah, CEO & CTO at New Icon
