March 2025
In January 2025, the UK government unveiled its ambitious AI Opportunities Action Plan, positioning artificial intelligence as a cornerstone of the NHS's future. The timing couldn't be more critical. With 61,500 patients enduring 12-hour waits for hospital beds in January alone, over 360,000 people visiting A&E more than five times annually, and GP appointments increasingly difficult to secure, our healthcare system desperately needs solutions that can alleviate these pressures.
The healthcare sector itself appears ready to embrace AI – a Health Foundation survey revealed that 76% of NHS staff support AI's role in patient care. The public's growing comfort with digital healthcare is evident too, with 10 million more people using NHS websites and applications in 2021 compared to the previous year.
Yet a striking paradox emerges from our February 2025 survey of 2,000 UK adults. While people increasingly embrace technology for health management, they remain deeply sceptical about AI-powered healthcare solutions specifically.
The survey findings paint a clear picture of this disconnect. About 65% of respondents believe people have become more health-conscious in the past five years, with the same percentage agreeing that wearables and apps can help them take responsibility for their health. An even larger proportion—67%—support wider use of healthcare technology if it allows professionals to focus on those who need care most urgently.
However, when AI enters the equation, confidence plummets dramatically. Only 29% of UK adults would trust AI to provide basic health advice. This drops further to just 19% for personalised health advice based on medical history, 15% for mental health support, and a mere 14% for AI chatbots replacing doctor appointments for minor concerns.
Perhaps most concerning is how these trust issues break down along demographic lines. While 71% of 18-34 year-olds feel comfortable using digital healthcare tools, only 47% of those aged 55+ share this comfort. Similarly, 48% of younger people understand AI applications in healthcare compared to just 21% of older adults.
People with disabilities express even greater concerns—73% worry AI could exclude those who aren't tech-confident (versus 66% of the general population), and 65% fear AI might prioritise efficiency over personal care needed by some patients (versus 59% overall).
What's driving this trust deficit? For many, it's a lack of transparency and evidence. A substantial 61% of respondents believe there simply isn't enough evidence yet to determine if AI in healthcare is trustworthy or reliable. Without understanding how AI works, how decisions are made, and what safeguards exist, public confidence remains stubbornly low.
This trust deficit goes beyond mere unfamiliarity with technology. It stems from a complex interplay of concerns that healthcare organisations and technology developers must address before adoption can move forward.
Compounding these concerns is anxiety about data security and privacy. Healthcare data represents some of the most sensitive personal information—detailing not just physical conditions, but mental health challenges, genetic predispositions, and lifestyle choices. Many respondents likely worry about who has access to their data, how it might be used beyond direct care, whether it could be compromised, and if their information might be used in ways they haven't explicitly consented to. The survey suggests that without transparent data governance frameworks clearly communicated to patients, many remain unwilling to trust AI systems with their health information.
There's also the critical issue of bias and fairness. Several high-profile cases have demonstrated that AI systems can perpetuate or even amplify existing healthcare disparities if they're trained on non-representative data. People from marginalised communities have legitimate concerns about whether AI systems will serve them as effectively as others. Without transparency about how AI systems are tested for bias or what steps are taken to ensure equitable outcomes, many potential users remain sceptical about whether these technologies will benefit everyone equally.
Regulatory uncertainty further undermines confidence. Unlike pharmaceuticals or medical devices, AI healthcare applications exist in a relatively new and evolving regulatory landscape. Most patients don't understand what standards AI systems must meet before deployment, what ongoing monitoring exists, or who is ultimately accountable if something goes wrong. This regulatory ambiguity creates a trust vacuum—without clear oversight frameworks, patients are left wondering who is ensuring these systems are safe and effective.
Finally, there's the crucial issue of human oversight and intervention. Many patients worry that cost-cutting measures might lead to AI systems replacing rather than augmenting the expertise of human clinicians. Without clear explanations of how AI fits into the broader care ecosystem, what human oversight exists, and when human clinicians will intervene, patients may fear they're being relegated to algorithmic care without appropriate safeguards.
At its core, this is "an image problem". Despite AI's potential to enhance healthcare efficiency and free professionals to focus on more complex cases, many people still associate it with impersonal automation or overhyped promises.
The solution requires a multi-faceted approach:

- Transparency about how AI systems reach decisions and what safeguards exist
- Clear data governance frameworks, communicated plainly, so patients know who can access their information and how it will be used
- Rigorous testing for bias, with published evidence that outcomes are equitable across communities
- Well-defined regulatory standards and clear accountability when something goes wrong
- Explicit human oversight, so patients understand that AI augments clinicians rather than replacing them
The fundamental challenge isn't just improving AI healthcare solutions—it's building public trust in them. Healthcare is deeply personal, and people want reassurance that AI will enhance human care, not replace it.
With the right approach prioritising clarity, accessibility, and transparency, we can bridge the gap between cutting-edge innovation and public confidence. Public attitudes towards technology have shifted before, and AI will be no different. The question is whether we can guide this change thoughtfully enough to ensure AI delivers on its promise to transform healthcare for everyone, leaving no one behind.
Download the full nuom report for comprehensive insights from 2,000 UK adults, including detailed demographic breakdowns, expert analysis, and actionable strategies for building public confidence in AI-powered healthcare. It is essential reading for healthcare providers, policymakers, and technology developers committed to an inclusive digital health future that leaves no patient behind.