The AI Healthcare Divide: Innovation vs. Reality

by Dr Margaret Horton, Independent Industry Consultant, Visiting Fellow in Healthcare, Centre for Assuring Autonomy

As AI systems in healthcare mature, a growing disconnect has emerged between the realities of adopting AI in complex healthcare ecosystems such as the NHS and the ways that tech companies design, develop, validate and position these systems. Here I outline six key areas of tension and explore how AI safety approaches might narrow this divide.

Understanding the different cultures of startups and healthcare providers

While AI has promising potential to improve medical practice, the journey from impressive algorithm results to meaningful patient outcomes exposes stark contrasts in priorities, expectations, and accountability structures. Understanding these tensions is essential for healthcare AI product teams, clinical leaders, and investors who must navigate the complex landscape where cutting-edge technology meets the evidence-based approach of medical practice.

Much of today's innovation in artificial intelligence (AI) for healthcare emerges from the distinctive culture of startups and venture capital, fed by academic departments that supply startup talent in machine learning and AI. Startup culture is widely celebrated, and has also drawn due criticism and scrutiny, such as Karen Hao's insightful reporting in Empire of AI. Undeniably, startups provide a singular environment in which new ideas, dedication and talent produce rapid technological advances. Teams can build and ship products quickly, and apply unconventional approaches to otherwise intractable problems. In healthcare, AI-based systems stemming from startups are already helping patients understand their cancer progression risk and make treatment decisions that simply would not be possible with today's standard tests and assessments.

This startup culture often stands in stark contrast to the realities of healthcare delivery, and to the hospitals and healthcare systems considering whether to integrate AI solutions into their clinical pathways. Healthcare delivery is a delicately run operation, and providers focus not on disruption but on providing critical services to patients and driving health outcomes. In my experience in AI image-based diagnostics, first-of-a-kind clinical deployments of AI into hospitals and healthcare providers can be highly complex to realise technically, and the lists of requirements around data governance, regulatory clearances, risk assessments, quality standards and validation can be daunting. Simply taking full stock of these requirements, including the many provisional guidelines still in development, can feel like an overwhelming and endless effort.

Six Critical Tensions in Healthcare AI Adoption

Can combining AI safety methods and an innovation mindset resolve tensions?

The tensions identified above certainly reflect many of the barriers to sustainable AI adoption, yet I believe they also present an opportunity. What if the same grit, ingenuity and teamwork that power the best aspects of startup culture could be applied to resolving these tensions? I see the potential of an AI safety systems mindset to help here: the creativity used to design novel technologies in a startup environment could be redirected towards identifying impactful edge cases and potential downstream harms, and alongside perfecting a product there could be a multidisciplinary drive for equal rigour in the clinical study protocols that assess real patient benefits and meet evidence benchmarks. One idea is a Safety Hackathon, in which AI, product and clinical teams identify the scenarios where the AI system may behave unexpectedly – for example rare disease subtypes, sample quality issues, or end users with very different dispositions and levels of trust towards AI – and are encouraged to test the AI's performance at these unexplored boundaries, as sketched below.
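To make the hackathon idea concrete, here is a minimal sketch of what such boundary testing might look like in practice: comparing a model's performance on routine cases against hand-picked edge-case slices. The column names, slice definitions and synthetic data are illustrative assumptions on my part, not drawn from any real deployment.

# Minimal sketch of "safety hackathon" style slice evaluation:
# compare a model's performance on all cases against edge-case
# slices (rare disease subtypes, degraded sample quality).
# All column names and the toy data below are illustrative.

import pandas as pd
from sklearn.metrics import roc_auc_score

# Hypothetical evaluation set: model scores plus slice metadata.
df = pd.DataFrame({
    "y_true":         [0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0],
    "model_score":    [0.1, 0.9, 0.8, 0.2, 0.4, 0.3, 0.7, 0.6, 0.5, 0.1, 0.9, 0.2],
    "disease_rare":   [False, False, True, False, True, False,
                       True, False, False, True, False, False],
    "sample_quality": ["good", "good", "poor", "good", "poor", "good",
                       "good", "poor", "good", "poor", "good", "good"],
})

# Slices proposed by the multidisciplinary team; each is a boolean mask.
slices = {
    "all cases":           pd.Series(True, index=df.index),
    "rare disease":        df["disease_rare"],
    "poor sample quality": df["sample_quality"] == "poor",
}

for name, mask in slices.items():
    subset = df[mask]
    # AUROC is undefined if a slice contains only one class.
    if subset["y_true"].nunique() < 2:
        print(f"{name:>20}: n={len(subset):3d}  AUROC undefined (single class)")
        continue
    auc = roc_auc_score(subset["y_true"], subset["model_score"])
    print(f"{name:>20}: n={len(subset):3d}  AUROC={auc:.2f}")

The value of the exercise lies less in the metric itself than in the slice definitions: each one encodes a clinically motivated hypothesis about where the system might fail, surfaced by people who know the patients and workflows.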

This is something with which I have direct experience. In a recent research project, Dr Yan Jia from the CfAA and I collaborated with an NHS hospital[2] to examine how new risks can be identified and mitigated, providing assurance when AI diagnostic assistance is deployed in clinical settings. It was refreshing to consider the potential failure modes together with the clinical end users, who held a balanced view of AI's benefits and trade-offs because they had experienced the system in their own workflows and their own patient populations. The result paints a transparent and nuanced picture far closer to reality than a conventional in-silico validation study can achieve.

As a Visiting Fellow at the Centre for Assuring Autonomy, continuing my work with Dr Jia, I bring my industry experience and insights, a long list of questions, and endless curiosity to new research on AI safety in deployed clinical environments. Together with experts at the CfAA, including Dr Nathan Hughes, we are taking these topics further and asking new questions as we engage with the many experts advancing the science of AI safety in healthcare. The fascinating part for me is that, because this field is moving so rapidly, new insights and best practices from multiple disciplines emerge daily.

[1] Lawton T, Morgan P, Porter Z, et al. Clinicians risk becoming ‘liability sinks’ for artificial intelligence. Future Healthc J. 2024;11(1):100007. Published 2024 Feb 19. doi:10.1016/j.fhj.2024.100007

[2] Jia Y, Verrill C, White K, et al. A deployment safety case for AI-assisted prostate cancer diagnosis. Comput Biol Med. 2025;192(Pt B):110237. doi:10.1016/j.compbiomed.2025.110237