
by Faye Cameron, Business & Partnerships Manager, Centre for Digital Innovations in Health and Social Care (CDIHSC), Faculty of Health Studies, University of Bradford
A key part of my role is to facilitate collaboration on cutting-edge projects. I do this through strategic partnerships and shared resources and infrastructure, connecting organisations that have the potential to revolutionise health and social care services in Bradford and beyond. This includes companies that are developing digital healthcare technologies, university researchers with expertise in the development and evaluation of digital health, and health and social care organisations (including the NHS). While much research about healthcare AI is taking place, a key recurring question is: ‘How can we safely and successfully close the gap between AI research and AI practice?’ My role focuses on this gap.
Why AI now?
To clarify terms, healthcare AI is not a single product; it is a family of approaches and applications spanning diagnosis and treatment, patient engagement and adherence, and administrative workflows.
AI was what (I thought) I had known for about a decade as Machine Learning (ML), Natural Language Processing (NLP) or expert systems, so I did not see AI as a shiny new concept. And so, since CDIHSC’s inception last August, I have endeavoured to understand what was really meant by the loud and proud AI references promoted by organisations at every digital health conference I attended. Why the excitement? Why did it look and feel like a race? Was this an evolution of the old, or the era of revolution? Are we in the AI era, as we were in the dot-com era? Is this the beginning of a new industrial revolution?
AI in healthcare
The way I see it, the gap between AI research and the application of AI in healthcare can be bridged by high-quality research and evaluation.
This means engaging the very people who will be using the technology – patients, service users, carers, and health and social care professionals – to ensure that technology is fit for purpose, has meaningful impact, and delivers maximum benefits for individuals and organisations. Digital solutions must put end-user needs and experiences first to ensure sustainable outcomes in the health and social care sector.
From research to practice
High-quality research and evaluation have an essential role to play in the successful use and adoption of AI in clinical practice. Researchers carefully analyse, strategically plan, and collaborate with project stakeholders, working to a rigorous, robust process. They thoroughly scope out problems and challenges, defining these and identifying any gaps where improvements can be made. They gather data and feedback, providing valuable insights into how AI tools actually perform – in design, content, and functionality – when put to work in a medical setting (e.g. an operating theatre if the tool is used for surgery) or in a home environment if it is a medical device used by a patient in the community.
Observing and evaluating how real users, such as health professionals and members of the public, interact with technology is key. Researchers undertake usability testing to answer questions like: Can users successfully complete tasks? How much time and effort does this take? How easy is the technology to learn and become proficient with? Do errors occur, can they be resolved – how, and how quickly – and can they be prevented from recurring?
What robust evaluation delivers
Robust evaluation research involves systematically assessing the effectiveness, value and impact of digital health interventions and tools, like clinical AI; drawing conclusions about whether claimed benefits for users and the health and care system are realised; and making recommendations based on the findings. Businesses, technology companies, and health and social care providers can then reflect and implement any essential improvements to technology and service, which in turn helps improve the experience and benefits for patients and professionals.
Safety and trust
My observation is that AI has been a significant focus for many years across many sectors, and there is visible excitement – sometimes hype; however, its use and adoption in clinical practice has so far been limited.
Healthcare AI certainly seems ripe to revolutionise care, with great potential (and greater expectation) to improve diagnostic accuracy and enable earlier diagnosis, accelerate clinical trials, personalise treatments, cut NHS waiting times, save clinicians time, and deliver cost savings. But the harsh reality is that this is set against the backdrop of health and social care systems ever challenged to deliver effective, high-quality patient care amid sustained workforce and budget shortages. And in healthcare, AI must be safe, ethical, accurate, efficient, and effective; patient safety, clinician and patient trust, evidence-based practice, and data privacy are non-negotiable.
It is no coincidence that in June this year, the UK joined the HealthAI Global Regulatory Network as its first ‘Pioneer Country’, with the MHRA (Medicines and Healthcare products Regulatory Agency, an executive agency of the Department of Health and Social Care) committed to working collaboratively with regulators around the world to help make AI in healthcare safer and more effective for patients, through AI reform and regulation. Together, the network is working to share early warnings on AI safety and risks, monitor how AI tools perform in practice, strengthen real-world evidence through work with researchers, NICE (the National Institute for Health and Care Excellence, which produces guidance for the NHS) and the NHS, and shape global standards.
AI is not just a tool bolted on at the end of care. AI is increasingly integrated into decision-making, but outputs are only as good as the inputs and workflows. We are talking AI-enabled, process-driven change. So, before AI-enabled new medical technology can even be adopted by hospital trusts and health and social care organisations, and used in practice, it is essential that clinical processes and pathways are reviewed, tweaked, and optimised.
About the CDIHSC
The CDIHSC team is multidisciplinary, talented, and diverse, with an approach to research and evaluation that is challenge-driven: problem-oriented, collaborative, interdisciplinary, and impact-focused. CDIHSC academic researchers are driving innovation in digital health collaboration, playing a leading role in advancing digital health through expert contributions to a wide range of collaborative projects with health and care partners and digital health companies.
Highlights include:
- Enhancing patient and clinician interfaces: Leading the co-design and usability evaluation of both patient-facing and clinician-facing systems.
- Pioneering the integration of AI in histopathology: Heading a key work package to identify the requirements for introducing AI into histopathology workflows and leading its evaluation.
- Evaluating digital technologies in real-world settings: Partnering with Joii Ltd.
- Championing patient safety through data and intelligence: Yorkshire and Humber Patient Safety Research Collaboration – Professor Rebecca Randell co-leading the Safety Intelligence theme and Dr Muhammad Faisal serving as Centre Statistician.
Interested in evaluating AI safely and effectively? Get in touch via email at CDIHSC@bradford.ac.uk to explore ways to partner and collaborate.