Content Hub

Article: Changing the patient safety mindset: Can safety cases help?

By Mark Sujan, Ibrahim Habli

Safety cases reflect a systems-based approach that supports transparent reasoning and learning about safety at both the local and the wider health system level. We need to embrace the mindset that the safety case approach is neither simply a report nor a safety management tool: it is an approach to support safe clinical work.

Article: Ethics in conversation: Building an ethics assurance case for autonomous AI-enabled voice agents in healthcare

By Ibrahim Habli

The principles-based ethics assurance argument pattern is one proposal in the AI ethics landscape that aims to structure reasoning about, and to communicate and foster confidence in, the ethical acceptability of specific real-world AI systems in complex socio-technical contexts. This paper presents the interim findings of a case study applying this ethics assurance framework to the use of Dora, an AI-based telemedicine system, to assess its viability and usefulness as an approach.

Article: Implementing an artificial intelligence command centre in the NHS: a mixed-methods study

By Tom Lawton, Ibrahim Habli

Hospital ‘command centres’ use digital technologies to collect, analyse and present real-time information that may improve patient flow and patient safety. Bradford Royal Infirmary has trialled this approach, presenting an opportunity to evaluate its effectiveness and inform future adoption in the United Kingdom.

Article: The Need for the Human-Centred Explanation for ML-based Clinical Decision Support Systems

By Yan Jia, John McDermid, Nathan Hughes, Mark Sujan, Tom Lawton, Ibrahim Habli

Machine learning has shown great promise in a variety of applications, but the deployment of these systems is hindered by the “opaque” nature of machine learning algorithms. This paper highlights the need to develop human-centred explanations for machine learning-based clinical decision support systems, as clinicians who typically have limited knowledge of machine learning techniques are the users of these systems.

Article: Contextual design requirements for decision-support tools involved in weaning patients from mechanical ventilation in intensive care units

By Yan Jia, John McDermid, Nathan Hughes, Mark Sujan, Tom Lawton, Ibrahim Habli

Weaning patients from ventilation in intensive care units is a complex task. There is a growing desire to build decision-support tools to help clinicians during this process, especially those employing Artificial Intelligence. It is important to identify areas where decision-support tools may aid clinicians, and associated design requirements for such tools. This study analysed the work context surrounding the weaning process from mechanical ventilation in ICU environments, via cognitive task and work domain analyses.

Article: Development and translation of human-AI interaction models into working prototypes for clinical decision-making

By Yan Jia, John McDermid, Nathan Hughes, Mark Sujan, Tom Lawton, Ibrahim Habli

In the standard interaction model of clinical decision support systems, the system makes a recommendation, and the clinician decides whether to act on it. However, this model can compromise the patient-centeredness of care and the level of clinician involvement. This paper presents alternative models of human-AI interaction and illustrates how a co-design approach can be used to translate them into functional prototypes that can be tested with users to explore potential impacts on clinical decision-making.

Article: Stakeholder perceptions of the safety and assurance of artificial intelligence in healthcare

By Mark Sujan, Ibrahim Habli

There is an increasing number of healthcare AI applications in development or already in use. However, the safety impact of using AI in healthcare is largely unknown. This paper explores how different stakeholders (patients, hospital staff, technology developers, regulators) think about safety and safety assurance of healthcare AI.

Article: The role of explainability in assuring safety of machine learning in healthcare

By Yan Jia, John McDermid, Tom Lawton, Ibrahim Habli

Established approaches to assuring safety-critical systems and software are difficult to apply to systems employing ML where there is no clear, pre-defined specification against which to assess validity. Explainable AI (XAI) methods have been proposed to tackle this issue by producing human-interpretable representations of ML models which can help users to gain confidence and build trust in the ML system. This paper identifies ways in which XAI methods can contribute to safety assurance of ML-based systems.

Article: Clinicians risk becoming “liability sinks” for artificial intelligence

By Tom Lawton, Ibrahim Habli

Artificial Intelligence (AI) is often touted as healthcare’s saviour, but its potential will only be realised if developers and providers consider the whole clinical context and AI’s place within it. One of many aspects of that clinical context is the question of liability. As we move towards integrating AI into healthcare systems, it is important to ensure that this does not translate into clinicians unfairly absorbing legal liability for errors and adverse outcomes over which they have limited control.

Article: Moving beyond the AI sales pitch – Empowering clinicians to ask the right questions about clinical AI

By Ibrahim Habli, Mark Sujan, Tom Lawton

In order to fully realise the potential of healthcare AI and to ensure its suitability for purpose, we must empower clinicians and decision makers to see beyond headline-grabbing sales pitches, and to carefully frame their questions from a systems perspective to avoid hasty and overly simplistic conclusions. It is essential to acknowledge the power imbalance between technology companies, supported by influential policy makers and market forces, and an overburdened clinical workforce working with outdated digital and organisational infrastructure.

Article: Disagreeing with AI could be bad for your health

By Yan Jia, John McDermid, Tom Lawton, Ibrahim Habli

Artificial Intelligence is increasingly being used in healthcare, with a growing potential to improve clinical decision-making. Many AI systems now provide treatment recommendations, although human clinicians remain responsible for making the final decision about whether to implement an AI’s recommendation. To explore this further, the Shared CAIRE project is investigating how clinicians interact with AI in a variety of clinical scenarios.

Podcast: Defining AI safety

By Ibrahim Habli

This podcast discusses defining and contextualising AI safety and risk, given the existence of established safety practices in other industries.

Podcast: AI in Healthcare

By Tom Lawton

In this podcast, Dr Lawton discusses the sorts of technology and AI currently being used at his Trust in particular, and more widely in the NHS, along with the advantages that these technologies bring and the challenges they pose. Dr Lawton also shares thoughts on the future, both in terms of new AI and technology yet to be deployed, and where improvements can be made to existing AI and technology to improve patient safety and experience.

Project: ARTICULATE PRO

By Yan Jia, Ibrahim Habli

ARTICULATE PRO is a two-year project, funded by the Accelerated Access Collaborative and NHSX through a Phase 4 AI in Health and Care Award. The project officially began on 1 September 2021. Our remit is to investigate the deployment of AI (computer-assisted technology) in the prostate cancer pathway by using Paige Prostate to assist pathologists when reading prostate biopsies.

Article: Trustworthy and Ethical Assurance of Digital Health and Healthcare

By Ibrahim Habli

As data-driven technologies, such as digital twins or AI systems, continue to be used in critical sectors like healthcare, finance, and criminal justice, ensuring they are designed, developed, and deployed in a trustworthy and ethical manner is paramount. To harness the full potential of data-driven technologies while mitigating the inherent risks, we must prioritise building systems that are trustworthy and ethical.

Article: Multi-site validation of automated AI tool for screening of large bowel endoscopic biopsy slides

By Ibrahim Habli, Yan Jia

Podcast: Behind the paper – Moving beyond the AI sales pitch

By Ibrahim Habli

This episode dives into the Future Healthcare Journal paper “Moving beyond the AI sales pitch”. The discussion focuses on how clinical AI should support, not replace, clinicians, covering safety, trust, and practical issues around AI implementation in healthcare.