Augmenting Humanity: The Role of AI in Modern Healthcare

[Photo: Tom Lawton, a white man with close-cropped brown hair, wearing a dark navy scrub top.]

By Professor Tom Lawton, Clinical Lead for Artificial Intelligence, NHS North East and Yorkshire

As a front-line clinician I’ve seen all sorts of promises from technology – and witnessed its capacity either to empower, or to overwhelm and end up switched off. This is even more pronounced with AI, whose promise has reignited a crucial conversation: for AI to be genuinely useful it must work with doctors, nurses, and allied health professionals (AHPs) to augment them, not replace them. The rollout of AI must be driven by the real problems we face in the NHS, not by vendors who may have ‘solved’ an issue that was never the real concern. This requires intention, vigilance, and a commitment to keeping humans at the heart of the solution. In this post I’ll break down why AI needs to be a tool that augments human performance – not a substitute for it – and what that really means for healthcare.

AI as Human Augmentation, Not Replacement

A common concern from colleagues and patients is that AI will replace human roles in healthcare. This idea is promoted far too frequently by those who know AI better than they know healthcare, yet it’s simply not realistic. Replacement ends up with neither the human nor the AI system doing what they do best. Worse, the human may end up acting as a ‘liability sink’: taking legal responsibility for AI errors without having the direct understanding and control to make that responsibility meaningful. Meanwhile AI, limited to acting on data available electronically, may miss important patient context – ideas, concerns, expectations, and that all-important “end-of-the-bed test”. Everyone loses, particularly the patient.

Take a typical day in a busy surgery. As well as seeing patients, a GP might spend hours sifting through patient notes, coding diagnoses, or coordinating referrals. Rather than taking on the front-line work, AI might automate these tasks – summarising and searching notes, handling some administrative work – so clinicians can focus on what they do best: listening, connecting, and making complex decisions with patients.

In our region of North East and Yorkshire we found that 63% of healthcare staff were already using AI tools. But the key is how they’re used. For example, AI ‘ambient voice technology’ can transcribe and summarise consultations, freeing doctors to engage fully with patients: that’s augmentation. But if we try to use AI to make treatment decisions itself, problems appear. There’s no point just trying to build smarter AI; the goal is to make the human-AI team more effective.

Appropriate Use of AI: Know Your Limits

AI excels at repetitive, data-heavy, or pattern-based tasks. But it’s not a replacement for clinical judgment. It works best in these sorts of situations:

  • Administrative tasks: Automating appointment bookings, clinical coding, or searching documentation.
  • Data analysis: Identifying trends in patient outcomes or predicting risks – such as the early detection of sepsis.
  • Summarisation: Condensing lengthy notes into concise insights, or answering more focused questions.

Using AI for clinical decisions can lead to sub-optimal results. For now, clinicians must make the final call – which effectively means repeating the AI’s work unless they are prepared to trust it without checking – and one of three related problems may appear:

  • Automation bias – Over-trusting AI could turn human review into a token gesture.
  • Anchoring – Clinicians might fixate on AI outputs and potentially ignore other useful information.
  • Algorithmic deference – There is a temptation to follow AI recommendations simply because they are easier to justify, even when clinical judgement might suggest otherwise.

Thus, while AI can significantly streamline processes and provide useful information, the Shared CAIRE project recommends limiting it to this role and not allowing it to provide a decision or even recommend a particular course of action – at least for now.

Biases, Liability, and Training

AI can only ever reflect the data it’s trained on, and that data is often biased. If an AI tool is trained on datasets that under-represent certain populations, it may fail to detect conditions in those groups. As well as degrading performance, this is a serious health equity issue.

For example, given that the white, less deprived population may already find it easier to access healthcare, AI algorithms may end up being trained disproportionately on their data, while “data poverty” in other groups widens existing healthcare inequalities. We have already seen this with pulse oximetry technology, and because AI training is less transparent than a manually constructed algorithm, it is even easier for biases to go unnoticed.

To return to the risk of humans becoming ‘liability sinks’: present rules place the burden of error on the clinician, even when the mistake originates from an AI tool. But our research shows that clinicians are happy to take this responsibility – if they are given the tools and training to do so. They need to understand the types of AI system they will be using – from machine-learnt models through image classifiers to generative transformers – and, more importantly, their appropriate uses and limitations. They need information on how an AI system was trained, so they can understand its biases and meaningfully question whether its output is relevant to the patient in front of them. This digital literacy training needs to be available throughout the healthcare system – from student nurses and doctors just entering the workforce to those of us watching our workplaces become ever more digital. We all need a meaningful understanding of the tools we use.

A Problem-Led, Not Vendor-Led, Culture

One of the most frustrating issues I’ve observed as a clinician is the flood of vendor-driven AI. Companies push tools because they are innovative or marketable, not because they solve real clinical problems. We need a healthcare-led approach. In practice this could look like:

  • Start with the problem: A tool that simply reorders waiting lists feels like rearranging the Titanic’s deckchairs; however, one which helps healthcare staff deal with the enormous burden of information and documentation could save time and help get those operations done.
  • Collaborative design: AI tools must be co-created with those who use them – continually engaged throughout the process rather than just as a tick-box at the start and end.
  • Measure the impact: Does this tool reduce workload, improve outcomes, or enhance patient experience? If not, it’s not worth it.

By reversing the current approach of designing ‘a solution looking for a problem’, healthcare-led AI has the potential to benefit patients, clinicians and AHPs without causing additional and unforeseen problems.

What’s Next? A Call for Collaboration

Responsible AI isn’t the responsibility of one group. It’s a team effort:

  • Healthcare staff must advocate for tools that solve real problems and that support, rather than overwhelm, the people using them.
  • Healthcare systems should set up centralised procurement to reduce duplication of effort, and ensure tools meet NHS standards.
  • Vendors need to prioritise transparency, explainability, and collaboration.
  • Policymakers must create frameworks that balance innovation with safety, with clear accountability for AI errors.
  • And everyone in the AI sphere should work on public engagement to build trust.

As we move forward, we should be asking: “Does this AI tool make our work easier, safer, and more human?” If the answer isn’t a resounding “yes,” then we need to rethink.

AI has the potential to transform healthcare for the better. But only if we use it wisely. By reinforcing clinician autonomy, mandating ongoing digital literacy training, and adopting a problem-led strategy for AI deployment, we can harness technology to enhance patient care while preserving the essential human touch.

Together, we can ensure AI serves patients and healthcare staff, not the other way around.