

by Professor Bu’Hussain Hayee, Clinical Director for Liver, Endoscopy and Gastroenterology, King’s College Hospital NHS Foundation Trust, and Dr Olaolu Olabintan, Advanced Endoscopy Research Fellow, PhD Candidate and Gastroenterology SpR, King’s College Hospital
Artificial intelligence is no longer a futuristic concept in gastrointestinal endoscopy. It’s already here, embedded into colonoscopy rooms, quietly highlighting polyps in real time and, increasingly, offering diagnostic suggestions about what we see on screen.
As people working at the intersection of clinical endoscopy and AI safety, we’ve found that much of the current conversation focuses on performance metrics: adenoma detection rate, sensitivity, false positives, and recently, cost-effectiveness.
These are important. But they’re not the whole story.
What interests us just as much, and what often gets less airtime, is how AI changes our responsibilities as clinicians. Not just legally, but professionally and ethically. What does it mean to practise endoscopy well when part of the “seeing” is shared with an algorithm?
This blog reflects on that question, drawing on our recent review in Frontline Gastroenterology, “AI in Endoscopy: navigating risk, responsibility, and ethical challenges”.
Current AI use in endoscopy
The most established AI tools in endoscopy today are computer-assisted detection (CADe) and computer-assisted diagnosis (CADx). CADe flags possible lesions during withdrawal, while CADx aims to characterise these lesions in real time to support resection and surveillance decisions.
There’s strong evidence that CADe can improve adenoma detection rate—an outcome linked to reduced post-colonoscopy colorectal cancer. That’s a real and meaningful clinical gain. CADx, meanwhile, holds promise for reducing unnecessary histology and streamlining workflows, though recent data suggest its added value over expert optical diagnosis may be more limited than originally hoped.
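To make the detection side concrete, the sketch below shows, in deliberately simplified Python, the shape of a CADe loop: run a detector over each video frame and surface anything scoring above a confidence threshold. Every name here (the `Detection` class, the `detector` callable, the 0.5 threshold) is hypothetical; real commercial systems are proprietary and considerably more sophisticated.

```python
# A minimal, illustrative sketch of a CADe-style detection loop.
# Nothing here reflects any vendor's actual implementation: the
# Detection class, the detector callable, and the 0.5 threshold
# are placeholders chosen for clarity.
from dataclasses import dataclass
from typing import Callable, Iterable, Iterator

@dataclass
class Detection:
    x: int                # bounding-box top-left corner (pixels)
    y: int
    width: int
    height: int
    confidence: float     # model's score that this region is a lesion

def flag_candidate_lesions(
    frames: Iterable,                       # video frames from the scope
    detector: Callable[[object], list],     # model: frame -> list of Detections
    threshold: float = 0.5,                 # minimum score to alert on
) -> Iterator[tuple]:
    """Yield (frame_index, detection) pairs worth drawing on screen."""
    for i, frame in enumerate(frames):
        for det in detector(frame):         # one inference pass per frame
            if det.confidence >= threshold:
                yield i, det                # shown to the endoscopist as a box
```

The most consequential line is the threshold comparison: set it low and the endoscopist is interrupted by false alarms; set it high and subtle lesions go unflagged. That tuning decision, made long before the scope enters the patient, already shapes clinical behaviour.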
But these systems are not neutral observers. They shape how we look, where we focus our attention, and how confident we feel in our decisions. And that’s where risk starts to become more complicated than a simple false-positive rate.
When assistance quietly becomes influence
One concern that comes up repeatedly in discussions with colleagues is over-reliance. Not blind trust, necessarily, but subtle shifts in behaviour.
For example, if an alert doesn’t come up, do we subconsciously relax our search? If the system flags something confidently, do we feel pressure to agree?
Studies suggest AI can change visual scanning patterns, particularly in trainees. Over time, this raises uncomfortable questions about deskilling and dependency. The paradox is that a system designed to support detection could, if poorly integrated, weaken the very skills it’s meant to augment.
None of this means CADe or CADx should be abandoned. But it does mean we need to be honest about how human–AI interaction works in real-world clinical practice, not just in controlled trials.
Forward-looking responsibility: what clinicians must do before harm occurs
A concept we found useful in thinking this through is forward-looking responsibility. In simple terms, these are the things clinicians ought to do in advance to reduce the risk of future harm.
In the context of AI-assisted endoscopy, this includes:
- Technical understanding: Not just how to turn the system on, but when it may underperform, for example with poor bowel preparation, partial views, uncommon pathology, or under-represented patient groups.
- Critical engagement: Treating AI output as information, not instruction. The final judgement still sits with the endoscopist.
- Monitoring and vigilance: Being alert to unexpected behaviour, performance drift, or patterns of false reassurance (a simple sketch of drift monitoring follows this list).
- Avoiding complacency: Continuing to practise core detection skills even when AI is present.
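As a concrete illustration of the monitoring point above, here is a minimal sketch, assuming a unit tracks its own adenoma detection rate (ADR): a rolling window of recent procedures is compared against a locally agreed baseline, and a warning is raised if the rate slips. The window size, tolerance, and baseline are invented for illustration; a real service would set them through local governance and audit.

```python
# Illustrative drift check for AI-assisted colonoscopy: compare a rolling
# adenoma detection rate (ADR) against the unit's baseline. All numbers
# here are placeholders, not recommendations.
from collections import deque

def make_adr_monitor(baseline_adr: float, window: int = 200,
                     tolerance: float = 0.05):
    """Return a recorder that ingests one procedure outcome at a time
    (True = at least one adenoma detected) and flags suspected drift."""
    recent = deque(maxlen=window)

    def record(adenoma_detected: bool):
        recent.append(adenoma_detected)
        if len(recent) < window:
            return None                     # too few procedures to judge
        rolling_adr = sum(recent) / window
        if rolling_adr < baseline_adr - tolerance:
            return (f"Possible drift: rolling ADR {rolling_adr:.1%} "
                    f"vs baseline {baseline_adr:.1%}")
        return None

    return record

# Hypothetical usage for a unit with a 35% baseline ADR:
monitor = make_adr_monitor(baseline_adr=0.35)
alert = monitor(adenoma_detected=True)      # None until the window fills
```

The code is trivial; the habit it represents is not. Treating AI-assisted performance as something to audit continuously, rather than assume, is what forward-looking responsibility looks like in practice.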
None of this is especially glamorous. But it’s essential. The safest AI system in the world can still be risky if forward-looking responsibility is not exercised.
Talking to patients about AI: still an open challenge
Another area where responsibility is evolving is patient communication. Patients are increasingly aware that AI is used in healthcare, but understanding is often superficial. “The computer helps the doctor” is true, but incomplete. We don’t think clinicians need to explain neural networks or training datasets. But we do have a responsibility to explain, in clear terms:
- How AI is being used during the procedure.
- Why it’s being used and what its limitations are.
This matters for informed consent, but also for trust. Transparency isn’t about overwhelming patients with detail; it’s about respecting their right to understand how decisions about their care are being supported.
Who is responsible when things go wrong?
Traditionally, medicine places responsibility squarely on the clinician. If something is missed, the assumption is that someone failed to see or act.
AI complicates this picture. If a lesion is missed, despite AI support, was it:
- A human error?
- An algorithmic limitation?
- A training issue?
- A system-integration problem?
Often, it’s some combination of all four. This is why there’s growing interest in shared responsibility models: approaches that recognise AI-enabled care as a “sociotechnical” system involving clinicians, institutions, developers, regulators, and patients.
Importantly, this doesn’t let anyone “off the hook.” Shared responsibility doesn’t dilute accountability—it clarifies it. Each actor remains fully responsible for their part of the system, rather than pretending the outcome rests with a single individual at the sharp end.
Why this matters now
AI in endoscopy is still evolving, but its direction of travel is clear. These systems will become more embedded, more automated, and harder to disentangle from routine practice. If we don’t actively shape how responsibility is understood and supported, we risk one of two outcomes:
- Over-confidence, where clinicians defer too readily to AI.
- Over-burden, where responsibility is unfairly concentrated on individuals using tools they didn’t design.
Neither is good for patients or professionals.
Where does this leave us?
For us, the key takeaway is this:
AI doesn’t remove responsibility from clinicians—it reshapes it.
Used well, CADe and CADx can raise standards, reduce variability, and support safer care. Used poorly, they can introduce new risks while giving a false sense of security.
Getting this right requires more than regulation or better algorithms. It requires:
- Thoughtful training
- Clear governance
- Honest conversations about limitations
- And support for clinicians navigating this new terrain
The future of AI-driven endoscopy won’t be defined by code alone. It will be defined by how responsibly we choose to use it.