
By Dr. Bev Townsend, Research Fellow, York Law School.
The introduction of robots into medical settings is slowly becoming less science fiction and more science fact. But will the public be comfortable with this, and what do people really think about being treated by a medical robot?
This question encourages us to think about AI-enabled medical systems as they move into everyday reality. Our recent study explored exactly that — asking members of the public how they feel about the increasing possibility of AI and robotics adoption in healthcare settings.
As these technologies raise important social, ethical, cultural, and regulatory challenges, we questioned members of the UK public about their perspectives and expectations around medical AI adoption and any associated sociotechnical harm. The concern here was not only how the technology fits within society and the environment and how it integrates into clinical settings, but also its negative effects — direct or indirect — on individual and societal interests, values, and well-being. By giving a voice to end-users, we hoped to support and reinforce traditional AI policy interventions and introduce measures to address, prioritise, and mitigate risks of harm.
Gathering diverse perspectives
Participants in the study were encouraged to think about medical AI applications and the impact and potential harm these applications might have on themselves and on members of their groups and communities. As an illustration, the study introduced participants to DAISY — a prototype Diagnostic AI System for Robotic and Automated Triage and Assessment — developed in collaboration with the University of York and York and Scarborough Hospitals NHS Foundation Trust.
Our aim was to capture a diverse range of perspectives on how AI-enabled digital health technologies can transform healthcare delivery across various settings and to find out what this might mean to users.
Although many saw DAISY as a promising and helpful innovation — calling it a “good idea” and “of value” — some had strong reservations: “I will not be happy to be treated by such a robot.” While participants were ‘cautiously optimistic’ about medical AI adoption, all were concerned about potential sociotechnical harms associated with emerging and future medical AI technologies. Sociotechnical harm refers to any adverse impact, whether physical, psychological, social, or cultural, experienced by individuals or broader society as a result of adopting such technologies.
Participants were drawn from racially, ethnically, and linguistically diverse groups, as well as self-identified minority groups, and voiced a wide range of concerns. These included privacy and data-related concerns, the lack of human autonomy, the role of emulated empathy and epistemic injustice, and issues of deception and transparency, as well as the need to maintain the safety and effectiveness of the robotic system.
One participant commented: “[Although] I am somewhat concerned about the use of medical AI; I believe that the benefits of these technologies can outweigh the potential risks”. Overall, participants believed that to harness the benefits of AI, adoption should be measured, pragmatic, and fair.
Key concerns were the risk of exclusion, inequitable access, and furthering and deepening the “digital divide.” As one participant said: “I am worried about the technology divide leading to less access for marginalised sections of the community.” Another participant added: “I fear that older generations who are less tech-savvy would struggle to read the screens given the font size or use the touchscreen, as this is new to them,” while another expressed: “Some members of my family and community will find this difficult.”
Reasons for exclusion included technical literacy, system complexity, language barriers, financial constraints, apprehension and fear, and age. AI was also seen as a challenge “at a deeply human level, where the technology risks further eroding our connection to our biological and emotional realities.” A lack of human connection and a reduction in the qualitative experience of healthcare, as well as a loss of the “human touch,” were believed to increase feelings of alienation and diminish the quality and standard of care.
Can AI systems offer empathy and avoid injustice?
We found that further exploration is required into the conceptual distinction between what it means for systems to be empathetic (i.e. sympathetic to the feelings of users) and empathic (i.e. able to accurately read feelings). Most participants were doubtful that the former is or will ever be possible: “I would find it hard to believe that [the system] can provide empathetic care.” A participant commented: “Certain things [the robot] will never understand — sad and critical situations, tragic situations, and abuse, for instance.” While users may engage with medical AI, accepting its limitations, they expected to be treated with certain prosocial values in mind — that is, gently, humanely, and with dignity.
One interesting emerging view concerned epistemic injustice. Epistemic injustice occurs when a person is not believed, and their credibility is unfairly dismissed, because of the way they speak or present their views, often because they belong to a particular ethnicity, gender, age group, or socioeconomic class. It also involves people being treated unfairly in their capacity as knowers, whether by being excluded or silenced, having their meanings or contributions distorted or misrepresented, or having their status or what they have to say undervalued. Epistemic injustice may arise when the robot does not have the capacity to ‘receive’ a person’s testimony and therefore ignores, overlooks, or dismisses it:
“I am very worried that the system will find my stories less credible,” and “[human] doctors have previously disregarded my statements where I signalled that I was stressed or anxious — will this happen with the robot?”
There was also the concern that the robot “might take everything [I say] literally.” It may also arise where users are afforded no or limited opportunity to describe or comfortably express themselves: “I believe a drawback is that it is more difficult [for me] to assert [my] case with a robot than with a human.”
The need for value-led design
Most participants wanted to see these systems designed thoughtfully. That means embedding prosocial values like respect, equality, dignity, patience, gentleness, empathy, and personability from the start. Robots may not yet have a bedside manner, but the public expects them to uphold basic human values. Participants also cared about cultural alignment, with certain participants questioning how this might be achieved in practice. Concerns around deception and transparency were identified: “I would want to know if AI was involved and that I am talking to a machine.” For some, certain experiences are difficult to relay to a non-human system: “Because of past trauma, it is hard for me to speak about certain topics.” Others saw this as a potential benefit: “There may be less judgment compared to speaking with a human” and “I do not want to disclose certain things to a human doctor for fear of being judged.”
While many participants believed responsibility should be shared, they expected regulatory authorities to play an important role in addressing sociotechnical harm and to be responsible “for keeping people safe and preventing future harm”.
The adoption of medical robots and AI in healthcare offers enormous potential. However, as this study highlights, the public’s response is not one of unfettered enthusiasm. Instead, there is a clear desire for thoughtful, inclusive design that prioritises human values, empathy, and equity. People are cautiously optimistic, eager for the benefits of AI but wary of the risks — particularly the loss of human connection, exclusion from use, and the ethical implications of trusting robots with sensitive health data.
As AI becomes more embedded in healthcare, its success will be determined not just by technological advancement but by how well it responds to users’ requirements and societal concerns. While clinical safety and efficacy are imperative to safe and responsible AI adoption, we show how various social, ethical, and cultural normative values can emerge from interactive processes involving participant users.
Robot failure can have serious, adverse consequences for both clinical outcomes and patient experiences, which can erode public trust and undermine confidence in the healthcare institutions deploying them. By listening to the public and designing AI systems that reflect these principles, we can ensure that the future of healthcare is one that works for everyone.