AI doctors on screen may be used for health care in the future — possibly even to help address medicine's empathy problem. (Adobe; this image was created using generative AI)

Modern medicine has an empathy problem. Artificial intelligence — done right — might be able to help ease it.

Despite the proliferation of communication training programs over the past decade or two, doctors often fail to express empathy, especially in stressful moments when patients and their families are struggling to hear bad news and make difficult decisions. Since empathy has been shown to enhance what patients understand and how much they trust their medical team, falling short compromises the quality of patient care.


Can AI help? That might sound like an ironic question, because doctors who struggle to express empathy can come across as robotic. Yet researchers and health care professionals are increasingly asking it, and not just because we’re living through an AI hype cycle.

One reason for the growing interest in AI to help solve medicine’s empathy problem is that this aspect of medical care has proven particularly hard to improve. This isn’t surprising, given that physicians face ever-increasing pressures to quickly see large numbers of patients while finding themselves drowning in paperwork and a myriad of administrative duties. These taxing conditions lead to both a lack of time and, perhaps more importantly, a lack of emotional energy. An American Medical Association report indicated that 48% of doctors experienced burnout last year.

Given the magnitude of the empathy problem and its significant clinical and ethical stakes, various possible uses of AI are being explored. None of them are likely to be silver bullets and, while each is well-intentioned, the entire endeavor is fraught with risks.


One rather extreme option has been suggested by Dr. Arthur Garson Jr., a member of the National Academy of Medicine and a clinical professor of health systems and population health sciences at the University of Houston. He urges us to prepare for a time when some human doctors are replaced with AI avatars. Garson thinks it’s possible, even likely, that AI-powered avatars displayed on computer screens could be programmed to look “exactly like a physician” and have “in-depth conversations” with “the patient and family” that are customized to provide “highly appropriate reactions” to a patient’s moods and words.

Whether AI will ever get this advanced raises tricky questions about the ethics of empathy, including the risk of dehumanizing patients, because, for the foreseeable future, computer programs can’t experience empathy. To be sure, not all human doctors who sound empathetic truly feel that way in the moment. Nevertheless, while doctors can’t always control their own feelings, they can recognize and respond appropriately to patients’ emotions, even in the midst of trying circumstances.

Simulated AI “doctors,” no matter how apparently smart, cannot truly care about patients unless they somehow become capable of having the human experience of empathy. Until that day comes — and it may never arise — bot-generated phrases like “I’m sorry to inform you” seem to cheapen the very idea of empathy.

A more moderate vision revolves around various applications of generative AI to support doctors’ communication with patients in real time. Anecdotal evidence suggests this use of the technology is promising. Dr. Joshua Tamayo-Sarver, for example, has given a moving account of how ChatGPT saved the day in a California emergency department when he struggled to find the right words to connect with a patient’s distraught family. Preliminary academic research, like a much-discussed article in JAMA Internal Medicine, also suggests generative AI programs based on large language models can effectively simulate empathetic discourse.


Another recent study, however, suggests that while the content of an empathic message matters, so does the messenger’s identity. People rate AI-generated empathic statements as better on average than human-generated ones if they don’t know who or what wrote them. But the machine’s advantage disappears once the recipient learns that the words had been generated by a bot.

In a forthcoming book, “Move Slow and Upgrade,” one of us (E.S.) proposes the following possibility: integrating a version of generative AI into patient portals to help doctors sound more empathetic. Patients see portals as a lifeline, but doctors spend so much time fielding inbox messages that the correspondence contributes to their burnout. Perhaps a win-win is possible. Doctors might improve patient satisfaction and reduce the number of follow-up questions patients ask by pushing an empathy button that edits their draft messages.

While this application of AI-generated empathy is promising in a number of ways, it also carries risks even if the obvious challenges are resolved: that the technology consistently performs well, is routinely audited, is configured to be HIPAA compliant, that neither doctors nor patients are forced to use it, and that doctors use it transparently and responsibly. Many tricky issues would still remain. For example, how can doctors use AI quickly and oversee its outputs without placing too much trust in the technology’s performance? What happens if the technology creates a multiple persona problem, where a doctor sounds like a saint online but comes across as a robot in person? And how can doctors avoid developing a new form of AI dependence that further erodes human communication?

Some visions capitalize on AI’s potential to enhance doctors’ communication skills. For example, one of us (T.C.) is involved with the SOPHIE Project, an initiative at the University of Rochester to create an AI avatar trained to portray a patient and provide personalized feedback, which could help doctors improve their ability to express empathy appropriately. Preliminary data are promising, although it is too soon to draw firm conclusions, and further clinical trials are ongoing.


This approach has the advantages of being reproducible, scalable, and relatively inexpensive. It will, however, likely have many of the same limitations as traditional, human-actor-based communication training courses. For example, on the individual level, communication skills tend to degrade over time, requiring repeated training. Another issue is that the doctors who most need communication training may be least likely to participate in it. It is also unrealistic to expect SOPHIE-like training programs to overcome system-level stresses and dysfunction, which are a major contributor to the empathy problem in the first place.

Because technology changes so quickly, now is the time to have thoughtful and inclusive conversations about the possibilities we’ve highlighted here. While the two of us don’t have all the answers, we hope discussions about AI and empathic communication are guided by an appreciation that both the messages and the messengers matter. Focusing too much on what AI can do can lead to overestimating the value of its outputs and undervaluing essential relationships of care — relationships that, at least for the foreseeable future, and perhaps fundamentally, can occur only between human beings. At the same time, prematurely concluding that AI can’t help may unnecessarily contribute to preserving a dysfunctional system that leaves far too many patients seeing doctors as robotic.

Evan Selinger, Ph.D., is a professor of philosophy at Rochester Institute of Technology and the co-author, with Albert Fox Cahn, of the forthcoming book “Move Slow and Upgrade: The Power of Incremental Innovation” (Cambridge University Press). Thomas Carroll, M.D., Ph.D., is an associate professor of medicine at the University of Rochester Medical Center.