When Kim Hilliard shows up at the clinic at University Medical Center New Orleans, she’s not there simply for an eye exam. The human touches she gets along the way help her navigate her complicated medical conditions.
In addition to diabetes, the 56-year-old has high blood pressure. She has also had back surgery and has undergone bariatric surgery to help her control her weight.
Hilliard is also at risk of blindness, which can result from a condition called diabetic retinopathy. And on this day in February, her vision will be evaluated by a new practitioner: a piece of software.
Automation like this is starting to infiltrate medical care. Depending on how it’s deployed, it could help reduce medical errors and potentially reduce the cost of care.
It could also create a gulf between health caregivers and people of more modest means.
“My fear is we will end up with what I’ve been calling a ‘health care apartheid,’ ” says Sonoo Thadaney Israni, at the Stanford University medical school. “If we create algorithmic care and ‘kiosk’ it in some fashion — focusing on efficiency and throughput — the people who will end up having access and using it will be the ones who already lack privileges of various kinds.”
We are far from that dystopian world at the moment, but are we moving in that direction? That possibility concerns her.
Hilliard’s experience at the clinic underscores the importance of human contact. She’s here for an annual eye exam to look for signs of blindness that can arise in people with diabetes.
“I got the full diabetes when I made 40,” she says. It’s a challenge for her to stay on top of all her medical conditions. “I go to so many doctor’s appointments I get tired,” she says.
The software to identify early signs of diabetic retinopathy, called IDx-DR, can do that job without expert intervention, but skilled medical personnel at this clinic are, for the moment at least, still playing a hands-on role.
After Hilliard finishes the exam, nurse practitioner Chevelle Parker shows her images of her eye.
“If we zoom in here, we can see some little fat deposits here, OK?” Parker says. Hilliard leans in and studies the image of her retina.
“That can be from the foods you’re eating,” Parker says. “Think of some of the fatty foods you’re eating — sausage, bacon.”
Hilliard says she stopped eating those foods last fall, after her gastric bypass surgery.
“Well, when you were eating those, the deposits were being placed on the eye,” Parker explains. “That’s why we talk to you about your diet. And now that you know you can’t have that, this is the reason why, OK?”
Parker goes on to reinforce the dietary recommendations for diabetes. Hilliard should eat breakfast within an hour or so of waking up, and she should be sure to have some protein, rather than carbohydrates, at the end of the day.
Hilliard gratefully accepts the advice, along with a referral to an ophthalmologist, who will need to get a closer look at the signs of damage in her eye.
“I do what I can do to keep from going blind,” Hilliard says. “So whatever they tell me to do that’s what I do. At least I try.”
Hilliard’s experience is a stark reminder that health care is more than a simple transaction. Six in 10 adults in the United States have a chronic disease, and 4 in 10 have two or more, according to the Centers for Disease Control and Prevention.
This is the real world, in which computer algorithms are starting to take off in medicine.
“I think for too long we’ve had this assumption that any new technology is good, more is better,” says Abraham Verghese, a physician who works in partnership with Thadaney at a Stanford center that focuses on the human aspects of medical care.
“New is not always better,” he says as the three of us sit together in their office.
Medical care, like so much of our society, creates haves and have-nots, Thadaney says. “We need to make sure that technology doesn’t further exacerbate the issues of equity and inclusion.”
“Just to carry that thought forward,” Verghese says, “AI algorithms we already know are causing inequities in bail bonding, inequities in real estate,” as well as in policing. Unconscious racism and other biases get baked in, without the developers even being aware of it. “That same kind of algorithmic approach can easily infect medicine and probably does,” Verghese says.
These technologies are driven by companies interested in turning a profit, and that doesn’t necessarily lead to better care. In fact, the cost-savings these technologies promise could be the result of reducing the time an individual spends face-to-face with a doctor or nurse.
“One thing that I think is unchanged since antiquity is that when you’re seriously ill, you feel bad,” says Verghese. “And amongst all the other things you need, you also want someone to care for you — not just your family member but someone with the scientific knowledge to also express care.”
Thadaney says a member of her household recently brought that point home. He had been injured in a bicycle accident. Treatment involved a complicated trek through two hospitals and a rehabilitation facility. Thadaney was able to advocate for him. “I was able to call friends who are physicians,” she says. “I was able to, you know, call into the leadership of those organizations and request for something different.”
That intervention alone provided an edge to her family member, but she says what really helped him was a visit with Verghese. The doctor “didn’t tell him anything different than he already knew,” she says, but he provided comfort and reassurance, “and I think it hastened his healing.”
Verghese says he was recently reading Walt Whitman’s accounts of his time caring for the wounded in Civil War medical tents on the Mall in Washington, D.C.
“He did what those young men most needed,” Verghese says. “They were so far from home. They needed someone to read to them, to hold their hands and to write letters for them and take care of their every task. And it was the most elemental kind of care. Nothing’s changed. You know we’re still the same human beings.”
Verghese is hopeful that technology, such as artificial intelligence, can improve medical care, but only if it isn’t done at the expense of human contact. AI has the potential to free up clinicians to spend more time with their patients, depending on how it ends up being deployed. In principle, AI could also help with the most challenging tasks.
“We don’t need another image recognition [system],” he says. “They’re all nice, great and very tidy.”
But where the technology can do the most good is to help sort through the clues gathered during medical treatment. “Medicine is messy,” he says. “Help us out.”
Some of the nuts-and-bolts improvements that AI can bring have their place, Thadaney says. “Yes, the patient wants you to make sure that you have efficiencies in your system so they don’t get 19 bills with the same stupid thing.”
But patients also want to get better. To help accomplish that, doctors and nurses can’t simply be adjuncts to machines. Her mantra to the young doctors she advises is this: “In the end, be present. That matters a great deal.”
In March, Stanford inaugurated a new institute to focus on the human dimensions of artificial intelligence.
Dr. Russ Altman, a professor of bioengineering and genetics at Stanford and an associate director of the new institute, says it is important to have best practices in place as technology and medicine commingle. “It’s unfair and unrealistic to expect technologists to be experts at all this.”
He shares the concerns of Verghese and Thadaney that machines could degrade the human relationship at the core of medicine.
“Medicine is a combination of art and science,” which will be augmented by AI, Altman says. “But the act of laying your hands on a patient, showing that you really care about what is there, what their problem is [and] assuring them that you’re going to be with them through an odyssey — that might take a while,” he says. “That is very difficult to imagine being replaced by computers.”
You can contact NPR Science Correspondent Richard Harris at firstname.lastname@example.org.