Can AI Exist in Medicine Without Human Oversight?

Eric J. Topol, MD; Abraham Verghese, MD; Melanie Mitchell, PhD


December 23, 2020


This transcript has been edited for clarity.

Eric J. Topol, MD: Hello. Welcome to this episode of Medicine and the Machine. This is Eric Topol, here from Medscape with my partner and great friend, Abraham Verghese. Today we have the delight of speaking with Melanie Mitchell, a professor at Portland State in Oregon and at the Santa Fe Institute in New Mexico, and one of the go-to experts on artificial intelligence (AI). She may be the most clear-eyed person out there, and she's here today to give us the real skinny on what we know and what we don't know about AI. It's going to be a fun discussion. Melanie, welcome.

Melanie Mitchell, PhD: Thanks for having me here.

Topol: I read your book, Artificial Intelligence: A Guide for Thinking Humans, about a year ago, and now it's out in paperback. It provides a kind of grounding force on how AI can help or potentially hurt. Before we get into the medical and healthcare applications of AI, where are we in the field in general?

Mitchell: There has been a dramatic rise in progress in AI over the past decade or so, largely due to what are called deep neural networks, deep learning, where "deep" in this sense means the number of layers in a neural network. Neural networks are roughly modeled on the visual cortex in the brain, which is arrayed in a series of layers, and "deep" means the number of layers of simulated neurons. This, combined with very fast parallel computation and lots and lots of big data for training, has produced machines that can accomplish many tasks very well, such as speech recognition, object and image recognition, facial recognition, even powering autonomous vehicles, language translation tasks, and robotics.
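To make "layers of simulated neurons" concrete, here is a minimal sketch of a deep network's forward pass. It is an illustrative assumption rather than anything from the conversation: the layer sizes are arbitrary, and a real network would have trained rather than random weights.

```python
import numpy as np

def relu(x):
    # The nonlinearity applied at each simulated neuron
    return np.maximum(0, x)

# "Deep" just means several layers of weights applied in sequence.
# The layer sizes here are arbitrary, chosen only for illustration.
rng = np.random.default_rng(0)
layer_sizes = [784, 256, 128, 10]   # input -> two hidden layers -> output
weights = [rng.standard_normal((m, n)) * 0.01
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    # Pass the input through each layer in turn; "depth" is the layer count.
    for w in weights[:-1]:
        x = relu(x @ w)
    return x @ weights[-1]   # final layer produces one score per class

scores = forward(rng.standard_normal(784))  # e.g., a flattened 28x28 image
print(scores.shape)  # (10,)
```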

The improvement in the field has been dramatic, but AI is still quite narrow: Each machine can do its own narrow task, but no machine has the kind of general intelligence that can take knowledge in one domain and transfer it to another domain, which is the hallmark of human intelligence. So I would say that AI is doing quite well on the task front, but in terms of general intelligence, we're still quite far away.

Topol: One of the sentences in your book that grabbed me was "Deep learning is due less to new breakthroughs in AI than to the availability of huge amounts of data (thank you, Internet!) and very fast parallel computer hardware." We have a deficiency of large, annotated data sets in healthcare, as opposed to facial recognition and most of the other current uses for AI. Do you think that is a real bottleneck? Is that what's going to hold us back?

Mitchell: That's one of the bottlenecks, I would say. Deep neural networks — the thing that's given AI its oomph in the past decade — rely on large, labeled training sets. Many areas in medicine and other fields don't have those, so they can't take advantage of AI.

But there are some areas of medicine in which, as you know, AI is showing a lot of progress. I read an article today about diabetic retinopathy and the deep neural networks that are able to spot it better than human physicians. I don't know exactly what "better than" means, but some pockets of AI applications are really promising in medicine. But you're right — many areas of medicine don't yet have the large labeled datasets needed for what's called supervised learning, where the machine is given tens of thousands of examples that have been labeled by a human. Without that, it's hard to apply these new deep learning techniques.
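As a rough picture of what supervised learning looks like in practice, here is a hedged sketch of a training loop in PyTorch. The dataset, model, and hyperparameters are stand-ins invented for illustration; in a real application the inputs might be tens of thousands of retinal images, each with a human-assigned label.

```python
import torch
from torch import nn

# Stand-in labeled dataset: in a real application these would be
# tens of thousands of examples (e.g., retinal images), each with
# a label assigned by a human expert.
inputs = torch.randn(1000, 64)          # 1000 examples, 64 features each
labels = torch.randint(0, 2, (1000,))   # one human-provided label per example

# A small model and a standard optimizer, both chosen arbitrarily.
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)  # error relative to the human labels
    loss.backward()                        # compute gradients of that error
    optimizer.step()                       # adjust weights to reduce the error
```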

Topol: For our listeners, a device and an algorithm have been approved for diagnosing diabetic retinopathy in grocery stores, where an untrained person can scan a person's retina and — with a cloud algorithm and deep learning, tested in what was the first prospective trial of AI in medicine — get a very accurate diagnosis of diabetic retinopathy. With half of people with diabetes never getting screened for diabetic retinopathy, that's definitely an advance.

Abraham Verghese, MD: I really enjoyed your book. It was a wonderful introduction to AI for someone who's not very computer literate. But the most fascinating part of it was your own journey, the way you came to AI without having been a computer scientist. Maybe before we go much further, you can talk about your own personal journey through this field.

Mitchell: My journey is different from that of most people in computer science, who start out obsessed with computers and video games and whatnot. I was never interested in computers at all. I was a mathematics major in college. I didn't know what I wanted to do, but I read Douglas Hofstadter's book Gödel, Escher, Bach: An Eternal Golden Braid, which was published in 1979. It's an amazing work of nonfiction about the emergence of consciousness, self-awareness, and understanding from a substrate of neurons, none of which is individually conscious or self-aware. But somehow, put together, you get the emergence of this fantastic phenomenon we call intelligence. That book made me decide that I really wanted to work in AI with Hofstadter. I persistently tried to convince him to let me join his research group, which he finally did, and I ended up doing a PhD with him. Without having taken a computer science course before I got to graduate school — I don't think you can do that anymore — I managed to get a PhD in computer science.

Verghese: I think the fact that you were not a computer scientist when you started lends something to the book. You have a wonderful way of telling the story without assuming too much of your listeners and yet challenging us.

I reviewed a paper recently about trying to set up an ethical framework for every future AI application in medicine. The point of the paper was that too often we come to the ethical implications after the fact, rather than prospectively considering them at every level, from the nature of the researchers to the nature of the dataset, and so on. Maybe we can dig into that, because every time people discuss machine learning in medicine, one of the big concerns is the lack of transparency for the clinician, who is presented with this diagnosis of diabetic retinopathy and has no ability to question it in the way we might question a CT scan or a biopsy report or something like that.

Mitchell: The whole area of ethics and AI is exploding, with people thinking about its complexities. It's not a simple matter of making sure the data don't have some kind of bias. For example, with facial recognition, which is a very prominent area of AI application, we're seeing that these machines, which learn from human-labeled data that's often been downloaded from the Web, can be biased against non-White, non-male faces. It's not just a matter of fixing the data; that kind of bias goes way down into the depths of how the data are collected, even the technology of cameras, which favors lighter-skinned people in how images are processed, and how the people building the algorithms actually go about dealing with the data. I think that's true in the medical areas as well. How to make sure these machines aren't picking up all of the bias that goes into human society is a complex dilemma. I don't think anyone has the answer.
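One concrete form this concern takes downstream is a subgroup audit: comparing a model's performance across demographic groups in its evaluation data. The sketch below is hypothetical, with invented column names and toy data, but in a real audit a large accuracy gap between groups would suggest the model has absorbed bias from its training data or collection pipeline.

```python
import pandas as pd

# Hypothetical evaluation results: each row is one test example, with the
# model's prediction, the true label, and a demographic group attribute.
results = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B"],
    "true_label": [1, 0, 1, 1, 0, 1],
    "predicted":  [1, 0, 1, 0, 0, 0],
})

# Accuracy per subgroup: a large gap between groups is a red flag that
# bias has crept in from the data or the way it was collected.
results["correct"] = results["true_label"] == results["predicted"]
print(results.groupby("group")["correct"].mean())
```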

One of the possible solutions I've seen is to train machines to have our kind of ethics, our moral values. But that brings up the question of what our moral values are. How do we agree on what our own ethics are, and how do we train a machine to share them when it's so hard to train a machine to grasp any broad human concept? How do we train machines to have moral concepts? How do we make these machines into moral philosophers? It's quite difficult. Right now we have to have humans in the loop. Machines just don't have the ability to make these decisions autonomously without humans being able to make sure in some way that they're not biased or making some kind of mistake that's very un-human-like. That also makes it difficult to talk about how to certify a machine as unbiased or trustworthy. I think that's the biggest obstacle to letting machines be autonomous, even in the case of diagnosing diabetic retinopathy. Machines are not ready yet. They're not smart enough to be allowed to be autonomous, and we tend to trust them more than we should.

Topol: That's a critical point. Our podcast is called Medicine "and" the Machine, not "vs" the machine. Ever since the chess match between Garry Kasparov and IBM's Deep Blue, it's been man vs machine, when in fact, in medicine, as you aptly pointed out, Melanie, you still need human oversight. In fact, you need training to understand the nuances. Reading your book might be a good start for lots of physicians, but certainly in medical school curricula, it would seem important to understand why oversight is so crucial before you sign off that this treatment is indicated or this diagnosis is correct.
