Don’t expect AI to solve the coronavirus crisis on its own


Some public health officials hope that thermal imaging combined with AI software can spot infected individuals before they can spread the coronavirus. | Alexander Ryumin/TASS via Getty Images

How optimistic should we be about the impact of artificial intelligence in a pandemic?


Scientists are exploring every possible option for help battling the coronavirus pandemic, and artificial intelligence represents an intriguing avenue. AI has been used to search for new molecules capable of treating Covid-19, to scan through lung CTs for signs of Covid-related pneumonia, and to aid the epidemiologists who tracked the disease’s spread early on. The technology is even powering new tracking software that might help identify those walking around with a fever or catch people violating quarantine rules. But how much faith should people really have in these untested tools?

In a recent brief, Alex Engler, who studies AI at the Brookings Institution, warned that people should manage their expectations. Artificial intelligence can be helpful, he says, but it’s important to be wary of tech companies making broad, unfounded claims about what AI can do, and question whether these companies really have the data and expertise to ensure that the application of this technology is actually helpful. Ultimately, Engler argues that AI could be helpful on the margins, but it’s nowhere near ready to replace human experts in the battle against Covid-19.

Just because we're in a pandemic doesn't mean that AI's greatest challenges (accuracy, bias, and the risk of exacerbating surveillance) have gone away. Engler warns that people need to question whether the companies touting this technology really have access to the information they would need to build it, and whether AI is even the right tool for many of the problems that Covid-19 has created.

The risks of overhyped artificial intelligence aren’t new. But during a pandemic, when people are eager for quick solutions, the dangers of trusting an unproven technology are greater than ever.

The following interview has been edited for clarity and brevity.

Rebecca Heilweil

Starting off, can you explain how artificial intelligence is being used to address the Covid-19 pandemic?

Alex Engler

I mean, that's really kind of a fundamental question, right? We're seeing a huge number of claims around how AI is being used to fight, or aid the fight against, coronavirus, but it's hard to tell which of those are really the valuable applications. And they fall into different categories.

Some are probably, frankly, just snake oil and might never happen. Some are maybe a good idea, but they're brand new, and we should be pretty careful in trusting them, especially if they haven't been robustly tested and haven't gotten out in the field yet. You might think of diagnosing pneumonia or coronavirus from X-rays or CT scans as an example of that.

And then there are some applications that are definitely useful. Maybe they're helping on the margin, but not fundamentally changing the field. So some epidemiological modeling uses artificial intelligence, but the AI is not the only part of the modeling software; it's not the only thing. It's working with subject matter experts, not an AI epidemiologist doing it all on its own. But the AI is helping on the margin …

Rebecca Heilweil

I think a lot of people don’t necessarily understand why AI needs humans. We already have lots of health data, right? Aren’t we just building off of what we already know?

Alex Engler

That's a great question, and it depends on the application. So basically, across the board, AI on its own is not helpful in these kinds of situations for a couple reasons. One: We don't have endless, huge datasets about the spread of coronavirus and epidemics that are similar enough to this. So we don't know enough to learn exclusively from historical data. And so that's the first reason. Some of this has to come from subject matter expertise, things that we're learning from experiments on a day-to-day basis.

You might also notice that a lot of the stakes of decisions are really high. And so what if you’re going to use AI to diagnose someone and you’re wrong? Specifically, you might be concerned about a false negative. That is, you say someone is healthy when they, in fact, have coronavirus. That’s a pretty enormous mistake to make, and we want to be really careful when we give AI too much influence in situations like that.

That's the same sort of concern we have with the CT scan approach. We might be able to diagnose coronavirus with a CT scan at some point in the future, but the methods aren't robust enough yet; we're not sure they work well enough.

Another example is if you're looking at fever detection, and you want to see whether people have fevers using thermal imaging. You can be wrong the other way: You could think people have fevers when they don't. And based on that, you're going to let AI keep people out of a grocery store or an airport?
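To put rough numbers on the two kinds of mistakes Engler describes, here is a minimal Python sketch. All of the counts are invented purely for illustration; they don't come from any real diagnostic or screening system.

```python
# Hypothetical numbers for illustration only; not real screening data.
# A diagnostic or screening model makes two distinct kinds of mistakes,
# and which one matters more depends on how the tool is used.

true_positives = 45    # sick people correctly flagged
false_negatives = 5    # sick people waved through (the diagnosis worry)
true_negatives = 900   # healthy people correctly passed
false_positives = 50   # healthy people wrongly turned away (the screening worry)

# False negative rate: share of sick people the tool misses.
fnr = false_negatives / (false_negatives + true_positives)

# False positive rate: share of healthy people the tool wrongly flags.
fpr = false_positives / (false_positives + true_negatives)

print(f"False negative rate: {fnr:.1%}")  # 10.0%
print(f"False positive rate: {fpr:.1%}")  # 5.3%
```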

Rebecca Heilweil

What’s the worst example of artificial intelligence that’s been touted in response to Covid-19? You called it snake oil.

Alex Engler

I think the worst examples come not only from claims that are difficult to believe but also from those that have subtle and pernicious side effects. A lot of the time, AI has second-order consequences that can be easy to forget about.

So some people have suggested, in a little bit of news coverage and a little bit of corporate claims, that if you attach various sensors to drones, you can detect all sorts of things. The most ridiculous claims I saw were that drones could not only do thermal imaging to detect whether people have a fever but also get a sense of their respiratory rate and heart rate.

Maybe that's true, but I have a very hard time believing it for some of the reasons I've already talked about. What's worse is that it also justifies a substantial surveillance opportunity, a mechanism of a surveillance state. That's where you really run into problems: you might justify a new level of surveillance that's imposing in public spaces, that maybe affects people's behavior, and that still can't do the task the AI is claiming it can do.

So that’s probably the worst example, the one that makes me the most concerned. But there are different mechanisms, different ways to be concerned about this. The mortality rate predictions make me the most concerned for bias, for instance, and that’s a very different perspective on what’s potentially harmful.

Rebecca Heilweil

Most people have heard that AI can be biased, and that it can be discriminatory based on race, gender, or other factors. Can you explain what that AI bias might look like in a pandemic?

Alex Engler

One of the most important examples of AI bias that we've seen is the Optum algorithm, a health care algorithm from Optum, the data analytics subsidiary of UnitedHealth Group, that's used to determine patients' risk of future health care needs.

What researchers discovered when they got access to the Optum algorithm was that it was very biased against African Americans, for reasons that weren't relevant to health but were more relevant to finances and socioeconomic status. Through both automated and human-made decisions, it basically argued for less care for black Americans.

When you see that type of system, your default should be that there are biases in it until you rigorously evaluate it and show that they're not there. You'll probably find some, and you'll probably have to do some mitigation.

So in the case of these early algorithms that people are using to evaluate the mortality risk of Covid-19, the likelihood that there are subtle biases is very high, especially when you look at things like different biological characteristics, things called biomarkers. These might help you guess who is more likely to be seriously ill, but they can also be very misleading when other signals go unaccounted for.

So men, for instance, are more likely to be smokers, and they also show higher mortality risk. But if you didn't account for the fact that they were smoking, or that there's smoking in their medical history, your algorithm might show that all men are more at risk, and thus all men are going to get prioritized for care, hypothetically.
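Engler's hypothetical can be simulated directly. In the minimal sketch below, every rate is an invented assumption: smoking raises mortality, men smoke more often, and sex itself has no causal effect, yet a naive comparison that ignores smoking still makes men look riskier.

```python
import random

random.seed(0)

# Synthetic toy data; every rate below is an assumption for illustration.
# Smoking raises mortality; men smoke more often; sex has no causal effect.
def simulate_patient():
    is_male = random.random() < 0.5
    smokes = random.random() < (0.40 if is_male else 0.15)
    died = random.random() < (0.05 + (0.10 if smokes else 0.0))
    return is_male, smokes, died

patients = [simulate_patient() for _ in range(100_000)]

def mortality(rows):
    # Share of patients in `rows` who died.
    return sum(died for _, _, died in rows) / len(rows)

men = [p for p in patients if p[0]]
women = [p for p in patients if not p[0]]

# Sex alone: men look riskier, even though sex is not causal here.
print(f"men: {mortality(men):.3f}  women: {mortality(women):.3f}")

# Control for the confounder: within smoking strata, the gap disappears.
for smokes in (True, False):
    m = mortality([p for p in men if p[1] == smokes])
    w = mortality([p for p in women if p[1] == smokes])
    print(f"smokers={smokes}: men {m:.3f} vs women {w:.3f}")
```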

You have to be concerned if you're going to roll out these mortality-risk algorithms, which, for the record, can work and are valuable. But you'd have to be concerned about rolling them out so quickly that they include these sorts of pernicious biases in something as important and as high-risk as health care allocation.

Rebecca Heilweil

Is there an application that sticks out to you as the most promising use of artificial intelligence to tackle the Covid-19 pandemic?

Alex Engler

I'm hopeful about two efforts. One is AlphaFold, DeepMind's protein-folding initiative. It is possible that those protein structure estimations are helpful for creating vaccines and also treatments. To DeepMind's credit, they did this immediately once they got the genetic makeup of Covid-19. They ran estimations, and they publicly released those models. So it is possible, and I think it is very much worth keeping an eye on, that this effort might speed the development of a vaccine or a therapeutic antibody that helps mitigate the damage of the virus. It's a little too early to tell. These are estimations, and the predictions need to be experimentally validated, but this is a new development. It could be one that's very meaningful.

There's another effort, in an emerging area of AI, that involves using software to analyze large numbers of academic papers about Covid-19. The sheer volume of academic research being created, especially in fields like biomedicine, makes it very hard for anyone to read all of it, or even to find all the papers that are relevant. So some people are taking a very large database of papers and seeing what they can discover from it.

I am hedged in my optimism here. I think this could do some useful things, like make it easier for researchers to find relevant papers and to categorize papers. I think it’s unlikely that our vaccines, our solutions, or our core understanding of Covid-19 will come from that. But it could help in meaningful ways to organize what we know.
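As a toy illustration of that kind of literature triage, the sketch below ranks a few invented abstracts against a query using TF-IDF and cosine similarity via scikit-learn. The titles and the specific approach are assumptions for the example, not a description of any actual Covid-19 paper-analysis project.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented abstracts standing in for a large paper database.
abstracts = [
    "CT imaging findings in patients with Covid-19 pneumonia",
    "Protein structure prediction for the SARS-CoV-2 spike protein",
    "Epidemiological modeling of coronavirus transmission dynamics",
    "Thermal screening accuracy for fever detection in public spaces",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(abstracts)  # one TF-IDF vector per paper

query = vectorizer.transform(["chest CT diagnosis of coronavirus pneumonia"])
scores = cosine_similarity(query, matrix).ravel()

# Print papers ranked by relevance to the query.
for score, abstract in sorted(zip(scores, abstracts), reverse=True):
    print(f"{score:.2f}  {abstract}")
```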

For the record, I think that in time, with less of an absurd turnaround period, you could see AI meaningfully help in medical imagery. There's tons of good news around AI and medical imagery. Maybe it can tell the difference between bacterial pneumonia and the pneumonia that's associated with Covid-19. Maybe, with really good thermal imaging, it can get closer to fever detection. It's not that these things are fundamentally impossible tasks. It's that it's worth approaching them with a skeptical, informed take rather than just taking the idea on its face: "Of course AI can do that."

Open Sourced is made possible by Omidyar Network. All Open Sourced content is editorially independent and produced by our journalists.

