The research, described in Nature Biomedical Engineering, found that the model was more effective at identifying issues such as pneumonia, collapsed lungs, and lesions than other self-supervised AI models. In fact, its accuracy was similar to that of human radiologists.
While others have tried to use unstructured medical data in this way, this is the first time an AI model trained on unstructured text has matched radiologists’ performance, and it has demonstrated the ability to predict multiple diseases from a given x-ray with a high degree of accuracy, says Ekin Tiu, an undergraduate student at Stanford and a visiting researcher who coauthored the report.
“We are the first to do that and demonstrate that effectively in this field,” he says.
The model’s code has been made publicly available to other researchers in the hope it could be applied to CT scans, MRIs, and echocardiograms to help detect a wider range of diseases in other parts of the body, says Pranav Rajpurkar, an assistant professor of biomedical informatics in the Blavatnik Institute at Harvard Medical School, who led the project.
“Our hope is that people are able to apply this out of the box to other chest x-ray data sets and diseases that they care about,” he says.
Rajpurkar is also optimistic that diagnostic AI models requiring minimal supervision could help increase access to health care in countries and communities where specialists are scarce.
“It makes a lot of sense to use the richer training signal from reports,” says Christian Leibig, director of machine learning at German startup Vara, which uses AI to detect breast cancer. “It’s quite an achievement to get to that level of performance.”