In a variation of the test, one pathologist worked “without time constraint.” She spent about 30 hours analyzing the slides, which, the study noted, “is infeasible in clinical practice.” Even then, the top 2 algorithms performed about as well as she did.
It’s not the first time computers have been pitted against their human counterparts in oncology. Only recently, however, has their performance begun to catch up with — and exceed — that of people.
In a small 2016 feasibility study analyzing MRI scans from 15 patients, a machine learning program developed by researchers at Case Western Reserve University outperformed 2 neuroradiologists in determining whether abnormal tissue in the images was the result of radiation necrosis or recurrent brain cancer.2
One of the 2 neuroradiologists correctly diagnosed 7 of the 15 patients; the other got 8 right. The computer program got 12 right.
In January 2017, a group of computer scientists with the Stanford University Artificial Intelligence Laboratory in Palo Alto, California, reported they had developed an artificial intelligence algorithm — a deep “convolutional neural network” (CNN) — that could learn as it went, and teach itself to properly diagnose images of skin lesions for cancer.3
They designed it to correctly classify 2 types of cases: keratinocyte carcinomas vs benign seborrheic keratoses, and malignant melanomas vs benign nevi.
“The first case represents the identification of the most common cancers, the second represents the identification of the deadliest skin cancer,” the algorithm’s designers explained.
After testing its performance against that of 21 board-certified dermatologists, the researchers wrote, “the CNN achieves performance on par with all tested experts across both tasks, demonstrating an artificial intelligence capable of classifying skin cancer with a level of competence comparable to dermatologists.”
The findings, they concluded, showed that artificial intelligence could put the power of diagnosis in the hands of nearly everyone.