New research raises safety concerns about cyberattacks that could affect medical images and lead to incorrect cancer diagnoses.

Researchers simulated an “adversarial attack” in which mammogram images were altered in ways that would affect breast cancer diagnosis. The simulated attack deceived both an artificial intelligence (AI) model and experienced radiologists.

These findings were published in Nature Communications.


The researchers explained that, although several AI computer-aided diagnosis (AI-CAD) models have shown promising performance in medical applications, such technologies are vulnerable to cyberattacks.

To investigate how an AI diagnosis model would behave under an adversarial attack, the researchers used 4346 mammogram images from 1284 women. The group included 918 patients who were negative for breast cancer and 366 patients who had biopsy-proven breast cancer.

For the first part of the study, the researchers built a deep-learning AI-CAD model that could distinguish breast cancer cases from benign cases with more than 80% accuracy.
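
The article does not detail the model’s architecture, but an AI-CAD classifier of this kind is typically a convolutional neural network trained to output a cancer-versus-negative decision for each image. The PyTorch sketch below is an illustrative stand-in rather than the authors’ model; the class name MammogramClassifier, the 224x224 grayscale input size, and the small backbone are assumptions made for the example.

```python
# Minimal sketch (not the published model): a binary deep-learning
# classifier for mammograms, assuming grayscale inputs resized to 224x224
# and labels 0 = negative, 1 = cancer.
import torch
import torch.nn as nn

class MammogramClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # Small convolutional backbone; a real diagnostic model would be deeper.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 2)  # logits for negative vs. cancer

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = MammogramClassifier()
dummy = torch.randn(4, 1, 224, 224)  # batch of 4 stand-in mammograms
print(model(dummy).shape)            # torch.Size([4, 2])
```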

The team then used a generative adversarial network (GAN) to generate fake mammogram images by inserting cancerous regions into real images or removing them.
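
For readers unfamiliar with the technique, a GAN pairs a generator that alters or synthesizes images with a discriminator that tries to tell real images from generated ones; training them against each other pushes the generated images toward realism. The PyTorch sketch below shows one adversarial training step under that setup. It is not the paper’s GAN: the residual-style generator, the tiny networks, and the random tensors standing in for mammograms are assumptions made for illustration.

```python
# Illustrative GAN sketch (not the paper's model): a generator that nudges a
# mammogram-sized image and a discriminator that scores realism.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        # Predict a residual and add it, so edits stay localized.
        return torch.clamp(x + self.net(x), 0.0, 1.0)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
        )

    def forward(self, x):
        return self.net(x)  # real/fake logit

G, D = Generator(), Discriminator()
bce = nn.BCEWithLogitsLoss()
real = torch.rand(4, 1, 224, 224)  # stand-in images in [0, 1]
fake = G(real)

# Discriminator learns to score real images as 1 and generated ones as 0.
d_loss = bce(D(real), torch.ones(4, 1)) + bce(D(fake.detach()), torch.zeros(4, 1))
# Generator learns to make its outputs score as real.
g_loss = bce(D(fake), torch.ones(4, 1))
```

In the study, the generator was additionally directed to insert or remove cancerous regions; the sketch above omits that conditioning and shows only the basic adversarial step.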

When the researchers tested the GAN-generated images on the AI-CAD model, the model was fooled by 69.1% of the fake images. Specifically, 44 of the positive images that the GAN had made to look negative were classified as negative by the model, and 209 of the 319 negative images made to look positive were classified as positive.
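
Measuring how often the model is “fooled” amounts to checking whether each altered image is classified as the label the attacker intended. The short sketch below illustrates that tally; it assumes model is a classifier like the MammogramClassifier sketch above, and the names fooled_rate and intended are chosen here for illustration.

```python
# Sketch of tallying the fooled rate, assuming `model` is a trained
# classifier and `intended` holds the label each fake was crafted to
# imitate (0 = made to look negative, 1 = made to look positive).
import torch

@torch.no_grad()
def fooled_rate(model, fakes, intended):
    preds = model(fakes).argmax(dim=1)  # model's predicted class per image
    return (preds == intended).float().mean().item()

# Example with random stand-in data (reusing `model` from the sketch above):
fakes = torch.rand(8, 1, 224, 224)
intended = torch.randint(0, 2, (8,))
print(f"Model fooled on {fooled_rate(model, fakes, intended):.1%} of fakes")
```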

For the second part of the study, the researchers recruited 5 experienced breast imaging radiologists and evaluated how well they could detect adversarial images.

In 5 different sessions, the radiologists were asked to read 100 to 440 images that were real, fake, or a mixture of both, with and without educational intervention. The radiologists were able to identify 29% to 71% of the fake images. 

“Certain fake images that fool AI may be easily spotted by radiologists. However, many of the adversarial images in this study not only fooled the model, but they also fooled experienced human readers,” study author Shandong Wu, PhD, of the University of Pittsburgh in Pennsylvania, said in a statement.

“Such attacks could potentially be very harmful to patients if they lead to an incorrect cancer diagnosis,” he added. “We hope that this research gets people thinking about medical AI model safety and what we can do to defend against potential attacks, ensuring AI systems function safely to improve patient care.”

Disclosures: Some study authors declared affiliations with biotech, pharmaceutical, and/or device companies. Please see the original reference for a full list of disclosures.

Reference

Zhou Q, Zuley M, Guo Y, et al. A machine and human reader study on AI diagnosis model safety under attacks of adversarial images. Nat Commun. Published online December 14, 2021. doi:10.1038/s41467-021-27577-x