Researchers used a convolutional neural network (CNN), a form of artificial intelligence (AI) that emulates the visual cortex, to analyze adenocarcinoma pathology images from the National Lung Screening Trial (NLST), identifying 15 visual features that correlate with patient survival outcomes, they reported in Scientific Reports.1

“This is the first study to quantify tumor shape-related features using a CNN-based model in lung cancer,” wrote senior coauthor Guanghua Xiao, PhD, associate professor, department of clinical science at UT Southwestern Medical Center in Dallas, Texas, and colleagues. The system “easily detected” tumor bodies and tumor spread, they reported.

The study builds on previous research that employed more conventional AI techniques to identify the morphological correlates of lung tumor histology and to analyze how they relate to patient prognosis, noted Mohammad Ihsan Fazal, BMBS, of Brighton and Sussex Medical School, University of Sussex in England.2-4

Using an NLST learning dataset of 5344 tumor, nonmalignant, and white image patches extracted from 27 lung pathology images, the study authors identified 15 “well-defined shape and boundary features” that were significantly associated with patient survival. These features included total tumor cross-sectional perimeter, the number of holes within tumor regions, major and minor axis lengths, eccentricity, and solidity of the main tumor region.
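To illustrate what descriptors of this kind look like in practice, the sketch below computes a few of them (pixel perimeter, major/minor axis lengths, eccentricity) from a binary tumor mask using image moments. This is a simplified, hypothetical example in NumPy, not the authors' actual feature pipeline, and the definitions here are the common ellipse-fitting conventions rather than the study's exact formulas.

```python
import numpy as np

def shape_features(mask):
    """Illustrative shape descriptors from a binary tumor mask (a sketch,
    not the study's pipeline). Perimeter is counted as boundary pixels;
    axis lengths and eccentricity come from second-order image moments,
    as in best-fit-ellipse conventions."""
    ys, xs = np.nonzero(mask)

    # Perimeter: tumor pixels with at least one 4-neighbor outside the mask.
    padded = np.pad(mask, 1)
    perimeter = 0
    for y, x in zip(ys + 1, xs + 1):
        if not (padded[y - 1, x] and padded[y + 1, x]
                and padded[y, x - 1] and padded[y, x + 1]):
            perimeter += 1

    # Major/minor axis lengths of the fitted ellipse, via eigenvalues of
    # the pixel-coordinate covariance matrix (eigvalsh returns ascending).
    cov = np.cov(np.stack([ys, xs]))
    lam_minor, lam_major = np.linalg.eigvalsh(cov)
    major = 4 * np.sqrt(lam_major)
    minor = 4 * np.sqrt(lam_minor)
    eccentricity = np.sqrt(1 - lam_minor / lam_major)

    return {"perimeter_pixels": perimeter,
            "major_axis": major,
            "minor_axis": minor,
            "eccentricity": eccentricity}
```

For an elongated mask, the major axis exceeds the minor axis and eccentricity falls between 0 (circle) and 1 (line), which is what makes such descriptors usable as prognostic covariates.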

The CNN model differentiated tumor, nontumor, and empty (white) image regions with an overall prediction accuracy of 89.8% (88% for tumor regions and 93.5% for nontumor regions). In univariate analyses, shape and boundary feature-based predictive “heat maps” of tumor regions predicted patient survival outcomes. A prognostic model using these features was validated with an independent 389-patient cohort. In a multivariate analysis, the model identified patients at high and low risk of mortality after adjusting for age, sex, tumor stage, and smoking status (hazard ratio, 2.25; 95% CI, 1.35-3.77; P = .002).1

CNNs are a relatively young form of “deep” machine-learning system, designed to emulate neuronal connectivity in the mammalian visual cortex. CNNs are used for image recognition and matching and, increasingly, to learn from large medical imaging datasets how best to distinguish between healthy and diseased tissues or organs.
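The core operation such a network repeats at every layer is a small sliding-window convolution that responds to local visual patterns. The toy sketch below (plain NumPy, with a hand-crafted kernel rather than a learned one) shows how a single convolutional filter picks out a vertical edge; in a trained CNN, thousands of such kernels are learned from data instead of written by hand.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation: the basic building block
    of a convolutional layer (no padding, stride 1)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A step image: dark left half, bright right half.
img = np.zeros((5, 6))
img[:, 3:] = 1.0

# A vertical-edge kernel (hand-crafted here; a CNN *learns* such kernels).
kernel = np.array([[-1., 1.],
                   [-1., 1.]])

response = conv2d(img, kernel)  # peaks at the column where the edge sits
```

The filter response is strongest exactly at the dark-to-bright boundary, which is how stacked convolutional layers build up from edges to the tumor-boundary features described in the study.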

“This technique simulates aspects of the human brain and is more scalable with larger data sets, [in addition, it is] arguably more accurate,” Dr Fazal said. The resulting tumor heat maps have been explored using breast cancer images, he said.5

Two-dimensional pathology images do not capture the entire range of tumor morphology and may not represent the 3-dimensional size or shape of a biopsied tumor, the authors cautioned; tumor size is a well-established prognostic factor, they noted. In addition, CNNs are vulnerable to focus-related and other pathology image artifacts, such as out-of-focus blood cells and stroma.

“False-positive rate is always an issue in AI image interpretation and a disadvantage compared [with] a human pathologist/radiologist,” Dr Fazal told Cancer Therapy Advisor. “False-positive rate is a barrier that needs to be overcome before we can see mainstream adoption of pure AI image interpretation due to the risk of needless invasive investigations and treatment of healthy [people], and the costs that that entails.”

Unlike deep machine-learning networks that learn from raw computed tomography or other imaging scan datasets, CNNs that learn from actual images cannot detect features or correlations that are not readily visualized.

“However, the image may be processed to reveal features that are not immediately apparent — for example, doing something as simple as inverting the colors or converting to black and white or running through a filter to blur or exclude features,” Dr Fazal explained. “This processing would factor into the training of the model and would produce a model that can be applied to fresh, unseen data with better success.”

Other researchers use blurring filters to train AI models to avoid out-of-focus image-associated artifacts and errors.4,6 “This additional processing could be done manually, but would be time-consuming,” Dr Fazal concluded.


  1. Wang S, Chen A, Yang L, et al. Comprehensive analysis of lung cancer pathology images to discover tumor shape and boundary features that predict survival outcome. Sci Rep. 2018;8(1):10393.
  2. Yu K-H, Zhang C, Berry GJ, et al. Predicting non-small cell lung cancer prognosis by fully automated microscopic pathology image features. Nat Commun. 2016;7:12474.
  3. Luo X, Zang X, Yang L, et al. Comprehensive computational pathological image analysis predicts lung cancer prognosis. J Thorac Oncol. 2017;12(3):501-509.
  4. Teramoto A, Tsukamoto T, Kiriyama Y, Fujita H. Automated classification of lung cancer types from cytological images using deep convolutional neural networks. Biomed Res Int. 2017;4067832. doi:10.1155/2017/4067832
  5. Stumpe M, Peng L. Assisting pathologists in detecting cancer with deep learning. Updated March 3, 2017. Accessed August 27, 2018.
  6. Fazal MI, Patel ME, Tye J, Gupta Y. The past, present and future role of artificial intelligence in imaging. Eur J Radiol. 2018;105:246-250.