The diagnostic accuracy of an artificial intelligence algorithm in recognizing melanomas in camera-based dermoscopic images of suspicious pigmented skin lesions was similar to that of clinician specialists, according to study results published in JAMA Network Open.1

In 2016, the United States Preventive Services Task Force (USPSTF) recommended against widespread clinician screening for skin cancer by visual inspection due to insufficient evidence regarding the benefits and risks of such an approach.2 More recently, a series of Cochrane reviews of diagnostic methods used in the evaluation of skin lesions, including visual assessment with or without skin surface microscopy (ie, dermoscopy), reflectance confocal microscopy, teledermatology, and computer-based or smartphone applications, found only limited data for the latter 3 approaches. Other potential barriers to widespread melanoma screening include the expense of reflectance confocal microscopic equipment and varying clinician experience.1


In this prospective, multicenter, single-arm, masked study, an artificial intelligence algorithm called Deep Ensemble for Recognition of Malignancy was used to categorize photographs of suspicious pigmented skin lesions and control lesions taken using 2 types of smartphones and a digital camera, all of which had a dermoscopic attachment. These results were compared with the assessments by clinician specialists.

Of the 514 patients enrolled in the study, nearly all were white. The image set comprised 551 images of biopsied lesions and 999 images of control lesions, for a total of 1550 images. Pathologic analyses were performed on specimens of biopsied lesions, and 22.7% of the biopsied lesions were classified as melanoma.

The artificial intelligence algorithm, previously trained using published dermatoscopic images, was further refined for each of the 3 cameras using a subset of the images collected from each camera as part of this study, although no images from the same patients were used in both the training and test sets. All lesions included in the image data set were photographed with all 3 digital cameras.

The algorithm rated each image on a scale of 0 (certainly benign) to 1 (certainly malignant), whereas clinicians used a scale of 1 (unlikely to be melanoma) to 4 (highly likely to be melanoma). Area under the receiver operating characteristic curve (AUROC) was used as 1 of the indices of accuracy for melanoma assessment based on a comparison with results for biopsied lesions and all lesions.
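To illustrate how AUROC summarizes accuracy across all possible score thresholds, the sketch below computes it via the rank (Mann-Whitney) formulation: the probability that a randomly chosen malignant lesion receives a higher score than a randomly chosen benign one. The scores and labels are invented for illustration and are not data from the study.

```python
def auroc(scores, labels):
    """AUROC via the Mann-Whitney formulation: the probability that a
    randomly chosen positive (malignant) case scores higher than a
    randomly chosen negative (benign) case; ties count as half."""
    pairs = 0
    wins = 0.0
    positives = [s for s, l in zip(scores, labels) if l == 1]
    negatives = [s for s, l in zip(scores, labels) if l == 0]
    for s_pos in positives:
        for s_neg in negatives:
            pairs += 1
            if s_pos > s_neg:
                wins += 1.0
            elif s_pos == s_neg:
                wins += 0.5
    return wins / pairs

# Hypothetical 0-to-1 malignancy scores with biopsy-confirmed labels
# (1 = melanoma, 0 = benign) -- illustrative only.
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
labels = [1,   1,   0,   1,   0,   0]
print(auroc(scores, labels))  # prints 0.888..., ie, about 89%
```

A perfect classifier would score every melanoma above every benign lesion (AUROC of 1.0, or 100%), while random guessing yields 0.5; the study's reported values in the 85% to 96% range sit well above chance.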

Notably, the AUROCs were between 85% and 91% for artificial intelligence assessments of images from all 3 cameras with respect to biopsied lesions and between 91% and 96% for all lesions. The corresponding AUROC for biopsied lesions made by clinician assessment was 77.8%.

Furthermore, at a sensitivity of 100%, the specificity of the algorithm was 64.8% compared with 69.8% for clinician assessment.
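Operating at 100% sensitivity means choosing a score threshold low enough that every melanoma is flagged; specificity is then the fraction of benign lesions that still fall below that threshold. A minimal sketch of that calculation, using the same invented scores as above rather than study data:

```python
def specificity_at_full_sensitivity(scores, labels):
    """Set the threshold at the lowest melanoma score so that every
    melanoma is flagged (sensitivity = 100%), then return the fraction
    of benign lesions scoring below that threshold (specificity)."""
    threshold = min(s for s, l in zip(scores, labels) if l == 1)
    negatives = [s for s, l in zip(scores, labels) if l == 0]
    cleared = sum(1 for s in negatives if s < threshold)
    return cleared / len(negatives)

# Hypothetical scores and biopsy labels (1 = melanoma, 0 = benign).
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
labels = [1,   1,   0,   1,   0,   0]
print(specificity_at_full_sensitivity(scores, labels))  # prints 0.666...
```

In this toy example, catching every melanoma requires flagging the benign lesion scored 0.7 as well, so only 2 of 3 benign lesions are correctly cleared; the study's figures of 64.8% (algorithm) and 69.8% (clinicians) reflect the same trade-off.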

In their concluding remarks, the study authors noted that “the findings of this diagnostic trial demonstrated that an artificial intelligence algorithm, using different camera types, can detect melanoma with a similar level of accuracy as specialists. The development of low-cost screening methods, such as artificial intelligence-based services, could transform patient diagnosis pathways, enabling greater efficiencies throughout the health care service.”

References

  1. Phillips M, Marsden H, Jaffe W, et al. Assessment of accuracy of an artificial intelligence algorithm to detect melanoma in images of skin lesions. JAMA Netw Open. doi:10.1001/jamanetworkopen.2019.13436
  2. US Preventive Services Task Force, Bibbins-Domingo K, Grossman DC, Curry SJ, et al. Screening for skin cancer: US Preventive Services Task Force recommendation statement. JAMA. 2016;316(4):429-435.

This article originally appeared on Oncology Nurse Advisor