Discriminating Healthy Optic Discs and Visible Optic Disc Drusen on Fundus Autofluorescence and Color Fundus Photography Using Deep Learning—A Pilot Study

Authors: Diener, Raphael
Lauermann, Jost
Eter, Nicole
Treder, Maximilian
Document type: Article
Media type: Text
Year of publication: 2023
Publication in MIAMI: 05.12.2023
Date of last change: 05.12.2023
Edition: [Electronic ed.]
Source: Journal of Clinical Medicine 12 (2023) 5, 1951, 1-9
Keywords: deep learning; artificial intelligence; optic disc drusen; visible optic disc drusen; deep convolutional neural network; DCNN; InceptionV3
Subject (DDC): 610: Medicine and health
License: CC BY 4.0
Language: English
Funding: Funded by the Open Access Publication Fund of the University of Münster.
Format: PDF document
URN: urn:nbn:de:hbz:6-18938507058
Further identifiers: DOI: 10.17879/18938507791
Permalink: https://nbn-resolving.de/urn:nbn:de:hbz:6-18938507058
Online access: 10.3390_jcm12051951.pdf

The aim of this study was to use deep learning based on a deep convolutional neural network (DCNN) for automated image classification of healthy optic discs (OD) and visible optic disc drusen (ODD) on fundus autofluorescence (FAF) and color fundus photography (CFP). In this study, a total of 400 FAF and CFP images of patients with ODD and healthy controls were used. A pre-trained multi-layer DCNN was trained and validated independently on FAF and CFP images. Training and validation accuracy and cross-entropy were recorded. Both generated DCNN classifiers were tested with 40 FAF and CFP images (20 ODD and 20 controls). After 1000 training cycles, the training accuracy was 100%, and the validation accuracy was 92% (CFP) and 96% (FAF). The cross-entropy was 0.04 (CFP) and 0.15 (FAF). The sensitivity, specificity, and accuracy of the DCNN for classification of FAF images were each 100%. For the DCNN used to identify ODD on color fundus photographs, sensitivity was 85%, specificity was 100%, and accuracy was 92.5%. Differentiation between healthy controls and ODD on CFP and FAF images was possible with high specificity and sensitivity using a deep learning approach.
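The workflow summarized in the abstract is a standard transfer-learning setup: a pre-trained InceptionV3 backbone (named in the keywords) is fine-tuned to separate visible ODD from healthy discs, and the resulting classifier is scored with sensitivity, specificity, and accuracy. The sketch below illustrates such a setup in Keras/TensorFlow; it is not the authors' actual pipeline, and the directory layout (data/faf/{ODD,control}), hyperparameters, and helper names are illustrative assumptions.

```python
# Minimal transfer-learning sketch (assumed setup, not the study's published code):
# a frozen InceptionV3 backbone with a new binary head for ODD vs. healthy discs.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras import layers, models

IMG_SIZE = (299, 299)   # InceptionV3's native input resolution
BATCH_SIZE = 16          # illustrative value

def build_classifier():
    """Pre-trained InceptionV3 backbone with a new binary classification head."""
    base = InceptionV3(weights="imagenet", include_top=False,
                       input_shape=(*IMG_SIZE, 3))
    base.trainable = False  # retrain only the new head, keep ImageNet features
    return models.Sequential([
        layers.Input(shape=(*IMG_SIZE, 3)),
        layers.Rescaling(1.0 / 127.5, offset=-1),   # scale pixels to [-1, 1]
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(1, activation="sigmoid"),      # probability of visible ODD
    ])

def load_split(path, subset, seed=42):
    """Load an image split from a folder layout like data/faf/{ODD,control}/."""
    return tf.keras.utils.image_dataset_from_directory(
        path, validation_split=0.2, subset=subset, seed=seed,
        image_size=IMG_SIZE, batch_size=BATCH_SIZE, label_mode="binary")

def sensitivity_specificity_accuracy(y_true, y_pred):
    """Test metrics as reported in the abstract, from binary labels (1 = ODD)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return (tp / (tp + fn),             # sensitivity
            tn / (tn + fp),             # specificity
            (tp + tn) / len(y_true))    # accuracy

if __name__ == "__main__":
    train_ds = load_split("data/faf", "training")
    val_ds = load_split("data/faf", "validation")
    model = build_classifier()
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",  # the cross-entropy reported above
                  metrics=["accuracy"])
    model.fit(train_ds, validation_data=val_ds, epochs=10)
```

The same script would be run separately per modality (FAF and CFP), mirroring the study's independent training of the two classifiers; the reported test metrics would then come from applying `sensitivity_specificity_accuracy` to the held-out 40-image test set.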