Ophthalmology
-
To illuminate the so-called black box of deep learning models (DLMs), so that clinicians can have greater confidence in the conclusions of artificial intelligence, by evaluating adversarial explanation on its ability to explain the rationale of DLM decisions for glaucoma and glaucoma-related findings. Adversarial explanation generates adversarial examples (AEs), images altered to gain or lose pathology-specific traits, to explain the DLM's rationale. ⋯ Adversarial explanation increased explainability over GradCAM, a conventional heatmap-based explanation method. Adversarial explanation may help medical professionals understand the rationale of DLMs more clearly when using them for clinical decisions.
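The abstract does not give implementation details, but adversarial examples of this kind are commonly produced by iterative gradient perturbation of the input image toward (or away from) a target class. The sketch below assumes a generic PyTorch classifier and illustrative epsilon/step values; the model, class index, and input are hypothetical stand-ins, not the authors' actual glaucoma DLM or method.

```python
# Minimal sketch: targeted adversarial example (AE) via iterative gradient
# steps (PGD-style). All hyperparameters and the model are assumptions.
import torch
import torch.nn.functional as F
from torchvision import models


def generate_ae(model, image, target_class, epsilon=0.03, step=0.005, iters=20):
    """Perturb `image` so the model's score for `target_class` rises,
    keeping the perturbation within an L-infinity ball of radius epsilon."""
    model.eval()
    original = image.clone().detach()
    ae = image.clone().detach().requires_grad_(True)
    for _ in range(iters):
        logits = model(ae)
        # Move toward the target class ("gain" the trait); flipping the sign
        # of the update would move away from it ("lose" the trait).
        loss = F.cross_entropy(logits, torch.tensor([target_class]))
        loss.backward()
        with torch.no_grad():
            ae -= step * ae.grad.sign()                     # descend target-class loss
            ae.copy_((ae - original).clamp(-epsilon, epsilon) + original)
            ae.clamp_(0.0, 1.0)                             # keep a valid image
        ae.grad.zero_()
    return ae.detach()


if __name__ == "__main__":
    # Hypothetical stand-in: a small ResNet instead of a fundus-image DLM.
    model = models.resnet18(weights=None)
    fundus_like = torch.rand(1, 3, 224, 224)                # placeholder input
    ae = generate_ae(model, fundus_like, target_class=1)
    # The difference image highlights what the model changes to make the
    # target finding appear, which is the basis of adversarial explanation.
    difference = (ae - fundus_like).abs()
```

In this reading, the explanation comes from comparing the original and perturbed images: the pixels that must change for the DLM to assert or retract a finding indicate the features driving its decision, in contrast to GradCAM's heatmap over the unmodified image.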