
Implementation of model explainability for basic brain tumor detection using convolutional neural networks on MRI slices.

Abstract

Making the network's decisions more explainable helped us identify potential biases and choose appropriate training data.

Model explainability should be considered in the early stages of training a neural network for medical purposes: it may save time in the long run and will ultimately help physicians integrate the network's predictions into clinical decisions.

In order to assess the advantages of implementing features to increase explainability early in the development process, we trained a neural network to differentiate between MRI slices containing either a vestibular schwannoma, a glioblastoma, or no tumor.
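One simple, model-agnostic way to make such a classifier's decisions more explainable is occlusion sensitivity: slide a patch over the MRI slice, re-score the occluded image, and record how much the class score drops. The sketch below is illustrative only and is not the project's actual method; `occlusion_map` and the stand-in `toy_score` function are hypothetical names, and a real use would pass the trained CNN's class probability as `predict`.

```python
import numpy as np

def occlusion_map(predict, image, patch=8, stride=8, fill=0.0):
    """Occlusion sensitivity sketch: occlude square regions of `image`
    and accumulate the resulting drop in the model's score.
    `predict` maps a 2-D array to a scalar class score (assumption)."""
    base = predict(image)
    h, w = image.shape
    heat = np.zeros((h, w))
    counts = np.zeros((h, w))
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = fill
            drop = base - predict(occluded)      # large drop = important region
            heat[y:y + patch, x:x + patch] += drop
            counts[y:y + patch, x:x + patch] += 1
    return heat / np.maximum(counts, 1)

# Toy stand-in for a trained classifier (NOT the project's model):
# it simply responds to mean intensity in a fixed window, mimicking
# a unit sensitive to a bright lesion.
def toy_score(img):
    return float(img[8:16, 8:16].mean())

img = np.zeros((32, 32))
img[8:16, 8:16] = 1.0                    # synthetic bright "lesion"
heat = occlusion_map(toy_score, img)
print(heat[10, 10] > heat[24, 24])       # lesion region dominates the map
```

Overlaying such a heat map on the MRI slice shows whether the network attends to the tumor itself or to confounding image regions, which is exactly the kind of bias check described above.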
