Data-efficient vision transformers for multi-label disease classification on chest radiographs

Link:
Author:
Publisher/Corporate body:
Hamburg University of Technology
Year of publication:
2022
Media type:
Text
Keywords:
  • Chest Radiograph
  • CheXpert
  • Convolutional Neural Network
  • Deep Learning
  • Vision Transformer
  • 600: Technology
  • 610: Medicine
Description:
  • Radiographs are a versatile diagnostic tool for detecting and assessing pathologies, for treatment planning, and for navigation and localization in clinical interventions. However, their interpretation by radiologists can be tedious and error-prone, so a wide variety of deep learning methods have been proposed to support radiologists in interpreting radiographs. Most of these approaches rely on convolutional neural networks (CNNs) to extract features from images, and CNNs have proven well suited in particular for the multi-label classification of pathologies on chest radiographs (Chest X-Rays, CXR). In contrast, Vision Transformers (ViTs) have not been applied to this task, despite their high classification performance on generic images and their interpretable local saliency maps, which could add value to clinical interventions. ViTs do not rely on convolutions but on patch-based self-attention, so unlike CNNs they encode no prior knowledge of local connectivity. While this increases model capacity, ViTs typically require very large amounts of training data, a hurdle in the medical domain, where collecting large data sets is costly. In this work, we systematically compare the classification performance of ViTs and CNNs for different data set sizes and evaluate more data-efficient ViT variants (DeiT). Our results show that ViTs and CNNs perform on par, with a small benefit for ViTs, while DeiTs outperform both if a reasonably large data set is available for training.
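To make the task concrete, the following is a minimal, self-contained sketch (not the authors' code) of the multi-label setup the abstract describes: a patch-based ViT backbone whose head emits one logit per pathology, trained with a binary cross-entropy loss so each finding is predicted independently. The use of the timm library, the model identifiers, and the 14-label count (as in CheXpert) are assumptions made for illustration.

    import torch
    import torch.nn as nn
    import timm  # assumed: provides ViT and DeiT backbones via create_model

    NUM_LABELS = 14  # CheXpert annotates 14 observations per radiograph

    # Patch-based ViT; a DeiT variant can be swapped in via the model name,
    # e.g. "deit_base_patch16_224" (assumed timm identifier).
    model = timm.create_model("vit_base_patch16_224", pretrained=False,
                              num_classes=NUM_LABELS)

    criterion = nn.BCEWithLogitsLoss()  # independent sigmoid per label
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    # One illustrative training step on a dummy batch.
    images = torch.randn(8, 3, 224, 224)                    # radiographs
    targets = torch.randint(0, 2, (8, NUM_LABELS)).float()  # multi-hot labels

    logits = model(images)  # shape: (8, NUM_LABELS)
    loss = criterion(logits, targets)
    loss.backward()
    optimizer.step()

Using BCEWithLogitsLoss treats each pathology as an independent binary decision, which is what distinguishes this multi-label setting from ordinary single-label classification with a softmax.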
Relations:
DOI 10.1515/cdbme-2022-0009
Licenses:
  • info:eu-repo/semantics/openAccess
  • https://creativecommons.org/licenses/by/4.0/
Source system:
TUHH Open Research

Internal metadata
Source record
oai:tore.tuhh.de:11420/13482