Biased data, biased AI: deep networks predict the acquisition site of TCGA images

Taher Dehkharghanian, Azam Asilian Bidgoli, Abtin Riasatian, Pooria Mazaheri, Clinton J.V. Campbell, Liron Pantanowitz, H. R. Tizhoosh, Shahryar Rahnamayan

Research output: Contribution to journal › Article › peer-review

Abstract

Background: Deep learning models applied to healthcare applications, including digital pathology, have been increasing in scope and importance in recent years. Many of these models have been trained on The Cancer Genome Atlas (TCGA) collection of digital images, or use it as a validation source. One crucial factor that seems to have been widely ignored is the internal bias that originates from the institutions that contributed whole slide images (WSIs) to the TCGA dataset, and its effects on models trained on this dataset.

Methods: 8,579 paraffin-embedded, hematoxylin and eosin stained, digital slides were selected from the TCGA dataset. More than 140 medical institutions (acquisition sites) contributed to this dataset. Two deep neural networks (DenseNet121 and KimiaNet) were used to extract deep features at 20× magnification. DenseNet121 was pre-trained on non-medical objects; KimiaNet has the same structure but was trained for cancer type classification on TCGA images. The extracted deep features were later used to detect each slide's acquisition site, and also for slide representation in image search.

Results: DenseNet's deep features could distinguish acquisition sites with 70% accuracy, whereas KimiaNet's deep features could reveal acquisition sites with more than 86% accuracy. These findings suggest that there are acquisition-site-specific patterns that can be picked up by deep neural networks. It was also shown that these medically irrelevant patterns can interfere with other applications of deep learning in digital pathology, namely image search.

Summary: This study shows that there are acquisition-site-specific patterns that can be used to identify tissue acquisition sites without any explicit training. Furthermore, it was observed that a model trained for cancer subtype classification exploited such medically irrelevant patterns to classify cancer types. Digital scanner configuration and noise, tissue stain variation and artifacts, and source-site patient demographics are among the factors that likely account for the observed bias. Therefore, researchers should be cautious of such bias when using histopathology datasets for developing and training deep networks.
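The site-detection experiment described above can be sketched in miniature: extract a feature vector per slide with a pre-trained network, then fit a simple classifier to predict the contributing institution. The sketch below is a hypothetical stand-in, not the paper's code: random 1024-dimensional vectors with a small per-site offset simulate deep features carrying a scanner/stain signature, and a logistic-regression probe checks whether the site is recoverable; all names, shapes, and scales are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_sites, slides_per_site, dim = 5, 40, 1024  # toy scale, not the paper's 140+ sites

# Simulate slide-level deep features: each site adds its own subtle bias
# vector, mimicking the site-specific signatures the paper reports
# networks can exploit.
site_bias = rng.normal(0.0, 0.5, size=(n_sites, dim))
X = np.vstack([rng.normal(0.0, 1.0, (slides_per_site, dim)) + site_bias[s]
               for s in range(n_sites)])
y = np.repeat(np.arange(n_sites), slides_per_site)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# A linear probe on the features: if it beats chance (1/n_sites), the
# features encode the acquisition site.
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"site-prediction accuracy: {acc:.2f}")
```

In the simulation the per-site offset makes the classes easily separable, so the probe scores far above the 20% chance level; the paper's point is that real DenseNet121 and KimiaNet features of TCGA slides behave the same way, reaching 70% and 86% site accuracy respectively.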

Original language: English (US)
Article number: 67
Journal: Diagnostic Pathology
Volume: 18
Issue number: 1
DOIs
State: Published - Dec 2023

Keywords

  • AI bias
  • AI ethics
  • Cancer
  • Deep Learning
  • Digital pathology
  • TCGA

ASJC Scopus subject areas

  • Pathology and Forensic Medicine
  • Histology
