Assessment and Optimization of Explainable Machine Learning Models Applied to Transcriptomic Data

Yongbing Zhao, Jinfeng Shao, Yan W. Asmann

Research output: Contribution to journal › Article › peer-review


Explainable artificial intelligence aims to interpret how machine learning models make decisions, and many model explainers have been developed in the computer vision field. However, understanding of the applicability of these model explainers to biological data is still lacking. In this study, we comprehensively evaluated multiple explainers by interpreting pre-trained models that predict tissue types from transcriptomic data and by identifying, for each sample, the genes with the greatest impact on model prediction. To improve the reproducibility and interpretability of results generated by model explainers, we proposed a series of optimization strategies for each explainer on two different model architectures: multilayer perceptron (MLP) and convolutional neural network (CNN). We observed three groups of explainer and model architecture combinations with high reproducibility. Group II, which contains three model explainers on aggregated MLP models, identified top contributing genes in different tissues that exhibited tissue-specific manifestation and were potential cancer biomarkers. In summary, our work provides novel insights and guidance for exploring biological mechanisms using explainable machine learning models.
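To illustrate the general idea of per-sample gene attribution described above, here is a minimal sketch of one simple explainer family (occlusion/perturbation-based attribution). This is not the paper's method or code: the model, weights, and baseline value are hypothetical, and real explainers in this setting (e.g., gradient- or SHAP-based) differ in detail. It only shows how a per-gene attribution score can be derived from prediction changes and used to rank top contributing genes for one sample.

```python
import numpy as np

def occlusion_attributions(model_fn, x, baseline=0.0):
    """Attribution for each feature (gene): drop in the model's predicted
    score when that feature is replaced by a baseline value."""
    base_score = model_fn(x)
    attr = np.empty_like(x, dtype=float)
    for j in range(x.shape[0]):
        x_occ = x.copy()
        x_occ[j] = baseline       # occlude one gene at a time
        attr[j] = base_score - model_fn(x_occ)
    return attr

# Toy stand-in for a pre-trained tissue classifier's score for one class:
# a linear scorer over 5 genes (weights are illustrative, not from the paper).
w = np.array([2.0, -1.0, 0.0, 0.5, 3.0])
model_fn = lambda x: float(x @ w)

sample = np.ones(5)                      # one sample's expression vector
attr = occlusion_attributions(model_fn, sample)
top_genes = np.argsort(-np.abs(attr))    # rank genes by |attribution|
# For a linear model with baseline 0, attr[j] == w[j] * sample[j],
# so the top-ranked gene here is index 4 (weight 3.0).
```

For a linear model this recovers weight-times-input exactly; for MLPs and CNNs the same perturb-and-measure loop applies, but attributions become input-dependent, which is why reproducibility across explainers and architectures is a nontrivial question.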

Original language: English (US)
Pages (from-to): 899-911
Number of pages: 13
Journal: Genomics, Proteomics and Bioinformatics
Issue number: 5
State: Published - Oct 2022


Keywords

  • Gene expression
  • Machine learning
  • Marker gene
  • Model interpretability
  • Omics data mining

ASJC Scopus subject areas

  • Biochemistry
  • Molecular Biology
  • Genetics
  • Computational Mathematics


