TY - GEN
T1 - Graph-Based Fusion of Imaging and Non-Imaging Data for Disease Trajectory Prediction
AU - Tariq, Amara
AU - Tang, Siyi
AU - Sakhi, Hifza
AU - Celi, Leo Anthony
AU - Newsome, Janice M.
AU - Rubin, Daniel
AU - Trivedi, Hari
AU - Gichoya, Judy
AU - Patel, Bhavik
AU - Banerjee, Imon
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - This study proposes a graph convolutional neural network (GCN) architecture for the fusion of radiological imaging and non-imaging tabular electronic health records (EHR) for clinical event prediction. We focused on a cohort of hospitalized patients with a positive RT-PCR test for COVID-19 and developed GCN-based models to predict three dependent clinical events (discharge from hospital, admission to the ICU, and mortality) using demographics, billing codes for procedures and diagnoses, and chest X-rays. We hypothesized that the two-fold learning opportunity provided by the GCN is ideal for fusing imaging information and tabular data as node and edge features, respectively. Our experiments support this hypothesis: GCN-based predictive models outperform single-modality and traditional fusion models. We compared the proposed models against two variations of imaging-based models, a DenseNet-121 architecture with learnable classification layers and Random Forest classifiers using a disease severity score estimated by a pre-trained convolutional neural network; the GCN-based model outperforms both imaging-only methods. We also validated our models on an external dataset, where the GCN showed valuable generalization capabilities. We observed that the edge-formation function can be adapted even after training the GCN model without limiting the model's application scope, and our models take advantage of this fact to generalize to external data.
AB - This study proposes a graph convolutional neural network (GCN) architecture for the fusion of radiological imaging and non-imaging tabular electronic health records (EHR) for clinical event prediction. We focused on a cohort of hospitalized patients with a positive RT-PCR test for COVID-19 and developed GCN-based models to predict three dependent clinical events (discharge from hospital, admission to the ICU, and mortality) using demographics, billing codes for procedures and diagnoses, and chest X-rays. We hypothesized that the two-fold learning opportunity provided by the GCN is ideal for fusing imaging information and tabular data as node and edge features, respectively. Our experiments support this hypothesis: GCN-based predictive models outperform single-modality and traditional fusion models. We compared the proposed models against two variations of imaging-based models, a DenseNet-121 architecture with learnable classification layers and Random Forest classifiers using a disease severity score estimated by a pre-trained convolutional neural network; the GCN-based model outperforms both imaging-only methods. We also validated our models on an external dataset, where the GCN showed valuable generalization capabilities. We observed that the edge-formation function can be adapted even after training the GCN model without limiting the model's application scope, and our models take advantage of this fact to generalize to external data.
KW - clinical event prediction
KW - graph convolutional neural network
KW - multi-modal data fusion
UR - http://www.scopus.com/inward/record.url?scp=85160660392&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85160660392&partnerID=8YFLogxK
U2 - 10.1109/NER52421.2023.10123892
DO - 10.1109/NER52421.2023.10123892
M3 - Conference contribution
AN - SCOPUS:85160660392
T3 - International IEEE/EMBS Conference on Neural Engineering, NER
BT - 11th International IEEE/EMBS Conference on Neural Engineering, NER 2023 - Proceedings
PB - IEEE Computer Society
T2 - 11th International IEEE/EMBS Conference on Neural Engineering, NER 2023
Y2 - 25 April 2023 through 27 April 2023
ER -