TY - JOUR
T1 - Comparative effectiveness of tumor response assessment methods
T2 - Standard of care versus computer-assisted response evaluation
AU - Allen, Brian C.
AU - Florez, Edward
AU - Sirous, Reza
AU - Lirette, Seth T.
AU - Griswold, Michael
AU - Remer, Erick M.
AU - Wang, Zhen J.
AU - Bieszczad, Jacob E.
AU - Cox, Kelly L.
AU - Goenka, Ajit H.
AU - Howard-Claudio, Candace M.
AU - Kang, Hyunseon C.
AU - Nandwana, Sadhna B.
AU - Sanyal, Rupan
AU - Shinagare, Atul B.
AU - Clark Henegan, J.
AU - Storrs, Judd
AU - Davenport, Matthew S.
AU - Ganeshan, Balaji
AU - Vasanji, Amit
AU - Rini, Brian
AU - Smith, Andrew D.
N1 - Publisher Copyright:
© 2018 American Society of Clinical Oncology.
PY - 2017
Y1 - 2017
AB - Purpose To compare the effectiveness of metastatic tumor response evaluation with computed tomography using computer-assisted versus manual methods. Materials and Methods In this institutional review board-approved, Health Insurance Portability and Accountability Act-compliant retrospective study, 11 readers from 10 different institutions independently categorized tumor response according to three different therapeutic response criteria by using paired baseline and initial post-therapy computed tomography studies from 20 randomly selected patients with metastatic renal cell carcinoma who were treated with sunitinib as part of a completed phase III multi-institutional study. Images were evaluated with a manual tumor response evaluation method (standard of care) and with computer-assisted response evaluation (CARE) that included stepwise guidance, interactive error identification and correction methods, automated tumor metric extraction, calculations, response categorization, and data and image archiving. A crossover design, patient randomization, and 2-week washout period were used to reduce recall bias. Comparative effectiveness metrics included error rate and mean patient evaluation time. Results The standard-of-care method, on average, was associated with one or more errors in 30.5% (6.1 of 20) of patients, whereas CARE had a 0.0% (0.0 of 20) error rate (P < .001). The most common errors were related to data transfer and arithmetic calculation. In patients with errors, the median number of error types was 1 (range, 1 to 3). Mean patient evaluation time with CARE was approximately half that of the standard-of-care method (6.4 minutes v 13.1 minutes; P < .001). Conclusion CARE reduced errors and evaluation time, indicating better overall effectiveness than manual tumor response evaluation methods that are the current standard of care.
UR - http://www.scopus.com/inward/record.url?scp=85075398747&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85075398747&partnerID=8YFLogxK
U2 - 10.1200/CCI.17.00026
DO - 10.1200/CCI.17.00026
M3 - Article
C2 - 30657391
AN - SCOPUS:85075398747
SN - 2473-4276
VL - 2017
SP - 1
EP - 16
JO - JCO Clinical Cancer Informatics
JF - JCO Clinical Cancer Informatics
IS - 1
ER -