TY - JOUR
T1 - Comparative Analysis of Artificial Intelligence Virtual Assistant and Large Language Models in Post-Operative Care
AU - Borna, Sahar
AU - Gomez-Cabello, Cesar A.
AU - Pressman, Sophia M.
AU - Haider, Syed Ali
AU - Sehgal, Ajai
AU - Leibovich, Bradley C.
AU - Cole, Dave
AU - Forte, Antonio Jorge
N1 - Publisher Copyright:
© 2024 by the authors.
PY - 2024/5
Y1 - 2024/5
N2 - In postoperative care, patient education and follow-up are pivotal for enhancing the quality of care and satisfaction. Artificial intelligence virtual assistants (AIVA) and large language models (LLMs) like Google Bard and ChatGPT-4 offer avenues for addressing patient queries using natural language processing (NLP) techniques. However, the accuracy and appropriateness of the information vary across these platforms, necessitating a comparative study to evaluate their efficacy in this domain. We conducted a study comparing AIVA (using Google Dialogflow) with ChatGPT-4 and Google Bard, assessing accuracy, knowledge gap, and response appropriateness. AIVA demonstrated superior performance, with significantly higher accuracy (mean: 0.9) and a lower knowledge gap (mean: 0.1) compared with Bard and ChatGPT-4. Additionally, AIVA’s responses received higher Likert scores for appropriateness. Our findings suggest that specialized AI tools like AIVA are more effective in delivering precise and contextually relevant information for postoperative care than general-purpose LLMs. While ChatGPT-4 shows promise, its performance varies, particularly in verbal interactions. This underscores the importance of tailored AI solutions in healthcare, where accuracy and clarity are paramount. Our study highlights the necessity for further research and the development of customized AI solutions to address specific medical contexts and improve patient outcomes.
AB - In postoperative care, patient education and follow-up are pivotal for enhancing the quality of care and satisfaction. Artificial intelligence virtual assistants (AIVA) and large language models (LLMs) like Google Bard and ChatGPT-4 offer avenues for addressing patient queries using natural language processing (NLP) techniques. However, the accuracy and appropriateness of the information vary across these platforms, necessitating a comparative study to evaluate their efficacy in this domain. We conducted a study comparing AIVA (using Google Dialogflow) with ChatGPT-4 and Google Bard, assessing accuracy, knowledge gap, and response appropriateness. AIVA demonstrated superior performance, with significantly higher accuracy (mean: 0.9) and a lower knowledge gap (mean: 0.1) compared with Bard and ChatGPT-4. Additionally, AIVA’s responses received higher Likert scores for appropriateness. Our findings suggest that specialized AI tools like AIVA are more effective in delivering precise and contextually relevant information for postoperative care than general-purpose LLMs. While ChatGPT-4 shows promise, its performance varies, particularly in verbal interactions. This underscores the importance of tailored AI solutions in healthcare, where accuracy and clarity are paramount. Our study highlights the necessity for further research and the development of customized AI solutions to address specific medical contexts and improve patient outcomes.
KW - Bard
KW - ChatGPT
KW - artificial intelligence
KW - large language model
KW - machine learning
KW - natural language processing
UR - http://www.scopus.com/inward/record.url?scp=85194252945&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85194252945&partnerID=8YFLogxK
U2 - 10.3390/ejihpe14050093
DO - 10.3390/ejihpe14050093
M3 - Article
AN - SCOPUS:85194252945
SN - 2174-8144
VL - 14
SP - 1413
EP - 1424
JO - European Journal of Investigation in Health, Psychology and Education
JF - European Journal of Investigation in Health, Psychology and Education
IS - 5
ER -