Integrating large language models in systematic reviews: a framework and case study using ROBINS-I for risk of bias assessment

Bashar Hasan, Samer Saadi, Noora S. Rajjoub, Moustafa Hegazi, Mohammad Al-Kordi, Farah Fleti, Magdoleen Farah, Irbaz B. Riaz, Imon Banerjee, Zhen Wang, Mohammad H Murad

Research output: Contribution to journal › Article › peer-review

Abstract

Large language models (LLMs) may facilitate and expedite systematic reviews, although the approach to integrating LLMs into the review process is unclear. This study evaluates GPT-4's agreement with human reviewers in assessing risk of bias using the Risk Of Bias In Non-randomised Studies of Interventions (ROBINS-I) tool and proposes a framework for integrating LLMs into systematic reviews. In the case study, raw per cent agreement was highest for the ROBINS-I domain of 'Classification of Intervention'. The Kendall agreement coefficient was highest for the domains of 'Participant Selection', 'Missing Data' and 'Measurement of Outcomes', suggesting moderate agreement in these domains. Raw agreement on the overall risk of bias across domains was 61% (Kendall coefficient=0.35). The proposed framework for integrating LLMs into systematic reviews consists of four domains: rationale for LLM use; protocol (task definition, model selection, prompt engineering, data entry methods, human role and success metrics); execution (iterative revisions to the protocol); and reporting. We identify five basic task types relevant to systematic reviews: selection, extraction, judgement, analysis and narration. Given the level of agreement with a human reviewer observed in the case study, pairing artificial intelligence with an independent human reviewer remains required.
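To make the reported agreement statistics concrete, the sketch below (not the authors' code; all ratings shown are hypothetical) illustrates one way to compute raw per cent agreement and a Kendall rank coefficient between GPT-4 and human ROBINS-I judgements. It assumes the judgements are treated as ordinal categories and that Kendall's tau-b (which handles tied ranks) is the statistic intended; the abstract does not specify the exact Kendall variant used.

```python
# Minimal sketch: agreement between GPT-4 and a human reviewer on
# ordinal ROBINS-I judgements. Ratings below are hypothetical.
import numpy as np
from scipy.stats import kendalltau

# Map the ordinal ROBINS-I judgement levels to integers.
LEVELS = {"low": 0, "moderate": 1, "serious": 2, "critical": 3}

human = ["low", "moderate", "serious", "moderate", "low"]       # hypothetical
gpt4 = ["low", "moderate", "moderate", "moderate", "serious"]   # hypothetical

h = np.array([LEVELS[r] for r in human])
g = np.array([LEVELS[r] for r in gpt4])

raw_agreement = float(np.mean(h == g))   # proportion of exact matches
tau, p_value = kendalltau(h, g)          # Kendall tau-b, ties allowed

print(f"raw agreement = {raw_agreement:.0%}, Kendall tau-b = {tau:.2f}")
```

Raw agreement counts only exact matches, whereas the Kendall coefficient credits GPT-4 for near-misses on the ordinal scale, which is why the two metrics can rank the ROBINS-I domains differently.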

Original language: English (US)
Journal: BMJ Evidence-Based Medicine
State: Accepted/In press - 2024

Keywords

  • Evidence-Based Practice
  • Methods
  • Systematic Reviews as Topic

ASJC Scopus subject areas

  • General Medicine

