AI Chatbot versus Human Clinician: Who Wins at Probabilistic Reasoning?
Researchers from Beth Israel Deaconess Medical Center (BIDMC) compared a chatbot's probabilistic reasoning with that of human clinicians. The results, published in JAMA Network Open, suggest that artificial intelligence could serve as a useful clinical decision support tool for physicians.
Comparison Study
Dr. Adam Rodman and his team at BIDMC conducted a study based on a national survey of more than 550 practitioners. They compared the probabilistic reasoning abilities of these practitioners to those of a large language model (LLM) chatbot, ChatGPT-4. The chatbot produced more accurate diagnostic probability estimates than the human clinicians did, especially when interpreting negative test results.
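The kind of probabilistic reasoning being tested is, at its core, a Bayesian update: how much should a negative test result lower the probability of disease, given the test's sensitivity and specificity? The sketch below illustrates the calculation; the numbers are hypothetical and are not taken from the study.

```python
def posterior_after_negative(prior, sensitivity, specificity):
    """P(disease | negative test) via Bayes' theorem."""
    p_neg_given_disease = 1 - sensitivity  # false-negative rate
    p_neg_given_healthy = specificity      # true-negative rate
    numerator = p_neg_given_disease * prior
    denominator = numerator + p_neg_given_healthy * (1 - prior)
    return numerator / denominator

# Hypothetical example: 20% pretest probability, test with
# 90% sensitivity and 95% specificity.
post = posterior_after_negative(0.20, 0.90, 0.95)
print(round(post, 3))  # 0.026
```

A negative result here drops the probability of disease from 20% to about 2.6%; clinicians in the study tended to misjudge updates of exactly this kind, which the chatbot handled more accurately.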
The Future of AI in Healthcare
Dr. Rodman is particularly interested in how highly skilled physicians' performance might change when supportive AI technologies are available in the clinic. He believes that despite their imperfections, AI chatbots like ChatGPT-4 could help humans make better decisions, potentially improving the overall quality of healthcare.
Further research into the collective use of human and artificial intelligence in healthcare is needed to fully understand the impact of AI on clinical decision-making.