Putting ChatGPT's Medical Advice to the (Turing) Test, by Oded Nov, Nina Singh, and Devin Mann

Abstract: Objective: Assess the feasibility of using ChatGPT or a similar AI-based chatbot for patient-provider communication. Participants: A sample of 430 study participants aged 18 and above; 53.2% of respondents analyzed were women, and their average age was 47.1. Non-administrative patient-provider interactions were extracted from the EHR. Patients' questions were placed in ChatGPT with a request for the chatbot to respond using approximately the same word count as the human provider's response. In the survey, each patient's question was followed by a provider- or ChatGPT-generated response. Participants were informed that five responses were provider-generated and five were chatbot-generated. Participants were asked, and incentivized financially, to correctly identify the response source. Participants were also asked about their trust in chatbots' functions in patient-provider communication, using a Likert scale of 1-5. Results: The correct classification of responses ranged between 49.0% and 85.7% for different questions.
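The length-matching step described above (asking the chatbot to answer in roughly the same word count as the human provider) can be sketched as a small helper. The function name and prompt wording are illustrative assumptions; the paper does not publish its exact prompt:

```python
def build_length_matched_prompt(patient_question: str, provider_response: str) -> str:
    """Build a chatbot prompt that requests a reply of approximately the
    same word count as the human provider's response.

    Hypothetical wording -- a sketch of the study's protocol, not its
    actual prompt text.
    """
    target_words = len(provider_response.split())
    return (
        f"Please answer the following patient question in approximately "
        f"{target_words} words.\n\n"
        f"Patient question: {patient_question}"
    )

# Example: a 9-word provider response yields a ~9-word target.
prompt = build_length_matched_prompt(
    "Can I take ibuprofen with my blood pressure medication?",
    "Please avoid ibuprofen; it can raise blood pressure further.",
)
```

The resulting prompt would then be sent to ChatGPT, and the chatbot's reply paired against the provider's original response in the survey.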