Study Finds Parents Trust ChatGPT More Than Doctors About Their Children’s Health

No more pediatricians or general practitioners: when it comes to their children’s health, parents would place more trust in the artificial intelligence tool ChatGPT, reports Vice. That is the finding of a study recently published in the Journal of Pediatric Psychology and carried out by researchers at the University of Kansas in the United States.

The study aimed to determine whether parents found text generated by ChatGPT as reliable as text written by a medical expert. The researchers studied the behavior of 116 parents aged 18 to 65. Participants first completed a baseline assessment of their behavioral intentions regarding pediatric health care, then evaluated texts generated either by an expert or by ChatGPT.

ChatGPT deemed more reliable than a medical expert by some parents

“We started this research right after the launch of ChatGPT because we were concerned that parents were using this new tool to gather information about their children’s health,” explains Calissa Leslie-Miller, lead author of the study. She continues: “Parents often turn to the internet for advice, so we wanted to understand what their use of ChatGPT would look like and whether we should be concerned.”

The study found that ChatGPT can influence parents’ behavior toward their children regarding medication, sleep and diet. Parents saw “little difference” between ChatGPT’s statements and a doctor’s in terms of morality, reliability, expertise, accuracy and trustworthiness. More alarmingly, parents who did perceive a difference leaned toward ChatGPT on reliability and accuracy. Participants also indicated that they would be more likely to trust information from ChatGPT than from an expert.


“This result surprised us, especially since the study took place very soon after ChatGPT’s launch,” says Calissa Leslie-Miller. “AI is embedded in digital content in ways that are sometimes implicit, and people sometimes have difficulty distinguishing AI-generated text from content written by an expert,” she adds.

The main problem, according to her, is that when ChatGPT lacks sufficient context to answer a question, the system produces a “hallucination,” fabricating an answer from sometimes random facts. It can also pass on erroneous information if its training data has not been updated with the latest studies and scientific articles published on the subject.

Calissa Leslie-Miller warns: “In the field of child health, the consequences can be considerable. We fear that people will increasingly rely on AI for health advice, without expert supervision. We absolutely must tackle the problem.” While AI holds great potential, it is not an expert, and much of the information it provides does not come from expert sources either.

Source: www.slate.fr