OpenAI tool that converts conversations into text appears to make things up

Whisper, a transcription tool from OpenAI, appears not to be performing as well as it should. The software, which converts conversations and recorded audio into written text, appears to make up things that were never said.

Several software engineers, developers and academic researchers have expressed their concerns about Whisper to the news agency AP. Various tests show that the transcription tool fabricates information that was never said, reportedly including medical treatments and racial commentary that appeared in texts produced by Whisper.

It is well known that AI chatbots sometimes invent incorrect information. But you would not expect this from a tool like Whisper, which is fed audio by the user and does not, for example, search databases for information. It is all the more concerning because Whisper is already used in hospitals and other medical institutions, for instance to convert conversations with patients into text, even though OpenAI has always discouraged its use in such situations.
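For context, this is roughly how Whisper is typically run locally with OpenAI's open-source whisper Python package; the file name audio.mp3 is just a placeholder, and the output is whatever text the model generates for the supplied audio, hallucinations included.

```python
# Minimal sketch: transcribe a local audio file with the open-source
# "whisper" package (pip install openai-whisper). The file name is a
# placeholder for illustration.
import whisper

model = whisper.load_model("base")      # load a small pre-trained model
result = model.transcribe("audio.mp3")  # text is generated by the model, not looked up anywhere
print(result["text"])                   # the transcript, which may contain invented passages
```

The key point the example illustrates: the transcript comes entirely from the model's own generation step, so any errors it makes are baked into the output text.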

Many errors in examined texts

A University of Michigan researcher found so-called hallucinations in eight out of every ten recordings he examined. A machine-learning engineer who studied 100 hours of Whisper transcriptions found hallucinations in more than half of them. And one developer AP spoke to found them in almost all of the 26,000 transcripts he created with Whisper.

OpenAI responds that it is continuously working to improve its software, including reducing hallucinations, and thanks the researchers for their findings, TechCrunch writes.


Source: www.bright.nl