Researchers at the Oxford Internet Institute are warning about the tendency of Large Language Models (LLMs) used in chatbots to hallucinate. These models are designed to produce helpful and convincing responses, but offer no guarantees regarding their accuracy or alignment with fact, and the researchers argue that this poses a direct threat to science and scientific truth.
According to a paper published in Nature Human Behaviour, LLMs are often treated as knowledge sources even though they are trained on data that may not be factually correct. Because their answers read as fluent and human-like, users can come to trust them as reliable information sources even when their responses have no basis in fact or present a biased or partial version of the truth.
To address this issue, the researchers urge the scientific community to use LLMs as “zero-shot translators”: rather than treating the model itself as a source of knowledge, the user supplies the relevant data and asks the model to transform that data into a conclusion or into code. Because every claim in the output should be traceable to the supplied input, this approach makes it far easier to verify that the result is factually correct and consistent with the data provided.
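The distinction between "knowledge source" and "translator" is easy to make concrete in code. The sketch below is only an illustration of the pattern, not the researchers' own tooling: the `call_llm` function and the sample records are hypothetical stand-ins for whatever chat API and data a scientist would actually use. The model is handed the records it must work from, asked only to restate them, and its output is then checked mechanically against the input.

```python
# A minimal sketch of the "zero-shot translator" pattern. `call_llm` is a
# hypothetical stand-in for a real chat/completion API; the records are toy data.
import json
from typing import Callable


def summarise_records(rows: list[dict], call_llm: Callable[[str], str]) -> str:
    """Ask the model only to restate the supplied records, not to recall facts."""
    prompt = (
        "Using ONLY the JSON records below, write one sentence per record "
        "stating its measured value. Do not add information that is not "
        "present in the records.\n\n"
        + json.dumps(rows, indent=2)
    )
    summary = call_llm(prompt)

    # Verification is possible precisely because every fact in the output must
    # come from the input: check that each supplied value appears in the text.
    for row in rows:
        if str(row["value"]) not in summary:
            raise ValueError(f"Output omits or alters the value for sample {row['sample']}")
    return summary


def fake_llm(prompt: str) -> str:
    # Placeholder for a real model call; returns a canned, faithful answer.
    return "Sample A measured 3.2 units. Sample B measured 4.7 units."


if __name__ == "__main__":
    data = [{"sample": "A", "value": 3.2}, {"sample": "B", "value": 4.7}]
    print(summarise_records(data, fake_llm))
```

The check at the end is the point of the pattern: because nothing in the output should go beyond the supplied records, a simple programmatic comparison can flag hallucinated or altered values, which is not possible when the model is asked to answer from its own training data.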
While LLMs will undoubtedly assist with scientific workflows, it is crucial for the scientific community to use them responsibly and to maintain clear expectations of what they can and cannot contribute. The researchers emphasize that information accuracy is essential in science and education and warn against treating LLMs as infallible sources of knowledge.