A team of scientists from the University of Science and Technology of China and Tencent’s YouTu Lab has developed a tool to combat “hallucination” by artificial intelligence (AI) models.
Hallucination is the tendency of an AI model to confidently generate outputs that are not grounded in its training data. The problem permeates large language model (LLM) research, and its effects can be seen in models such as OpenAI’s ChatGPT and Anthropic’s Claude.