DeepMind Reveals GopherCite, a Model to Tackle Factual Inaccuracies in Language Models
Last year, DeepMind published several articles discussing large language models (LLMs), including a study of Gopher, its own large language model. Language-modeling technology, also under development at various other labs and companies, holds promise for enhancing numerous applications, from search engines to new conversational assistants.
DeepMind’s research highlighted the issue of “hallucinating” facts in language models, where models may generate plausible but false information. To address this problem, DeepMind introduced GopherCite, a model designed to provide evidence-backed answers to factual questions. GopherCite uses Google Search to find relevant web pages and quotes passages to support its responses. If the system cannot find sufficient evidence, it will answer with “I don’t know.”
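The answer-with-evidence loop described above can be sketched as follows. This is a toy illustration, not DeepMind's implementation: the keyword-overlap scorer, the in-memory corpus, and the threshold are all stand-ins for GopherCite's actual Google Search retrieval and trained reward model.

```python
# Hypothetical sketch of GopherCite-style "answer with a supporting quote,
# or abstain" behaviour. The corpus, scorer, and threshold below are
# illustrative stand-ins, not the real system's components.

from dataclasses import dataclass

@dataclass
class Evidence:
    url: str
    quote: str
    score: float = 0.0  # stand-in for a learned support score

def search(question: str) -> list[Evidence]:
    """Toy stand-in for retrieving candidate passages from the web."""
    corpus = [
        Evidence("https://example.org/a",
                 "Water boils at 100 degrees Celsius at sea level."),
        Evidence("https://example.org/b",
                 "The capital of France is Paris."),
    ]
    # Score each passage by naive keyword overlap with the question.
    q_words = set(question.lower().rstrip("?").split())
    for ev in corpus:
        overlap = q_words & set(ev.quote.lower().rstrip(".").split())
        ev.score = len(overlap) / max(len(q_words), 1)
    return sorted(corpus, key=lambda e: e.score, reverse=True)

def answer(question: str, threshold: float = 0.3) -> str:
    """Return an answer backed by a quoted source, or decline."""
    candidates = search(question)
    best = candidates[0] if candidates else None
    if best is None or best.score < threshold:
        # Abstain when no passage supports the answer strongly enough.
        return "I don't know."
    return f"{best.quote} [source: {best.url}]"

print(answer("What is the capital of France?"))
print(answer("Who wrote the Iliad?"))  # no supporting evidence in the corpus
```

The key design point is the abstention branch: rather than always producing an answer, the system declines when its best supporting quote falls below a confidence threshold.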
The comparison between Gopher and GopherCite illustrates the substantial improvement in the latter’s ability to provide trustworthy answers backed by evidence. In a user study, GopherCite answered fact-seeking questions correctly about 80% of the time and explanation-seeking questions about 67% of the time. However, it can still be misled by adversarial questions, indicating the need for further improvements.
While GopherCite is a significant step forward, DeepMind acknowledges that evidence citation alone is insufficient for ensuring overall safety and trustworthiness. They plan to continue their work in this area and address the issues presented with further research and development.
For more detailed information about DeepMind’s research and how GopherCite works, readers can refer to the relevant research literature and an FAQ answered by the model itself.