The Significance of Retrieval Augmented Language Models
Textual material makes up a large share of the web, and surfacing the most current, relevant information is a central problem in information retrieval. Question-answering systems built on large language models (LLMs), such as ChatGPT, have become increasingly popular, but they often hallucinate, presenting fabricated statements as fact. This is where Retrieval Augmented Language Models (RALMs) come in as a potential solution.
Understanding Retrieval Augmented Language Models
In contrast to traditional LLMs, which rely solely on parametric memory, RALMs pull information from an external document database. Because this database can hold multiple versions of online documents, the most recent version can be retrieved efficiently. RALMs have been shown to excel at answering factual questions, but they struggle to account for timing when the underlying information is continuously updated.
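The basic RALM loop can be illustrated in a few lines: rank documents from an external store against the query and hand the top matches to the generator. This is a minimal sketch using toy bag-of-words cosine similarity, not the learned retriever an actual RALM such as Atlas uses; the function names and corpus are hypothetical.

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    # Toy bag-of-words cosine similarity between two strings.
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[t] * cb[t] for t in ca)
    na = sqrt(sum(v * v for v in ca.values()))
    nb = sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    # Rank the external documents by similarity to the query and
    # return the top-k passages to prepend to the LLM's prompt.
    ranked = sorted(docs, key=lambda d: cosine(query, d), reverse=True)
    return ranked[:k]

docs = [
    "The 2022 World Cup was held in Qatar.",
    "Retrieval augmented models query an external index.",
    "Python is a programming language.",
]
top = retrieve("which country held the world cup", docs, k=1)
```

In a full system, `top` would be concatenated with the question before generation, so the answer is grounded in retrieved text rather than parametric memory alone.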
A recent study by San Jose State University presents TempRALM, a model that retrieves documents that are both accurate and time-relevant. TempRALM was designed to improve the retrieval and ranking stage of Atlas, a state-of-the-art RALM. Unlike other RALMs, the new model scores candidate documents on both semantic and temporal relevance.
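The idea of scoring on both semantics and time can be sketched as a weighted combination of a semantic-similarity score and a recency score. The linear decay, the `alpha` weight, and all names below are illustrative assumptions, not the paper's actual formulation.

```python
from datetime import date

def temporal_score(query_date, doc_date, decay_days=365.0):
    # Hypothetical recency term: decays linearly with the gap (in days)
    # between the query date and the document's timestamp.
    gap = abs((query_date - doc_date).days)
    return max(0.0, 1.0 - gap / decay_days)

def rank_with_time(query_date, docs, semantic_scores, alpha=0.5):
    # Combine semantic relevance with temporal relevance, then re-rank.
    # docs is a list of (text, publication_date) pairs.
    combined = [
        (alpha * s + (1 - alpha) * temporal_score(query_date, d_date), text)
        for (text, d_date), s in zip(docs, semantic_scores)
    ]
    return [text for _, text in sorted(combined, reverse=True)]

docs = [
    ("2019 champions announcement", date(2019, 6, 1)),
    ("2023 champions announcement", date(2023, 6, 1)),
]
# With equal semantic scores, the newer document wins on recency.
ranking = rank_with_time(date(2023, 9, 1), docs, semantic_scores=[0.9, 0.9])
```

A purely semantic retriever would treat the two announcements as ties; adding the temporal term is what lets the model prefer the current version of a repeatedly updated fact.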
Enhancements and Results
Experimenting with TempRALM's parameters, the team found that it outperforms the standard Atlas model by up to 74% while using fewer computational resources: TempRALM requires no extra computational power and no changes to the document index. The team plans to explore these findings further, including applications in fact-checking, recommendation systems, and dialogue agents.
Dhanshree Shenwai is a Computer Science Engineer with a great deal of FinTech experience. She is passionate about exploring new technologies and advancements in today’s evolving world.