
Boosting Factual Knowledge in Large Language Models with DoLa Decoding Approach


Natural Language Processing (NLP) applications have benefited greatly from large language models (LLMs). However, LLMs still sometimes generate information that is inconsistent with real-world facts. This is a problem for high-stakes applications, such as those in clinical and legal settings, where trustworthy text generation is crucial.

The problem may lie in the maximum likelihood language modeling objective, which aims to minimize the divergence between the data distribution and the model distribution. In pursuing this objective, an LLM can assign non-zero probability to sentences that do not fully align with the knowledge embedded in its training data.
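Concretely, maximizing the likelihood of the training corpus is equivalent (up to a constant) to minimizing the forward KL divergence between the data distribution and the model distribution; in standard notation (a textbook identity, not a formula quoted from the paper):

\max_\theta \; \mathbb{E}_{x \sim p_{\text{data}}}\!\left[\log p_\theta(x)\right] \;\Longleftrightarrow\; \min_\theta \; \mathrm{KL}\!\left(p_{\text{data}} \,\|\, p_\theta\right)

Because p_\theta must spread probability mass over the entire data distribution, some mass inevitably lands on fluent but factually unsupported continuations.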

Researchers from MIT and Microsoft have proposed a solution that leverages the modular way knowledge is encoded across the layers of transformer LMs. Their contrastive decoding strategy computes the next-token distribution by contrasting the predictions of deeper (mature) layers against those of intermediate or shallower (premature) ones, where factual knowledge is less developed. This approach, called Decoding by Contrasting Layers (DoLa), improves the factuality of LLMs without external knowledge retrieval or further fine-tuning.
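Here is a minimal sketch of the layer-contrasting idea in Python, using the Hugging Face transformers API. The checkpoint name and the fixed premature layer index are illustrative assumptions; the actual DoLa method selects the premature layer dynamically per token via Jensen-Shannon divergence against the final layer.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "huggyllama/llama-7b"  # illustrative checkpoint, not prescribed by the paper
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# hidden_states[0] is the embedding layer; hidden_states[-1] is the final,
# already-normalized layer. Pick an early "premature" layer to contrast.
premature_idx = 16  # fixed here for simplicity; DoLa chooses this dynamically
final_hidden = out.hidden_states[-1][:, -1, :]
# Apply the final RMSNorm (Llama-specific attribute) so the premature layer's
# states live in the same space before the unembedding projection.
early_hidden = model.model.norm(out.hidden_states[premature_idx][:, -1, :])

log_p_final = torch.log_softmax(model.lm_head(final_hidden), dim=-1)
log_p_early = torch.log_softmax(model.lm_head(early_hidden), dim=-1)

# Keep only tokens the mature layer finds plausible, then upweight tokens
# whose probability grew between the premature and mature layers.
alpha = 0.1
plausible = log_p_final >= log_p_final.max(dim=-1, keepdim=True).values + math.log(alpha)
contrast = (log_p_final - log_p_early).masked_fill(~plausible, float("-inf"))

next_token = contrast.argmax(dim=-1)
print(tokenizer.decode(next_token))
```

The plausibility mask is the key safeguard: without it, the contrast would amplify tokens that both layers consider unlikely, since a log-probability ratio can be large even when both probabilities are tiny.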

Experimental results show that DoLa improves the truthfulness of LLaMA-family models on TruthfulQA and FACTOR. It also shows promise for factual reasoning on StrategyQA and GSM8K. In open-ended text generation, DoLa produces more informative and factual responses than the original decoding approach, as judged by GPT-4, while adding only a small latency overhead to decoding.

However, the researchers did not explore the model's performance in other regimes, such as instruction following or incorporating human feedback. The method also relies solely on the preexisting architecture and parameters, rather than using human labels or external factual knowledge sources for fine-tuning. Future work could integrate such components with the decoding technique to overcome these limitations.

Conclusion

The researchers' work on DoLa presents a promising approach to improving the factuality and trustworthiness of large language models. By prioritizing information from deeper layers and downplaying that from intermediate or shallower ones, DoLa reduces hallucinations and improves the reliability of generated text. It offers a potential solution for high-stakes applications in clinical, legal, and other domains where accurate and dependable text is crucial.

For more information, you can read the paper and access the GitHub repository.

If you’re interested in AI research news, cool projects, and more, join our ML SubReddit, Facebook Community, Discord Channel, and Email Newsletter.


