Enhancing Multimodal Alignment in Large Language Models with RLHF

Improved Multimodal Alignment in Large Multimodal Models

Building Large Multimodal Models (LMMs) is challenging because multimodal data is scarcer and of more variable quality than text-only datasets. Researchers from UC Berkeley, CMU, UIUC, UW–Madison, UMass Amherst, Microsoft Research, and the MIT-IBM Watson AI Lab have developed a solution.

Introducing LLaVA-RLHF, a vision-language model designed to improve multimodal alignment in LMMs through Reinforcement Learning from Human Feedback (RLHF). The approach collects human preferences that flag hallucinated responses and incorporates them via reinforcement learning to fine-tune the LMM. A key advantage is the relatively low annotation cost: roughly $3,000 for 10K human preferences.
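
To make the preference-learning step concrete, below is a minimal sketch of what a preference record and a reward-model training objective typically look like in RLHF pipelines. The record fields are hypothetical, and the Bradley-Terry-style pairwise loss is the standard objective for reward models in general, not a quote of the paper's code.

```python
import torch
import torch.nn.functional as F

# Hypothetical preference record: two responses to the same image/prompt,
# with a human label marking which response hallucinates less.
preference = {
    "image": "example_image.jpg",
    "prompt": "What is the man in the photo holding?",
    "chosen": "The man is holding a tennis racket.",
    "rejected": "The man is holding a bat while petting a dog.",  # hallucinated detail
}

def pairwise_reward_loss(reward_chosen: torch.Tensor,
                         reward_rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry-style loss for reward-model training: push the score of
    the human-preferred response above the score of the rejected one."""
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()
```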

Reward hacking is a known failure mode in RLHF: responses that earn high ratings from the reward model do not always align with human judgments. To address this, the researchers propose a data-efficient alternative that enhances the reward model's capacity using existing knowledge and data from larger language models.
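
The paper's own remedy centers on the factual augmentation described next, but a common general safeguard against reward hacking in RLHF is a KL penalty that keeps the fine-tuned policy close to a frozen reference model. The sketch below shows that standard penalty; the function name and the value of beta are illustrative assumptions.

```python
import torch

def kl_penalized_reward(rm_score: torch.Tensor,
                        policy_logprobs: torch.Tensor,
                        ref_logprobs: torch.Tensor,
                        beta: float = 0.1) -> torch.Tensor:
    """Standard RLHF safeguard (not specific to this paper): subtract an
    estimated KL divergence from the reward so the policy cannot drift
    arbitrarily far from the reference model just to chase high scores."""
    # Summing per-token log-ratios gives a simple per-sequence KL estimate.
    kl_estimate = (policy_logprobs - ref_logprobs).sum(dim=-1)
    return rm_score - beta * kl_estimate
```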

To further improve the overall capability of LLaVA-RLHF, the researchers use a stronger visual encoder and a larger language model. They also introduce the Factually Augmented RLHF algorithm, which calibrates the reward signal by supplementing it with additional ground-truth information, such as image captions or ground-truth multi-choice options.
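
A minimal sketch of the factual-augmentation idea, under the assumption that the reward model simply receives the ground-truth facts as extra context: with image captions or correct multi-choice answers in view, fabricated details in a response become easier to penalize. The template and function name below are illustrative, not the paper's exact format.

```python
def build_fact_augmented_input(prompt: str,
                               response: str,
                               ground_truth_facts: list[str]) -> str:
    """Assemble the text the reward model scores, prepending ground-truth
    facts (e.g., human-written image captions) so hallucinations in the
    response conflict visibly with the stated facts."""
    facts = "\n".join(f"- {fact}" for fact in ground_truth_facts)
    return (
        f"Ground-truth facts about the image:\n{facts}\n\n"
        f"User prompt: {prompt}\n"
        f"Response to score: {response}"
    )
```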

During the Supervised Fine-Tuning (SFT) stage, the researchers augment synthetic vision instruction-tuning data with high-quality, human-annotated multimodal data in conversation format, which improves the general capabilities of the LMM, as sketched below.
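
As a rough illustration of that augmentation step, this sketch blends the two data sources into one SFT training set; the simple concatenate-and-shuffle mixing is an assumption, not the paper's reported recipe.

```python
import random

def mix_sft_data(synthetic_examples: list,
                 human_annotated_examples: list,
                 seed: int = 0) -> list:
    """Illustrative SFT data mixing: combine synthetic vision instruction-tuning
    examples with high-quality human-annotated conversations, then shuffle so
    batches draw from both sources."""
    mixed = synthetic_examples + human_annotated_examples
    random.Random(seed).shuffle(mixed)
    return mixed
```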

To evaluate multimodal alignment in real-world scenarios, the researchers develop the MMHAL-BENCH benchmark. It specifically targets hallucinations, and its scores correlate closely with human assessments, particularly the anti-hallucination ratings.
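
For intuition, here is an illustrative scoring loop in the spirit of such a benchmark: a judge (for example, a strong LLM given ground-truth object annotations) rates each model answer, and heavily hallucinated answers receive low ratings. `model_answer_fn` and `judge_fn` are hypothetical callables, not MMHAL-BENCH's actual API.

```python
def average_judge_rating(examples: list,
                         model_answer_fn,
                         judge_fn) -> float:
    """Run the model over (image, question) pairs and average the judge's
    ratings; lower ratings indicate more hallucination."""
    ratings = []
    for ex in examples:
        answer = model_answer_fn(ex["image"], ex["question"])
        ratings.append(judge_fn(ex, answer))  # e.g., a 0-6 quality scale
    return sum(ratings) / len(ratings)
```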

The experimental evaluation of LLaVA-RLHF shows impressive results. It reaches 94% of the performance level of text-only GPT-4 on LLaVA-Bench, improves over other baselines by 60% on MMHAL-BENCH, and sets new performance records for LLaVA models.

The researchers have made their code, model, and data publicly available on GitHub.

This article was written by Aneesh Tickoo, a consulting intern at MarktechPost. Aneesh is pursuing a degree in Data Science and Artificial Intelligence at the Indian Institute of Technology (IIT), Bhilai. He focuses on machine learning projects, particularly in image processing, and enjoys collaborating with others on interesting projects.

More details are available in the team's paper and on the project page.

Credit for this research goes to the dedicated team of researchers involved in this project.
