InRanker: Distilling Large Neural Rankers for Practical Information Retrieval

Large neural rankers are difficult to deploy in real-world retrieval systems because of their significant computational requirements. To tackle this problem, researchers at UNICAMP, NeuralMind, and Zeta Alpha have proposed InRanker, a method for distilling large neural rankers into more compact versions with increased effectiveness in out-of-domain scenarios. The approach involves two distillation phases: the first trains the student on the teacher's soft labels for existing supervised data, and the second trains it on teacher soft labels for synthetic queries generated by a large language model. A minimal sketch of the shared distillation objective appears below.
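
The core of both phases is standard soft-label distillation: the student learns to reproduce the teacher's relevance distribution rather than hard binary labels. The PyTorch sketch below illustrates one such objective; the KL-divergence formulation and the two-class ("true"/"false") setup are illustrative assumptions, not necessarily the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor) -> torch.Tensor:
    """KL divergence between teacher and student relevance distributions.

    For a monoT5-style ranker, each row holds the logits of the
    "true"/"false" tokens that encode query-document relevance.
    """
    student_log_probs = F.log_softmax(student_logits, dim=-1)
    teacher_probs = F.softmax(teacher_logits, dim=-1)  # soft labels
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")

# Toy batch of 4 query-document pairs, 2 classes (relevant / not relevant).
teacher = torch.randn(4, 2)                       # frozen teacher outputs
student = torch.randn(4, 2, requires_grad=True)   # trainable student outputs
loss = distillation_loss(student, teacher)
loss.backward()
```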

The first phase familiarizes the student model with the ranking task using real-world queries from the MS MARCO dataset, while the second phase uses synthetic queries generated by a large language model to improve zero-shot generalization. Distillation lets much smaller students such as monoT5-60M and monoT5-220M absorb the teacher's knowledge and improve their effectiveness despite the large gap in size. A sketch of the second phase's data pipeline follows.
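
Because the second phase needs no human annotations, documents from the target corpus are paired with LLM-generated queries, and the large teacher supplies the soft labels. In the hedged sketch below, `llm_generate` and `teacher_score` are hypothetical stand-ins for the language model and the teacher ranker, and the prompt wording is illustrative rather than the paper's exact template.

```python
from typing import Callable, List, Tuple

def build_phase2_data(
    corpus: List[str],
    llm_generate: Callable[[str], str],          # hypothetical LLM wrapper
    teacher_score: Callable[[str, str], float],  # hypothetical teacher ranker
) -> List[Tuple[str, str, float]]:
    """Pair each document with a synthetic query and a teacher soft label."""
    examples = []
    for doc in corpus:
        prompt = ("Write a search query that the following passage answers.\n"
                  f"Passage: {doc}\nQuery:")
        query = llm_generate(prompt)        # synthetic query for this document
        label = teacher_score(query, doc)   # soft relevance label in [0, 1]
        examples.append((query, doc, label))
    return examples

# Smoke test with dummy stand-ins.
data = build_phase2_data(
    ["Tides are caused by the gravitational pull of the moon."],
    llm_generate=lambda p: "what causes tides",
    teacher_score=lambda q, d: 0.97,
)
print(data)
```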

The experiments demonstrate that smaller models distilled with the InRanker methodology become significantly more effective in out-of-domain scenarios, matching or even surpassing the performance of their larger counterparts across a range of test collections. This is particularly valuable for real-world applications with limited computational resources, as it offers a more practical and scalable path to strong retrieval quality.

In conclusion, InRanker presents a practical solution to the challenge of using large neural rankers in production environments, distilling their knowledge into smaller, more efficient models without compromising out-of-domain effectiveness. The approach addresses the computational constraints of deploying large models and opens new avenues for scalable and efficient information retrieval. For more information, check out the Paper and GitHub links.
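
For readers who want to try a distilled checkpoint, the sketch below scores a query-document pair in the standard monoT5 fashion with Hugging Face transformers. The model name is an assumption based on the project's release; any monoT5-style checkpoint can be substituted.

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Checkpoint name is assumed from the project's release; swap in any
# monoT5-style model if it differs.
name = "unicamp-dl/InRanker-small"
tokenizer = T5Tokenizer.from_pretrained(name)
model = T5ForConditionalGeneration.from_pretrained(name).eval()

query = "what causes tides"
doc = "Tides are caused by the gravitational pull of the moon and sun."
inputs = tokenizer(f"Query: {query} Document: {doc} Relevant:",
                   return_tensors="pt")

with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=1,
                         output_scores=True, return_dict_in_generate=True)

# monoT5 scores relevance as P("true") vs P("false") at the first step.
token_true = tokenizer.encode("true", add_special_tokens=False)[0]
token_false = tokenizer.encode("false", add_special_tokens=False)[0]
logits = out.scores[0][0, [token_false, token_true]]
score = torch.softmax(logits, dim=0)[1].item()
print(f"relevance score: {score:.3f}")
```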

Author: Nikhil is an intern consultant at Marktechpost. He is pursuing an integrated dual degree in Materials at the Indian Institute of Technology, Kharagpur. Nikhil is an AI/ML enthusiast who is always researching applications in fields like biomaterials and biomedical science. With a strong background in materials science, he is exploring new advancements and opportunities to contribute to the field.
