
Improving AI Reasoning with Modular Framework: SCREWS and Heterogeneous Resampling

Large Language Models (LLMs) have been successful on a wide range of reasoning tasks. However, their outputs often need to be refined iteratively to reach an accurate answer. Iterative refinement assumes that each successive revision improves on the last, but that is not always the case: a revision can introduce new errors and turn a correct answer into an incorrect one. This highlights the need for a more modular approach to revising model outputs.

Past research on iterative refinement has typically relied on a single, fixed reasoning technique. Humans, by contrast, are more adaptable and switch techniques depending on the situation. To address this, researchers from ETH Zurich and Microsoft Semantic Machines have developed SCREWS, a modular framework for reasoning with revisions.

SCREWS consists of three core modules: Sampling, Conditional Resampling, and Selection. Each module can be instantiated with different submodules depending on the task and the input, and the submodules can be freely combined. This modular design makes it possible to explore a variety of strategies for revising a model's output.
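To make the modular design concrete, here is a minimal, hypothetical sketch (not the authors' implementation) of how the three stages could be composed as interchangeable submodules. The `llm` callable and the specific prompts are assumptions for illustration only.

```python
from typing import Callable, List

# Hypothetical sketch of a SCREWS-style pipeline: each stage is a pluggable
# submodule, so different sampling, resampling, and selection strategies can
# be swapped in per task. `llm` stands in for any text-in/text-out LLM call.

def chain_of_thought_sample(llm: Callable[[str], str], question: str) -> str:
    """Sampling submodule: draw an initial answer, e.g. via chain of thought."""
    return llm(f"Answer step by step:\n{question}")

def self_refine_resample(llm: Callable[[str], str], question: str, draft: str) -> str:
    """Conditional Resampling submodule: revise the draft only if the model
    itself judges that the draft contains an error."""
    verdict = llm(f"Question: {question}\nDraft answer: {draft}\n"
                  "Does this draft contain an error? Answer yes or no.")
    if "yes" in verdict.lower():
        return llm(f"Question: {question}\nDraft answer: {draft}\n"
                   "Revise the draft to fix the error.")
    return draft

def model_based_select(llm: Callable[[str], str], question: str,
                       candidates: List[str]) -> str:
    """Selection submodule: let the model pick among the original and revised
    answers instead of always trusting the latest revision."""
    listing = "\n".join(f"({i}) {c}" for i, c in enumerate(candidates))
    choice = llm(f"Question: {question}\nCandidate answers:\n{listing}\n"
                 "Reply with the number of the best answer.")
    for i, candidate in enumerate(candidates):
        if str(i) in choice:
            return candidate
    return candidates[-1]  # fall back to the latest candidate

def screws_pipeline(llm: Callable[[str], str], question: str) -> str:
    draft = chain_of_thought_sample(llm, question)
    revision = self_refine_resample(llm, question, draft)
    return model_based_select(llm, question, [draft, revision])
```

The key design point is that the final Selection step can prefer the original sample over the revision, which is what guards against refinements that make an answer worse.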

The researchers evaluated SCREWS with ChatGPT and GPT-4 on tasks such as multi-hop question answering, arithmetic reasoning, and code debugging. Their proposed strategies showed substantial gains over standard sample-and-resample baselines, improving performance by 10-15%.

The value of SCREWS lies in its ability to combine framework components to strengthen self-refinement approaches. For example, pairing model-based selection with self-refinement further improves overall performance. The modular design also leaves room for additional submodules, such as cached memory or online search, fine-tuned models or external verifiers, and selection based on human input or an oracle.

Overall, SCREWS offers a flexible and effective framework for refining the output of large language models. It opens up possibilities for improving reasoning tasks and offers valuable insights into enhancing the capabilities of AI systems.

For more information, you can check out the paper and join our ML SubReddit, Facebook Community, and Discord Channel for the latest AI research news, projects, and more.

About the Author:
Aneesh Tickoo is a consulting intern at MarktechPost. He is currently pursuing an undergraduate degree in Data Science and Artificial Intelligence from the Indian Institute of Technology (IIT), Bhilai. Aneesh is passionate about building solutions around image processing and enjoys collaborating on interesting projects.
