
Automated Reasoning and Tool Usage (ART): Adapting Language Models for Multistep Reasoning


Automated Reasoning and Tool Usage (ART) is a framework developed by researchers from the University of Washington, Microsoft, Meta, the University of California, and the Allen Institute for AI. It allows large language models (LLMs) to adapt to new tasks quickly through in-context learning: instead of fine-tuning the model or annotating large datasets, ART provides the LLM with a few demonstrations and natural language instructions.

However, LLMs still struggle with multistep reasoning, arithmetic, and access to up-to-date information. To address these limitations, recent research suggests either giving LLMs access to external tools that handle these sub-tasks or prompting them to emulate a chain of reasoning for multistep tasks.

ART addresses this challenge by automatically creating decompositions (multistep reasoning chains) for instances of new tasks. It retrieves demonstrations of similar tasks from a task library, enabling few-shot decomposition and tool use. ART uses a flexible query language that makes it easy to parse intermediate steps, pause generation to call external tools, and resume it once their output has been incorporated.
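The generate-pause-resume loop described here can be sketched roughly as follows. All names (`call_llm`, `TOOLS`, the `[tool] input` and `#1:` markers) are illustrative assumptions about the query language, not ART's actual implementation, and the LLM is stubbed out with a fixed decomposition:

```python
import re

# Toy tool library: a single code-execution-style tool.
TOOLS = {
    "calculator": lambda expr: str(eval(expr)),  # eval is fine for this toy demo
}

def call_llm(program):
    """Stub LLM returning the next step of a fixed decomposition.

    A real implementation would prompt a language model with `program`
    and stop generation at tool-call or end-of-task markers.
    """
    if "Ans:" in program:
        return "[EOQ]"                       # end-of-task marker
    if "#1:" in program:
        return "Ans: #1\n"                   # answer refers to the tool output
    return "Q1: [calculator] (37 + 5) * 2\n" # first step is a tool call

def run_art(task_prompt):
    """Generate a decomposition, pausing to run tools and resuming after."""
    program = task_prompt
    while True:
        step = call_llm(program)
        if step == "[EOQ]":                  # model signals the task is done
            return program
        program += step
        match = re.search(r"\[(\w+)\] (.+)", step)
        if match and match.group(1) in TOOLS:
            # Pause generation, run the external tool, splice its output back in.
            result = TOOLS[match.group(1)](match.group(2))
            program += f"#1: {result}\n"

print(run_art("Input: What is (37 + 5) * 2?\n"))
```

The key design point is that the program string itself is the shared state: tool outputs are appended in the same format the model was shown in its demonstrations, so generation can simply continue from there.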

At each step, the framework also selects and invokes the most suitable tool, such as a search engine or code execution. The demonstrations show the LLM both how to decompose instances of related tasks and how to choose and use tools from the tool library.
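As a toy illustration of retrieving demonstrations of similar tasks from the task library, one could rank entries by word overlap with the new task's description. The paper groups tasks by the skills they require, so this scoring function is purely a stand-in:

```python
# Illustrative task-library retrieval by lexical overlap; the entry schema
# and scoring are assumptions for this sketch, not ART's actual method.

def overlap_score(a, b):
    """Jaccard similarity between two task descriptions."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def retrieve_demos(library, new_task_desc, k=2):
    """Return the k library entries most similar to the new task."""
    ranked = sorted(
        library,
        key=lambda entry: overlap_score(entry["description"], new_task_desc),
        reverse=True,
    )
    return ranked[:k]

library = [
    {"description": "solve arithmetic word problems", "demo": "..."},
    {"description": "translate English to French", "demo": "..."},
    {"description": "answer questions using web search", "demo": "..."},
]
best = retrieve_demos(library, "solve grade school math word problems", k=1)
```

In practice any similarity signal (embeddings, skill labels, human curation) could fill this role; the point is only that similar tasks supply the few-shot decompositions.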

ART has been evaluated on a variety of tasks, including BigBench tasks, MMLU tasks, and tasks from related tool-use research. It consistently matches or surpasses hand-crafted reasoning chains on most tasks, and performance improves further when tools are allowed.

ART outperforms direct few-shot prompting on both BigBench and MMLU tasks. On tasks requiring mathematical and algorithmic reasoning, it also outperforms the best published GPT-3 results, which include supervision for decomposition and tool use.

One advantage of ART is that the task and tool libraries can be updated with new examples, allowing humans to interact with and refine the reasoning process. This makes it possible to improve performance on any given task with minimal human input.
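A human-in-the-loop update of this kind can be as simple as appending a corrected demonstration to the task library and rebuilding the few-shot prompt. The schema and the demonstration text below are illustrative assumptions, not ART's actual format:

```python
# Hypothetical task-library entries; the {"task", "demo"} schema is an
# assumption for illustration only.
task_library = [
    {
        "task": "anachronisms",
        "demo": (
            "Input: President Lincoln sent an email to his cabinet.\n"
            "Q1: [search] when was email invented\n"
            "#1: Email was invented in 1971.\n"
            "Ans: Yes, this is an anachronism.\n"
        ),
    },
]

def add_demo(library, task, demo):
    """Human feedback step: append a new or corrected demonstration."""
    library.append({"task": task, "demo": demo})

def build_prompt(library, new_input, k=2):
    """Few-shot prompt: up to k library demos followed by the new input."""
    demos = "\n".join(entry["demo"] for entry in library[:k])
    return demos + "\nInput: " + new_input

# A human adds one corrected demonstration, and every later prompt benefits.
add_demo(
    task_library,
    "arithmetic",
    "Input: What is 12 * 7?\nQ1: [calculator] 12 * 7\n#1: 84\nAns: 84\n",
)
prompt = build_prompt(task_library, "Was Caesar photographed in Rome?")
```

Because the libraries are just prompt material, no retraining is needed: a single edited demonstration immediately changes how the model decomposes future instances.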

Overall, ART is a promising framework that automates the reasoning process for new tasks and leverages external tools to enhance performance. It offers a flexible and efficient approach to adapting LLMs to new tasks and tools, providing a significant advance in artificial intelligence capabilities.

