Large Language Models (LLMs) have made significant strides in problem-solving and reasoning tasks. One key innovation is the Chain of Thought (CoT) prompting technique, which has proven effective across a range of challenging reasoning benchmarks. However, there is still much to learn about how CoT works and how it can be improved.
A recent study from Northwestern University, University of Liverpool, New Jersey Institute of Technology, and Rutgers University examined how the length of the reasoning steps in CoT prompts affects LLMs' problem-solving performance.
The study found that lengthening the reasoning steps in a prompt, even without adding any new information, improves LLMs' reasoning abilities, while shortening them degrades performance. Notably, even incorrect rationales can still yield favorable outcomes as long as they maintain the necessary length of inference.
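To make the idea concrete, here is a minimal Python sketch of what "lengthening without adding new information" can look like in practice: padding a zero-shot CoT prompt with instructions that force more intermediate steps. The function name, the `extra_steps` parameter, and the specific padding phrases are illustrative assumptions, not the authors' exact method.

```python
# Illustrative sketch (not the paper's exact method): stretch the
# reasoning portion of a zero-shot CoT prompt without adding any new
# task-relevant information.

def lengthen_cot_prompt(question: str, extra_steps: int = 3) -> str:
    """Build a zero-shot CoT prompt that nudges the model toward a
    longer reasoning chain. `extra_steps` is a hypothetical knob for
    how many padding instructions to append."""
    # Standard zero-shot CoT trigger phrase.
    prompt = f"Q: {question}\nA: Let's think step by step."
    # Padding instructions: they add length, but no new facts.
    padding = [
        "First, restate the question in your own words.",
        "Work through each intermediate quantity explicitly.",
        "Double-check each step before moving on to the next.",
    ]
    for instruction in padding[:extra_steps]:
        prompt += f" {instruction}"
    return prompt


print(lengthen_cot_prompt(
    "If a pen costs $2 and a notebook costs 3 times as much, "
    "what do both cost together?"
))
```

Under the study's findings, one would expect a prompt built this way to elicit more reasoning steps, and hence better accuracy, than the bare trigger phrase alone; comparing the two variants on the same question set is a straightforward way to test that.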
How much extra length helps depends on the complexity of the task, with more complex tasks benefiting most from extended inference sequences. These findings offer practical guidance for refining CoT strategies across complex NLP tasks and underscore that the length of the reasoning chain matters in its own right.
If you’re interested in learning more about this research, you can check out the paper [here].