Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models

Virginia Tech
ICML 2024

Abstract

Current literature, aiming to surpass the "Chain-of-Thought" approach, often resorts to external modi operandi involving halting, modifying, and then resuming the generation process to boost Large Language Models' (LLMs) reasoning capacities. Due to their myopic perspective, these approaches escalate the number of query requests, leading to increased costs, memory, and computational overheads. Addressing this, we propose the Algorithm of Thoughts---a novel strategy that propels LLMs through algorithmic reasoning pathways. By employing algorithmic examples fully in-context, this overarching view of the whole process exploits the innate recurrence dynamics of LLMs, expanding their idea exploration with merely one or a few queries. Our technique outperforms earlier single-query methods and even more recent multi-query strategies that employ extensive tree search algorithms, while using significantly fewer tokens. Intriguingly, our results suggest that instructing an LLM using an algorithm can lead to performance surpassing that of the algorithm itself, hinting at the LLM's inherent ability to weave its intuition into optimized searches. We probe into the underpinnings of our method's efficacy and its nuances in application.
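To make the single-query setup concrete, the sketch below shows one way such a prompt could be assembled in Python: a worked, search-style trace for one game-of-24 instance is placed fully in-context, and the new problem is appended so the model can continue the same explore-and-backtrack pattern within a single generation. The example trace, the build_aot_prompt helper, and the query_llm placeholder are illustrative assumptions for this page, not the paper's released prompts or any particular provider's API.

# Minimal sketch of an AoT-style single-query setup (illustrative, not the paper's code).
# The in-context example verbalizes a small depth-first search with backtracking.

ALGORITHMIC_EXAMPLE = """\
Use the four numbers and basic arithmetic operations (+ - * /) to obtain 24.
Input: 8 6 4 4
1. First operation 8 - 4 = 4: left with 4 6 4. No way to reach 24, backtrack.
2. First operation 6 - 4 = 2: left with 8 4 2.
   Try 8 + 4 = 12: left with 12 2. Then 12 * 2 = 24. Found it!
Answer: (6 - 4) * (8 + 4) = 24
"""

def build_aot_prompt(task_numbers: str) -> str:
    # Concatenate the algorithmic in-context example with the new problem so the
    # model continues the same search-and-backtrack pattern in one pass.
    return ALGORITHMIC_EXAMPLE + f"\nInput: {task_numbers}\n"

def query_llm(prompt: str) -> str:
    # Placeholder for a single call to any instruction-following LLM client.
    raise NotImplementedError

if __name__ == "__main__":
    print(build_aot_prompt("5 5 9 2"))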

Comparison of CoT vs AoT

Comparison between standard prompting, CoT, and AoT in the game of 24. While standard prompting aims for a direct answer, CoT sketches out the successive steps leading to the final solution. AoT's in-context example, unlike CoT's, integrates the search process itself: markers '1', ..., '3' denote the "first operations" that guide subtree exploration for the problem set '8 6 4 4'. For clarity, only a single in-context example is displayed, with a focus on the third subtree exploration. AoT produces prospective search steps (e.g., the subtree exploration '5. 11 + 1') and evaluates potential subsequent steps to either progress towards a solution or retrace to another viable subtree.
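For intuition on the search the caption describes, a minimal Python sketch of the underlying subtree exploration is given below: pick two numbers as a "first operation", recurse on the reduced list, and backtrack whenever a subtree cannot reach 24. This is an assumed illustration of the search process only; AoT expresses such a trace in natural language inside the prompt rather than running it as external code.

from itertools import combinations

# Allowed operations; division returns None when undefined so it can be skipped.
OPS = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
    "/": lambda a, b: a / b if b != 0 else None,
}

def dfs(numbers, trace):
    # Base case: one number left; success if it equals 24.
    if len(numbers) == 1:
        return trace if abs(numbers[0] - 24) < 1e-6 else None
    # Combine two numbers (a "first operation" at the root), recurse on the
    # reduced list, and backtrack (return None) when the subtree fails.
    for i, j in combinations(range(len(numbers)), 2):
        a, b = numbers[i], numbers[j]
        rest = [numbers[k] for k in range(len(numbers)) if k not in (i, j)]
        for sym, op in OPS.items():
            for x, y in ((a, b), (b, a)):
                val = op(x, y)
                if val is None:
                    continue
                result = dfs(rest + [val], trace + [f"{x} {sym} {y} = {val:g}"])
                if result is not None:
                    return result
    return None  # no subtree from this state reaches 24

if __name__ == "__main__":
    # For '8 6 4 4' this prints one valid sequence of operations reaching 24.
    print(dfs([8, 6, 4, 4], []))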

Comparison of Reasoning Strategies

Illustration depicting different strategies for solving reasoning problems with LLMs. Each box represents a distinct idea, a cohesive string of words that forms one step in the reasoning process. Green boxes symbolize ideas the LLM considers promising, whereas red boxes denote concepts deemed less promising.

BibTeX

@article{sel2023algorithm,
  title={Algorithm of thoughts: Enhancing exploration of ideas in large language models},
  author={Sel, Bilgehan and Al-Tawaha, Ahmad and Khattar, Vanshaj and Wang, Lu and Jia, Ruoxi and Jin, Ming},
  journal={arXiv preprint arXiv:2308.10379},
  year={2023}
}