This episode explores Graph of Thoughts (GoT), a prompting scheme designed to enhance the reasoning abilities of large language models (LLMs). GoT is compared with other methods such as Chain-of-Thought (CoT), Self-Consistency with CoT (CoT-SC), and Tree of Thoughts (ToT). GoT improves performance through thought transformations such as aggregation, which allow larger thought volumes (the number of previous thoughts that can influence a given thought). This gives it a better trade-off between latency (number of reasoning steps) and volume, and in turn better task performance.

The episode also discusses GoT's practical applications, including set intersection, keyword counting, and document merging, with specific examples and prompts for each. GoT consistently outperforms the other prompting schemes in both accuracy and cost, demonstrating how a graph-based structure enables more complex and flexible reasoning in LLMs.
https://arxiv.org/pdf/2308.09687
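To make the aggregation and volume ideas concrete, here is a minimal sketch (not the authors' official framework or API) of a graph-of-thoughts structure: thoughts are nodes that record which earlier thoughts they were derived from, an assumed `aggregate` helper merges several parent thoughts into one, and an assumed `volume` helper counts the distinct predecessors that can influence a given thought.

```python
from dataclasses import dataclass, field


@dataclass
class Thought:
    text: str                                     # the LLM-generated content of this thought
    parents: list = field(default_factory=list)   # thoughts it was derived from


def aggregate(parents, merged_text):
    """Aggregation transformation: combine several thoughts into a new one."""
    return Thought(text=merged_text, parents=list(parents))


def volume(thought):
    """Number of distinct predecessor thoughts that can influence `thought`."""
    seen = set()
    stack = list(thought.parents)
    while stack:
        t = stack.pop()
        if id(t) not in seen:
            seen.add(id(t))
            stack.extend(t.parents)
    return len(seen)


# Example: two independent partial solutions merged by aggregation.
a = Thought("sort the first half of the list")
b = Thought("sort the second half of the list")
merged = aggregate([a, b], "merge the two sorted halves")
print(volume(merged))  # -> 2: both partial results feed into the merged thought
```

In a tree-shaped scheme like ToT, a thought can only be influenced by the single chain of ancestors above it; allowing aggregation edges, as sketched here, is what lets GoT raise a thought's volume without adding extra reasoning steps.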