🧠 On the Diagram of Thought
This paper introduces Diagram of Thought (DoT), a framework that models iterative reasoning in large language models (LLMs) as the construction of a directed acyclic graph (DAG) inside a single model. Unlike traditional methods that represent reasoning as linear chains or trees, the DAG structure lets the model explore complex reasoning pathways while preserving logical consistency: nodes capture propositions, critiques, refinements, and verifications, and feedback between them allows earlier propositions to be iteratively improved through ordinary auto-regressive next-token prediction with role-specific cues. The authors also formalize DoT using Topos Theory, providing a mathematical foundation for its logical consistency and soundness. Because the entire process unfolds within one LLM, both training and inference are enhanced without multiple models or external control mechanisms, making DoT a promising framework for next-generation reasoning-specialized LLMs.
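As a rough, hypothetical sketch (not code from the paper), the snippet below shows one way a DoT-style reasoning DAG could be represented: dataclass nodes tagged with roles and parent edges, plus a single propose/critique/refine round driven by a stubbed `generate()` call standing in for the LLM. The role names, the role tags in the prompts, and the `dot_step` helper are illustrative assumptions, not the paper's API.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Sequence


class Role(Enum):
    """Node roles in a DoT-style reasoning DAG (names are illustrative)."""
    PROPOSITION = "proposition"
    CRITIQUE = "critique"
    REFINEMENT = "refinement"
    VERIFICATION = "verification"


@dataclass
class Node:
    node_id: int
    role: Role
    text: str
    parents: List[int] = field(default_factory=list)  # edges from earlier nodes this one builds on


@dataclass
class ReasoningDAG:
    nodes: List[Node] = field(default_factory=list)

    def add(self, role: Role, text: str, parents: Sequence[int] = ()) -> int:
        """Append a node; acyclicity holds because parents must already exist."""
        nid = len(self.nodes)
        self.nodes.append(Node(nid, role, text, list(parents)))
        return nid


def generate(prompt: str) -> str:
    """Stand-in for an auto-regressive LLM call (hypothetical stub)."""
    return f"[model output for: {prompt[:40]}...]"


def dot_step(dag: ReasoningDAG, question: str) -> None:
    """One propose -> critique -> refine round, all driven by the same model via role-tagged prompts."""
    context = "\n".join(f"{n.role.value}: {n.text}" for n in dag.nodes)
    prop = dag.add(Role.PROPOSITION, generate(f"<proposer> {question}\n{context}"))
    crit = dag.add(Role.CRITIQUE, generate(f"<critic> evaluate node {prop}\n{context}"), [prop])
    dag.add(Role.REFINEMENT, generate(f"<refine> address critique {crit}\n{context}"), [prop, crit])


if __name__ == "__main__":
    dag = ReasoningDAG()
    dot_step(dag, "Is the sum of two odd integers always even?")
    for n in dag.nodes:
        print(n.node_id, n.role.value, "<-", n.parents, ":", n.text)
```

One design note: appending nodes whose parents must already exist keeps the graph acyclic by construction, which loosely mirrors how auto-regressive generation can only condition on earlier context.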
📎 Link to paper