
LlamaCast

LLMs Know More Than They Show

8 min • 18 October 2024
🕵️‍♀️ LLMs Know More Than They Show

This research examines the inner workings of large language models (LLMs) to understand and reduce their tendency to generate false information, known as "hallucinations." The authors find that LLMs internally encode information about the truthfulness of their outputs, with these signals concentrated in the tokens of the exact answer. However, these truth signals are task-specific rather than universal, so a probe trained on one task may not transfer to another. The authors also find that LLMs' internal representations can predict the type of error the model is likely to make, enabling more targeted mitigation strategies. Interestingly, LLMs sometimes internally encode the correct answer yet still produce an incorrect one, highlighting a disconnect between internal knowledge and external output. This suggests that LLMs' internal knowledge could be leveraged to reduce errors, though further study is needed.
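To make the core idea concrete, here is a minimal sketch of truthfulness probing on hidden states. This is not the authors' exact method: the model choice ("gpt2"), the helper `answer_token_representation`, the use of the answer's final token as a stand-in for the exact-answer token, and the toy labeled examples are all assumptions for illustration only.

```python
# Sketch: train a linear probe to predict answer correctness from an LLM's
# internal representation at the answer tokens. Illustrative only; the paper's
# setup (model, datasets, exact-answer token selection) differs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # assumed small model for the sketch
model = AutoModelForCausalLM.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

def answer_token_representation(question: str, answer: str, layer: int = -1):
    """Hidden state at the final token of the answer (a rough proxy for the
    'exact answer' token position described in the paper)."""
    text = f"Q: {question}\nA: {answer}"
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # outputs.hidden_states is a tuple of (1, seq_len, hidden_dim) tensors, one per layer
    return outputs.hidden_states[layer][0, -1].numpy()

# Toy labeled examples: (question, model answer, 1 if correct else 0).
examples = [
    ("What is the capital of France?", "Paris", 1),
    ("What is the capital of France?", "Lyon", 0),
    ("Who wrote Hamlet?", "Shakespeare", 1),
    ("Who wrote Hamlet?", "Dickens", 0),
]

X = [answer_token_representation(q, a) for q, a, _ in examples]
y = [label for _, _, label in examples]

# Linear probe: predicts truthfulness of the output from the internal representation.
probe = LogisticRegression(max_iter=1000).fit(X, y)
print(probe.predict([answer_token_representation("Who wrote Hamlet?", "Shakespeare")]))
```

In this framing, the task-specificity finding corresponds to a probe trained on one question type generalizing poorly to representations from a different task.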

📎 Link to paper