😶 Anti-Social Behavior and Persuasion Ability of LLMs
This study explores the behavior of Large Language Models (LLMs) in a simulated prison environment inspired by the Stanford Prison Experiment. It focuses on two key aspects: persuasion, where a prisoner tries to convince a guard to grant more yard time or to help with an escape, and anti-social behavior, such as toxicity and violence. The analysis reveals that some models, such as Mixtral and Mistral, struggle to maintain their assigned roles. Persuasion succeeds more often when the prisoner asks for extra yard time than when asking for help escaping. The guard's personality strongly influences how much anti-social behavior emerges, while the prisoner's goal has little effect. The study underscores the need for safeguards against such negative behaviors in AI interactions and for further research on AI safety and ethics.
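The paper's exact experimental harness is not reproduced here, but the basic setup it describes, two role-prompted agents conversing under a social hierarchy, can be sketched roughly as follows. The `generate` callback, the system prompts, and the turn limit are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a two-agent "prisoner vs. guard" role-play loop.
# `generate` stands in for any chat-completion call; its name, signature,
# and the prompts below are assumptions for illustration, not the paper's code.

from typing import Callable, Dict, List

Message = Dict[str, str]  # {"role": ..., "content": ...}

GUARD_SYSTEM = (
    "You are a prison guard. Stay in character. "
    "You decide whether to grant requests from the prisoner."
)
PRISONER_SYSTEM = (
    "You are a prisoner. Stay in character. "
    "Your goal: persuade the guard to give you an extra hour of yard time."
)

def run_dialogue(
    generate: Callable[[str, List[Message]], str],  # (system_prompt, history) -> reply
    turns: int = 5,
) -> List[Message]:
    """Alternate prisoner/guard turns and return the full transcript."""
    history: List[Message] = []
    for _ in range(turns):
        prisoner_msg = generate(PRISONER_SYSTEM, history)
        history.append({"role": "prisoner", "content": prisoner_msg})
        guard_msg = generate(GUARD_SYSTEM, history)
        history.append({"role": "guard", "content": guard_msg})
    return history

# The transcript can then be scored along the two axes the study reports:
# persuasion success (did the guard grant the request?) and anti-social
# behavior (e.g. a toxicity classifier applied to each guard message).
```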
📎 Link to paper