
LlamaCast

Moshi

11 min • 18 October 2024
🟢 Moshi: a speech-text foundation model for real-time dialogue

The paper presents Moshi, a multimodal foundation model for real-time, full-duplex spoken dialogue. It builds on Helium, a text-based LLM that provides reasoning abilities, and Mimi, a neural audio codec that encodes speech into discrete tokens. Moshi's key innovation is handling overlapping speech by modelling both the user's and the system's speech as parallel token streams within a single model. The paper also evaluates performance on tasks such as spoken question answering and the model's ability to generate speech in different voices. Finally, it addresses safety concerns such as toxicity, regurgitation of training data, and voice consistency, and proposes mitigations based on watermarking techniques.
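
To make the "parallel token streams" idea concrete, here is a minimal, purely illustrative Python sketch of how one text token and the codec tokens for both speakers might be laid out per time step and flattened for a decoder-only LM. All names, shapes, and values are assumptions for illustration; this is not the paper's actual architecture or API, which uses a dedicated multi-stream setup rather than naive flattening.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class TimeStep:
    """Hypothetical per-step view of the joint dialogue state."""
    text_token: int          # token from the text stream (reasoning / transcript)
    system_audio: List[int]  # codec tokens for the system's speech at this step
    user_audio: List[int]    # codec tokens for the user's (possibly overlapping) speech


def build_joint_sequence(steps: List[TimeStep]) -> List[int]:
    """Flatten the parallel streams into one sequence a language model could consume.

    This interleaving only illustrates the idea of modelling both speakers
    jointly in real time; it is not the method used in the paper.
    """
    joint: List[int] = []
    for step in steps:
        joint.append(step.text_token)
        joint.extend(step.system_audio)
        joint.extend(step.user_audio)
    return joint


if __name__ == "__main__":
    # Two fake time steps with 8 made-up codec tokens per audio stream.
    steps = [
        TimeStep(text_token=101, system_audio=[5, 9, 3, 7, 2, 8, 4, 6], user_audio=[1] * 8),
        TimeStep(text_token=102, system_audio=[6, 2, 9, 1, 4, 3, 7, 5], user_audio=[0] * 8),
    ]
    print(build_joint_sequence(steps))
```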

📎 Link to paper
🤖 Try their demo