How do we reach the holy grail of a clinically safe LLM for healthcare? Dev and Doc are back to discuss the news around Meta's Llama 3 model and the potential of healthcare LLMs fine-tuned on top of it, such as BioLLM. We discuss the key steps in building a clinically safe LLM for healthcare and how this was pursued by Hippocratic AI's latest model - Polaris.
👨🏻‍⚕️Doc - Dr. Joshua Au Yeung - https://www.linkedin.com/in/dr-joshua-auyeung/
🤖Dev - Zeljko Kraljevic - https://twitter.com/zeljkokr
The podcast 🎙️
🔊Spotify: https://podcasters.spotify.com/pod/show/devanddoc
📙Substack: https://aiforhealthcare.substack.com/
Hey! If you are enjoying our conversations, reach out, share your thoughts and journey with us. Don't forget to subscribe whilst you're here :)
🎞️ Editor- Dragan Kraljević https://www.instagram.com/dragan_kraljevic/
🎨Brand design and art direction - Ana Grigorovici https://www.behance.net/anagrigorovici027d
References
Hippocratic AI LLM (Polaris) - https://arxiv.org/pdf/2403.13313
BioLLM tweet - https://twitter.com/aadityaura/status/1783662626901528803
Foresight Lancet paper - https://www.thelancet.com/journals/landig/article/PIIS2589-7500(24)00025-6/fulltext
Language processing units (LPUs) - https://wow.groq.com/lpu-inference-engine/
Timestamps
00:00 Start
01:10 Intro - Llama 3, a ChatGPT-level model in our hands
06:53 Language processing units (LPUs) to run LLMs
09:42 BioLLM for medical question answering
11:13 Quality and size of datasets; using YouTube transcripts
12:41 Question-answer pairs do not reflect the real world - the holy grail of a healthcare LLM
18:43 Dev has beef with Hippocratic AI
20:25 Step 1: Training a clinical foundation model from scratch
22:43 Step 2: Instruction tuning with multi-turn simulated conversations
24:15 Step 3: Training the model to guide tangential conversations
27:42 Focusing on the hospital back office and specialist nurse phone calls
33:02 Evaluating Polaris - clinical safety, bedside manner, medical safety advice