#17 How to build a clinically safe Large Language Model - Hippocratic AI, Llama 3, BioLlama
How do we reach the holy grail of a clinically safe LLM for healthcare? Dev and Doc are back to discuss the news around Meta's Llama 3 model and the potential of healthcare LLMs fine-tuned on top of it, such as BioLlama. We discuss the key steps in building a clinically safe LLM for healthcare, and how Hippocratic AI pursued them in its latest model, Polaris.

👨🏻‍⚕️ Doc - Dr. Joshua Au Yeung - https://www.linkedin.com/in/dr-joshua-auyeung/
🤖 Dev - Zeljko Kraljevic - https://twitter.com/zeljkokr

The podcast 🎙️
🔊 Spotify: https://podcasters.spotify.com/pod/show/devanddoc
📙 Substack: https://aiforhealthcare.substack.com/

Hey! If you are enjoying our conversations, reach out and share your thoughts and journey with us. Don't forget to subscribe while you're here :)

🎞️ Editor - Dragan Kraljević - https://www.instagram.com/dragan_kraljevic/
🎨 Brand design and art direction - Ana Grigorovici - https://www.behance.net/anagrigorovici027d

References
Hippocratic AI LLM (Polaris) - https://arxiv.org/pdf/2403.13313
BioLLM tweet - https://twitter.com/aadityaura/status/1783662626901528803
Foresight Lancet paper - https://www.thelancet.com/journals/landig/article/PIIS2589-7500(24)00025-6/fulltext
Groq Language Processing Units (LPUs) - https://wow.groq.com/lpu-inference-engine/

Timestamps
00:00 Start
01:10 Intro: Llama 3, a ChatGPT-level model in our hands
06:53 Language Processing Units (LPUs) for running LLMs
09:42 BioLLM for medical question answering
11:13 Quality and size of datasets; using YouTube transcripts
12:41 Question-and-answer pairs do not reflect the real world: the holy grail of a healthcare LLM
18:43 Dev has beef with Hippocratic AI
20:25 Step 1: Training a clinical foundation model from scratch
22:43 Step 2: Instruction tuning with multi-turn simulated conversations
24:15 Step 3: Training the model to guide tangential conversations back on track
27:42 Focusing on hospital back-office tasks and specialist-nurse phone calls
33:02 Evaluating Polaris: clinical safety, bedside manner, and medical safety advice
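To make the training steps discussed in the episode concrete, here is a minimal toy sketch of how Step 2 (instruction tuning on multi-turn simulated conversations) might format a simulated nurse-patient dialogue into a prompt/completion training example. This is an illustration only, not Hippocratic AI's actual pipeline; the chat tags, system prompt, and helper name are all hypothetical.

```python
# Toy sketch: flattening a simulated multi-turn nurse-patient conversation
# into an instruction-tuning example. The tag format is hypothetical.

SYSTEM_PROMPT = "You are a specialist-nurse assistant. Be safe, clear, and empathetic."

def format_chat_example(turns):
    """Flatten [(role, text), ...] into a prompt/completion pair.

    The final assistant turn becomes the completion target; every
    earlier turn becomes part of the conditioning prompt.
    """
    assert turns and turns[-1][0] == "assistant", "last turn must be the model's"
    lines = [f"<|system|>\n{SYSTEM_PROMPT}"]
    for role, text in turns[:-1]:
        lines.append(f"<|{role}|>\n{text}")
    prompt = "\n".join(lines) + "\n<|assistant|>\n"
    completion = turns[-1][1]
    return {"prompt": prompt, "completion": completion}

example = format_chat_example([
    ("user", "I missed my blood-pressure tablet this morning. What should I do?"),
    ("assistant", "Do you remember which tablet you take, and at what dose?"),
    ("user", "Amlodipine 5 mg."),
    ("assistant", "Take it as soon as you remember today, and do not take a "
                  "double dose tomorrow. If you feel unwell, contact your GP."),
])
```

A dataset of such prompt/completion pairs is the typical input to supervised fine-tuning; at scale, the simulated conversations would be generated and reviewed rather than hand-written.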
…