
Content provided by Mark Moyou, PhD and Mark Moyou. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Mark Moyou, PhD and Mark Moyou or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ko.player.fm/legal.

Sanyam Bhutani: LLM Experimentation, Podcasting Insights, and AI Innovations - AI Portfolio Podcast

1:21:46
 
 


In this episode, Mark Moyou speaks with Sanyam Bhutani, a leading figure in the data science community. Sanyam is a Senior Data Scientist at H2O.ai, previously at Weights & Biases, and an International Fellow at fast.ai. As a Kaggle Grandmaster, his contributions to the field are widely recognized and highly respected.
Sanyam delves into the nuances of fine-tuning and optimizing Large Language Models (LLMs). He explores the current state and future potential of LLMs, breaking down their architecture and functionality in a way that is accessible to both newcomers and seasoned data scientists, and he discusses how fine-tuning enhances the performance and applicability of LLMs, sharing practical insights and strategies for effective implementation.
📲 Sanyam Bhutani Socials:
LinkedIn: https://www.linkedin.com/in/sanyambhutani/
Twitter: https://x.com/bhutanisanyam1?lang=en
📲 Mark Moyou, PhD Socials:
LinkedIn: https://www.linkedin.com/in/markmoyou/
Twitter: https://twitter.com/MarkMoyou
📗 Chapters
00:00 Intro
02:46 200 days of LLMs
06:16 Venture Capital
08:40 Setting Goals in Public
09:45 Fine-tuning Experiment
14:02 Kaggle Grandmasters Team
15:55 Doing Challenges & Reading Research Papers
17:47 Hardest topic to learn in AI
19:05 Are you afraid to ask stupid questions?
20:43 Learning how LLMs work
22:54 Academic vs Product First Mindset
27:51 Training or Inference on LLMs
29:15 Favorite LLM Agent
32:10 How to go about learning LLMs?
36:55 Open Source LLMs on Research Papers
37:41 Capability of Modern GPUs
45:48 Journey to H2O.ai
50:07 Why Sanyam stopped podcasting?
56:25 Podcasting Experience
58:39 Top Data Scientists
01:00:19 Advice for New Podcasters
01:03:32 Breaking into Data Science
01:12:23 Career Optimization Function
01:14:02 Making Progress Everyday
01:15:05 Advice for New Professionals
01:17:00 Book Recommendations
01:18:04 Rapid Round


16 episodes

