
Content provided by Massive Studios. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Massive Studios or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ko.player.fm/legal.

Sizing AI Workloads

33:34
 
Manage episode 414210643 series 2285741

John Yue (CEO & Co-Founder @ inference.ai) discusses AI workload sizing, matching GPUs to workloads, availability of GPUs vs. costs, and more.
SHOW: 815
CLOUD NEWS OF THE WEEK -
http://bit.ly/cloudcast-cnotw
NEW TO CLOUD? CHECK OUT OUR OTHER PODCAST -
"CLOUDCAST BASICS"
SHOW NOTES:

Topic 1 - Our topic for today is sizing and IaaS hosting for AI/ML. We’ve covered a lot of the basics lately; today we’re going to dig deeper. There is a surprising amount of depth to AI sizing, and it isn’t just the speeds and feeds of GPUs. We’d like to welcome John Yue (CEO & Co-Founder @ inference.ai) for this discussion. John, welcome to the show!
Topic 2 - Let’s start with sizing. I’ve talked to a lot of customers recently in my day job, and it is amazing how deep AI/ML sizing can go. First, you have to size for training/fine-tuning differently than you would for the inference stage. Second, some just think: pick the biggest GPUs you can afford and go. How should your customers approach this? (GPUs, software dependencies, etc.)
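As a rough illustration of why training and inference size so differently, here is a back-of-envelope VRAM sketch. The bytes-per-parameter rules of thumb are common community approximations (not figures from the episode), and real requirements also depend on batch size, sequence length, and parallelism strategy:

```python
def estimate_vram_gb(params_billion: float, stage: str) -> float:
    """Back-of-envelope VRAM estimate in GB.

    Rules of thumb (assumptions, not vendor guidance):
      - inference, fp16 weights: ~2 bytes/param, plus ~20% overhead
        for activations and KV cache
      - training with Adam in mixed precision: ~16 bytes/param
        (fp16 weights + gradients, fp32 master weights + optimizer states)
    """
    params = params_billion * 1e9
    if stage == "inference":
        return params * 2 * 1.2 / 1e9
    if stage == "training":
        return params * 16 / 1e9
    raise ValueError(f"unknown stage: {stage}")

# A hypothetical 7B-parameter model fits on a single mid-range GPU for
# serving, but fine-tuning it naively needs multi-GPU territory.
print(round(estimate_vram_gb(7, "inference"), 1))  # 16.8
print(round(estimate_vram_gb(7, "training"), 1))   # 112.0
```

The ~7x gap between the two numbers is the point: sizing for the inference stage with training-class hardware (or vice versa) misses the mark by a large factor.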
Topic 2a - As a follow-up question, on the business side, what parameters need to be considered? (budget, cost efficiency, latency/response time, timeline, etc.)
Topic 3 - The whole process can be overwhelming, and as we mentioned, some organizations may not think of everything. You recently announced a chatbot to help with this exact process, ChatGPU. Tell everyone a bit about that and how it came to be.
Topic 4 - This is almost like a match-making service, correct? Everyone wants an H100, but not everyone needs or can afford an H100.
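To illustrate the matchmaking point, a small sketch comparing GPU options by cost per million tokens served. All prices and throughput numbers below are invented for illustration only (not quotes from inference.ai, NVIDIA, or any cloud provider):

```python
# Hypothetical GPU options with made-up hourly prices and throughputs.
gpus = [
    {"name": "H100", "usd_per_hr": 4.50, "tokens_per_sec": 3000},
    {"name": "A100", "usd_per_hr": 2.00, "tokens_per_sec": 1500},
    {"name": "L40S", "usd_per_hr": 1.00, "tokens_per_sec": 600},
]

def cost_per_million_tokens(gpu: dict) -> float:
    """USD cost to serve one million tokens on this GPU at full utilization."""
    tokens_per_hr = gpu["tokens_per_sec"] * 3600
    return gpu["usd_per_hr"] / tokens_per_hr * 1e6

# Rank by cost efficiency rather than raw speed.
for g in sorted(gpus, key=cost_per_million_tokens):
    print(f'{g["name"]}: ${cost_per_million_tokens(g):.2f} per 1M tokens')
```

With these invented numbers the fastest card is not the cheapest per token, which is the matchmaking argument in miniature: the right GPU depends on the workload's latency requirement and budget, not just peak performance.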
Topic 5 - How does GPU availability play into all of this? NVIDIA is sold out for something like 2 years at this point; how is that sustainable? Does everything need to run on a “Ferrari class” NVIDIA GPU?
Topic 6 - What’s next in the IaaS for AI/ML space? What does a next-generation data center for AI/ML look like? Will the industry move away from GPUs to reduce dependence on NVIDIA?
FEEDBACK?


873 episodes

The Cloudcast

1,281 subscribers
 
