Episode 21: Deploying LLMs in Production: Lessons Learned
Hugo speaks with Hamel Husain, a machine learning engineer who loves building machine learning infrastructure and tools 👷. Hamel leads and contributes to many popular open-source machine learning projects. He also has extensive experience (20+ years) as a machine learning engineer across various industries, including large tech companies like Airbnb and GitHub. At GitHub, he led CodeSearchNet, a large language model for semantic search that was a precursor to Copilot. Hamel is the founder of Parlance Labs, a research and consulting firm focused on LLMs.
They talk about generative AI, large language models, the business value they can generate, and how to get started.
They delve into:
- Where Hamel is seeing the most business interest in LLMs (spoiler: the answer isn’t only tech);
- Common misconceptions about LLMs;
- The skills you need to work with LLMs and GenAI models;
- Tools and techniques, such as fine-tuning, RAG, LoRA, hardware, and more (see the brief LoRA sketch after this list);
- Vendor APIs vs. OSS models.
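To make the fine-tuning topic concrete, here is a minimal LoRA sketch, assuming the Hugging Face transformers and peft libraries; the model id and hyperparameters are illustrative placeholders, not recommendations from the episode.

```python
# Minimal LoRA fine-tuning setup (illustrative sketch, not from the episode).
# Assumes `transformers` and `peft` are installed; the model id is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "meta-llama/Llama-2-7b-hf"  # placeholder; any causal LM works

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# LoRA trains small low-rank adapter matrices instead of all model weights,
# which keeps fine-tuning feasible on modest hardware.
lora_config = LoraConfig(
    r=16,                                  # rank of the adapter matrices
    lora_alpha=32,                         # scaling factor for the adapters
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total params

# From here, train the adapter on an instruction dataset (e.g. with trl's
# SFTTrainer), as walked through in Philipp Schmid's guide linked below.
```

The point of LoRA is that only the small adapter matrices are trained, which is why it comes up alongside hardware constraints and the vendor-API-vs.-OSS-model tradeoff.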
LINKS
- Our upcoming livestream LLMs, OpenAI Dev Day, and the Existential Crisis for Machine Learning Engineering with Jeremy Howard (Fast.ai), Shreya Shankar (UC Berkeley), and Hamel Husain (Parlance Labs): Sign up for free!
- Our recent livestream Data and DevOps Tools for Evaluating and Productionizing LLMs with Hamel and Emil Sedgh, Lead AI engineer at Rechat -- in it, we showcase an actual industrial use case that Hamel and Emil are working on with Rechat, a real estate CRM, taking you through LLM workflows and tools.
- Extended Guide: Instruction-tune Llama 2 by Philipp Schmid
- The livestream recording of this episode!
- Hamel on Twitter