GPU Accelerated Machine Learning with WSL 2
Adding GPU compute support to Windows Subsystem for Linux (WSL) has been the #1 most requested feature since the first WSL release.
Learn how Windows and WSL 2 now support GPU Accelerated Machine Learning (GPU compute) using NVIDIA CUDA, including TensorFlow and PyTorch, as well as all the Docker and NVIDIA Container Toolkit support available in a native Linux environment.
Clark Rahig will explain what it means to use your GPU to accelerate training Machine Learning (ML) models, introducing concepts like parallelism, and then show how to set up and run your full ML workflow (including GPU acceleration) with NVIDIA CUDA and TensorFlow in WSL 2.
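As a quick sanity check for this kind of setup, a short script can ask TensorFlow whether it sees the GPU from inside the WSL 2 distro. This is a minimal sketch, assuming a TensorFlow 2.x install (`pip install tensorflow`) and the NVIDIA CUDA driver for WSL; it is not part of the session itself.

```python
def describe_gpus():
    """Return a short status string about GPUs visible to TensorFlow."""
    try:
        import tensorflow as tf
    except ImportError:
        return "TensorFlow is not installed"
    # In a correctly configured WSL 2 environment with the CUDA driver,
    # the NVIDIA GPU should appear in this list.
    gpus = tf.config.list_physical_devices("GPU")
    if not gpus:
        return "No GPU visible; check the CUDA driver and WSL 2 setup"
    return f"{len(gpus)} GPU(s) visible: " + ", ".join(g.name for g in gpus)

print(describe_gpus())
```

If no GPU is listed, the usual suspects are an outdated Windows build, a missing WSL-capable NVIDIA driver on the Windows side, or a CPU-only TensorFlow wheel.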
Additionally, Clark will demonstrate how students and beginners can start building knowledge in the Machine Learning (ML) space on their existing hardware by using the TensorFlow with DirectML package.
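The appeal of the DirectML path is that it does not require an NVIDIA GPU: any DirectX 12 capable adapter can accelerate training. The sketch below assumes the `tensorflow-directml` package (`pip install tensorflow-directml`), which tracks the TensorFlow 1.15 API; the check and messages are illustrative, not from the session.

```python
def directml_status():
    """Report whether an accelerator is usable by the installed TensorFlow build."""
    try:
        import tensorflow as tf
    except ImportError:
        return "TensorFlow is not installed"
    # The DirectML build surfaces any DirectX 12 GPU as an accelerator,
    # so this works on AMD, Intel, and NVIDIA hardware alike.
    if tf.test.is_gpu_available():  # TF 1.15-style availability check
        return "Accelerator available for training"
    return "No accelerator visible; training will run on the CPU"

print(directml_status())
```

With an accelerator available, existing TF 1.15-style training code runs unchanged; without one, TensorFlow silently falls back to the CPU.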
Learn more:
- Related Microsoft Windows Blog Posts: https://aka.ms/GPUinWSL
- GPU-Accelerated ML Training Docs: https://aka.ms/GPUinWSLdocs
- NVIDIA Docs: https://developer.nvidia.com/cuda/wsl
- DirectML repo (Get started, Samples, etc): https://aka.ms/DirectML
- Follow Clark Rahig on Twitter: @crahrig