
#78 Exploring MCMC Sampler Algorithms, with Matt D. Hoffman

1:02:41
 

Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!

Matt Hoffman has already worked on many topics in his life – music information retrieval, speech enhancement, user behavior modeling, social network analysis, astronomy, you name it.

Obviously, picking questions for him was hard, so we ended up talking more or less freely — which is one of my favorite types of episodes, to be honest.

You’ll hear about the circumstances in which Matt would advise picking up Bayesian stats, generalized HMC, blocked samplers, why the samplers he works on have food-based names, and more.

In case you don’t know him, Matt is a research scientist at Google. Before that, he did a postdoc in the Columbia Stats department, working with Andrew Gelman, and a PhD at Princeton, working with David Blei and Perry Cook.

Matt is probably best known for his work in approximate Bayesian inference algorithms, such as stochastic variational inference and the no-U-turn sampler, but he’s also worked on a wide range of applications, and contributed to software such as Stan and TensorFlow Probability.

Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ !

Thank you to my Patrons for making this episode possible!

Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Thomas Wiecki, Chad Scherrer, Nathaniel Neitzke, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Joshua Duncan, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Raul Maldonado, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, David Haas, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Trey Causey, Andreas Kröpelin, Raphaël R, Nicolas Rode and Gabriel Stechschulte.

Visit https://www.patreon.com/learnbayesstats to unlock exclusive Bayesian swag ;)

Links from the show:


Abstract

written by Christoph Bamberg

In this episode, Matt D. Hoffman, a Google research scientist, discusses his work on probabilistic sampling algorithms with me. Matt has a background in music information retrieval, speech enhancement, user behavior modeling, social network analysis, and astronomy.

He came to machine learning (ML) and computer science through his interest in music synthesis, and later took a Bayesian modeling class during his PhD.

He mostly works on algorithms, including Markov chain Monte Carlo (MCMC) methods that can take advantage of hardware acceleration. He believes that running many short chains in parallel handles autocorrelation better than running a few longer chains.
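To make the many-short-chains idea concrete, here is a minimal sketch (mine, not code from the episode) that advances many random-walk Metropolis chains in lockstep with vectorized NumPy. The target, chain count, and step size are all illustrative:

```python
import numpy as np

def rw_metropolis_many_chains(log_prob, n_chains=1024, n_steps=200,
                              step_size=0.5, seed=0):
    """Advance many random-walk Metropolis chains in lockstep.

    Each iteration is one vectorized operation over all chains -- the
    access pattern that maps well onto GPU/TPU batching.
    """
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n_chains) * 5.0   # overdispersed initializations
    lp = log_prob(x)
    draws = np.empty((n_steps, n_chains))
    for t in range(n_steps):
        proposal = x + step_size * rng.normal(size=n_chains)
        lp_prop = log_prob(proposal)
        accept = np.log(rng.uniform(size=n_chains)) < lp_prop - lp
        x = np.where(accept, proposal, x)
        lp = np.where(accept, lp_prop, lp)
        draws[t] = x
    return draws

# Toy example: 1024 chains of 200 steps on a standard normal target
# yield ~200k draws, so autocorrelation within any single short chain
# matters far less than it would for, say, 4 chains of 50,000 steps.
draws = rw_metropolis_many_chains(lambda x: -0.5 * x**2)
print(draws.shape)  # (200, 1024)
```

Because every chain advances with the same vectorized operation, this pattern is exactly the kind of workload that hardware accelerators handle well.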

Matt is interested in Bayesian neural networks but is also skeptical about their use in practice.

He recently contributed to a generalized Hamiltonian Monte Carlo (HMC) sampler and previously worked on MEADS, an alternative to the No-U-Turn Sampler (NUTS). We discuss the applications of these samplers and how they differ from one another.
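For readers unfamiliar with generalized HMC: the core move is a partial momentum refresh followed by a short trajectory, instead of the full refresh and adaptively grown trajectory of NUTS. Below is a bare-bones sketch of one such transition; it is my own illustration, not MEADS itself, which adds automatic parameter tuning on top of generalized HMC:

```python
import numpy as np

def ghmc_step(x, p, log_prob, grad_log_prob, step_size=0.1, alpha=0.9,
              rng=None):
    """One generalized-HMC transition (Horowitz-style): partial momentum
    refresh + a single leapfrog step + Metropolis correction.

    alpha controls momentum persistence: alpha = 0 gives a full refresh
    (one-step HMC); alpha close to 1 keeps the chain moving in roughly
    the same direction across iterations, without NUTS-style trajectory
    building.
    """
    rng = rng or np.random.default_rng()
    # Partial momentum refresh.
    p = alpha * p + np.sqrt(1.0 - alpha**2) * rng.normal(size=p.shape)
    # One leapfrog step.
    p_half = p + 0.5 * step_size * grad_log_prob(x)
    x_new = x + step_size * p_half
    p_new = p_half + 0.5 * step_size * grad_log_prob(x_new)
    # Metropolis correction on the joint (position, momentum) energy.
    log_accept = (log_prob(x_new) - 0.5 * p_new @ p_new
                  - log_prob(x) + 0.5 * p @ p)
    if np.log(rng.uniform()) < log_accept:
        return x_new, p_new
    return x, -p  # flip the momentum on rejection to stay reversible
```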

In addition, Matt introduces nested R-hat, an improved version of the R-hat convergence diagnostic that he and his colleagues developed.
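The idea behind nested R-hat, as I understand it, is to group many short chains into "superchains" and compare the between-superchain variance with the variance inside superchains, so convergence can be assessed even when each individual chain is too short for classic R-hat. A rough sketch follows; the published estimator includes bias corrections that are omitted here:

```python
import numpy as np

def nested_rhat(draws, n_superchains):
    """Rough sketch of nested R-hat for many short chains.

    draws has shape (n_steps, n_chains). Chains are grouped into
    superchains; the between-superchain variance of superchain means is
    compared with the average variance inside a superchain (within-chain
    variance plus between-chain variance).
    """
    n_steps, n_chains = draws.shape
    m = n_chains // n_superchains                 # chains per superchain
    grouped = draws[:, :n_superchains * m].reshape(
        n_steps, n_superchains, m)
    chain_means = grouped.mean(axis=0)            # shape (K, m)
    chain_vars = grouped.var(axis=0)              # shape (K, m)
    super_means = chain_means.mean(axis=1)        # shape (K,)
    b = super_means.var()                         # between superchains
    w = (chain_vars.mean(axis=1)                  # within each chain
         + chain_means.var(axis=1)).mean()        # between chains, within superchain
    return np.sqrt(1.0 + b / w)
```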


Automated Transcript

Please note that the following transcript was generated automatically and may therefore contain errors. Feel free to reach out if you’re willing to correct them.
