This content is provided by TWIML and Sam Charrington. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by TWIML and Sam Charrington or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://ko.player.fm/legal.

Service Cards and ML Governance with Michael Kearns - #610

39:08

Today we conclude our AWS re:Invent 2022 series joined by Michael Kearns, a professor in the Department of Computer and Information Science at the University of Pennsylvania and an Amazon Scholar. In our conversation, we briefly explore Michael’s broader research interests in responsible AI and ML governance, as well as his role at Amazon. We then discuss the newly announced service cards, which apply the “model cards” idea at a holistic, system level rather than at the level of an individual model. We walk through the information represented on the cards and explore the decision-making process behind which information was omitted from them. We also get Michael’s take on the years-old debate of algorithmic bias vs. dataset bias, the current issues surrounding this topic, and the research he has seen (and hopes to see) addressing issues of “fairness” in large language models.

The complete show notes for this episode can be found at twimlai.com/go/610.
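For readers unfamiliar with the service-card format discussed above, here is a minimal, hypothetical Python sketch of the kind of system-level documentation fields such a card might carry (intended use cases, limitations, responsible-AI considerations, deployment guidance). The class and field names are illustrative assumptions, not AWS’s actual schema.

    from dataclasses import dataclass
    from typing import List

    # Illustrative sketch only: these field names are assumptions about the kind of
    # information a service card covers, not the actual AWS service card format.
    @dataclass
    class ServiceCard:
        service_name: str                          # a managed AI service, not a single model
        intended_use_cases: List[str]              # what the service is designed for
        limitations: List[str]                     # conditions where quality may degrade
        responsible_ai_considerations: List[str]   # fairness, privacy, and safety notes
        performance_best_practices: List[str]      # deployment and evaluation guidance

    # Hypothetical example instance
    card = ServiceCard(
        service_name="ExampleVisionService",
        intended_use_cases=["document text extraction"],
        limitations=["low-resolution or heavily skewed inputs"],
        responsible_ai_considerations=["evaluate error rates across demographic groups"],
        performance_best_practices=["benchmark on data representative of production traffic"],
    )
    print(card.service_name, len(card.limitations))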

696 episodes
