
AI: Competitor or Collaborator with Lama Nachman

38:48
 
Content provided by SAS Podcast Admins, Kimberly Nevala, and Strategic Advisor - SAS. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by SAS Podcast Admins, Kimberly Nevala, and Strategic Advisor - SAS or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://ko.player.fm/legal.

Lama Nachman is an Intel Fellow and the director of Intel’s Human & AI Systems Research Lab. She also led Intel’s Responsible AI program. Lama’s team researches how AI can be applied to deliver contextually appropriate experiences that increase accessibility and amplify human potential.

In this inspirational discussion, Lama exposes the need for equity in AI, demonstrates the difficulty of enabling authentic human interaction, and explains why ‘Wizard of Oz’ approaches, along with a willingness to go back to the drawing board, are critical.

Through the lens of her work spanning early childhood education, manufacturing, and assistive technologies, Lama deftly illustrates the ethical dilemmas that arise with any AI application - no matter how well-meaning. Kimberly and Lama discuss why perfectionism is the enemy of progress and the need to design for uncertainty in AI. Speaking to her quest to give people suffering from ALS back their voice, Lama stresses how designing for authenticity over expediency is critical to unlock the human experience.

While pondering the many ethical conundrums that keep her up at night, Lama shows how an expansive, multi-disciplinary approach is critical to mitigate harm, and why cooperation between humans and AI maximizes the potential of both.

A full transcript of this episode can be found here.

Our final episode this season features Dr. Ansgar Koene. Ansgar is the Global AI Ethics and Regulatory Leader at EY and a Sr. Research Fellow who specializes in social media, data ethics and AI regulation. Subscribe now to Pondering AI so you don’t miss him.
