
38: AI and Decentralized Systems of Trust

37:50
 

Manage episode 398336069 series 3453656
Content provided by Dyan Finkhousen and Dyan Finkhousen: CEO of Shoshin Works. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Dyan Finkhousen and Dyan Finkhousen: CEO of Shoshin Works or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ko.player.fm/legal.

How do you solve the problem of trust? Confidence is a prerequisite for large-scale adoption.

With the deluge of speculation about Generative AI, organizations everywhere are establishing regulations and certifications to solve for fidelity, trust, and assurance. We're also seeing substantial debate regarding the viability of such efforts at this stage in the adoption of AI-empowered capabilities.

Guest Emre Kazim is the Co-Founder and Co-CEO of Holistic AI, a hot startup empowering organizations to adopt and scale AI with confidence - which, at its core, means solving the problem of trust. Emre shares an overview of the current ecosystems within and across the AI landscape - policy, industry, and society... federal, state, and local levels of activity.

The path to AI governance is heating up and challenges us to answer the difficult questions - who will be the ultimate arbiter of trust, and what are those entities' self-interests; what trade-offs are made and tolerated within AI-enabled systems of trust; what gaps exist between AI-powered capabilities, regulations, and policy... As you might suspect, the proximity of policy-making to the development of the capabilities it governs is key. Controlling for harm depends on understanding the specific use case, the context of use, and the underlying layers of operability and interoperability - enabling a more precise assessment of the corresponding risk. Steady experimentation with practical, tangible use cases is the path forward, enabling the real shift that AI makes possible.

At the core - trust comes from confidence, and confidence comes from using these systems over time. Good governance will come from the same ecosystem that created these capabilities.

Guest:

Emre Kazim, Co-Founder & Co-CEO, Holistic AI

Co-Hosts:

James Villarrubia, White House Presidential Innovation Fellow & Digital Strategist for CAS, NASA
G. Edward Powell, PhD, CEO, TensorX, Inc

Series Hosts:

Vikram Shyam, Lead Futurist, NASA Glenn Research Center
Dyan Finkhousen, Founder & CEO, Shoshin Works


126 episodes
