Content provided by Brian T. O’Neill from Designing for Analytics. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Brian T. O’Neill from Designing for Analytics or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ko.player.fm/legal.

175 - The MIRRR UX Framework for Designing Trustworthy Agentic AI Applications (Part 1)

28:51

In this episode of Experiencing Data, I introduce part 1 of my new MIRRR UX framework for designing trustworthy agentic AI applications: you know, the kind that might actually get used and have the opportunity to create the business value everyone seeks! One of the biggest challenges with traditional analytics, ML, and now LLM-driven AI agents is getting end users and stakeholders to trust and use these data products, especially when we’re asking humans in the loop to change their behavior or ways of working.

In this episode, I challenge the idea that software UIs will vanish with the rise of AI-based automation. In fact, the MIRRR framework is built on the idea that AI agents should be “in the human loop,” and that in many situations a control surface (a user interface) may be essential if automated workers are to earn the trust of their human overlords.

By properly considering the control and oversight that end users and stakeholders need, you can enable the business value and UX outcomes that your paying customers and application users seek from agentic AI.

Using use cases from insurance claims processing, I introduce the first two of the five control points in the MIRRR framework: Monitor and Interrupt. These control points represent core actions that define how AI agents often need to operate and interact within human systems (a brief illustrative sketch follows the list):

  • Monitor – enabling appropriate transparency into AI agent behavior and performance
  • Interrupt – designing both manual and automated pausing mechanisms to ensure human oversight remains possible when needed
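
To make these two control points a little more concrete, here is a minimal sketch, in Python, of how Monitor and Interrupt might show up in an agentic claims-processing loop. Everything here (ClaimAgent, the event callback, the confidence threshold) is an illustrative assumption, not code from the episode:

    # Hypothetical sketch only: all names and thresholds are illustrative.
    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class Claim:
        claim_id: str
        amount: float

    @dataclass
    class AgentDecision:
        claim_id: str
        action: str        # e.g., "approve" or "escalate"
        confidence: float  # the agent's self-reported confidence, 0.0 to 1.0

    class ClaimAgent:
        def __init__(self, on_event: Callable[[str, AgentDecision], None],
                     auto_pause_below: float = 0.7):
            self.on_event = on_event              # Monitor: every decision is observable
            self.auto_pause_below = auto_pause_below
            self.paused = False                   # Interrupt: the pause switch

        def interrupt(self) -> None:
            # Manual interrupt: a human operator can stop the agent at any time.
            self.paused = True

        def process(self, claim: Claim) -> Optional[AgentDecision]:
            if self.paused:
                return None                       # paused work routes to human review instead
            decision = self._decide(claim)
            self.on_event("decision", decision)   # Monitor control point: emit for the UI
            if decision.confidence < self.auto_pause_below:
                self.paused = True                # automated Interrupt on low confidence
                self.on_event("auto_pause", decision)
            return decision

        def _decide(self, claim: Claim) -> AgentDecision:
            # Stand-in for the real model call; small claims are "confident" here.
            confidence = 0.95 if claim.amount < 1_000 else 0.55
            action = "approve" if confidence >= 0.9 else "escalate"
            return AgentDecision(claim.claim_id, action, confidence)

A quick usage example under the same assumptions:

    events = []
    agent = ClaimAgent(on_event=lambda kind, d: events.append((kind, d)))
    agent.process(Claim("C-100", 250.0))     # high confidence: "decision" event emitted
    agent.process(Claim("C-101", 12_000.0))  # low confidence: "auto_pause" event, agent pauses
    agent.process(Claim("C-102", 90.0))      # returns None: the agent stays paused for review

The design point is that the Monitor hook and the Interrupt switch are part of the agent’s contract with its control surface, not something bolted on later.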

…and stay tuned for part 2 in a couple of weeks, where I’ll wrap up this first version of my MIRRR framework.

Highlights / Skip to:
  • 00:34 Introducing the MIRRR UX framework for designing trustworthy agentic AI applications
  • 01:27 The importance of trust in AI systems and how it is linked to user adoption
  • 03:06 Cultural shifts, AI hype, and growing AI skepticism
  • 04:13 Human-centered design practices for agentic AI
  • 06:48 How understanding your users’ needs does not change with agentic AI, and how trust in agentic applications ties directly to user adoption and value creation
  • 11:32 Measuring the success of agentic applications with UX outcomes
  • 15:26 Introducing the first two of five MIRRR framework control points:
    • 16:29 M is for Monitor: understanding the agent’s “performance,” and the right level of transparency end users need, from individual tasks to aggregate views
    • 20:29 I is for Interrupt: when and why users may need to stop the agent, and what happens next
  • 28:02 Conclusion and next steps