Vertex Explainable AI with Irina Sigler and Ivan Nardini

26:24
 
Max Saltonstall and new host Anu Srivastava are in the studio today talking about Vertex Explainable AI with guests Irina Sigler and Ivan Nardini. Vertex Explainable AI was born from developers' need to better understand how their models arrive at their classifications. Trusting models enough to base business decisions on them, and making debugging easier, are two reasons this understanding is so important.

Explainable models help developers understand and describe how their trained models make decisions. Google's managed service, Vertex Explainable AI, offers Feature Attribution and Example-Based Explanations to provide a better understanding of model decision-making. Irina describes these two services and how each works to foster better decision-making based on AI models. One or both services can be used at every stage of model building to create a more precise model with better results. Example-Based Explanations, Irina tells us, also make it easier to explain the model to those who may not have strong technical backgrounds.
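The conversation stays at the conceptual level, but for readers who want to see what Feature Attribution looks like in practice, here is a minimal sketch using the Vertex AI Python SDK (google-cloud-aiplatform). The project, bucket, serving container image, and feature/output names are placeholders for illustration, not anything discussed in the episode.

```python
# Minimal sketch: enabling Feature Attribution (Sampled Shapley) when uploading
# a model to Vertex AI. All identifiers below are hypothetical placeholders.
from google.cloud import aiplatform
from google.cloud.aiplatform import explain

aiplatform.init(project="my-project", location="us-central1")

# Sampled Shapley is one supported attribution method for tabular models;
# path_count trades explanation accuracy against latency.
parameters = explain.ExplanationParameters(
    {"sampled_shapley_attribution": {"path_count": 10}}
)

# Declare which model inputs and outputs attributions should be computed for.
metadata = explain.ExplanationMetadata(
    inputs={"age": {}, "income": {}, "tenure": {}},  # hypothetical feature names
    outputs={"churn_probability": {}},               # hypothetical output name
)

model = aiplatform.Model.upload(
    display_name="churn-model-with-explanations",
    artifact_uri="gs://my-bucket/model/",            # exported model artifacts
    serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-8:latest"
    ),
    explanation_parameters=parameters,
    explanation_metadata=metadata,
)
```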

Ivan runs us through a sample build of a model taking advantage of the Vertex Explainable AI tools; presets make setup and use easier as well. We talk more about the benefits of being able to easily explain your models. When decision-makers understand the importance of your AI tool, for example, it's more likely to be cleared for production. When you understand why your model is making certain choices, you can trust the model's outcomes as part of your decision-making process.
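The notes don't reproduce Ivan's walkthrough, but a rough sketch of the request side, assuming the model from the previous snippet has been deployed to an endpoint, would look something like the following. The instance fields are hypothetical and would need to match the model's input schema.

```python
# Continuing the sketch above: deploy the model, then request explanations
# alongside predictions from the online endpoint.
endpoint = model.deploy(machine_type="n1-standard-4")

response = endpoint.explain(
    instances=[{"age": 42, "income": 55000, "tenure": 7}]  # hypothetical instance
)

print(response.predictions[0])
for explanation in response.explanations:
    for attribution in explanation.attributions:
        # feature_attributions maps each input feature to its contribution
        # toward the predicted output for this instance.
        print(attribution.feature_attributions)
```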

Irina Sigler

Irina Sigler is a Product Manager on the Vertex Explainable AI team. Before joining Google, Irina worked at McKinsey and did her Ph.D. in Explainable AI. She graduated from the Freie Universität Berlin and HEC Paris.

Ivan Nardini

Ivan Nardini is a customer engineer specializing in ML who is passionate about Developer Advocacy and MLE. He currently collaborates with data science developers and practitioners, enabling them to define and implement MLOps on Vertex AI. He also leads a worldwide hackathon community initiative and is an active contributor to Google Cloud.

Cool things of the week
  • Unify data lakes and warehouses with BigLake, now generally available blog
  • What it's like to have a hybrid internship at Google blog
Interview
  • Vertex AI site
  • Explainable AI site
  • Vertex Explainable AI docs
  • Vertex Explainable AI Notebooks docs
  • Feature Attribution docs
  • AI Explanations Whitepaper site
  • Explainable AI with Google Cloud Vertex AI article
  • Why you need to explain machine learning models blog
What's something cool you're working on?

Anu just got back from a nice vacation and is picking back up on how to use our AI APIs with serverless workflows. She's working on some exciting tutorials for our AI-backed Translation API.

Max just got back from family dance camp and is working to make excellent intern experiences.

Hosts

Max Saltonstall and Anu Srivastava
