
This content is provided by New York Magazine and Vox Media. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by New York Magazine and Vox Media or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ko.player.fm/legal.

Did a Chatbot Cause Her Son’s Death? Megan Garcia v. Character.AI & Google

1:13:07
 

What if the worst fears around AI come true? For Megan Garcia, that has already happened. In February, after spending months interacting with chatbots created by Character.AI, her 14-year-old son Sewell took his own life. Garcia blames Character.AI, and she is suing the company and Google, which she believes significantly contributed to Character.AI's alleged wrongdoing.

Kara interviews Garcia and Meetali Jain, one of her lawyers and the founder of the Tech Justice Law Project, and they discuss Garcia's allegations against Character.AI and Google.

When reached for comment, a spokesperson at Character.AI responded with the following statement:

We do not comment on pending litigation.

We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family. We take the safety of our users very seriously, and our dedicated Trust and Safety team has worked to implement new safety features over the past seven months, including a pop-up directing users to the National Suicide Prevention Lifeline that is triggered by terms of self-harm or suicidal ideation.

Our goal is to provide a creative space that is engaging, immersive, and safe. To achieve this, we are creating a fundamentally different experience for users under 18 that prioritizes safety, including reducing the likelihood of encountering sensitive or suggestive content, while preserving their ability to use the platform.

As we continue to invest in the platform and the user experience, we are introducing new safety features in addition to the tools already in place that restrict the model and filter the content provided to the user. These include improved detection, response and intervention related to user inputs that violate our Terms or Community Guidelines, as well as a time-spent notification. For more information on these new features as well as other safety and IP moderation updates to the platform, please refer to the Character.AI blog.

When reached for comment, Google spokesperson Jose Castaneda responded with the following statement:

Our hearts go out to the family during this unimaginably difficult time. Just to clarify, Google and Character AI are completely separate, unrelated companies and Google has never had a role in designing or managing their AI model or technologies, nor have we used them in our products. User safety is a top concern of ours, and that’s why – as has been widely reported – we’ve taken a cautious and responsible approach to developing and rolling out our AI products, with rigorous testing and safety processes.

Questions? Comments? Email us at on@voxmedia.com or find us on Instagram and TikTok @onwithkaraswisher

Learn more about your ad choices. Visit podcastchoices.com/adchoices
