Artwork

Daily Security Review에서 제공하는 콘텐츠입니다. 에피소드, 그래픽, 팟캐스트 설명을 포함한 모든 팟캐스트 콘텐츠는 Daily Security Review 또는 해당 팟캐스트 플랫폼 파트너가 직접 업로드하고 제공합니다. 누군가가 귀하의 허락 없이 귀하의 저작물을 사용하고 있다고 생각되는 경우 여기에 설명된 절차를 따르실 수 있습니다 https://ko.player.fm/legal.
Player FM -팟 캐스트 앱
Player FM 앱으로 오프라인으로 전환하세요!

ChatGPT Calendar Vulnerability Exposes User Emails in New AI Attack

20:27
 
공유
 

Manage episode 506958600 series 3645080
Daily Security Review에서 제공하는 콘텐츠입니다. 에피소드, 그래픽, 팟캐스트 설명을 포함한 모든 팟캐스트 콘텐츠는 Daily Security Review 또는 해당 팟캐스트 플랫폼 파트너가 직접 업로드하고 제공합니다. 누군가가 귀하의 허락 없이 귀하의 저작물을 사용하고 있다고 생각되는 경우 여기에 설명된 절차를 따르실 수 있습니다 https://ko.player.fm/legal.

A critical vulnerability has been uncovered in ChatGPT’s new calendar integration, exposing how attackers could exfiltrate sensitive user data—particularly emails—through a deceptively simple exploit. Security researchers at EdisonWatch, led by Eito Miyamura, demonstrated how a malicious calendar invitation could contain hidden instructions that ChatGPT would execute when a user checked their calendar. Shockingly, the victim doesn’t even need to accept the invite: the moment ChatGPT reads it, the hidden commands can instruct the model to retrieve and send private inbox data to an attacker’s address.

This type of AI-driven attack exploits the Model Context Protocol (MCP) that allows ChatGPT to connect with personal and enterprise tools. While the exploit currently requires developer mode and user approval, Miyamura highlights how “decision fatigue” makes users more likely to click approve repeatedly, paving the way for exploitation.

Importantly, this is not an isolated issue. Similar flaws have been reported in other AI assistants like Gemini, Copilot, and Salesforce Einstein, underscoring a systemic weakness in how LLMs interact with third-party applications. Past demonstrations have shown these vulnerabilities can be weaponized not just to steal emails, but also to delete events, reveal locations, or even manipulate smart devices.

To address the risk, EdisonWatch has released an open-source security solution designed to enforce policy-as-code and monitor AI interactions, providing a safeguard against these integration-based attack vectors.

This episode explores how the exploit works, why approval fatigue is the real vulnerability, and what this means for the future of AI-native security in enterprise environments.

#ChatGPT #EdisonWatch #AIsecurity #CalendarIntegration #DataExfiltration #LLMsecurity #Gemini #Copilot #SalesforceEinstein #PromptInjection #DecisionFatigue #EnterpriseSecurity

  continue reading

373 에피소드

Artwork
icon공유
 
Manage episode 506958600 series 3645080
Daily Security Review에서 제공하는 콘텐츠입니다. 에피소드, 그래픽, 팟캐스트 설명을 포함한 모든 팟캐스트 콘텐츠는 Daily Security Review 또는 해당 팟캐스트 플랫폼 파트너가 직접 업로드하고 제공합니다. 누군가가 귀하의 허락 없이 귀하의 저작물을 사용하고 있다고 생각되는 경우 여기에 설명된 절차를 따르실 수 있습니다 https://ko.player.fm/legal.

A critical vulnerability has been uncovered in ChatGPT’s new calendar integration, exposing how attackers could exfiltrate sensitive user data—particularly emails—through a deceptively simple exploit. Security researchers at EdisonWatch, led by Eito Miyamura, demonstrated how a malicious calendar invitation could contain hidden instructions that ChatGPT would execute when a user checked their calendar. Shockingly, the victim doesn’t even need to accept the invite: the moment ChatGPT reads it, the hidden commands can instruct the model to retrieve and send private inbox data to an attacker’s address.

This type of AI-driven attack exploits the Model Context Protocol (MCP) that allows ChatGPT to connect with personal and enterprise tools. While the exploit currently requires developer mode and user approval, Miyamura highlights how “decision fatigue” makes users more likely to click approve repeatedly, paving the way for exploitation.

Importantly, this is not an isolated issue. Similar flaws have been reported in other AI assistants like Gemini, Copilot, and Salesforce Einstein, underscoring a systemic weakness in how LLMs interact with third-party applications. Past demonstrations have shown these vulnerabilities can be weaponized not just to steal emails, but also to delete events, reveal locations, or even manipulate smart devices.

To address the risk, EdisonWatch has released an open-source security solution designed to enforce policy-as-code and monitor AI interactions, providing a safeguard against these integration-based attack vectors.

This episode explores how the exploit works, why approval fatigue is the real vulnerability, and what this means for the future of AI-native security in enterprise environments.

#ChatGPT #EdisonWatch #AIsecurity #CalendarIntegration #DataExfiltration #LLMsecurity #Gemini #Copilot #SalesforceEinstein #PromptInjection #DecisionFatigue #EnterpriseSecurity

  continue reading

373 에피소드

Усі епізоди

×
 
Loading …

플레이어 FM에 오신것을 환영합니다!

플레이어 FM은 웹에서 고품질 팟캐스트를 검색하여 지금 바로 즐길 수 있도록 합니다. 최고의 팟캐스트 앱이며 Android, iPhone 및 웹에서도 작동합니다. 장치 간 구독 동기화를 위해 가입하세요.

 

빠른 참조 가이드

탐색하는 동안 이 프로그램을 들어보세요.
재생