
Differential privacy: Balancing data privacy and utility in AI

Dr. Andrew Clark & Sid Mangalik, Dr. Andrew Clark, and Sid Mangalik에서 제공하는 콘텐츠입니다. 에피소드, 그래픽, 팟캐스트 설명을 포함한 모든 팟캐스트 콘텐츠는 Dr. Andrew Clark & Sid Mangalik, Dr. Andrew Clark, and Sid Mangalik 또는 해당 팟캐스트 플랫폼 파트너가 직접 업로드하고 제공합니다. 누군가가 귀하의 허락 없이 귀하의 저작물을 사용하고 있다고 생각되는 경우 여기에 설명된 절차를 따르실 수 있습니다 https://ko.player.fm/legal.

Explore the basics of differential privacy and its critical role in protecting individual anonymity. The hosts explain the latest guidelines and best practices for applying differential privacy to data used in AI models. Learn how this method helps ensure that personal data remains confidential, even when datasets are analyzed or breached.
Show Notes

  • Intro and AI news (00:00)
  • What is differential privacy? (06:34)
    • Differential privacy is a process for sensitive data anonymization that offers each individual in a dataset the same privacy they would have if they were removed from the dataset entirely (a formal statement follows this list).
    • NIST’s recent paper SP 800-226 IPD: “Any privacy harms that result from a differentially private analysis could have happened if you had not contributed your data.”
    • There are two main types of differential privacy: global (which NIST calls “central”) and local (the sketches after this list illustrate one of each)
  • Why should people care about differential privacy? (11:30)
    • Organizations are increasingly motivated to intentionally and systematically prioritize the privacy and safety of user data
    • Speed up deployments of AI systems for enterprise customers, since connections to raw data do not need to be established
    • Increase data security for customers that use sensitive data in their modeling systems
    • Minimize the risk of exposing the sensitive data your organization is entrusted with, i.e., don’t be THAT organization
  • Guidelines and resources for applied differential privacy
  • Practical examples of applied differential privacy (15:58)
    • Continuous features - cite: Dwork, McSherry, Nissim, and Smith’s seminal 2006 paper, “Calibrating Noise to Sensitivity in Private Data Analysis” [2], which introduces the concept of ε-differential privacy (see the Laplace-mechanism sketch after this list)
    • Categorical features - cite: Warner (1965), who introduced a randomized response technique in his paper “Randomized Response: A Survey Technique for Eliminating Evasive Answer Bias” (see the randomized-response sketch after this list)
  • Summary and key takeaways (23:59)
    • Differential privacy is going to be a part of how many of us need to manage data privacy
    • For when data providers can’t give us anonymized data for analysis, or when anonymization alone isn’t enough for our privacy needs
    • Hopeful that cohort targeting takes over from individual targeting
    • Remember: Differential privacy does not prevent bias!
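For those who want the informal definition above stated precisely: a randomized mechanism M is ε-differentially private if, for every pair of datasets D and D' that differ in a single individual's record, and for every set of possible outputs S,

$$\Pr[M(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(D') \in S]$$

A smaller ε (the “privacy budget”) forces the two output distributions closer together, so the released result reveals less about any one person, at the cost of more noise and less utility.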
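To make the continuous-feature example concrete, here is a minimal sketch of the Laplace mechanism from the Dwork et al. paper, in the global (central) setting where a trusted curator adds noise to an aggregate query result. The toy dataset, the clipping range, and the function name are illustrative assumptions, not anything from the episode:

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release an epsilon-differentially-private version of an aggregate result.

    Per Dwork, McSherry, Nissim, and Smith (2006), adding noise drawn from a
    Laplace distribution with scale = sensitivity / epsilon gives epsilon-DP.
    """
    return true_value + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# Hypothetical example: privately release the mean age of a toy dataset.
ages = np.array([34.0, 45.0, 29.0, 52.0, 41.0])  # illustrative data only
epsilon = 1.0  # privacy budget: smaller epsilon = stronger privacy, more noise
# If ages are clipped to [0, 100], swapping one person changes the mean by
# at most 100 / n, so that is the sensitivity of the mean query.
sensitivity = 100 / len(ages)

private_mean = laplace_mechanism(ages.mean(), sensitivity, epsilon)
print(f"true mean: {ages.mean():.2f}, private mean: {private_mean:.2f}")
```

Because the curator sees the raw data and only the released statistic is noised, this is the global flavor of differential privacy.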
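For the categorical-feature example, here is a minimal sketch of Warner-style randomized response, which is also the classic example of local differential privacy: each respondent randomizes their own answer before it leaves their hands, so no individual response can be trusted, yet the population rate is still recoverable. The survey size, true rate, and coin probabilities are illustrative assumptions:

```python
import random

def randomized_response(true_answer: bool) -> bool:
    """Warner (1965)-style randomized response with fair coins.

    With probability 1/2 report the truth; otherwise report a uniformly
    random yes/no. Every reported answer is plausibly deniable.
    """
    if random.random() < 0.5:
        return true_answer
    return random.random() < 0.5  # random yes/no

def estimate_true_rate(responses: list[bool]) -> float:
    """Unbiased estimate of the true 'yes' rate from the noisy responses.

    E[observed rate] = 0.5 * true_rate + 0.25, so true_rate = 2 * observed - 0.5.
    """
    observed = sum(responses) / len(responses)
    return 2 * observed - 0.5

# Hypothetical survey: 10,000 respondents, 30% of whom would truly answer "yes".
truths = [random.random() < 0.30 for _ in range(10_000)]
responses = [randomized_response(t) for t in truths]
print(f"estimated true 'yes' rate: {estimate_true_rate(responses):.3f}")  # ~0.30
```

This fair-coin variant satisfies ε-differential privacy with ε = ln 3: a true “yes” is reported as “yes” with probability 3/4, a true “no” with probability 1/4, and 0.75 / 0.25 = 3.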

What did you think? Let us know.

Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:

  • LinkedIn - Episode summaries, shares of cited articles, and more.
  • YouTube - Was it something that we said? Good. Share your favorite quotes.
  • Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.

Chapters

1. Differential privacy: Balancing data privacy and utility in AI (00:00:00)

2. Intro and AI news (00:00:03)

3. Understanding differential privacy (00:06:39)

4. Who needs to care about differential privacy and why? (00:11:30)

5. Ideal use cases and examples (00:15:58)

6. Summary and key takeaways (00:23:59)
