
Content provided by the Faculty of Law, University of Cambridge. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by the Faculty of Law, University of Cambridge, or its podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ko.player.fm/legal.

Faithful or Traitor? The Right of Explanation in a Generative AI World: CIPIL Evening Seminar

Duration: 49:02

Speaker: Professor Lilian Edwards, Emerita Professor of Law, Innovation & Society, Newcastle Law School

Biography: Lilian Edwards is a leading academic in the field of Internet law. She has taught information technology law, e-commerce law, privacy law and Internet law at undergraduate and postgraduate level since 1996, and has been involved with law and artificial intelligence (AI) since 1985. She is now Emerita Professor at Newcastle and Honorary Professor at CREATe, University of Glasgow, which she helped co-found. She is the editor and major author of Law, Policy and the Internet, one of the leading textbooks in the field of Internet law (Hart, 2018; new edition forthcoming with Urquhart and Goanta, 2026). She won the Future of Privacy Forum award in 2019 for best paper ("Slave to the Algorithm", with Michael Veale) and the award for best non-technical paper at FAccT in 2020, on automated hiring. In 2004 she won the Barbara Wellberry Memorial Prize for work on online privacy, in which she invented the notion of data trusts, a concept that was proposed in EU legislation ten years later. She is a former fellow of the Alan Turing Institute on Law and AI, and of the Institute for the Future of Work. Edwards has consulted for, inter alia, the EU Commission, the OECD, and WIPO.

Abstract: The right to an explanation is having another moment. Well after the heyday of 2016–2018, when scholars tussled over whether the GDPR (in either art 22 or arts 13–15) conferred a right to explanation, the CJEU case of Dun & Bradstreet has finally confirmed its existence, and the Platform Work Directive has revamped art 22 wholesale in its Algorithmic Management chapter. Most recently, the EU AI Act added its own Frankenstein-like right to an explanation of AI systems (art 86).

None of these provisions, however, pins down what the essence of the explanation should be, given the many notions that can be invoked here: a faithful description of source code or training data; an account that enables challenge or contestation; or a "plausible" description that may be appealing in a behaviouralist sense but actually misleading when operationalised, e.g. to generate a medical course of treatment. Agarwal et al. argue that the tendency of UI designers, regulators and judges alike to lean towards the plausibility end may be unsuited to large language models, which are far more of a black box in size and optimisation than conventional machine learning, and which are trained to present encouraging but not always accurate accounts of their workings. Yet this is also the direction of travel taken by the CJEU in Dun & Bradstreet, above. This paper argues that explanations of large model outputs may present novel challenges needing thoughtful legal mandates.
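
To make the faithful/plausible distinction concrete, the following minimal Python sketch (illustrative only, not taken from the seminar materials; all feature names, weights and figures are invented) contrasts a faithful explanation, derived directly from a transparent model's own parameters, with a plausible one: a fluent post-hoc narrative of the kind a large language model might offer about its own output.

# Illustrative sketch only: "faithful" vs merely "plausible" explanations,
# using a hypothetical toy credit-scoring model. All names and numbers
# here are invented for illustration.

# A transparent linear model: score = bias + sum(weight_f * value_f).
WEIGHTS = {"income": 0.6, "debt": -0.9, "years_employed": 0.3}
BIAS = -0.2

def score(applicant: dict) -> float:
    return BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def faithful_explanation(applicant: dict) -> dict:
    # Faithful: each figure is the feature's exact contribution to the
    # score, derived from the model's actual parameters, so the account
    # can be audited and contested against the model itself.
    return {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}

def plausible_explanation(applicant: dict) -> str:
    # Plausible: a fluent after-the-fact narrative. A large language model
    # has no small inspectable parameter set like WEIGHTS above, so a
    # self-reported rationale of this kind cannot be checked against the
    # computation that actually produced the output.
    return "The application was declined because overall risk appeared high."

applicant = {"income": 0.5, "debt": 0.8, "years_employed": 0.2}
print(score(applicant))                 # approx. -0.56: negative, so decline
print(faithful_explanation(applicant))  # exact per-feature contributions
print(plausible_explanation(applicant))

The contrast is the crux of the worry sketched above: for conventional models the first kind of account is at least possible, whereas for large language models typically only the second is on offer, however convincing it sounds.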

For more information (and to download slides) see:

https://www.cipil.law.cam.ac.uk/seminars-and-events/cipil-seminars


165 episodes

