Content provided by Oxford University. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Oxford University or its podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ko.player.fm/legal.

Does AI threaten Human Autonomy?

1:38:20
Saved series ("Inactive feed" status)

When? This feed was archived on March 03, 2025 13:10 (9M ago). Last successful fetch was on January 17, 2023 05:31 (3y ago)

Why? Inactive feed status. The servers were temporarily unable to retrieve a valid podcast feed.

What now? You might be able to find a more up-to-date version using the search function. This series will no longer be checked for updates. If you believe this to be in error, please check whether the publisher's feed link below is valid, then contact support to request that the feed be restored, or if you have any other concerns about this.

Manage episode 337249568 series 3380500
This event is also part of the Humanities Cultural Programme, one of the founding stones for the future Stephen A. Schwarzman Centre for the Humanities.

How can AI systems influence our decision-making in ways that undermine autonomy? Do they do so in new or more problematic ways? To what extent can we outsource tasks to AI systems without losing our autonomy? Do we need a new conception of autonomy that incorporates considerations of the digital self?

Autonomy is a core value in contemporary Western societies – it is a value that is invoked across a range of debates in practical ethics, and it lies at the heart of liberal democratic theory. It is therefore no surprise that AI policy documents frequently champion the importance of ensuring the protection of human autonomy. At first glance, this sort of protection may appear unnecessary – after all, in some ways, it seems that AI systems can serve to significantly enhance our autonomy. They can give us more information upon which to base our choices, and they may allow us to achieve many of our goals more effectively and efficiently.

However, it is becoming increasingly clear that AI systems do pose a number of threats to our autonomy. One (but not the only) example is the fact that they enable the pervasive and covert use of manipulative and deceptive techniques that aim to target and exploit well-documented vulnerabilities in our decision-making. This raises the question of whether it is possible to harness the considerable power of AI to improve our lives in a manner that is compatible with respect for autonomy, and whether we need to reconceptualize both the nature and value of autonomy in the digital age.

In this session, Carina Prunkl, Jessica Morley and Jonathan Pugh engage with these general questions, using the example of mHealth tools as an illuminating case study for a debate about the various ways in which an AI system can both enhance and hinder our autonomy.
Speakers

Dr Carina Prunkl, Research Fellow at the Institute for Ethics in AI, University of Oxford (where she is one of the inaugural team); also Research Affiliate at the Centre for the Governance of AI, Future of Humanity Institute. Carina works on the ethics and governance of AI, with a particular focus on autonomy, and has both publicly advocated and published on the importance of accountability mechanisms for AI.

Jessica Morley, Policy Lead at Oxford's DataLab, leading its engagement work to encourage use of modern computational analytics in the NHS, and ensuring public trust in health data records (notably those developed in response to the COVID-19 pandemic). Jess is also pursuing a related doctorate at the Oxford Internet Institute's Digital Ethics Lab. As Technical Advisor for the Department of Health and Social Care, she co-authored the NHS Code of Conduct for data-driven technologies.

Dr Jonathan Pugh, Senior Research Fellow at the Oxford Uehiro Centre for Practical Ethics, University of Oxford, researching how far AI ethics should incorporate traditional conceptions of autonomy and "moral status". He recently led a three-year project on the ethics of experimental Deep Brain Stimulation and "neuro-hacking", and in 2020 published Autonomy, Rationality and Contemporary Bioethics (OUP). He has written on a wide range of ethical topics, but has a particular interest in issues concerning personal autonomy and informed consent.

Chair

Professor Peter Millican is Gilbert Ryle Fellow and Professor of Philosophy at Hertford College, Oxford. He has researched and published over a wide range, including Early Modern Philosophy, Epistemology, Ethics, Philosophy of Language and of Religion, but has a particular focus on interdisciplinary connections with Computing and AI. He founded and oversees the Oxford undergraduate degree in Computer Science and Philosophy, which has been running since 2012, and last year he instituted this ongoing series of Ethics in AI Seminars.

27 episodes

