How AI is destroying our moral & civil efficacy ft. Elizabeth Adams

27:13

Saved series ("Inactive feed" status)

When? This feed was archived on April 07, 2023 15:16 (1y ago). Last successful fetch was on October 25, 2022 09:51 (1+ y ago)

Why? Inactive feed status. A temporary server problem prevented the podcast from being fetched.

What now? You might be able to find a more up-to-date version using the search function. This series will no longer be checked for updates. If you believe this to be in error, please check whether the publisher's feed link below is valid, and contact support to request that the feed be restored or to raise any other concerns.

Content provided by Paul White-Jennings. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Paul White-Jennings or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://ko.player.fm/legal.

How often do we trust the technology around us? Should we ever? CEO and founder of EMA Advisory Services, Elizabeth Adams wants to know – especially as it relates to AI surveillance. Smart phones, social media, and facial and voice recognition are commonplace for many. But do we know what, if any, ethical considerations shaped their development? That’s why Elizabeth is on a mission to fight for ethical, human-centric AI. Join us as we uncover hard truths about the role civic tech plays in our communities.

Key Takeaways:

[1:56] Elizabeth, a long-time technologist, shares how she came to be involved in the ethical use of AI. After being part of the working poor for many years, she made a decision to focus on giving a voice to the voiceless.

[4:31] How does bias get coded into facial recognition? Systems sold to and trained by law enforcement can be biased in ways that portray Black and Brown people as more suspicious. This can do irreversible harm to communities that have traditionally faced discrimination.

[6:00] It’s not just facial recognition technology that can be biased and ultimately harmful; other computer vision technologies can be as well. Elizabeth discusses the example of a system that, during COVID, read an infrared thermometer as a firearm more often in the hands of darker-skinned users than lighter-skinned ones. When this type of technology is in the hands of governing bodies, this kind of AI can be dangerous to civilians.

[6:20] Elizabeth’s work with AI is first and foremost about making tech, especially surveillance tech, safe for citizens. That work took root in the city of Minneapolis, where she zeroed in on civic tech initiatives. Elizabeth explains that civic tech is when the government and the community work together on a shared leadership decision around what technology should be used to help govern society.

[7:27] Elizabeth discusses the coalition POSTME (Public Oversight of Surveillance Technology and Military Equipment) that she founded in Minneapolis. The murder of George Floyd by former police officer Derek Chauvin in 2020 sent a shockwave across the world, one that resulted in public demand for greater accountability and oversight of the way citizens, and especially communities of color, are policed. As a technologist focused on civic tech, Elizabeth uses her expertise, coupled with the power of advocacy, to change the kinds of tech that police in Minneapolis can use.

[10:41] Often, those doing the surveillance are too removed from those being policed. This is especially dangerous for Black and Brown communities: if the police don’t know the people they’re supposed to be serving, they often fail to distinguish between who is a threat and who isn’t.

[13:49] Clearview AI is a facial recognition technology designed for use by law enforcement. When it was adopted by the city of Minneapolis, Elizabeth’s coalition discovered the tech was using data in clearly unethical ways. In February of this year, the Minneapolis City Council voted unanimously to ban the use of facial recognition technology. Although challenging, this was a big win for Elizabeth and her team.

[16:01] So what business does AI-driven facial recognition have in the hands of the law? Elizabeth explains how it could be used for good, from helping recover someone with dementia who has gone missing to identifying the perpetrator of a crime.

[19:18] Whether the bias is coded into the AI itself or lies with those using it, we need to pay more attention to the way we govern it, and that attention needs to start at the design stage.

[20:11] As consumers, we trust new technologies too easily and forget to think about who may be harmed by them. Elizabeth gives the example of Hello Barbie, discontinued in 2015 after its AI proved able not only to speak to kids but also to listen to them.

[23:02] Elizabeth and other leading technologists have given so much to society, but no one has asked what they have given up. Time, educational goals, and personal moments with family are all sometimes sacrificed to the time it takes to create new, ethical AI that is safe for everyone.

[25:20] With endless opportunities to innovate, we need to ask of each new technology: what is its purpose, and who is it serving? How can it bring us together, and who might it hurt?

Quotes:

“I made a decision that I would definitely focus on those who are the voiceless, those who have no seat at the table and have no decision-making power or shared decision-making power at the table.” - [2:23] Elizabeth

“It starts in the design session with the data. And if the data is not diverse, then the system output will not be able to identify diverse people.” - [4:50] Elizabeth

“Often, those doing the surveillance are too removed from those being policed.” - [10:41] Jo

“I don't think that we can live in a world post 9/11 here in the US without some sort of surveillance. However, it needs to be ethical. It needs to be explainable. It needs to be trustworthy and transparent. There needs to be some oversight.” - [19:45] Elizabeth

“We aren’t going to get away from technology, so why not make it as safe as possible?” - [21:57] Elizabeth

“With endless opportunities for tech companies to innovate with AI, we all need to start asking more pointed questions about its purpose, and who exactly it’s serving.” - [25:40] Jo

“The future of ethical AI is going to be determined by our ability and willingness to ask big questions. So we need people in every corner of every industry asking: Is this technology safe? Do we understand how it uses our data? Does it have our permission to use it? Did it even ask us? And if it does, if we say yes, it needs to be because it serves a purpose. Because it serves us all.” - [27:03] Jo

Continue on your journey:

pega.com/podcast

Mentioned:


45 episodes
