S30 Ep4: BONUS: Brian Lord - AI, Mis- and Disinformation in Election Fraud and Education
This is the second of a two-part conversation between Steve and Brian Lord, Chief Executive Officer of Protection Group International. Prior to joining PGI, Brian served as Deputy Director of a UK government agency, overseeing its cyber and intelligence operations. Today, Steve and Brian discuss the proliferation of mis- and disinformation online, the potential security threats posed by AI, and the need to educate children in cyber awareness from a young age.
Key Takeaways:
1. The private sector serves as a skilled and necessary support to the public sector, working to counter mis- and disinformation campaigns, including those involving AI.
2. AI’s increasing ability to create fabricated images poses a particular threat to youth and other vulnerable users.
Tune in to hear more about:
1. Brian's assessment of cybersecurity threats during election years. (16:04)
2. Exploitation of vulnerable users remains a major concern in the digital space, requiring awareness, innovative countermeasures, and regulation. (31:0)
Standout Quotes:
1. “I think when we look at AI, we need to recognize it is a potentially long term larger threat to our institutions, our critical mass and infrastructure, and we need to put in countermeasures to be able to do that. But we also need to recognize that the most immediate impact on that is around what we call high harms, if you like. And I think that was one of the reasons the UK — over a torturously long period of time — introduced the Online Harms Bill to be able to counter some of those issues. So we need to get AI in perspective. It is a threat. Of course it is a threat. But I see then when one looks at AI applied in the cybersecurity test, you know, automatic intelligence developing hacking techniques, bear in mind, AI is available to both sides. It's not just available to the attackers, it's available to the defenders. So what we are simply going to do is see that same kind of thing that we have in the more human-based countering the cybersecurity threat in an AI space.” -Brian Lord
2. “The problem we have now — now, one can counter that by the education of children, keeping them aware, and so on and so forth — the problem you have now is the ability, because of the availability of imagery online and AI's ability to create imagery, one can create an entirely fabricated image of a vulnerable target and say, this is you. Even though it isn’t … when you're looking at the most vulnerable in our society, that's a very, very difficult thing to counter, because it doesn't matter whether it's real to whoever sees it, or the fear from the most vulnerable people, people who see it, they will believe that it is real. And we've seen that.” -Brian Lord
Mentioned in this episode:
• ISF Analyst Insight Podcast
Read the transcript of this episode
Subscribe to the ISF Podcast wherever you listen to podcasts
Connect with us on LinkedIn and Twitter
From the Information Security Forum, the leading authority on cyber, information security, and risk management.