Niv Braun on AI Security Measures and Emerging Threats
In today's episode, we're thrilled to have Niv Braun, co-founder and CEO of Noma Security, join us as we tackle some pressing issues in AI security.
With the rapid adoption of generative AI technologies, the landscape of data security is evolving at breakneck speed. We'll explore the increasing need to secure systems that handle sensitive AI data and pipelines, the rise of AI security careers, and the looming threats of adversarial attacks, model "hallucinations," and more. Niv will share his insights on how companies like Noma Security are working tirelessly to mitigate these risks without hindering innovation.
We'll also dive into real-world incidents, such as compromised open-source models and the infamous PyTorch breach, to illustrate the critical need for improved security measures. From the importance of continuous monitoring to the development of safer formats and the adoption of a zero trust approach, this episode is packed with valuable advice for organizations navigating the complex world of AI security.
So, whether you're a data scientist, AI engineer, or simply an enthusiast eager to learn more about the intersection of AI and security, this episode promises to offer a wealth of information and practical tips to help you stay ahead in this rapidly changing field. Tune in and join the conversation as we uncover the state of AI security and what it means for the future of technology.
Quotable Moments
00:00 Security spotlight shifts to data and AI.
03:36 Protect against misconfigurations, adversarial attacks, new risks.
09:17 Compromised model with undetectable data leaks.
12:07 Manual parsing needed for valid, malicious code detection.
15:44 Concerns over Hugging Face models may affect jobs.
20:00 Combines self-developed and third-party AI models.
20:55 Ensure models don't use sensitive or unauthorized data.
25:55 Zero Trust: mindset, philosophy, implementation, security framework.
30:51 LLM attacks will have significantly higher impact.
34:23 Need better security awareness, exposed secrets risk.
35:50 Be organized with visibility and governance.
39:51 Red teaming for AI security and safety.
44:33 Gen AI primarily used by consumers, not businesses.
47:57 Providing model guardrails and runtime protection services.
50:53 Ensure flexible, configurable architecture for varied needs.
52:35 AI, security, and innovation discussed by Niv Braun.