Responsible AI: Defining, Implementing, and Navigating the Future; With Guest: Diya Wynn
In this episode of The MLSecOps Podcast, Diya Wynn, Sr. Practice Manager of Responsible AI in the Machine Learning Solutions Lab at Amazon Web Services, shares her background and the motivations that led her to pursue a career in Responsible AI.
Diya discusses her passion for work related to diversity, equity, and inclusion (DEI), and how Responsible AI offers a unique opportunity to merge that passion with what has always been her core focus: technology. She defines Responsible AI as an operating approach focused on minimizing unintended impact and maximizing benefits. The group also spends time discussing Generative AI and its potential to perpetuate biases and raise ethical concerns.
Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.
Additional tools and resources to check out:
Protect AI Guardian: Zero Trust for ML Models
Recon: Automated Red Teaming for GenAI
Protect AI’s ML Security-Focused Open Source Tools
LLM Guard Open Source Security Toolkit for LLM Interactions
Huntr - The World's First AI/Machine Learning Bug Bounty Platform