EP 377: Confronting AI Bias and AI Discrimination in the Workplace
Manage episode 444554958 series 3470198
Send Everyday AI and Jordan a text message
Think AI is neutral? Think again. This is the workplace impact you never saw coming. What happens when the tech we rely on to be impartial actually reinforces bias? Join us for a deep dive into AI bias and discrimination with Samta Kapoor, EY’s Americas Energy AI and Responsible AI Leader.
Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Ask Jordan and Samta questions on AI
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn
Topics Covered in This Episode:
1. Business Leaders Confronting AI Bias and Discrimination
2. AI Guardrails
3. Bias and Discrimination in AI Models
4. AI and the Future of Work
5. Responsible AI and the Future
Timestamps:
02:10 About Samta Kapoor and her role at EY
05:33 AI carries risks and biases; guardrails are recommended.
06:42 Governance ensures technology is scaled responsibly.
13:33 Models reflect and mirror societal biases and discrimination.
16:10 Embracing AI builds adaptability rather than replacing jobs.
19:04 Leveraging AI for business transformation and innovation.
23:05 Rapidly changing technology requires agile adaptation.
25:12 Address AI bias to reduce employee anxiety.
Keywords:
generative AI, AI bias, AI discrimination, business leaders, model bias, model discrimination, AI models, AI guardrails, AI governance, AI policy, Ernst and Young, AI risk, AI implementation, AI investment, AI hype, AI fear, AI training, workplace AI, AI understanding, AI usage, AI responsibilities, generative AI implementation, practical AI use cases, AI audit, AI technology advancement, multimodal models, AI tech enablement, AI innovation, company AI policies, AI anxiety.