Content provided by DataRobot. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by DataRobot or its podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process outlined here: https://ko.player.fm/legal.

Bringing History and Foresight to Ethical AI - Meg Mitchell

1:03:18
 
In this episode of More Intelligent Tomorrow, Meg talks to Michael Gilday about the opportunities in machine learning to create a more diverse, equitable, and inclusive future.
Dr. Margaret Mitchell (Meg) is a researcher in ethics-informed AI with a focus on natural language generation, computer vision, and other augmentative and assistive technologies.

Foresight is an indispensable tool for shaping and evaluating AI project outcomes. Instead of focusing on creating technology to improve something that already exists, a longer-term focus, one that looks two, five, or ten years into the future, can help us understand what we should be working on today. It’s a fairly straightforward way of thinking, yet foresight is often brushed aside as incalculable.

Foresight can also present a liability issue. If you’re working on a technology that will be less discriminatory, for example, that implies your technology right now is discriminatory. Fear of impending regulation, and of misinterpretations that hamper development, can cause a troubling lack of imagination within development teams.

Bringing in people who have a creative mindset or a different perspective can help technical teams see things in a more imaginative way. Science fiction writers, for example, are adept at bringing foresight into a project and can help teams think through how things might evolve over time. That, in turn, could help us be smarter about the kinds of development we do.

Similarly, historians can shed light on patterns of development over time. Instead of focusing on how rapidly technology is changing, they can offer a reflection on corresponding power dynamics and sociological changes that can also inform how we develop a technology.

A collaboration between humanities-oriented and science-oriented thinkers can help us think through the storyline of what a technology should be. There’s a need to focus not only on how well a model or system works in isolation but also on how well it works in context.

“Understanding how people use a technology, and therefore understanding people, is not something computer scientists are always good at. It requires different skill sets, which makes collaboration with subject matter experts critical.”

To really understand what it means to have AI in our social contexts, we need social scientists, anthropologists, and historians. So, how do we bring a diversity of voices and experiences into these technological challenges and conversations?

“Now is a great time to focus our attention on the science of diversity and inclusion. We’re on a global scale we haven’t been able to see before. We have infinitely better access to different cultures and perspectives on differences and similarities like we’ve never had before.”

Listen to this episode of More Intelligent Tomorrow to learn about:

  • The culture of ethical behavior and the bottom-up, top-down approach with regulators and corporations
  • How no-code solutions are removing barriers in AI and machine learning work
  • Malicious actors vs. irresponsible ones and why ignorance is the biggest problem we face
  • Gender bias and progress in bringing more women into tech and STEM
  • How transparency can be prioritized over the obfuscation that is prevalent right now
