It’s time for data scientists to collaborate with researchers in other disciplines
In this episode of the Data Show, I spoke with Forough Poursabzi-Sangdeh, a postdoctoral researcher at Microsoft Research New York City. Poursabzi works in the interdisciplinary area of interpretable and interactive machine learning. As models and algorithms become more widespread, many important considerations are becoming active research areas: fairness and bias, safety and reliability, security and privacy, and Poursabzi’s area of focus—explainability and interpretability.
We had a great conversation spanning many topics, including:
- Current best practices and state-of-the-art methods for explaining or interpreting deep learning models and, more generally, machine learning models.
- The limitations of current model interpretability methods.
- The lack of clear, standard metrics for comparing different approaches to model interpretability.
- Because many current AI and machine learning applications augment humans, Poursabzi believes it’s important for data scientists to work closely with researchers in other disciplines.
- The importance of using human subjects in model interpretability studies.
Related resources:
- “Local Interpretable Model-Agnostic Explanations (LIME): An Introduction” (see the code sketch after this list)
- “Interpreting predictive models with Skater: Unboxing model opacity”
- Jacob Ward on “How social science research can inform the design of AI systems”
- Sharad Goel and Sam Corbett-Davies on “Why it’s hard to design fair machine learning models”
- “Managing risk in machine learning”: considerations for a world where ML models are becoming mission critical
- Francesca Lazzeri and Jaya Mathew on “Lessons learned while helping enterprises adopt machine learning”
- Jerry Overton on “Teaching and implementing data science and AI in the enterprise”
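For readers who want to go from the LIME article above to something runnable, here is a minimal sketch of a local, model-agnostic explanation using the open-source `lime` Python package together with scikit-learn. The dataset, model, and parameter choices are illustrative assumptions, not details from the episode.

```python
# A minimal LIME sketch: explain one prediction of an opaque classifier
# by fitting an interpretable surrogate model around that prediction.
# Assumes `pip install lime scikit-learn`; all choices below are illustrative.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Train a model whose individual predictions we want to explain.
data = load_iris()
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

# LIME perturbs the instance and fits a sparse linear model locally.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    discretize_continuous=True,
)
explanation = explainer.explain_instance(
    data.data[25],        # the single instance to explain
    model.predict_proba,  # LIME needs class probabilities
    num_features=4,
)

# Each pair is (feature condition, weight in the local surrogate model).
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

Because the explanation is local, the printed weights describe only how features influenced this one prediction; comparing such explanations across methods runs directly into the missing-standard-metrics problem discussed in the episode.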