Measuring Bias, Toxicity, and Truthfulness in LLMs With Python
How can you measure the quality of a large language model? What tools can measure bias, toxicity, and truthfulness levels in a model using Python? This week on the show, Jodie Burchell, developer advocate for data science at JetBrains, returns to discuss techniques and tools for evaluating LLMs with Python.
Jodie provides some background on large language models and how they can absorb vast amounts of information about the relationship between words using a type of neural network called a transformer. We discuss training datasets and the potential quality issues with crawling uncurated sources.
We dig into ways to measure levels of bias, toxicity, and hallucinations using Python. Jodie shares three benchmarking datasets and links to resources to get you started. We also discuss ways to augment models using agents or plugins, which can access search engine results or other authoritative sources.
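As a starting point, the three benchmark datasets discussed in the episode (TruthfulQA, WinoBias, and BOLD) can be pulled down with the Hugging Face datasets library. This is a minimal sketch only: the hub IDs, configuration names, and field names below are assumptions, so check them against the dataset pages under Dataset Links.

```python
from datasets import load_dataset

# TruthfulQA: free-form question/answer pairs in the "generation" config.
truthful_qa = load_dataset("truthful_qa", "generation", split="validation")

# WinoBias: pro-stereotypical coreference sentences (one of four configs).
wino_bias = load_dataset("wino_bias", "type1_pro", split="test")

# BOLD: generation prompts grouped by domain (gender, profession, ...).
bold = load_dataset("AlexaAI/bold", split="train")

print(truthful_qa[0]["question"])
print(wino_bias[0]["tokens"][:10])
print(bold[0]["prompts"][0])
```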
This week’s episode is brought to you by Intel.
Course Spotlight: Learn Text Classification With Python and Keras
In this course, you’ll learn about Python text classification with Keras, working your way from a bag-of-words model with logistic regression to more advanced methods, such as convolutional neural networks. You’ll see how you can use pretrained word embeddings, and you’ll squeeze more performance out of your model through hyperparameter optimization.
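As a rough idea of where the course starts, here is a minimal bag-of-words baseline in Keras. The toy corpus is invented for illustration, and a single sigmoid unit over word counts is equivalent to logistic regression.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from tensorflow import keras

# Toy corpus for illustration; the course works with a real review dataset.
texts = ["great product", "really great", "terrible service", "awful and terrible"]
labels = np.array([1, 1, 0, 0])

vectorizer = CountVectorizer()  # bag-of-words features
X = vectorizer.fit_transform(texts).toarray().astype("float32")

# One dense sigmoid unit over word counts == logistic regression.
model = keras.Sequential([keras.layers.Dense(1, activation="sigmoid")])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, labels, epochs=20, verbose=0)

print(model.predict(X, verbose=0).round())
```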
Topics:
- 00:00:00 – Introduction
- 00:02:19 – Testing characteristics of LLMs with Python
- 00:04:18 – Background on LLMs
- 00:08:35 – Training of models
- 00:14:23 – Uncurated sources of training
- 00:16:12 – Safeguards and prompt engineering
- 00:21:19 – TruthfulQA and creating a more strict prompt
- 00:23:20 – Information that is out of date
- 00:26:07 – WinoBias for evaluating gender stereotypes
- 00:28:30 – BOLD dataset for evaluating bias
- 00:30:28 – Sponsor: Intel
- 00:31:18 – Using Hugging Face to start testing with Python
- 00:35:25 – Using the transformers package
- 00:37:34 – Using langchain for proprietary models
- 00:43:04 – Putting the tools together and evaluating
- 00:47:19 – Video Course Spotlight
- 00:48:29 – Assessing toxicity
- 00:50:21 – Measuring bias
- 00:54:40 – Checking the hallucination rate
- 00:56:22 – LLM leaderboards
- 00:58:17 – What helped ChatGPT leap forward?
- 01:06:01 – Improvements of what is being crawled
- 01:07:32 – Revisiting agents and RAG
- 01:11:03 – ChatGPT plugins and Wolfram-Alpha
- 01:13:06 – How can people follow your work online?
- 01:14:33 – Thanks and goodbye
Background Links:
Dataset Links:
- truthful_qa - Datasets at Hugging Face
- wino_bias - Datasets at Hugging Face
- bold - Datasets at Hugging Face
Tutorials and Documentation for Python Packages:
- Evaluating Language Model Bias with 🤗 Evaluate
- Hugging Face - HF_bias_evaluation - Google Colab
- General Usage - Load a Dataset - Hugging Face
- What is Text Generation? - Hugging Face
- 🤗 Evaluate - Library Evaluating ML Models
- Python Quickstart - 🦜️🔗 Langchain
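To tie the text-generation pieces linked above together, here is a minimal sketch of generating a completion with the transformers pipeline. The model name and prompt are placeholders: gpt2 stands in for whichever open model you want to probe.

```python
from transformers import pipeline

# Any open model from the Hugging Face Hub works here; gpt2 is just small and fast.
generator = pipeline("text-generation", model="gpt2")

# A TruthfulQA-style question, used as a plain prompt.
prompt = "What happens if you smash a mirror?"
result = generator(prompt, max_new_tokens=40, do_sample=False)

print(result[0]["generated_text"])
```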
Measurement Links:
- Toxicity - a Hugging Face Space by evaluate-measurement
- Regard - a Hugging Face Space by evaluate-measurement
- Open LLM Leaderboard - a Hugging Face Space
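The toxicity and regard measurements linked above can be loaded through the evaluate library. This minimal sketch scores a couple of invented completions; note that both measurements download a classifier model on first use.

```python
import evaluate

# Both are "measurement" modules backed by small classifier models.
toxicity = evaluate.load("toxicity", module_type="measurement")
regard = evaluate.load("regard", module_type="measurement")

completions = [
    "She was a brilliant and respected engineer.",
    "He was a rude and unpleasant colleague.",
]

print(toxicity.compute(predictions=completions))  # per-completion toxicity scores
print(regard.compute(data=completions))           # positive/negative/neutral regard
```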
Training Data for LLMs:
- Common Crawl - Open Repository of Web Crawl Data
- The Pile
- The RefinedWeb Dataset for Falcon LLM: Outperforming Curated Corpora
Agents and Plugin Links:
- Transformers Agents - Hugging Face
- Agents - 🦜️🔗 Langchain
- ChatGPT Gets Its “Wolfram Superpowers”! - Stephen Wolfram
Additional Links:
- Inside the AI Factory: The Humans that Make Tech Seem Human - The Verge
- Jodie Burchell - The JetBrains Blog
- Jodie Burchell’s Blog - Standard error
- Jodie Burchell 🇦🇺🇩🇪 (@t_redactyl) - Twitter
- Jodie Burchell 🇦🇺🇩🇪 (@t_redactyl@fosstodon.org) - Fosstodon
- JetBrains: Essential tools for software developers and teams
Level up your Python skills with our expert-led courses: