
This content is provided by Machine Learning Street Talk (MLST). All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Machine Learning Street Talk (MLST) or its podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described at https://ko.player.fm/legal.

Speechmatics CTO - Next-Generation Speech Recognition

1:46:23
 

Will Williams is CTO of Speechmatics in Cambridge. In this sponsored episode, he shares deep technical insights into modern speech recognition technology and system architecture. The episode covers several key technical areas:

* Speechmatics' hybrid approach to ASR, which focuses on unsupervised learning methods, achieving comparable results with 100x less data than fully supervised approaches. Williams explains why this is more efficient and generalizable than end-to-end models like Whisper.

* Their production architecture implementing multiple operating points for different latency-accuracy trade-offs, with careful latency padding (up to 1.8 seconds) to ensure a consistent user experience. The system uses lattice-based decoding with language model integration for improved accuracy (a toy latency-padding schedule is sketched after this list).

* The challenges and solutions in real-time ASR, including their approach to diarization (speaker identification), handling cross-talk, and implicit source separation. Williams explains why these problems remain difficult even with modern deep learning approaches.

* Their testing and deployment infrastructure, including the use of mirrored environments for catching edge cases in production, and their strategy of maintaining global models rather than allowing customer-specific fine-tuning.

* Technical evolution in ASR, from the early days of custom CUDA kernels and manual memory management to modern frameworks, with Williams critiquing current PyTorch memory management approaches and arguing for more efficient direct memory allocation in production systems (a minimal preallocation sketch also follows this list).
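To make the latency-padding idea above concrete, here is a toy sketch (not Speechmatics' implementation) of how a fixed pad turns a recognizer's jittery internal delay into a steady, predictable lag for the user. Only the 1.8-second figure comes from the episode; the data layout and function are hypothetical.

```python
LATENCY_PAD_S = 1.8  # upper operating point mentioned in the episode


def schedule_emissions(words, pad_s=LATENCY_PAD_S):
    """words: list of (text, audio_end_s, produced_at_s), all in seconds since
    the stream started. Returns (text, emit_at_s) pairs so that every word
    appears exactly pad_s after the audio it covers, hiding the recognizer's
    variable internal delay from the user."""
    schedule = []
    for text, audio_end_s, produced_at_s in words:
        # Never emit a word before the recognizer has actually produced it.
        emit_at_s = max(audio_end_s + pad_s, produced_at_s)
        schedule.append((text, emit_at_s))
    return schedule


# The recognizer's own delay jitters between 0.4 s and 1.5 s here,
# but the user sees a steady 1.8 s lag for every word.
demo = [("hello", 1.0, 1.4), ("world", 1.6, 3.1), ("again", 2.2, 2.7)]
print(schedule_emissions(demo))  # [('hello', 2.8), ('world', 3.4), ('again', 4.0)]
```

Smaller pads give lower latency but less right-context for the decoder, which is exactly the latency-accuracy trade-off that the multiple operating points expose.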
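The memory-management point in the last bullet can also be illustrated with a small, hypothetical PyTorch sketch: rather than allocating fresh tensors every step and relying on the caching allocator, an inference loop can preallocate one buffer at the maximum expected chunk size and copy each chunk into it. This shows the general idea being argued for, not Speechmatics' actual code.

```python
import torch

# Illustrative only: allocate a fixed-size input buffer once, then reuse it
# for every audio chunk, instead of creating new device tensors per step and
# leaving allocation behaviour to PyTorch's caching allocator.

device = "cuda" if torch.cuda.is_available() else "cpu"
MAX_FRAMES, N_MELS = 3000, 80  # assumed upper bound for one streaming chunk

buffer = torch.empty(1, MAX_FRAMES, N_MELS, device=device)  # allocated once


def run_chunk(model, features):
    """features: (frames, n_mels) CPU tensor for one streaming chunk."""
    frames = features.shape[0]
    # Copy into the preallocated region; no new device allocation per call.
    buffer[0, :frames].copy_(features)
    with torch.no_grad():
        return model(buffer[:, :frames])  # a view, not a fresh allocation
```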

Get coding with their API:

https://www.speechmatics.com/
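For example, a batch transcription job can be submitted over HTTPS. The sketch below is hedged: the endpoint path, bearer-token auth, and field names are assumptions based on Speechmatics' public batch-API documentation, so check their current API reference before relying on them.

```python
import json

import requests

# Hedged sketch of submitting a batch transcription job. Endpoint, auth
# scheme, and field names are assumed from Speechmatics' public docs; see
# https://www.speechmatics.com/ for the authoritative API reference.

API_KEY = "YOUR_API_KEY"                          # placeholder credential
BASE_URL = "https://asr.api.speechmatics.com/v2"  # assumed batch endpoint

config = {
    "type": "transcription",
    "transcription_config": {"language": "en", "diarization": "speaker"},
}

with open("meeting.wav", "rb") as audio:
    resp = requests.post(
        f"{BASE_URL}/jobs",
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"data_file": audio},
        data={"config": json.dumps(config)},
    )
resp.raise_for_status()
print(resp.json())  # returns a job id you poll for the finished transcript
```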

DO YOU WANT TO WORK ON ARC with the MindsAI team (current ARC winners)?

MLST is sponsored by Tufa Labs:

Focus: ARC, LLMs, test-time compute, active inference, system-2 reasoning, and more.

Interested? Apply for an ML research position: benjamin@tufa.ai

TOC

1. ASR Core Technology & Real-time Architecture

[00:00:00] 1.1 ASR and Diarization Fundamentals

[00:05:25] 1.2 Real-time Conversational AI Architecture

[00:09:21] 1.3 Neural Network Streaming Implementation

[00:12:49] 1.4 Multi-modal System Integration

2. Production System Optimization

[00:29:38] 2.1 Production Deployment and Testing Infrastructure

[00:35:40] 2.2 Model Architecture and Deployment Strategy

[00:37:12] 2.3 Latency-Accuracy Trade-offs

[00:39:15] 2.4 Language Model Integration

[00:40:32] 2.5 Lattice-based Decoding Architecture

3. Performance Evaluation & Ethical Considerations

[00:44:00] 3.1 ASR Performance Metrics and Capabilities

[00:46:35] 3.2 AI Regulation and Evaluation Methods

[00:51:09] 3.3 Benchmark and Testing Challenges

[00:54:30] 3.4 Real-world Implementation Metrics

[01:00:51] 3.5 Ethics and Privacy Considerations

4. ASR Technical Evolution

[01:09:00] 4.1 WER Calculation and Evaluation Methodologies

[01:10:21] 4.2 Supervised vs Self-Supervised Learning Approaches

[01:21:02] 4.3 Temporal Learning and Feature Processing

[01:24:45] 4.4 Feature Engineering to Automated ML

5. Enterprise Implementation & Scale

[01:27:55] 5.1 Future AI Systems and Adaptation

[01:31:52] 5.2 Technical Foundations and History

[01:34:53] 5.3 Infrastructure and Team Scaling

[01:38:05] 5.4 Research and Talent Strategy

[01:41:11] 5.5 Engineering Practice Evolution

Shownotes:

https://www.dropbox.com/scl/fi/d94b1jcgph9o8au8shdym/Speechmatics.pdf?rlkey=bi55wvktzomzx0y5sic6jz99y&st=6qwofv8t&dl=0
