
fix: skip CoreML compilation in AsrModels.download() #357

Closed
Alex-Wengg wants to merge 1 commit into main from fix/asr-download-disk-writes

Conversation


@Alex-Wengg Alex-Wengg commented Mar 7, 2026

Summary

  • AsrModels.download() called DownloadUtils.loadModels() which runs MLModel(contentsOf:configuration:) on all 4 Parakeet models, then discards the loaded objects (_ = try await ...). That compilation triggers CoreML's internal MPS shader generation which writes gigabytes to disk.
  • On first install the combined download + compilation I/O was observed at ~8.5 GB (writesCaused), causing macOS to kill the process with a disk-write exception.
  • Switch download() to use downloadRepo() which fetches files without loading them into CoreML. Compilation is deferred to load() / downloadAndLoad() where the MLModel objects are actually used.
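A minimal sketch of the change, assuming the helper names mentioned above (`downloadRepo()` and the default-directory logic are illustrative signatures, not the repository's actual API):

```swift
import Foundation

// Sketch only: download() now just fetches the model files over HTTP.
// No MLModel(contentsOf:configuration:) call, so no CoreML compilation
// and no MPS shader generation happens during download.
func download(to directory: URL? = nil, force: Bool = false) async throws -> URL {
    let dir = directory ?? defaultModelsDirectory   // hypothetical default location
    try await DownloadUtils.downloadRepo(to: dir, force: force)  // file fetch only
    return dir
}

// Compilation is deferred to load(), where the MLModel objects are actually used.
func downloadAndLoad() async throws -> AsrModels {
    let dir = try await download()
    return try await load(from: dir)                // MLModel init happens here
}
```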

Context

Crash report from a client (MacBookPro18,1, macOS 15.6.1):

writesCaused: 8590 MB

Triggering thread:
    ParakeetTranscriptionService.download(modelId:)
      AsrModels.download(to:force:version:)
        DownloadUtils.loadModelsOnce(_:modelNames:...)
          MLModel.__allocating_init(contentsOf:configuration:)

Background thread (734 samples):
    CoreML -> Espresso -> MetalPerformanceShadersGraph -> libsystem_kernel

What this does NOT fix

The compilation during load() still writes to disk and could still be large. If the crash persists, the load-time compilation itself needs investigation — e.g. why CoreML generates MPS shaders when computeUnits is .cpuAndNeuralEngine, and whether compiling models one at a time or with .cpuOnly for the first load would reduce writes.
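If the load-time writes do need mitigation, the sequential-compilation idea could be sketched like this (`MLModel.load(contentsOf:configuration:)` is real CoreML API; the function and URL list are hypothetical):

```swift
import CoreML

// Hypothetical mitigation: compile models one at a time rather than
// concurrently, so compilation artifacts from one model settle before the
// next starts. For a first-load experiment, computeUnits could be dropped
// to .cpuOnly to test whether MPS shader generation is the write source.
func loadModelsSequentially(at urls: [URL]) async throws -> [MLModel] {
    let config = MLModelConfiguration()
    config.computeUnits = .cpuAndNeuralEngine   // the setting under investigation
    var models: [MLModel] = []
    for url in urls {
        // Each init compiles (and may generate shaders) before the next begins.
        models.append(try await MLModel.load(contentsOf: url, configuration: config))
    }
    return models
}
```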

Test plan

  • swift build succeeds
  • swift test --filter AsrModelsTests passes (18/18)
  • swift test --filter CITests passes (13/13)
  • Verify first-time download on clean cache completes without disk-write kill
  • Verify downloadAndLoad() still works end-to-end (downloads + compiles + returns models)
  • Measure disk writes during download() vs load() separately to confirm reduction
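For the last item, one way to measure writes from inside the process is the same counter the crash report labels writesCaused (`ri_logical_writes` via `proc_pid_rusage`); this is an untested macOS-only sketch:

```swift
import Darwin

// Read this process's cumulative logical disk writes, in bytes.
// Sample before and after download() / load() and diff the two values.
func logicalWritesBytes() -> UInt64 {
    var info = rusage_info_current()
    let status = withUnsafeMutablePointer(to: &info) { ptr in
        ptr.withMemoryRebound(to: rusage_info_t?.self, capacity: 1) { rebound in
            proc_pid_rusage(getpid(), RUSAGE_INFO_CURRENT, rebound)
        }
    }
    return status == 0 ? info.ri_logical_writes : 0
}
```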

🤖 Generated with Claude Code



claude bot commented Mar 7, 2026

Claude finished @Alex-Wengg's task.


PR Review Complete ✅


Review Summary

This is an excellent fix that properly addresses the critical disk write issue by separating download from CoreML compilation. Key strengths:

  • Root cause fix: Eliminates unnecessary MLModel(contentsOf:) calls during download
  • Clear documentation: Detailed comment explains the 8.5GB disk write problem
  • API compatibility: downloadAndLoad() maintains existing behavior
  • Simplified logic: Replaces complex per-model handling with straightforward repo download

Minor considerations: Consider adding lightweight file existence verification after download and ensure progress reporting works correctly with downloadRepo().

Overall assessment: Well-reasoned solution that should resolve the macOS process kill issue while maintaining compatibility. ✅


@claude claude bot left a comment


Code Review Summary

This is an excellent fix that addresses a critical production issue. The change properly separates download concerns from CoreML compilation.

Strengths

  1. Root cause fix: Correctly identifies and eliminates the unnecessary CoreML compilation during download
  2. Clear separation of concerns: Download now only fetches files, compilation happens during load()
  3. Excellent documentation: The comment block clearly explains the problem, impact, and solution
  4. API compatibility: downloadAndLoad() method preserves the same behavior for existing callers
  5. Simplified logic: Removes complex per-model handling in favor of straightforward repo download

🔍 Technical Analysis

  • Before: download() → loadModels() → MLModel(contentsOf:) → immediate discard → 8.5GB disk writes
  • After: download() → downloadRepo() → file fetch only → compilation deferred to load()
  • Impact: Should dramatically reduce initial disk I/O and prevent macOS process kills

⚠️ Minor Considerations

  1. File verification: The previous implementation implicitly verified models could be loaded. Consider adding a lightweight file existence check after download for early error detection.

  2. Error handling: Ensure downloadRepo() provides equivalent error reporting if individual model files fail to download.

  3. Progress reporting: Verify that progress callbacks work correctly with downloadRepo() vs the granular per-model progress in the previous implementation.
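For the first point, a lightweight existence check could look like this sketch (the function name and the idea of a model-names list are hypothetical; `FileManager` is real Foundation API):

```swift
import Foundation

// Hypothetical post-download sanity check: confirm the expected model
// bundles exist on disk without loading them into CoreML.
func verifyModelFiles(in directory: URL, names: [String]) throws {
    let fm = FileManager.default
    for name in names {
        let url = directory.appendingPathComponent(name)
        guard fm.fileExists(atPath: url.path) else {
            throw CocoaError(.fileNoSuchFile, userInfo: [NSFilePathErrorKey: url.path])
        }
    }
}
```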

🧪 Testing Recommendations

  • Test first-time installation on clean cache to verify disk write reduction
  • Verify downloadAndLoad() end-to-end functionality
  • Measure actual disk writes during download() vs load() phases

Overall: This is a well-reasoned fix that should resolve the immediate production issue while maintaining API compatibility.

@devin-ai-integration devin-ai-integration bot left a comment


✅ Devin Review: No Issues Found

Devin Review analyzed this PR and found no potential bugs to report.

View in Devin Review to see 2 additional findings.



github-actions bot commented Mar 7, 2026

Parakeet EOU Benchmark Results ✅

Status: Benchmark passed
Chunk Size: 320ms
Files Tested: 100/100

Performance Metrics

Metric       Value   Description
WER (Avg)    7.03%   Average Word Error Rate
WER (Med)    4.17%   Median Word Error Rate
RTFx         6.35x   Real-time factor (higher = faster)
Total Audio  470.6s  Total audio duration processed
Total Time   75.5s   Total processing time

Streaming Metrics

Metric          Value   Description
Avg Chunk Time  0.076s  Average chunk processing time
Max Chunk Time  0.151s  Maximum chunk processing time
EOU Detections  0       Total End-of-Utterance detections

Test runtime: 1m41s • 03/07/2026, 01:48 PM EST

RTFx = Real-Time Factor (higher is better) • Processing includes: Model inference, audio preprocessing, state management, and file I/O


github-actions bot commented Mar 7, 2026

PocketTTS Smoke Test ✅

Check               Result
Build               ✅
Model download      ✅
Model load          ✅
Synthesis pipeline  ✅
Output WAV          ✅ (172.5 KB)

Runtime: 0m25s

Note: PocketTTS uses CoreML MLState (macOS 15) KV cache + Mimi streaming state. CI VM lacks physical GPU — audio quality may differ from Apple Silicon.


github-actions bot commented Mar 7, 2026

Qwen3-ASR int8 Smoke Test ✅

Check                   Result
Build                   ✅
Model download          ✅
Model load              ✅
Transcription pipeline  ✅
Decoder size            571 MB (vs 1.1 GB f32)

Runtime: 4m7s

Note: CI VM lacks physical GPU — CoreML MLState (macOS 15) KV cache produces degraded results on virtualized runners. On Apple Silicon: ~1.3% WER / 2.5x RTFx.


github-actions bot commented Mar 7, 2026

ASR Benchmark Results ✅

Status: All benchmarks passed

Parakeet v3 (multilingual)

Dataset     WER Avg  WER Med  RTFx   Status
test-clean  0.57%    0.00%    4.27x  ✅
test-other  1.56%    0.00%    3.17x  ✅

Parakeet v2 (English-optimized)

Dataset     WER Avg  WER Med  RTFx   Status
test-clean  0.80%    0.00%    4.35x  ✅
test-other  1.00%    0.00%    3.74x  ✅

Streaming (v3)

Metric          Value   Description
WER             0.00%   Word Error Rate in streaming mode
RTFx            0.56x   Streaming real-time factor
Avg Chunk Time  1.575s  Average time to process each chunk
Max Chunk Time  2.155s  Maximum chunk processing time
First Token     1.807s  Latency to first transcription token
Total Chunks    31      Number of chunks processed

Streaming (v2)

Metric          Value   Description
WER             0.00%   Word Error Rate in streaming mode
RTFx            0.66x   Streaming real-time factor
Avg Chunk Time  1.363s  Average time to process each chunk
Max Chunk Time  1.610s  Maximum chunk processing time
First Token     1.361s  Latency to first transcription token
Total Chunks    31      Number of chunks processed

Streaming tests use 5 files with 0.5s chunks to simulate real-time audio streaming

25 files per dataset • Test runtime: 7m21s • 03/07/2026, 01:55 PM EST

RTFx = Real-Time Factor (higher is better) • Calculated as: Total audio duration ÷ Total processing time
Processing time includes: Model inference on Apple Neural Engine, audio preprocessing, state resets between files, token-to-text conversion, and file I/O
Example: RTFx of 2.0x means 10 seconds of audio processed in 5 seconds (2x faster than real-time)

Expected RTFx Performance on Physical M1 Hardware:

• M1 Mac: ~28x (clean), ~25x (other)
• CI shows ~0.5-3x due to virtualization limitations

Testing methodology follows HuggingFace Open ASR Leaderboard


github-actions bot commented Mar 7, 2026

Offline VBx Pipeline Results

Speaker Diarization Performance (VBx Batch Mode)

Optimal clustering with Hungarian algorithm for maximum accuracy

Metric  Value  Target  Status  Description
DER     14.5%  <20%    ✅      Diarization Error Rate (lower is better)
RTFx    3.24x  >1.0x   ✅      Real-Time Factor (higher is faster)

Offline VBx Pipeline Timing Breakdown

Time spent in each stage of batch diarization

Stage             Time (s)  %     Description
Model Download    15.171    4.7   Fetching diarization models
Model Compile     6.502     2.0   CoreML compilation
Audio Load        0.099     0.0   Loading audio file
Segmentation      35.471    11.0  VAD + speech detection
Embedding         322.574   99.6  Speaker embedding extraction
Clustering (VBx)  0.947     0.3   Hungarian algorithm + VBx clustering
Total             323.740   100   Full VBx pipeline

Speaker Diarization Research Comparison

Offline VBx achieves competitive accuracy with batch processing

Method                  DER     Mode         Description
FluidAudio (Offline)    14.5%   VBx Batch    On-device CoreML with optimal clustering
FluidAudio (Streaming)  17.7%   Chunk-based  First-occurrence speaker mapping
Research baseline       18-30%  Various      Standard dataset performance

Pipeline Details:

  • Mode: Offline VBx with Hungarian algorithm for optimal speaker-to-cluster assignment
  • Segmentation: VAD-based voice activity detection
  • Embeddings: WeSpeaker-compatible speaker embeddings
  • Clustering: PowerSet with VBx refinement
  • Accuracy: Higher than streaming due to optimal post-hoc mapping

🎯 Offline VBx Test • AMI Corpus ES2004a • 1049.0s meeting audio • 359.0s processing • Test runtime: 5m 56s • 03/07/2026, 01:55 PM EST


github-actions bot commented Mar 7, 2026

Speaker Diarization Benchmark Results

Speaker Diarization Performance

Evaluating "who spoke when" detection accuracy

Metric  Value   Target  Status  Description
DER     15.1%   <30%    ✅      Diarization Error Rate (lower is better)
JER     24.9%   <25%    ✅      Jaccard Error Rate
RTFx    26.57x  >1.0x   ✅      Real-Time Factor (higher is faster)

Diarization Pipeline Timing Breakdown

Time spent in each stage of speaker diarization

Stage           Time (s)  %     Description
Model Download  7.951     20.1  Fetching diarization models
Model Compile   3.408     8.6   CoreML compilation
Audio Load      0.031     0.1   Loading audio file
Segmentation    11.845    30.0  Detecting speech regions
Embedding       19.741    50.0  Extracting speaker voices
Clustering      7.897     20.0  Grouping same speakers
Total           39.494    100   Full pipeline

Speaker Diarization Research Comparison

Research baselines typically achieve 18-30% DER on standard datasets

Method             DER     Notes
FluidAudio         15.1%   On-device CoreML
Research baseline  18-30%  Standard dataset performance

Note: RTFx shown above is from GitHub Actions runner. On Apple Silicon with ANE:

  • M2 MacBook Air (2022): Runs at 150 RTFx real-time
  • Performance scales with Apple Neural Engine capabilities

🎯 Speaker Diarization Test • AMI Corpus ES2004a • 1049.0s meeting audio • 39.5s diarization time • Test runtime: 2m 30s • 03/07/2026, 01:55 PM EST


github-actions bot commented Mar 7, 2026

VAD Benchmark Results

Performance Comparison

Dataset  Accuracy  Precision  Recall  F1-Score  RTFx           Files
MUSAN    92.0%     86.2%      100.0%  92.6%     651.3x faster  50
VOiCES   92.0%     86.2%      100.0%  92.6%     559.6x faster  50

Dataset Details

  • MUSAN: Music, Speech, and Noise dataset - standard VAD evaluation
  • VOiCES: Voices Obscured in Complex Environmental Settings - tests robustness in real-world conditions

✅: Average F1-Score above 70%


github-actions bot commented Mar 7, 2026

Sortformer High-Latency Benchmark Results

ES2004a Performance (30.4s latency config)

Metric         Value  Target  Status
DER            33.4%  <35%    ✅
Miss Rate      24.4%  -       -
False Alarm    0.1%   -       -
Speaker Error  8.9%   -       -
RTFx           16.1x  >1.0x   ✅
Speakers       4/4    -       -

Sortformer High-Latency • ES2004a • Runtime: 2m 47s • 2026-03-07T19:02:41.698Z

@Alex-Wengg Alex-Wengg closed this Mar 8, 2026
