normalize numbers to text as the g2p model doesn't handle this #358

Merged
Alex-Wengg merged 1 commit into main from g2p-number-fix
Mar 7, 2026

Conversation


@smdesai (Collaborator) commented Mar 7, 2026

Why is this change needed?

The g2p model doesn't handle phoneme generation for numbers, so numbers are not spoken. The fix is to spell the numbers out as words, e.g. "the range is 5 to 10" becomes "the range is five to ten".
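The behavior can be sketched in Python (the actual fix lives in Swift's TtsTextPreprocessor; `spell_out` here is a minimal stand-in for NumberFormatter's spell-out style and only covers 0-99):

```python
import re

# Demo-only word tables; the real implementation delegates to NumberFormatter.
ONES = ["zero", "one", "two", "three", "four", "five", "six",
        "seven", "eight", "nine", "ten", "eleven", "twelve",
        "thirteen", "fourteen", "fifteen", "sixteen", "seventeen",
        "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty", "sixty",
        "seventy", "eighty", "ninety"]

def spell_out(n: int) -> str:
    if n < 20:
        return ONES[n]
    if n < 100:
        tens, ones = divmod(n, 10)
        return TENS[tens] + ("-" + ONES[ones] if ones else "")
    raise ValueError("sketch only covers 0-99")

def normalize_numbers(text: str) -> str:
    # \b\d+\b matches standalone whole numbers, mirroring the PR's regex
    return re.sub(r"\b\d+\b", lambda m: spell_out(int(m.group())), text)

print(normalize_numbers("the range is 5 to 10"))
# → "the range is five to ten"
```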




claude bot commented Mar 7, 2026

Claude finished @smdesai's task.


Code Review Complete ✅

I've reviewed this PR that adds number-to-text normalization for the TTS g2p model and provided detailed feedback.

Todo List:

  • Create pending GitHub review
  • Get PR diff to analyze changes
  • Review code changes in TtsTextPreprocessor.swift
  • Submit review with feedback

Review Summary: This is a well-implemented fix that addresses the core issue where numbers weren't being spoken by the TTS system. The spellOutWholeNumbers function is strategically placed after other preprocessing steps to ensure comprehensive number conversion, and the implementation follows good Swift practices with proper error handling and performance optimization.

Key highlights:

  • ✅ Excellent processing order prevents edge cases
  • ✅ Uses shared NumberFormatter for performance
  • ✅ Robust regex pattern matching
  • ✅ Bonus smart quote normalization
  • ✅ Follows existing codebase patterns

Recommendation: Approve - This solves the stated problem effectively and safely.

@claude bot left a comment

Code Review Summary

This PR addresses an important TTS issue where the g2p model doesn't handle numbers, causing them to be skipped in speech synthesis. The solution adds a spellOutWholeNumbers function that converts remaining numeric digits to their spoken form.

Strengths

Excellent Processing Order: The number spelling is placed at step 10, after all other text normalization. This is crucial because:

  • Numbers produced by earlier steps (ranges: "5-10" → "5 to 10", times: "12:30" → "12 30") also get converted
  • Prevents edge cases like "five-10" instead of "five to ten"
  • Ensures comprehensive number coverage
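The ordering argument can be seen with a toy two-step pipeline (Python sketch; the `WORDS` lookup is a demo-only stand-in for the real spell-out):

```python
import re

WORDS = {"5": "five", "10": "ten"}  # demo-only lookup, not the real formatter

def expand_ranges(text: str) -> str:
    # earlier pipeline step: "5-10" -> "5 to 10"
    return re.sub(r"(\d+)\s*-\s*(\d+)", r"\1 to \2", text)

def spell_numbers(text: str) -> str:
    # final step: spell out any remaining standalone digits
    return re.sub(r"\b\d+\b", lambda m: WORDS.get(m.group(), m.group()), text)

text = "the range is 5-10"
print(spell_numbers(expand_ranges(text)))  # ranges first: "the range is five to ten"
print(expand_ranges(spell_numbers(text)))  # reversed order leaves "the range is five-ten"
```

Running the spell-out first consumes the digits, so the range expansion never fires and the hyphen survives into the spoken text.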

Smart Quote Addition: The smart quote normalization (normalizeSmartQuotes) is a nice bonus that improves text consistency.

Performance Optimized: Uses the shared spellOutFormatter instance from SSMLTypes.swift, avoiding expensive NumberFormatter creation.

Robust Pattern Matching: The regex \b\d+\b (written "\\b\\d+\\b" in Swift source) correctly matches whole numbers with word boundaries, preventing partial matches in mixed alphanumeric strings.
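Python's re module uses the same \b word-boundary semantics as NSRegularExpression here, so the boundary behavior is easy to check:

```python
import re

# Same pattern as the PR: standalone runs of digits only
pattern = re.compile(r"\b\d+\b")

print(pattern.findall("call 911 now"))      # ['911']
print(pattern.findall("model abc123def"))   # [] -- digits inside a word are skipped
print(pattern.findall("room 42, floor 7"))  # ['42', '7']
```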

🔍 Technical Details

Lines 217-240 - The spellOutWholeNumbers implementation:

  • Correctly processes matches in reverse order to maintain string indices
  • Handles potential parsing failures gracefully with guard statements
  • Uses UTF16 count for NSRegularExpression compatibility
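The reverse-iteration trick generalizes beyond this PR; a Python sketch of the same index-preserving pattern (function and argument names are illustrative):

```python
import re

def replace_back_to_front(text, pattern, transform):
    # Replacing the last match first keeps the start/end offsets of
    # earlier matches valid even when replacements change string length.
    for m in reversed(list(re.finditer(pattern, text))):
        text = text[:m.start()] + transform(m.group()) + text[m.end():]
    return text

print(replace_back_to_front("5 of 10", r"\b\d+\b", lambda s: f"<{s}>"))
# → "<5> of <10>"
```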

Lines 43-44 - Smart quote normalization covers the most common Unicode quote variants.
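Those four code points map to their ASCII equivalents; a Python sketch of the same normalization:

```python
# U+2018/U+2019 are curly single quotes, U+201C/U+201D curly double quotes
SMART_QUOTES = str.maketrans({
    "\u2018": "'", "\u2019": "'",
    "\u201C": '"', "\u201D": '"',
})

def normalize_smart_quotes(text: str) -> str:
    return text.translate(SMART_QUOTES)

print(normalize_smart_quotes("\u201cdon\u2019t\u201d"))  # prints "don't" with ASCII quotes
```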

💭 Minor Considerations

Large Number Handling: The code relies on NumberFormatter.spellOut which handles very large numbers well (e.g., "1000000" → "one million"), but there's no explicit upper bound checking. This is likely fine for TTS use cases.

Edge Cases: Note that \b\d+\b does match numbers with leading zeros (e.g. "007"), which NumberFormatter will parse as 7 and render as "seven"; dropping the leading zeros this way is probably acceptable for TTS.

🎯 Overall Assessment

This is a clean, well-thought-out implementation that solves the core problem effectively. The code follows the existing codebase patterns and integrates seamlessly with the preprocessing pipeline. The fix ensures that numbers like "5" become "five" so the TTS engine can properly synthesize them.

Recommendation: ✅ Approve - This addresses the stated issue comprehensively and follows good engineering practices.


github-actions bot commented Mar 7, 2026

PocketTTS Smoke Test ✅

| Check | Result |
| --- | --- |
| Build | ✅ |
| Model download | ✅ |
| Model load | ✅ |
| Synthesis pipeline | ✅ |
| Output WAV | ✅ (198.8 KB) |

Runtime: 0m20s

Note: PocketTTS uses CoreML MLState (macOS 15) KV cache + Mimi streaming state. CI VM lacks physical GPU — audio quality may differ from Apple Silicon.

@devin-ai-integration bot left a comment

Devin Review found 1 potential issue.

View 3 additional findings in Devin Review.



```swift
// 9. Handle phonetic replacement [Kokoro](/kˈOkəɹO/)
// 10. Spell out remaining whole numbers (5 → five, 10 → ten)
processed = spellOutWholeNumbers(processed)
```


🟡 Missing unit tests for new code violates Critical Rules in AGENTS.md

AGENTS.md under "Critical Rules" mandates: "Add unit tests when writing new code." This PR adds two new functions (normalizeSmartQuotes at line 130 and spellOutWholeNumbers at line 220) and a new preprocessing step (line 76), but no test file was modified or added. The git diff --name-only confirms only TtsTextPreprocessor.swift was changed. Tests exist for other preprocessor functionality in Tests/FluidAudioTests/TTS/SSMLTests.swift and Tests/FluidAudioTests/TTS/KokoroChunkerTests.swift, so there is an established test infrastructure to add to.

Prompt for agents
Add unit tests for the two new functions introduced in this PR. In Tests/FluidAudioTests/TTS/, add tests covering:

1. normalizeSmartQuotes: Verify that U+2018, U+2019, U+201C, U+201D are replaced with ASCII equivalents. Test mixed content, text with no smart quotes (passthrough), and text that is entirely smart quotes.

2. spellOutWholeNumbers: Verify basic cases like "5" → "five", "100" → "one hundred". Test that numbers embedded in words (e.g. "abc123def") are NOT replaced due to word boundaries. Test that the function handles text with no numbers (passthrough). Test interaction with prior pipeline steps (e.g. after range expansion "5 to 10" both numbers should be spelled out).

3. Integration: Test TtsTextPreprocessor.preprocess with input containing smart quotes and bare numbers to verify end-to-end behavior.



github-actions bot commented Mar 7, 2026

Parakeet EOU Benchmark Results ✅

Status: Benchmark passed
Chunk Size: 320ms
Files Tested: 100/100

Performance Metrics

| Metric | Value | Description |
| --- | --- | --- |
| WER (Avg) | 7.03% | Average Word Error Rate |
| WER (Med) | 4.17% | Median Word Error Rate |
| RTFx | 9.56x | Real-time factor (higher = faster) |
| Total Audio | 470.6s | Total audio duration processed |
| Total Time | 49.6s | Total processing time |

Streaming Metrics

| Metric | Value | Description |
| --- | --- | --- |
| Avg Chunk Time | 0.050s | Average chunk processing time |
| Max Chunk Time | 0.099s | Maximum chunk processing time |
| EOU Detections | 0 | Total End-of-Utterance detections |

Test runtime: 1m17s • 03/07/2026, 05:35 PM EST

RTFx = Real-Time Factor (higher is better) • Processing includes: Model inference, audio preprocessing, state management, and file I/O


github-actions bot commented Mar 7, 2026

Speaker Diarization Benchmark Results

Speaker Diarization Performance

Evaluating "who spoke when" detection accuracy

| Metric | Value | Target | Description |
| --- | --- | --- | --- |
| DER | 15.1% | <30% | Diarization Error Rate (lower is better) |
| JER | 24.9% | <25% | Jaccard Error Rate |
| RTFx | 15.30x | >1.0x | Real-Time Factor (higher is faster) |

Diarization Pipeline Timing Breakdown

Time spent in each stage of speaker diarization

| Stage | Time (s) | % | Description |
| --- | --- | --- | --- |
| Model Download | 11.557 | 16.9 | Fetching diarization models |
| Model Compile | 4.953 | 7.2 | CoreML compilation |
| Audio Load | 0.040 | 0.1 | Loading audio file |
| Segmentation | 20.565 | 30.0 | Detecting speech regions |
| Embedding | 34.275 | 50.0 | Extracting speaker voices |
| Clustering | 13.710 | 20.0 | Grouping same speakers |
| Total | 68.575 | 100 | Full pipeline |

Speaker Diarization Research Comparison

Research baselines typically achieve 18-30% DER on standard datasets

| Method | DER | Notes |
| --- | --- | --- |
| FluidAudio | 15.1% | On-device CoreML |
| Research baseline | 18-30% | Standard dataset performance |

Note: RTFx shown above is from GitHub Actions runner. On Apple Silicon with ANE:

  • M2 MacBook Air (2022): runs at ~150x real-time (RTFx 150)
  • Performance scales with Apple Neural Engine capabilities

🎯 Speaker Diarization Test • AMI Corpus ES2004a • 1049.0s meeting audio • 68.5s diarization time • Test runtime: 3m 41s • 03/07/2026, 05:40 PM EST


github-actions bot commented Mar 7, 2026

ASR Benchmark Results ✅

Status: All benchmarks passed

Parakeet v3 (multilingual)

| Dataset | WER Avg | WER Med | RTFx |
| --- | --- | --- | --- |
| test-clean | 0.57% | 0.00% | 5.19x |
| test-other | 1.19% | 0.00% | 3.46x |

Parakeet v2 (English-optimized)

| Dataset | WER Avg | WER Med | RTFx |
| --- | --- | --- | --- |
| test-clean | 0.80% | 0.00% | 5.43x |
| test-other | 1.22% | 0.00% | 3.45x |

Streaming (v3)

| Metric | Value | Description |
| --- | --- | --- |
| WER | 0.00% | Word Error Rate in streaming mode |
| RTFx | 0.61x | Streaming real-time factor |
| Avg Chunk Time | 1.506s | Average time to process each chunk |
| Max Chunk Time | 1.643s | Maximum chunk processing time |
| First Token | 1.798s | Latency to first transcription token |
| Total Chunks | 31 | Number of chunks processed |

Streaming (v2)

| Metric | Value | Description |
| --- | --- | --- |
| WER | 0.00% | Word Error Rate in streaming mode |
| RTFx | 0.56x | Streaming real-time factor |
| Avg Chunk Time | 1.566s | Average time to process each chunk |
| Max Chunk Time | 2.382s | Maximum chunk processing time |
| First Token | 1.682s | Latency to first transcription token |
| Total Chunks | 31 | Number of chunks processed |

Streaming tests use 5 files with 0.5s chunks to simulate real-time audio streaming

25 files per dataset • Test runtime: 6m33s • 03/07/2026, 05:41 PM EST

RTFx = Real-Time Factor (higher is better) • Calculated as: Total audio duration ÷ Total processing time
Processing time includes: Model inference on Apple Neural Engine, audio preprocessing, state resets between files, token-to-text conversion, and file I/O
Example: RTFx of 2.0x means 10 seconds of audio processed in 5 seconds (2x faster than real-time)
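As a quick sanity check on that definition:

```python
def rtfx(audio_seconds: float, processing_seconds: float) -> float:
    # Real-Time Factor: seconds of audio handled per second of compute
    return audio_seconds / processing_seconds

print(rtfx(10.0, 5.0))  # → 2.0, matching the example above
```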

Expected RTFx Performance on Physical M1 Hardware:

• M1 Mac: ~28x (clean), ~25x (other)
• CI shows ~0.5-3x due to virtualization limitations

Testing methodology follows HuggingFace Open ASR Leaderboard


github-actions bot commented Mar 7, 2026

Sortformer High-Latency Benchmark Results

ES2004a Performance (30.4s latency config)

| Metric | Value | Target |
| --- | --- | --- |
| DER | 33.4% | <35% |
| Miss Rate | 24.4% | - |
| False Alarm | 0.1% | - |
| Speaker Error | 8.9% | - |
| RTFx | 19.8x | >1.0x |
| Speakers | 4/4 | - |

Sortformer High-Latency • ES2004a • Runtime: 2m 28s • 2026-03-07T22:41:28.734Z


github-actions bot commented Mar 7, 2026

Qwen3-ASR int8 Smoke Test ✅

| Check | Result |
| --- | --- |
| Build | ✅ |
| Model download | ✅ |
| Model load | ✅ |
| Transcription pipeline | ✅ |
| Decoder size | 571 MB (vs 1.1 GB f32) |

Runtime: 5m8s

Note: CI VM lacks physical GPU — CoreML MLState (macOS 15) KV cache produces degraded results on virtualized runners. On Apple Silicon: ~1.3% WER / 2.5x RTFx.


github-actions bot commented Mar 7, 2026

VAD Benchmark Results

Performance Comparison

| Dataset | Accuracy | Precision | Recall | F1-Score | RTFx | Files |
| --- | --- | --- | --- | --- | --- | --- |
| MUSAN | 92.0% | 86.2% | 100.0% | 92.6% | 725.0x faster | 50 |
| VOiCES | 92.0% | 86.2% | 100.0% | 92.6% | 748.3x faster | 50 |

Dataset Details

  • MUSAN: Music, Speech, and Noise dataset - standard VAD evaluation
  • VOiCES: Voices Obscured in Complex Environmental Settings - tests robustness in real-world conditions

✅: Average F1-Score above 70%


github-actions bot commented Mar 7, 2026

Offline VBx Pipeline Results

Speaker Diarization Performance (VBx Batch Mode)

Optimal clustering with Hungarian algorithm for maximum accuracy

| Metric | Value | Target | Description |
| --- | --- | --- | --- |
| DER | 14.5% | <20% | Diarization Error Rate (lower is better) |
| RTFx | 3.93x | >1.0x | Real-Time Factor (higher is faster) |

Offline VBx Pipeline Timing Breakdown

Time spent in each stage of batch diarization

| Stage | Time (s) | % | Description |
| --- | --- | --- | --- |
| Model Download | 15.086 | 5.6 | Fetching diarization models |
| Model Compile | 6.465 | 2.4 | CoreML compilation |
| Audio Load | 0.051 | 0.0 | Loading audio file |
| Segmentation | 35.826 | 13.4 | VAD + speech detection |
| Embedding | 266.184 | 99.7 | Speaker embedding extraction |
| Clustering (VBx) | 0.736 | 0.3 | Hungarian algorithm + VBx clustering |
| Total | 267.083 | 100 | Full VBx pipeline |

Speaker Diarization Research Comparison

Offline VBx achieves competitive accuracy with batch processing

| Method | DER | Mode | Description |
| --- | --- | --- | --- |
| FluidAudio (Offline) | 14.5% | VBx Batch | On-device CoreML with optimal clustering |
| FluidAudio (Streaming) | 17.7% | Chunk-based | First-occurrence speaker mapping |
| Research baseline | 18-30% | Various | Standard dataset performance |

Pipeline Details:

  • Mode: Offline VBx with Hungarian algorithm for optimal speaker-to-cluster assignment
  • Segmentation: VAD-based voice activity detection
  • Embeddings: WeSpeaker-compatible speaker embeddings
  • Clustering: PowerSet with VBx refinement
  • Accuracy: Higher than streaming due to optimal post-hoc mapping

🎯 Offline VBx Test • AMI Corpus ES2004a • 1049.0s meeting audio • 302.7s processing • Test runtime: 4m 59s • 03/07/2026, 05:50 PM EST

@Alex-Wengg merged commit 92a550b into main on Mar 7, 2026
18 checks passed
@Alex-Wengg deleted the g2p-number-fix branch on March 7, 2026 at 22:59