Table 6 100-Best rescoring with different LMs on Hub5’00-SWB and RT03S-FSH

From: RNN language model with word clustering and class-based output layer

| Model | Perplexity (Hub5’00-SWB) | Perplexity (RT03S-FSH) | WER %, abs. change (Hub5’00-SWB) | WER %, abs. change (RT03S-FSH) |
| --- | --- | --- | --- | --- |
| LM-KN3 | 89.40 | 66.76 | 24.5 | 27.5 |
| LM-KN5 | 86.78 | 63.80 | 24.1 (−0.4) | 27.1 (−0.4) |
| RNNLM-Freq | 72.47 | 55.76 | 22.9 (−1.6) | 25.9 (−1.6) |
| RNNLM-Freq + LM-KN5 | 67.66 | 52.15 | 22.4 (−2.1) | 25.5 (−2.0) |
| RNNLM-Brown | 69.91 | 54.48 | 22.6 (−1.9) | 25.7 (−1.8) |
| RNNLM-Brown + LM-KN5 | *66.00* | *51.24* | *22.2* (−2.3) | *25.3* (−2.2) |
  1. Values in italics indicate the lowest perplexity and WER on Hub5’00-SWB and RT03S-FSH.
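The "RNNLM + LM-KN5" rows correspond to combining the two language models when rescoring the 100-best lists. A common way to do this is linear interpolation of the two models' word probabilities, then re-ranking each hypothesis by its combined acoustic and LM score. The sketch below illustrates that idea; the function names, the interpolation weight `lam`, and the `lm_weight` scale factor are illustrative assumptions, not values from the paper.

```python
import math

def interpolate_logprob(logp_rnn, logp_kn, lam=0.5):
    # Linear interpolation in the probability domain (lam is a tunable
    # weight, typically chosen on held-out data; 0.5 is an assumption):
    # p = lam * p_rnn + (1 - lam) * p_kn
    return math.log(lam * math.exp(logp_rnn) + (1 - lam) * math.exp(logp_kn))

def rescore_nbest(hypotheses, lam=0.5, lm_weight=1.0):
    # hypotheses: list of tuples
    #   (text, acoustic_logprob, rnnlm_logprob, kn_lm_logprob)
    # Returns the hypothesis text with the highest combined score.
    best_text, best_score = None, -math.inf
    for text, am, rnn, kn in hypotheses:
        score = am + lm_weight * interpolate_logprob(rnn, kn, lam)
        if score > best_score:
            best_text, best_score = text, score
    return best_text
```

With equal interpolation weights, a hypothesis with a weaker acoustic score can still win if both LMs strongly prefer it, which is exactly the effect n-best rescoring exploits.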