Figure 2 - uploaded by Tatsuya Kawahara
SVM Based Language Model Hierarchy

Source publication
Conference Paper
Full-text available
A speech recognition architecture combining topic detection and topic-dependent language modeling is proposed. In this architecture, a hierarchical back-off mechanism is introduced to improve system robustness. Detailed topic models are applied when topic detection is confident, and wider models that cover multiple topics are applied in cases of...
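The back-off idea described above can be sketched as follows: use the most specific topic model whose detection confidence clears a threshold, otherwise back off toward the topic-independent general model. This is a minimal illustrative sketch; the function and model names and the threshold value are assumptions, not taken from the paper.

```python
# Illustrative sketch of hierarchical back-off model selection.
# Names and the 0.8 threshold are hypothetical.

def select_model(path_from_leaf_to_root, confidences, threshold=0.8):
    """Walk from the most detailed topic model up toward the
    topic-independent G-LM, returning the first model whose
    topic-detection confidence clears the threshold."""
    for model in path_from_leaf_to_root:
        if confidences.get(model, 0.0) >= threshold:
            return model
    # No node is confident enough: fall back to the general model.
    return path_from_leaf_to_root[-1]

path = ["sports_baseball", "sports", "G-LM"]
print(select_model(path, {"sports_baseball": 0.6, "sports": 0.9, "G-LM": 1.0}))
# -> "sports": the leaf model is not confident enough, so the
# wider multi-topic model is used instead.
```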

Contexts in source publication

Context 1
... topic hierarchy is automatically constructed by clustering together those topics likely to be confused during topic detection. An example topic hierarchy is shown in Figure 2. The top node corresponds to a topic-independent G-LM that gives complete coverage of all topics, the bottom layer corresponds to the most detailed, individual topic models, and the intermediate nodes correspond to models that cover multiple topics. ...
Context 2
... models that provide less than a 10% reduction in perplexity compared to the G-LM are removed from the hierarchy. The resulting hierarchy for the SVM case is shown in Figure 2. ...
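The pruning criterion above (drop any node whose perplexity reduction over the G-LM is below 10%) can be sketched as a simple filter. The function name, data layout, and example numbers are illustrative assumptions, not from the paper.

```python
# Illustrative sketch of perplexity-based hierarchy pruning.
# `nodes` is a list of (name, perplexity) pairs; names and
# numbers below are hypothetical.

def prune_hierarchy(nodes, glm_perplexity, threshold=0.10):
    """Keep only nodes whose perplexity is at least `threshold`
    (relative) below the general language model's perplexity."""
    kept = []
    for name, perplexity in nodes:
        reduction = (glm_perplexity - perplexity) / glm_perplexity
        if reduction >= threshold:
            kept.append(name)
    return kept

# With a G-LM perplexity of 200, a node at 170 gives a 15%
# reduction and is kept; a node at 190 (5%) is removed.
print(prune_hierarchy([("news", 170.0), ("sports", 190.0)], 200.0))
# -> ['news']
```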

Similar publications

Article
Full-text available
In an intelligent transportation system, accurate bus information is vital for passengers to schedule their departure times and make reasonable route choices. In this paper, an improved deep belief network (DBN) is proposed to predict the bus travel time. By using Gaussian–Bernoulli restricted Boltzmann machines to construct a DBN, we update the clas...

Citations

... There are dialogue systems designed to recognize sentences by considering a specific language model associated with each system prompt decided by the dialogue manager (Lane et al., 2003; Mori et al., 2003), as is the case of, e.g., SAPLEN. These language models, called prompt-dependent language models in this paper, aim to provide high speech recognition rates and are useful if the interaction is clearly constrained by the system; however, they are not adequate if the user does not follow the system indications and utters sentences not permitted by these language models. ...
Article
This paper presents a new technique to enhance the performance of the input interface of spoken dialogue systems, based on a procedure that combines, during speech recognition, the advantages of prompt-dependent language models with those of a language model independent of the prompts generated by the dialogue system. The technique creates a new speech recognizer, termed contextual speech recognizer, that uses a prompt-independent language model to recognize any kind of sentence permitted in the application domain and, at the same time, uses contextual information (in the form of prompt-dependent language models) to account for the fact that some sentences are more likely to be uttered than others at a particular moment of the dialogue. The experiments show the technique clearly enhances the performance of the input interface of a previously developed dialogue system based exclusively on prompt-dependent language models. More importantly, in comparison with a standard speech recognizer that uses just one prompt-independent language model without contextual information, the proposed recognizer increases the word accuracy and sentence understanding rates by 4.09% and 4.19% absolute, respectively. These scores are slightly better than those obtained using linear interpolation of the prompt-independent and prompt-dependent language models used in the experiments.
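The linear-interpolation baseline mentioned at the end of the abstract combines the two models' word probabilities with a fixed weight. A minimal sketch, assuming `p_indep` and `p_dep` stand for each model's probability of a word given its history (names and values are illustrative):

```python
# Illustrative sketch of linear interpolation of two language models.
# `lam` weights the prompt-dependent model and would normally be
# tuned on held-out data; the value here is hypothetical.

def interpolate(p_indep, p_dep, lam=0.5):
    """Linearly interpolate a prompt-independent and a
    prompt-dependent language-model probability."""
    return lam * p_dep + (1.0 - lam) * p_indep

print(interpolate(0.01, 0.05, lam=0.6))  # 0.6*0.05 + 0.4*0.01 ~= 0.034
```

Because both inputs are probability distributions and the weights sum to one, the interpolated scores remain a valid distribution.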