
Markov Models for Pattern Recognition

From Theory to Applications

Gernot A. Fink (Author)

Book | Hardcover
XII, 248 pages
2007 | 2008 edition
Springer Berlin (Publisher)
978-3-540-71766-9 (ISBN)

CHF 89.85 incl. VAT
A newer edition of this title is available.

This comprehensive introduction to the Markov modeling framework describes the underlying theoretical concepts of Markov models as used for sequential data, covering hidden Markov models and Markov chain models. It also presents the techniques necessary to build successful systems for practical applications. In addition, the book demonstrates the actual use of the technology in the three main application areas of pattern recognition methods based on Markov models: speech recognition, handwriting recognition, and biological sequence analysis. The book is suitable for experts as well as practitioners.

Gernot A. Fink earned his diploma in computer science from the University of Erlangen-Nuremberg, Erlangen, Germany, in 1991. He received a Ph.D. in computer science in 1995 and the venia legendi in applied computer science in 2002, both from Bielefeld University, Germany. He is currently Professor of Pattern Recognition in Embedded Systems at the University of Dortmund, Germany, where he also heads the Intelligent Systems Group at the Robotics Research Institute. His research interests lie in the development and application of pattern recognition methods in the fields of man-machine interaction, multimodal machine perception including speech and image processing, statistical pattern recognition, handwriting recognition, and the analysis of genomic data.

1. Introduction
1.1 Thematic Context
1.2 Capabilities of Markov Models
1.3 Goal and Structure

2. Application Areas
2.1 Speech
2.2 Handwriting
2.3 Biological Sequences
2.4 Outlook

Part I: Theory

3. Foundations of Mathematical Statistics
3.1 Experiment, Event, and Probability
3.2 Random Variables and Probability Distributions
3.3 Parameters of Probability Distributions
3.4 Normal Distributions and Mixture Density Models
3.5 Stochastic Processes and Markov Chains
3.6 Principles of Parameter Estimation
3.7 Bibliographical Remarks

4. Vector Quantisation
4.1 Definition
4.2 Optimality
4.3 Algorithms for Vector Quantiser Design
(Lloyd, LBG, k-means)
4.4 Estimation of Mixture Density Models
4.5 Bibliographical Remarks
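
As an illustration of the codebook design algorithms named in section 4.3, the following is a minimal Lloyd/k-means sketch in Python (NumPy assumed); it illustrates the general technique and is not code from the book.

import numpy as np

def kmeans(data, k, iterations=20, seed=0):
    """Return a (k, dim) codebook trained on data of shape (n, dim)."""
    rng = np.random.default_rng(seed)
    # Initialise the codebook with k randomly chosen training vectors.
    codebook = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(iterations):
        # Assignment step: map every vector to its nearest codeword.
        dists = np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: move each codeword to the centroid of its cell.
        for j in range(k):
            cell = data[labels == j]
            if len(cell) > 0:
                codebook[j] = cell.mean(axis=0)
    return codebook

# Usage: quantise 500 two-dimensional vectors with a codebook of size 4.
samples = np.random.default_rng(1).normal(size=(500, 2))
print(kmeans(samples, k=4))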

5. Hidden-Markov Models
5.1 Definition
5.2 Modeling of Output Distributions
5.3 Use-Cases
5.4 Notation
5.5 Scoring
(Forward algorithm)
5.6 Decoding
(Viterbi algorithm)
5.7 Parameter Estimation
(Forward-backward algorithm,
Baum-Welch, Viterbi, and segmental k-means training)
5.8 Model Variants
5.9 Bibliographical Remarks
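
As an illustration of the scoring problem treated in section 5.5, here is a minimal Python/NumPy sketch of the forward algorithm for a discrete HMM; the names pi, A and B are illustrative choices, not the book's notation.

import numpy as np

def forward(pi, A, B, observations):
    """P(O | model) for start probabilities pi (N,), transition matrix
    A (N, N), output probabilities B (N, M) and an integer observation
    sequence O."""
    alpha = pi * B[:, observations[0]]      # initialisation
    for o_t in observations[1:]:
        # Induction: propagate one time step, then weight by the
        # probability of emitting the current symbol.
        alpha = (alpha @ A) * B[:, o_t]
    return alpha.sum()                      # termination

# Usage with a toy two-state, two-symbol model.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.5, 0.5], [0.1, 0.9]])
print(forward(pi, A, B, [0, 1, 1, 0]))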

6. n-Gram Models
6.1 Definition
6.2 Use-Cases
6.3 Notation
6.4 Scoring
6.5 Parameter Estimation
(discounting, interpolation and backing-off)
6.6 Model Variants
(categorial models, long-distance dependencies)
6.7 Bibliographical Remarks
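
As an illustration of the smoothing ideas listed under section 6.5, here is a minimal Python sketch of a bigram model with simple linear interpolation against the unigram distribution; it is a deliberately simplified stand-in for the discounting and backing-off schemes covered in the book, not code from it.

from collections import Counter

def train_bigram(tokens):
    """Collect unigram and bigram counts from a token sequence."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    return unigrams, bigrams, len(tokens)

def probability(w_prev, w, unigrams, bigrams, total, lam=0.7):
    """P(w | w_prev) as an interpolation of bigram and unigram estimates."""
    p_uni = unigrams[w] / total
    p_bi = bigrams[(w_prev, w)] / unigrams[w_prev] if unigrams[w_prev] else 0.0
    return lam * p_bi + (1.0 - lam) * p_uni

# Usage on a toy corpus.
corpus = "the cat sat on the mat the cat slept".split()
uni, bi, n = train_bigram(corpus)
print(probability("the", "cat", uni, bi, n))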

Part II: Practical Aspects

7. Computations with Probabilities
7.1 Logarithmic Probability Representation
7.2 Flooring of Probabilities
7.3 Codebook Evaluation in Tied-Mixture Models
7.4 Likelihood Ratios
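
As an illustration of chapter 7, the following Python sketch computes with logarithmic probability representations: products become sums, a flooring constant stands in for log(0), and sums of probabilities use the numerically stable log-sum-exp trick; the constant and helper names are illustrative assumptions, not the book's.

import math

LOG_ZERO = -1.0e30  # flooring value standing in for log(0)

def log_mul(log_a, log_b):
    """log(a * b) = log a + log b."""
    return log_a + log_b

def log_add(log_a, log_b):
    """log(a + b), computed stably via log-sum-exp."""
    if log_a < log_b:
        log_a, log_b = log_b, log_a
    if log_b <= LOG_ZERO:
        return log_a
    return log_a + math.log1p(math.exp(log_b - log_a))

# Usage: adding two probabilities far below floating-point underflow.
p, q = math.log(1e-200), math.log(3e-201)
print(math.exp(log_add(p, q)))  # about 1.3e-200, no underflow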

8. Configuration of Hidden-Markov Models
8.1 Model Topologies
8.2 Sub-Model Units
8.3 Compound Models
8.4 Profile-HMMs
8.5 Modelling of Output Probability Densities

9. Robust Parameter Estimation
9.1 Optimization of Feature Representations
(Principal component analysis, whitening, linear discriminant analysis)
9.2 Tying
(of model parameters, especially: mixture tying)
9.3 Parameter Initialization

10. Efficient Model Evaluation
10.1 Efficient Decoding of Mixture Densities
10.2 Beam Search
10.3 Efficient Parameter Estimation
(forward-backward pruning, segmental Baum-Welch,
training of model hierarchies)
10.4 Tree-based Model Representations

11. Model Adaptation
11.1 Foundations of Adaptation
11.2 Adaptation of Hidden-Markov Models
(Maximum-likelihood linear regression)
11.3 Adaptation of n-Gram Models
(cache models, dialog-step dependent models, topic-based
language models)

12. Integrated Search
12.1 HMM Networks
12.2 Multi-pass Search Strategies
12.3 Search-Space Copies
(context and time-based tree copying strategies,
language model look-ahead)
12.4 Time-synchronous Integrated Decoding

Part III: Putting it All Together

13. Speech Recognition
13.1 Application-Specific Processing
(feature extraction, vocal tract length normalization, ...)
13.2 Systems
(e.g. BBN Byblos, SPHINX III, ...)

14. Text Recognition
14.1 Application-Specific Processing
(linearization of data representation for off-line applications,
preprocessing, normalization, feature extraction)
14.2 Systems for On-line Handwriting Recognition
14.3 Systems for Off-line Handwriting Recognition

15. Analysis of Biological Sequences
15.1 Representation of Biological Sequences
15.2 Systems
(HMMer, SAM, Meta-MEME)

Additional info: XII, 248 p., 51 illus.
Language: English
Original title: Mustererkennung mit Markov-Modellen
Dimensions: 155 x 235 mm
Weight: 595 g
Binding: hardcover
Subject area: Mathematics / Computer Science
Keywords: biological sequence analysis • handwriting recognition • Markov models • pattern recognition • speech recognition
ISBN-10 3-540-71766-8 / 3540717668
ISBN-13 978-3-540-71766-9 / 9783540717669
Condition: new