
Machine Learning for Vision-Based Motion Analysis (eBook)

Theory and Techniques
eBook Download: PDF
2010 | 2011
XIV, 372 pages
Springer London (publisher)
978-0-85729-057-1 (ISBN)


€149.79 incl. VAT
(CHF 146.30)
eBook sales are handled by Lehmanns Media GmbH (Berlin) at the price in euros incl. VAT.
  • Download available immediately

Techniques of vision-based motion analysis aim to detect, track, identify, and generally understand the behavior of objects in image sequences. With the growth of video data in a wide range of applications from visual surveillance to human-machine interfaces, the ability to automatically analyze and understand object motions from video footage is of increasing importance. Among the latest developments in this field is the application of statistical machine learning algorithms for object tracking, activity modeling, and recognition.

Developed from expert contributions to the First and Second International Workshops on Machine Learning for Vision-Based Motion Analysis, this important text/reference presents the latest algorithms and systems for robust and effective vision-based motion understanding from a machine learning perspective. Emphasizing the benefits of collaboration between the object motion understanding and machine learning communities, the book discusses the most active research fronts, including current challenges and potential future directions.

Topics and features:
  • Provides a comprehensive review of the latest developments in vision-based motion analysis, presenting numerous case studies on state-of-the-art learning algorithms
  • Examines algorithms for clustering and segmentation, and manifold learning for dynamical models
  • Describes the theory behind mixed-state statistical models, with a focus on mixed-state Markov models that take into account spatial and temporal interaction
  • Discusses object tracking in surveillance image streams, discriminative multiple target tracking, and guidewire tracking in fluoroscopy
  • Explores issues of modeling for saliency detection, human gait modeling, modeling of extremely crowded scenes, and behavior modeling from video surveillance data
  • Investigates methods for automatic recognition of gestures in Sign Language, and human action recognition from small training sets

Researchers, professional engineers, and graduate students in computer vision, pattern recognition, and machine learning will all find this text an accessible survey of machine learning techniques for vision-based motion analysis. The book will also be of interest to all who work with specific vision applications, such as surveillance, sports event analysis, healthcare, video conferencing, and motion video indexing and retrieval.
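To give a concrete feel for the material, the opening chapter in the contents below deals with practical normalized-cut (Ncut) spectral clustering for large-scale motion analysis. The following sketch is not taken from the book; it is a minimal NumPy illustration of standard normalized spectral clustering applied to toy 2-D motion feature vectors, where the Gaussian affinity, the scale parameter sigma, the plain k-means step, and the synthetic data are all illustrative assumptions rather than the authors' implementation.

    # A minimal sketch, not from the book: normalized spectral clustering
    # on toy 2-D motion feature vectors. Affinity construction, sigma, and
    # the synthetic data are illustrative assumptions.
    import numpy as np

    def spectral_clustering(X, k, sigma=1.0):
        """Cluster the rows of X into k groups via the normalized graph Laplacian."""
        n = len(X)
        # Gaussian affinity matrix W from pairwise squared distances.
        sq = np.sum(X ** 2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
        W = np.exp(-d2 / (2.0 * sigma ** 2))
        np.fill_diagonal(W, 0.0)

        # Symmetric normalized Laplacian: L = I - D^(-1/2) W D^(-1/2).
        d = W.sum(axis=1)
        D_inv_sqrt = np.diag(1.0 / np.sqrt(d + 1e-12))
        L = np.eye(n) - D_inv_sqrt @ W @ D_inv_sqrt

        # Embed each point using the eigenvectors of the k smallest eigenvalues.
        _, eigvecs = np.linalg.eigh(L)
        U = eigvecs[:, :k]
        U = U / (np.linalg.norm(U, axis=1, keepdims=True) + 1e-12)

        # Plain k-means (Lloyd iterations) on the spectral embedding.
        rng = np.random.default_rng(0)
        centers = U[rng.choice(n, k, replace=False)]
        for _ in range(50):
            labels = np.argmin(((U[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
            centers = np.array([U[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        return labels

    # Toy example: two well-separated groups of 2-D motion feature vectors.
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0.0, 1.0, (30, 2)), rng.normal(6.0, 1.0, (30, 2))])
    print(spectral_clustering(X, k=2, sigma=1.0))

In the book's setting, the inputs would be motion features extracted from image sequences rather than synthetic points, and the costly steps above (affinity computation and the eigendecomposition) are precisely what the chapter addresses at scale via random projection, random sampling, and pre-clustering.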



Preface 5
Part I: Manifold Learning and Clustering/Segmentation 6
Part II: Tracking 7
Part III: Motion Analysis and Behavior Modeling 9
Part IV: Gesture and Action Recognition 10
Acknowledgements 11
Contents 12
Manifold Learning and Clustering/Segmentation 14
Practical Algorithms of Spectral Clustering: Toward Large-Scale Vision-Based Motion Analysis 15
Introduction 15
Spectral Clustering 16
Principle 16
Algorithm 18
Related Work 18
Dimensionality Reduction by Random Projection 19
Random Projection 19
Acceleration of Kernel Computation 20
Random Sampling as Random Projection 20
Using a Minority of Image Pixels 21
Efficient Random Projection 21
Size Reduction of Affinity Matrix by Sampling 23
Random Subsampling 24
Pre-clustering 25
Practical Ncut Algorithms 26
Randomized Ncut Algorithm 26
Invocation of Dimensionality Reduction 27
Relation to the Original Algorithm 27
Scale Selection 27
Number of Clusters 28
Ncut Algorithm with Pre-clustering 28
Experiments 29
Performance Tests 29
Error Analysis 29
Computational Cost 30
Image Segmentation 31
Motion Segmentation 32
Video Shot Segmentation 33
Segmentation Using Appearance-Based Similarities 33
Segmentation with Local Scaling 33
Conclusions 35
Appendix: Clustering Scores 37
References 37
Riemannian Manifold Clustering and Dimensionality Reduction for Vision-Based Analysis 39
Introduction 40
Chapter Summary 42
Review of Local Nonlinear Dimensionality Reduction Methods in Euclidean Spaces 43
NLDR for a Nonlinear Manifold 43
Calculation of M in LLE 44
Calculation of M in LE 44
Calculation of M in HLLE 45
NLDR for a Single Subspace 45
Manifold Clustering and Dimensionality Reduction Using the Euclidean Metric 47
Manifold Clustering and Dimensionality Reduction for a k-Separated Union of k-Connected Nonlinear Manifolds 47
Degeneracies for a k-Separated Union of k-Connected Linear Manifolds 48
Manifold Clustering and Dimensionality Reduction Using the Riemannian Metric 50
Review of Riemannian Manifolds 50
Extending Manifold Clustering and Dimensionality Reduction to Riemannian Manifolds 53
Selection of the Riemannian kNN 53
Riemannian Calculation of M for LLE 53
Riemannian Calculation of M for LE 54
Riemannian Calculation of M for HLLE 54
Calculation of the Embedding Coordinates 54
Extending Manifold Clustering to Riemannian Manifolds 55
Experiments 55
Application and Experiments on SPSD(3) 55
Application and Experiments on the Space of Probability Density Functions 58
Conclusion and Open Research Problems 62
References 63
Manifold Learning for Multi-dimensional Auto-regressive Dynamical Models 66
Introduction 66
Learning Pullback Metrics for Linear Models 68
Pullback Metrics 68
Fisher Metric for Linear Models 69
General Framework 69
Objective Functions: Classification Performance and Inverse Volume 71
Pullback Metrics for Multidimensional Autoregressive Models 72
The Basis Manifold 72
The Basis Manifold AR(2,1) in the Scalar Case 72
The Multidimensional Case 73
Product Metric 73
Geodesics 74
An Automorphism for the Scalar Case 75
Product and Global Automorphisms for AR(2,p) 75
Volume Element for AR(2,p) Under Product Automorphism 76
Tests on Identity Recognition 77
Feature Representation 78
Identification of an AR(2,p) Model for Each Sequence 78
Performances of Optimal Pullback Metrics 80
Influence of Parameters 82
Perspectives and Conclusions 83
References 83
Tracking 86
Mixed-State Markov Models in Image Motion Analysis 87
Introduction 88
Outline of the Chapter 89
Related Work: Discrete-Continuous Models and Dynamic Textures 89
Discrete-Continuous Approaches 90
Dynamic Texture Characterization 91
The Mixed-State Nature of Motion Measurements 92
Mixed-State Markov Models 94
Mixed-State Random Variables 94
Mixed-State Markov Random Fields 96
The Mixed-State Gibbs Distribution 97
Mixed-State Automodels 98
Causal Mixed-State Markov Models 101
Sampling, Estimation and Inference in MS-MRF 102
Sampling 102
Parameter Estimation 103
Inference of Mixed-State Random Fields 104
Characterizing Motion Textures with MS-MRF 105
Defining the Set of Parameters 105
Recognition of Motion Textures 106
Application to Motion Texture Classification 107
Temporal Consistency 110
Mixed-State Causal Modeling of Motion Textures 111
Learning MS-MC Motion Texture Models 113
Model Matching 113
Mixed-State Markov Chain vs. Mixed-State Markov Random Field Motion Texture Models 114
Motion Texture Tracking 117
Experimental Results 118
Conclusions 123
References 123
Learning to Detect Event Sequences in Surveillance Streams at Very Low Frame Rate 126
Introduction 126
Approaches for Image Reviews 127
Surveillance for Nuclear Safeguards 129
Filtering Surveillance Streams by Combining Uninformed and Informed Search Strategies 130
Searching Events by Scene Change Detection 132
Searching Events by Sequence and Time Attributes 134
Modeling Nuclear Flask Processing with a HSMM 136
State Space 137
Transition Matrix 137
Sojourn Times 138
Emissions 138
Training the HSMM 139
The MM Image Review Tool 140
Discussion about the MM Review Tool 142
Benchmarking Image Review Filters 145
Image Sets 145
Performance Metrics 146
Experimental Results 147
Discussion 150
References 152
Discriminative Multiple Target Tracking 154
Introduction 154
Appearance and Motion Model of Multiple Targets 156
Metric Learning Framework 156
Joint Appearance Model Estimation 157
Motion Parameter Optimization 158
Online Matching and Updating Multiple Models 159
Discriminant Exclusive Principle 161
Experiments 161
Visualization of Learned Appearance Model 161
Multiple Target Tracking for Different Video Sequences 162
Discussions, Conclusion and Future Work 165
References 166
A Framework of Wire Tracking in Image Guided Interventions 168
Background 169
Guidewire Tracking Method 172
Method Overview 172
A Guidewire Model 172
A Probabilistic Guidewire Tracking Framework 173
Guidewire Measurement Models 175
Learning-Based Guidewire Measurements 175
Appearance-Based Measurements 176
Fusion of Multiple Measurements 177
Hierarchical and Multi-resolution Guidewire Tracking 178
Kernel-Based Measurement Smoothing 178
Rigid Tracking 178
Nonrigid Tracking 179
Experiments 181
Data and Evaluation Protocol 181
Quantitative Evaluations 183
Conclusion 184
References 185
Motion Analysis and Behavior Modeling 187
An Integrated Approach to Visual Attention Modeling for Saliency Detection in Videos 188
Introduction 189
Interest Point Detection 189
Visual Attention Modeling 190
Proposed Approach 191
Prior Work 193
Visual Attention Modeling Methods 194
Bottom-Up Saliency 194
Top-Down Saliency 196
Integrating Top-Down and Bottom-Up Saliency 196
Interest Point Detection Methods 197
Human Eye Movement as Indicators of User Interest 199
Use of Eye-Tracking in Related Work 200
Limitations of Existing Work 201
Learning Attention-Based Saliency: Conceptual Framework 202
Learning Context-Specific Saliency 203
Predicting Context-Specific Saliency 204
Experiments and Results 204
Experimental Setup 205
Implementation 206
Results 207
Eye Movement Prediction 208
Context Specific Saliency Detection 209
Discussion 211
Integrating Bottom-Up Approaches: A Possible Extension 213
Integration Using Probabilistic Framework 213
Results of the Integrated Framework 214
Conclusions and Future Work 217
Possible Applications 217
Future Work 218
References 219
Video-Based Human Motion Estimation by Part-Whole Gait Manifold Learning 222
Introduction 223
Related Works 224
Discriminative Approaches 225
Feature Representation 225
Inference Algorithms 225
Generative Approaches 226
Visual Observations 226
Human Shape Models 226
Inference Algorithms 227
Human Motion Models 227
Single Pose Manifold 228
Dual Pose Manifolds 228
Shared Pose Manifold 228
Our Research 228
Research Overview 229
Dual Gait Generative Models 229
Gait Manifolds 229
Inference for Gait Estimation 230
Dual Gait Generative Models 231
Kinematic Gait Generative Model (KGGM) 231
Visual Gait Generative Model (VGGM) 232
Two-Layer KGGM and VGGM 233
Gait Manifolds 234
Gait Manifold Learning 234
Gait Manifold Topology 236
Euclidean Distance Between Gait Vectors 236
Distance Between 3D Joint Positions 237
Fourier Analysis of Joint Angles 237
Part-Whole Gait Manifolds 239
Manifold Mapping Between KGGM and VGGM 240
Nonlinear Mapping Functions (MAP-1) 241
Similarity-Preserving Mapping Functions (MAP-2) 241
MAP-1 vs. MAP-2 242
Inference Algorithms 242
Graphical Models 242
Whole-Based Gait Estimation 244
Segmental Modeling 245
Mode-Based Gait Estimation 245
Segmental Jump-Diffusion MCMC Inference 246
Part-Based Gait Estimation 247
Part-Level Gait Priors 248
Part-Level Likelihood Functions 249
Experimental Results and Discussions 250
Experimental Setups 251
Training Data Collection 251
Testing Data Collection 253
Local Error Analysis 253
Global Error Analysis 255
Experiments on KGGM 255
Evaluation of Two-Stage Inference 256
Segmental Gait Modeling 256
Local Motion Estimation 257
Whole-Based Gait Estimation 258
Part-Whole Gait Estimation 259
Overall Performance Evaluation 260
Group-I 261
Group-II 263
Group-III 263
Limitations and Discussion 264
Conclusion and Future Research 265
References 265
Spatio-Temporal Motion Pattern Models of Extremely Crowded Scenes 269
Introduction 269
Related Work 271
Local Spatio-Temporal Motion Patterns 271
Prototypical Motion Patterns 273
Distribution-Based Hidden Markov Models 275
Experimental Results 276
Conclusion 279
References 280
Learning Behavioral Patterns of Time Series for Video-Surveillance 281
Introduction 281
Related Works 283
Low Level Processing and Initial Representation 286
Video Processing 286
The Choice of the Input Space 287
Temporal Series Representations 288
Curve Fitting 288
Probabilistic Models 289
String-Based Approach 290
Learning Behaviors 291
The Learning Phase 291
Kernels for Time-Series 293
Probability Product Kernel (PPK) 293
Kernels for String-Based Representations 294
Run Time Analysis 294
Experimental Analysis 295
Data Collection and Semi-automatic Labeling 295
Model Selection 297
Curve Fitting 297
Probabilistic Model 298
String-Based Approach 299
Kernel Choice 299
RLS Classification 299
HMMs Likelihood Estimation 300
Spectral Clustering 300
Supervised Analysis 300
Unsupervised Analysis 302
Discussion and Open Problems 306
References 309
Gesture and Action Recognition 311
Recognition of Spatiotemporal Gestures in Sign Language Using Gesture Threshold HMMs 312
Introduction 312
Related Work 313
Isolated Gesture Recognition 313
Continuous Gesture Recognition 314
Chapter Outline 316
Hidden Markov Models 317
HMM Algorithms 318
Types of HMMs 318
Threshold HMM Model 319
GT-HMM Framework 320
GT-HMM Training 321
Gesture Subunit Initialization 322
GT-HMM for Gesture Recognition 326
Gesture Classification 326
Parallel Training 327
Parallel Gesture Classification 327
Continuous Recognition 328
Candidate Selection 329
Experiments 330
Feature Extraction 330
Evaluation of Techniques on Isolated Gestures 332
Manual Sign Experiments 333
Head Gesture Experiments 336
Eye Brow Gesture Experiments 338
Benchmark Data-Set: Marcel InteractPlay Database 340
Continuous Gesture Recognition Experiments 343
Continuous Experiment Results 344
Continuous User Independent Experiment Results 345
Multimodal Recognition Examples 348
Conclusion 349
References 351
Learning Transferable Distance Functions for Human Action Recognition 354
Introduction 354
Previous Work 356
Related Work in Learning 356
"Relatedness" via Features 357
"Relatedness" via Model Parameters 358
Related Work in Vision 358
Motion Descriptors and Matching Scheme 360
Motion Descriptors 360
Patch Based Action Comparison 360
Learning a Transferable Distance Function 362
Transferable Distance Function 363
Max-Margin Formulation 364
Solving the Dual 366
Hyper-Features 366
Experiments 367
Datasets 367
KTH Dataset 367
Weizmann Dataset 367
Cluttered Human Action Dataset 368
Experimental Results 368
Direct Comparison on KTH 368
Training on Weizmann and Testing on KTH 369
Direct Comparison on Weizmann 372
Direct Comparison on Cluttered Action Dataset 372
Training on KTH and Testing on Cluttered Action Dataset 373
Conclusion 373
References 373
Index 376

Publication date (per publisher) 18.11.2010
Series Advances in Computer Vision and Pattern Recognition
Additional information XIV, 372 p.
Place of publication London
Language English
Subject area Computer Science Graphics / Design Digital Image Processing
Computer Science Theory / Studies Artificial Intelligence / Robotics
Keywords computer vision • Graphical Models • Kernel Machines • machine learning • manifold learning • motion analysis • Visual Event Analysis
ISBN-10 0-85729-057-6 / 0857290576
ISBN-13 978-0-85729-057-1 / 9780857290571
PDF (no DRM)

Digital Rights Management: no DRM
This eBook contains no DRM or copy protection. Passing it on to third parties is nevertheless not legally permitted, because with the purchase you acquire only the rights to personal use.

File format: PDF (Portable Document Format)
With its fixed page layout, PDF is particularly well suited to technical books with columns, tables, and figures. A PDF can be displayed on almost all devices, but it is only of limited use on small displays (smartphone, eReader).

System requirements:
PC/Mac: You can read this eBook on a PC or Mac. You will need a PDF viewer, e.g. Adobe Reader or Adobe Digital Editions.
eReader: This eBook can be read with (almost) all eBook readers. It is not, however, compatible with the Amazon Kindle.
Smartphone/Tablet: Whether Apple or Android, you can read this eBook. You will need a PDF viewer, e.g. the free Adobe Digital Editions app.

Buying eBooks from abroad
For tax law reasons, we can sell eBooks only within Germany and Switzerland. Regrettably, we cannot fulfill eBook orders from other countries.
