
Ambient Intelligence (eBook)

A Novel Paradigm

Gian Luca Foresti, Tim Ellis (editors)

eBook Download: PDF
2006 | 2005
XIII, 240 pages
Springer New York (publisher)
978-0-387-22991-1 (ISBN)

€96.29 incl. VAT
(CHF 93.95)
eBook sales are handled by Lehmanns Media GmbH (Berlin) at the price in euros, incl. VAT.
  • Download available immediately

Ambient Intelligence (AmI) is an integrating technology that supports a pervasive and transparent infrastructure for implementing smart environments. Such technology is used to enable environments that detect events and the behaviour of people and respond in a contextually relevant fashion. AmI proposes a multi-disciplinary approach to enhancing human-machine interaction.

Ambient Intelligence: A Novel Paradigm is a compilation of edited chapters describing the current state of the art and new research techniques, including those related to intelligent visual monitoring, face and speech recognition, innovative education methods, and smart and cognitive environments.

The authors start with a description of the iDorm as an example of a smart environment conforming to the AmI paradigm, and introduce computer vision as an important component of the system. Other computer vision examples describe visual monitoring for the elderly, classic and novel surveillance techniques using clusters of cameras installed in indoor and outdoor application domains, and the monitoring of public spaces. Face and speech recognition systems are also covered, as well as enhanced LEGO blocks for novel educational purposes. The book closes with a provocative chapter on how a cybernetic system can be designed as the backbone of human-machine interaction.



Contents 7
Preface 9
Foreword 11
1 AMBIENT INTELLIGENCE 14
1. Introduction 14
2. The Essex approach 15
2.1 The iDorm - A Testbed for Ubiquitous Computing and Ambient Intelligence 15
2.2 The iDorm Embedded Computational Artifacts 17
3. Integrating Computer Vision 21
3.1 User Detection 22
3.2 Estimating reliability of detection 24
3.3 Vision in the iDorm 26
4. Conclusions 26
References 26
2 TOWARDS AMBIENT INTELLIGENCE FOR THE DOMESTIC CARE OF THE ELDERLY 28
1. Introduction 28
2. An Integrated Supervision System 29
2.1 E-service Based Integration Schemata 32
3. People and Robot Localization and Tracking System 34
3.1 System architecture and implementation 36
4. The Plan Execution Monitoring System 39
4.1 Representing Contingencies 42
4.2 The Execution Monitor 43
5. Integrating Sensing and Execution Monitoring: a Running Example 46
6. Conclusions and Future Work 49
References 51
3 SCALING AMBIENT INTELLIGENCE 52
1. Ambient Intelligence: the contribution of different disciplines 52
2. I-BLOCKS technology 55
3. Design process 57
4. Scaling Ambient Intelligence at level of compositional devices: predefined activities 58
4.1 Arithmetic training 59
4.2 Storytelling Play Scenario 60
4.3 Linguistic scenario 63
5. Scaling Ambient Intelligence at level of compositional devices: free activities 65
6. Scaling Ambient Intelligence at the level of configurable environments: future scenarios 67
6.1 The Augmented Playground 67
6.2 Self-reconfigurable Robots 70
7. Discussion and conclusions 71
References 73
4 VIDEO AND RADIO ATTRIBUTES EXTRACTION FOR HETEROGENEOUS LOCATION ESTIMATION 76
1. Introduction 76
2. Related work 77
3. Main tasks of Ambient Intelligence systems 78
4. Architecture design 79
4.1 Inspiration 79
4.2 Mapping the Model into an AmI Architecture 81
4.3 Artificial Sensing 82
4.4 Proposed structure 82
5. Context aware systems 84
5.1 Location feature 85
5.2 The formalism 86
5.3 Alignment and Extraction of Video and Radio Object Reports 88
6. Results 93
6.1 The environment 93
6.2 Results for video object extraction 93
6.3 Results for radio object extraction 93
6.4 Alignment results 95
7. Conclusions 95
8. Acknowledgments 96
References 96
5 DISTRIBUTED ACTIVE MULTICAMERA NETWORKS 102
1. Introduction 102
2. Sensing modalities 102
3. Vision for Ambient Intelligence 103
4. Architecture 104
5. Tracking and object detection 105
5.1 Object detection 105
5.2 Tracking 106
5.3 Appearance models 107
5.4 Track data 108
6. Normalization 108
7. Multi-camera coordination 110
8. Multi-scale image acquisition 111
8.1 Active Head Tracking and Face Cataloging 112
8.2 Uncalibrated, multiscale data acquisition 114
8.3 Extensions 115
9. Indexing Surveillance Data 115
9.1 Visualization 116
10. Privacy 116
11. Conclusions 117
References 117
6 A DISTRIBUTED MULTIPLE CAMERA SURVEILLANCE SYSTEM 120
1. Introduction 120
2. System architecture 121
3. Motion detection and single-view tracking 121
3.1 Motion Detection 122
3.2 Scene Models 124
3.3 Target Tracking 125
3.4 Partial Observation 126
3.5 Target Reasoning 129
4. Multi view tracking 133
4.1 Homography Estimation 133
4.2 Least Median of Squares 134
4.3 Feature Matching Between Overlapping Views 135
4.4 3D Measurements 136
4.5 Tracking in 3D 137
4.6 Non-Overlapping Views 139
5. System architecture 142
5.1 Surveillance Database 143
6. Summary 145
7. Appendix 147
7.1 Kalman Filter 147
7.2 Homography Estimation 148
7.3 3D Measurement Estimation 149
References 150
7 LEARNING AND INTEGRATING INFORMATION FROM MULTIPLE CAMERA VIEWS 152
1. Introduction 152
1.1 Semantic Scene Model 154
2. Learning point-based regions 156
3. Learning trajectory-based regions 159
3.1 Route model 159
3.2 Learning algorithm 161
3.3 Segmentation to paths and junctions 162
4. Activity analysis 163
5. Integration of information from multiple views 164
5.1 Multiple Camera Activity Network (MCAN) 166
6. Database 168
6.1 Metadata Generation 171
7. Summary 175
References 175
8 FAST ONLINE SPEAKER ADAPTATION FOR SMART ROOM APPLICATIONS 178
1. Introduction 178
2. Description of the proposed on-line adaptation technique 179
3. Implementation details of proposed approach 183
3.1 Calculation of in an FST framework 183
4. Experimental details and results 185
5. Conclusions 187
References 187
9 STEREO-BASED 3D FACE RECOGNITION SYSTEM FOR AMI 190
1. Introduction 190
2. Face Recognition: Review 192
2.1 Face Recognition from Still Images 192
2.2 Face Recognition from Image Retrievals 193
2.3 3D Face Recognition 194
2.4 NIVA System Overview 195
3. NIVA 3D Vision System 195
3.1 NIVA 3D Stereo-based Face Database 196
4. Face Recognition in NIVA 196
4.1 Fisher/Linear Discriminant Analysis 197
4.2 Face Classification in NIVA 198
4.3 Pattern Vectors 198
5. NIVA Dynamic Indexing to Database and Recognition 199
6. NIVA Implementation of Indexing and Recognition 199
6.1 Feature Space 200
6.4 Step 2: Face Recognition 202
7. Testing and Results 202
7.1 Indexing and Recognition Performance 203
7.2 Conclusion and Future Work 205
References 209
10 SECURITY AND BUILDING INTELLIGENCE 212
1. Introduction 212
2. System Description 213
3. People tracking and counting 215
3.1 People tracking 215
3.2 People counting 216
4. Event detection and association 217
5. Experimental results 217
6. AmI for training environments 218
7. Conclusions 222
References 223
11 SUSTAINABLE CYBERNETICS SYSTEMS 226
1. Encoding Interplay and Co-Evolution 229
1.1 Encoding Interplay between Natural and Cybernetic Systems 229
1.2 Encoding Co-Evolution of Natural and Cybernetic Systems 236
2. Sustaining Ambient Intelligence 245
2.1 Propagating Structure and Function 245
2.2 Indicators of Sustainability 249
2.3 Collective Intelligent Agents 250
3. Conclusion 250
References 251
Index 252

Chapter 6

A DISTRIBUTED MULTIPLE CAMERA SURVEILLANCE SYSTEM
(pp. 107-108)

T. Ellis, J. Black, M. Xu and D. Makris
Digital Imaging Research Centre (DIRC), Kingston University, UK
{t.ellis,j.black,m.xu,d.makris}@kingston.ac.uk

1. Introduction

An important capability of an ambient intelligent environment is the capacity to detect, locate and identify objects of interest. In many cases these objects of interest can move, and in order to provide meaningful interaction, capturing and tracking their motion creates a perceptually-enabled interface, capable of understanding and reacting to a wide range of actions and activities. CCTV systems fulfil an increasingly important role in the modern world, providing live video access to remote environments. Whilst the role of CCTV has primarily been focused on rather specific surveillance and monitoring tasks (e.g. security and traffic monitoring), its potential uses cover a much wider range.

Video security surveillance systems have proliferated over the past 5-10 years, in both public and commercial environments, and are used extensively to remotely monitor activity in sensitive locations and publicly accessible spaces. In town and city centres, surveillance has been acknowledged to result in significant reductions in crime. However, in order to provide comprehensive, large-area coverage of anything but the simplest environments, a large number of cameras must be employed.

In complex and cluttered environments with even moderate numbers of moving objects (e.g. 10-20), the problem of tracking individual objects is significantly complicated by occlusions in the scene, where an object may be partially occluded or disappear entirely from the camera view for either short or extended periods of time. Static occlusion results from objects moving behind (with respect to the camera) fixed elements in the scene (e.g. walls, bushes), whilst dynamic occlusion occurs as a result of moving objects in the scene occluding each other, where targets may merge or separate (e.g. a group of people walking together).
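To make the distinction concrete, the following minimal Python sketch (not taken from the chapter; the Track structure, occlusion mask and helper names are hypothetical) shows one way a single-view tracker might label a missing detection as static occlusion, dynamic occlusion, or a likely exit from the view.

```python
# Minimal sketch: labelling why a tracked object has no supporting detection.
# The data structures and the occlusion mask are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Track:
    track_id: int
    bbox: tuple    # (x, y, w, h) of the last predicted/observed bounding box
    visible: bool  # False if no detection supported this track in the frame

def boxes_overlap(a, b):
    """Axis-aligned bounding-box overlap test."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def occlusion_type(track, other_tracks, static_occlusion_mask):
    """Classify a missing detection as dynamic occlusion, static occlusion,
    or a probable exit/lost track."""
    if track.visible:
        return "none"
    x, y, w, h = track.bbox
    # Dynamic occlusion: another moving target overlaps the predicted region
    # (targets merging, e.g. people walking together).
    for other in other_tracks:
        if other.track_id != track.track_id and boxes_overlap(track.bbox, other.bbox):
            return "dynamic"
    # Static occlusion: the predicted region falls behind a fixed scene element
    # (wall, bush) recorded in a 2D boolean mask of known occluding structure.
    if static_occlusion_mask[int(y + h / 2)][int(x + w / 2)]:
        return "static"
    return "exit_or_lost"
```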

Information can be combined from multiple viewpoints to improve reliability, in particular taking advantage of the additional information where it minimises occlusion within the field of view (FOV). We treat the non-visible regions between camera views as simply another type of occlusion, and employ spatio-temporal reasoning to match targets moving between cameras that are spatially adjacent. The "boundaries" of the system represent locations from which previously unseen targets can enter the network.
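The sketch below illustrates, under assumed parameters, the kind of spatio-temporal matching described here: a target that exits one camera's view through a known boundary zone is matched against candidates appearing in a spatially adjacent camera within a plausible transit-time window, with a colour cue used to break ties. The adjacency table, transit times and distance threshold are illustrative assumptions, not the system's actual values.

```python
# Hedged sketch of cross-camera handover via spatio-temporal reasoning.
import math

# Assumed transit-time windows (seconds) between exit/entry zones of
# spatially adjacent cameras; the zone names are hypothetical.
ADJACENT_TRANSIT = {("cam1_exit_east", "cam2_entry_west"): (2.0, 15.0)}

def appearance_distance(hist_a, hist_b):
    """Simple Euclidean histogram distance standing in for a colour identity cue."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(hist_a, hist_b)))

def match_reappearance(lost_target, candidates, max_colour_dist=0.3):
    """Pick the best candidate observation for a target lost in a blind region
    between cameras; returns None if no candidate is spatio-temporally plausible."""
    best, best_score = None, float("inf")
    for cand in candidates:
        window = ADJACENT_TRANSIT.get((lost_target["exit_zone"], cand["entry_zone"]))
        if window is None:
            continue                      # zones are not spatially adjacent
        lo, hi = window
        dt = cand["time"] - lost_target["time"]
        if not (lo <= dt <= hi):
            continue                      # transit time implausible
        dist = appearance_distance(lost_target["colour_hist"], cand["colour_hist"])
        if dist < max_colour_dist and dist < best_score:
            best, best_score = cand, dist
    return best
```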

Robust tracking across the camera network requires the system to maintain a record of each target from the moment it enters the system and throughout its lifetime within it. When a target disappears from any camera FOV, motion prediction, colour identification and learnt route patterns are used to re-establish tracking when the target reappears. Each target is maintained as a persistent object in the active database, and spatial and temporal reasoning are used to detect when targets have left the monitored region, ensuring that entries are not retained for indefinite periods.
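As a rough illustration of keeping targets as persistent records with a bounded lifetime, the following sketch maintains an in-memory store keyed by target identity and purges entries whose last observation is older than an assumed retention window; the record fields and time-out value are hypothetical and not taken from the surveillance database described later in the chapter.

```python
# Illustrative sketch: persistent target records with temporal expiry.
import time

RETENTION_SECONDS = 300  # assumed upper bound on a plausible reappearance delay

class TargetStore:
    def __init__(self):
        self.records = {}    # target_id -> record dict

    def update(self, target_id, camera_id, position, colour_hist):
        """Insert or refresh a target record whenever any camera observes it."""
        self.records[target_id] = {
            "camera": camera_id,
            "position": position,
            "colour_hist": colour_hist,
            "last_seen": time.time(),
        }

    def purge_expired(self):
        """Drop targets that have not reappeared within the retention window,
        so entries are not kept for indefinite periods."""
        now = time.time()
        stale = [tid for tid, rec in self.records.items()
                 if now - rec["last_seen"] > RETENTION_SECONDS]
        for tid in stale:
            del self.records[tid]
        return stale
```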

This chapter describes a multi-camera surveillance network that can detect and track objects (principally pedestrians and vehicles) moving through an outdoor environment. The remainder of this chapter is divided into four sections. The first describes the architecture of our multi-camera surveillance system. The second considers the image analysis methods for detecting and tracking objects within a single camera view. The next section deals with the integration of information from multiple cameras. The final section describes the structure of the database.

Published (per publisher) 16.1.2006
Additional info XIII, 240 p.
Place of publication New York
Language English
Subject areas Mathematics / Computer Science > Computer Science > Operating Systems / Servers
Computer Science > Graphics / Design > Digital Image Processing
Mathematics / Computer Science > Computer Science > Networks
Computer Science > Software Development > User Interfaces (HCI)
Computer Science > Theory / Studies > Artificial Intelligence / Robotics
Computer Science > Further Topics > Hardware
Engineering
Keywords Ambient Intelligence • cognitive environment • computer vision • domestic care • face recognition • Smart environment • Speech Recognition • Surveillance
ISBN-10 0-387-22991-4 / 0387229914
ISBN-13 978-0-387-22991-1 / 9780387229911
PDF (watermark)
Size: 10.5 MB

DRM: digital watermark
This eBook contains a digital watermark and is therefore personalised to you. If the eBook is passed on to third parties without authorisation, it can be traced back to its source.

File format: PDF (Portable Document Format)
With its fixed page layout, PDF is particularly suited to specialist books with columns, tables and figures. A PDF can be displayed on almost all devices, but is only of limited use on small displays (smartphone, eReader).

System requirements:
PC/Mac: You can read this eBook on a PC or Mac. You will need a PDF viewer, e.g. Adobe Reader or Adobe Digital Editions.
eReader: This eBook can be read with (almost) all eBook readers. However, it is not compatible with the Amazon Kindle.
Smartphone/Tablet: Whether Apple or Android, you can read this eBook. You will need a PDF viewer, e.g. the free Adobe Digital Editions app.

Additional feature: online reading
In addition to downloading it, you can also read this eBook online in a web browser.

Buying eBooks from abroad
For tax law reasons we can only sell eBooks within Germany and Switzerland. Regrettably, we cannot fulfil eBook orders from other countries.
