
Connectionist Models (eBook)

Proceedings of the 1990 Summer School
eBook Download: PDF
2014 | 1st edition
416 pages
Elsevier Science (publisher)
978-1-4832-1448-1 (ISBN)
System requirements
€54.95 incl. VAT
(CHF 53.65)
eBooks are sold by Lehmanns Media GmbH (Berlin) at the euro price incl. VAT.
  • Available for immediate download
Connectionist Models contains the proceedings of the 1990 Connectionist Models Summer School held at the University of California, San Diego. The summer school provided a forum for students and faculty to assess the state of the art in connectionist modeling. Topics range from theoretical analysis of networks and empirical investigations of learning algorithms to speech and image processing, cognitive psychology, computational neuroscience, and VLSI design. Comprising 40 chapters, the book begins with an introduction to mean field, Boltzmann, and Hopfield networks, focusing on deterministic Boltzmann learning in networks with asymmetric connectivity; contrastive Hebbian learning in the continuous Hopfield model; and energy minimization and the satisfiability of propositional logic. Mean field networks that learn to discriminate temporally distorted strings are also described. Subsequent sections are devoted to reinforcement learning and genetic learning, along with temporal processing and modularity. Cognitive modeling and symbol processing as well as VLSI implementation are also discussed. This monograph will be of interest to students and academicians concerned with connectionist modeling.
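The opening chapters on Hopfield networks and energy minimization can be illustrated with a minimal sketch (not taken from the book): Hebbian storage of binary patterns and asynchronous updates that descend the network energy E = -½ Σᵢⱼ wᵢⱼ sᵢ sⱼ, so a noisy pattern settles back into the stored one.

```python
# Minimal Hopfield-network sketch (illustrative only; all names here are
# our own, not the book's). Units take values +1/-1; weights are symmetric
# with zero diagonal, stored by the Hebbian outer-product rule.

def train(patterns):
    """Hebbian storage: w[i][j] = (1/P) * sum_p p_i * p_j, zero diagonal."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def energy(w, s):
    """Hopfield energy E = -1/2 * sum_ij w_ij s_i s_j."""
    n = len(s)
    return -0.5 * sum(w[i][j] * s[i] * s[j]
                      for i in range(n) for j in range(n))

def recall(w, s, sweeps=5):
    """Asynchronous threshold updates; each flip can only lower the energy."""
    s = list(s)
    for _ in range(sweeps):
        for i in range(len(s)):
            h = sum(w[i][j] * s[j] for j in range(len(s)))
            s[i] = 1 if h >= 0 else -1
    return s

pattern = [1, -1, 1, -1, 1, -1]
w = train([pattern])
noisy = list(pattern)
noisy[0] = -noisy[0]                   # corrupt one unit
print(recall(w, noisy) == pattern)     # prints True: the attractor restores it
```

The contrastive Hebbian learning of Chapter 2 builds on exactly this machinery, adjusting the weights from the difference between a "clamped" and a "free" settling phase.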

Front Cover 1
Connectionist Models 2
Copyright Page 3
Table of Contents 4
Foreword 8
Participants in the 1990 Connectionist Models Summer School 10
List Of Accepted Students 11
Part I: Mean Field, Boltzmann, and Hopfield Networks 14
Chapter 1. Deterministic Boltzmann Learning in Networks with Asymmetric Connectivity 16
Abstract 16
1 INTRODUCTION 16
2 DETERMINISTIC BOLTZMANN LEARNING IN SYMMETRIC NETWORKS 16
3 ASYMMETRIC NETWORKS 17
4 SIMULATION RESULTS 19
5 DISCUSSION 21
Acknowledgement 21
References 21
APPENDIX 21
Chapter 2. Contrastive Hebbian Learning in the Continuous Hopfield Model 23
Abstract 23
1 INTRODUCTION 23
2 STABILITY OF ACTIVATIONS 24
3 CONTRASTIVE LEARNING 24
4 DISCUSSION 25
5 APPENDIX 27
Acknowledgements 30
References 30
Chapter 3. Mean field networks that learn to discriminate temporally distorted strings 31
Abstract 31
INTRODUCTION 31
PREVIOUS APPROACHES USING NEURAL NETS 32
THE LEARNING PROCEDURE FOR THE MEAN FIELD MODULES 32
THE TASK USED IN THE SIMULATIONS 33
RESULTS AND DISCUSSION 34
Acknowledgements 34
References 35
Chapter 4. Energy Minimization and the Satisfiability of Propositional Logic 36
Abstract 36
1 Introduction 36
2 Satisfiability and models of propositional formulas 37
3 Equivalence between WFFs 37
4 Conversion of a WFF into Conjunction of Triples Form (CTF) 37
5 Energy functions 38
6 The equivalence between high order models and low order models 39
7 Describing WFFs by energy functions 40
8 The penalty function 40
9 Mapping from a satisfiability problem to a minimization problem and vice versa 41
10 Summary, applications and conclusions 42
Acknowledgments 43
References 43
Part II: Reinforcement Learning 46
Chapter 5. On the Computational Economics of Reinforcement Learning 48
Abstract 48
1 INTRODUCTION 48
2 INDIRECT AND DIRECT ADAPTIVE CONTROL 49
3 MARKOV DECISION PROBLEMS 50
4 INDIRECT AND DIRECT LEARNING FOR MARKOV DECISION PROBLEMS 51
5 AN INDIRECT ALGORITHM 51
6 Q-LEARNING 52
7 SIMULATION RESULTS 53
8 DISCUSSION 54
9 CONCLUSION 55
Acknowledgements 55
References 55
Chapter 6. Reinforcement Comparison 58
Abstract 58
1 INTRODUCTION 58
2 THEORY 58
3 RESULTS 60
4 CONCLUSIONS 61
Acknowledgements 62
References 62
Chapter 7. Learning Algorithms for Networks with Internal and External Feedback 65
Abstract 65
1 Terminology 65
2 The Neural Bucket Brigade Algorithm 66
3 A Reinforcement Comparison Algorithm for Continually Running Fully Recurrent Probabilistic Networks 67
4 Two Interacting Fully Recurrent Self-Supervised Learning Networks for Reinforcement Learning 68
5 An Example for Learning Dynamic Selective Attention: Adaptive Focus Trajectories for Attentive Vision 71
6 An Adaptive Subgoal Generator for Planning Action Sequences 72
References 73
Part III: Genetic Learning 76
Chapter 8. Exploring Adaptive Agency I: Theory and Methods for Simulating the Evolution of Learning 78
Abstract 78
1 INTRODUCTION 78
2 NATURAL SELECTION AND THE EVOLUTION OF SUBSIDIARY ADAPTIVE PROCESSES 79
3 A BRIEF HISTORY OF LEARNING THEORY IN (COMPARATIVE) PSYCHOLOGY 80
4 HOW ECOLOGICAL LEARNING THEORY CAN INFORM CONNECTIONIST LEARNING THEORY 82
5 TOWARDS A TAXONOMY OF ADAPTIVE FUNCTIONS FOR LEARNING 84
6 A SIMULATION FRAMEWORK FOR EXPLORING ADAPTIVE AGENCY 86
7 A SIMPLE SCENARIO FOR THE EVOLUTION OF UNSUPERVISED LEARNING 88
8 PLANNED EXTENSIONS AND FUTURE RESEARCH 90
Acknowledgements 91
References 91
Chapter 9. The Evolution of Learning: An Experiment in Genetic Connectionism 94
Abstract 94
1 INTRODUCTION 94
2 EVOLUTION OF LEARNING IN NEURAL NETWORKS 96
3 RESULTS 99
4 DISCUSSION AND FURTHER DIRECTIONS 102
Acknowledgements 103
References 103
Chapter 10. Evolving Controls for Unstable Systems 104
Abstract 104
1 INTRODUCTION 104
2 NEURAL NETWORKS 107
3 GENETIC ALGORITHM 108
4 POLE BALANCING 109
5 DISCUSSION 112
APPENDIX 113
Acknowledgements 114
References 114
Part IV: Temporal Processing 116
Chapter 11. BACK-PROPAGATION, WEIGHT-ELIMINATION AND TIME SERIES PREDICTION 118
Abstract 118
1 INTRODUCTION 118
2 NETWORKS FOR TIME SERIES PREDICTION 118
3 SUNSPOTS 122
4 SUMMARY 128
Appendix: Parameters of the Network 128
References 129
Chapter 12. Predicting the Mackey-Glass Timeseries With Cascade-Correlation Learning 130
Abstract 130
1 THE MACKEY-GLASS TIMESERIES 130
2 THE CASCADE-CORRELATION LEARNING ALGORITHM 131
3 BENCHMARK RESULTS 132
4 Conclusions 135
Acknowledgments 135
References 136
Chapter 13. Learning in Recurrent Finite Difference Networks 137
Abstract 137
1 A FINITE DIFFERENCE ALGORITHM 137
2 SIMULATIONS 138
3 DISTORTED WAVE FORMS WITH THE RTRL ALGORITHM 140
4 DISCUSSION 142
Acknowledgements 143
References 143
Chapter 14. Temporal Backpropagation: An Efficient Algorithm for Finite Impulse Response Neural Networks 144
Abstract 144
1 INTRODUCTION 144
2 NETWORK STRUCTURE 144
3 TRAINING 146
4 APPLICATIONS 150
5 CONCLUSION 150
References 150
Part V: Theory and Analysis 152
Chapter 15. Optimal Dimensionality Reduction Using Hebbian Learning 154
Abstract 154
1 Introduction 154
2 Statement of The Problem 154
3 Main Results 155
4 Discussion 155
5 Appendix 156
6 References 157
Chapter 16. Basis-Function Trees for Approximation in High-Dimensional Spaces 158
Abstract 158
1 INTRODUCTION 158
2 NETWORK STRUCTURE 158
3 GROWING THE TREE 160
4 EXAMPLES 160
5 DISCUSSION 162
Acknowledgements 164
References 164
Chapter 17. Effects of Circuit Parameters on Convergence of Trinary Update Back-Propagation 165
Abstract 165
1. INTRODUCTION 165
2. TRIT ALGORITHM 166
3. EFFECTS OF CIRCUIT LIMITATIONS 167
4. RESULTS 168
5. CONCLUSIONS 169
References 170
Chapter 18. Equivalence Proofs for Multi-Layer Perceptron Classifiers and the Bayesian Discriminant Function 172
Abstract 172
1 INTRODUCTION 172
2 A GENERAL DESCRIPTION OF THE N-CLASS PROBLEM AND THE BAYESIAN DISCRIMINANT FUNCTION 173
3 REASONABLE ERROR MEASURES: BAYESIAN PERFORMANCE VIA ACCURATE ESTIMATION OF A POSTERIORI PROBABILITIES 173
4 CLASSIFICATION FIGURES OF MERIT: LIMITED BAYESIAN PERFORMANCE WITHOUT EXPLICIT ESTIMATION OF A POSTERIORI PROBABILITIES 181
5 COMMENTS ON THE APPLICABILITY OF THESE PROOFS TO THE STUDY OF GENERALIZATION IN MLP CLASSIFIERS 183
6 SUMMARY 184
Acknowledgments 185
References 185
Chapter 19. A Local Approach to Optimal Queries 186
Abstract 186
1 Overview 186
2 Concept Learning and Generalization 186
3 Generalization From Queries 187
4 Selective Sampling as Sequential Querying 188
5 Optimal queries 189
6 Conclusion 191
Acknowledgements 192
References 192
Part VI: Modularity 194
Chapter 20. A Modularization Scheme for Feedforward Networks 196
Abstract 196
1 INTRODUCTION 196
2 CONSTRAINING INTERNAL REPRESENTATIONS 197
3 APPLICATIONS 197
4 DISCUSSION 199
A LEARNING PROCEDURE ADJUSTMENTS 199
Acknowledgement 199
References 199
Chapter 21. A Compositional Connectionist Architecture 201
Abstract 201
1 INTRODUCTION 201
2 CompoNet: A COMPOSITIONAL CONNECTIONIST ARCHITECTURE 201
3 SIMULATION RESULTS 205
4 SUMMARY 210
References 210
Part VII: Cognitive Modeling and Symbol Processing 212
Chapter 22. From Rote Learning to System Building: Acquiring Verb Morphology in Children and Connectionist Nets 214
Abstract 214
1 INTRODUCTION 214
2 METHOD 218
3 RESULTS 221
4 DISCUSSION 226
5 CONCLUSION 230
References 230
Chapter 23. Parallel Mapping Circuitry in a Phonological Model 233
Abstract 233
1 Introduction 233
2 Sequence Manipulation Via a Change Buffer 233
3 Operation of The Mapping Matrix 235
4 Projections 237
5 Clustering 237
6 M P: The Big Picture 239
7 Discussion 239
Acknowledgements 240
References 240
Chapter 24. A Modular Neural Network Model of the Acquisition of Category Names in Children 241
Abstract 241
1 INTRODUCTION 241
2 EXPERIMENT 244
3 RESULTS AND DISCUSSION 245
4 CONCLUSIONS AND FUTURE WORK 247
Acknowledgments 247
References 247
Chapter 25. A Computational Model of Attentional Requirements in Sequence Learning 249
Abstract 249
1 BEHAVIORAL CHARACTERISTICS OF SEQUENCE LEARNING 249
2 A COMPUTATIONAL MODEL OF SEQUENCE LEARNING 250
3 GENERAL DISCUSSION 253
Acknowledgements 255
References 255
Chapter 26. Recall of Sequences of Items by a Neural Network 256
Abstract 256
1 INTRODUCTION 256
2 THE SIMULATION 256
3 RESULTS 260
4 DISCUSSION 262
Acknowledgements 265
References 265
Chapter 27. Binding, Episodic Short-Term Memory, and Selective Attention, Or Why are PDP Models Poor at Symbol Manipulation? 266
Abstract 266
1 INTRODUCTION 266
2 SYMBOL SYSTEMS 267
3 PARALLEL DISTRIBUTED PROCESSING 267
4 HUMAN COGNITION 267
5 THE BINDING PROBLEM OR WHAT GOES WITH WHAT 270
6 PADSYMA, THE NEW MODEL 271
7 PROPERTIES OF THE MODEL 275
8 DISCUSSION 275
Acknowledgements 276
References 276
Chapter 28. Analogical Retrieval Within a Hybrid Spreading-Activation Network 278
ABSTRACT 278
1. INTRODUCTION 278
2. CONNECTIONIST MODELS 279
3. A HYBRID SPREADING-ACTIVATION MODEL OF DISAMBIGUATION AND RETRIEVAL 280
4. SIMULATION RESULTS 285
5. DISCUSSION 286
ACKNOWLEDGEMENTS 288
REFERENCES 288
Chapter 29. Appropriate Uses of Hybrid Systems 290
Abstract 290
1 Introduction 290
2 Motivating a Hybrid Solution 291
3 Problems with Hybrids 292
4 The SCALIR System 292
5 Discussion and Conclusions 298
Acknowledgements 298
References 298
Chapter 30. Cognitive Map Construction and Use: A Parallel Distributed Processing Approach 300
Abstract 300
1 INTRODUCTION 300
2 THE PREDICTIVE MAP 300
3 THE ORIENTING SYSTEM 305
4 THE INVERSE MODEL 307
5 NAVIGATION 309
6 THEORETICAL MOTIVATION 310
7 CONCLUSIONS 312
Acknowledgements 312
References 312
Part VIII: Speech and Vision 314
Chapter 31. UNSUPERVISED DISCOVERY OF SPEECH SEGMENTS USING RECURRENT NETWORKS 316
Abstract 316
References 322
Chapter 32. Feature Extraction using an Unsupervised Neural Network 323
Abstract 323
1 How to construct optimal unsupervised feature extraction 323
2 Feature Extraction using ANN 324
3 Comparison with other feature extraction methods 326
4 Discussion 329
Acknowledgements 329
References 329
Mathematical Appendix 331
Chapter 33. Motor Control for Speech Skills: A Connectionist Approach 332
Abstract 332
1 INTRODUCTION 332
2 GENERAL PRINCIPLES 332
3 FROM ACOUSTIC SIGNAL TO ARTICULATORY GESTURES: THE ROLE OF CONSTRAINTS 334
4 GENERATION OF CONTROLLED OSCILLATORS BY SEQUENTIAL NETWORKS 337
5 NON-LINEAR TRANSFORMATION OF A SIMPLE OSCILLATOR TRAJECTORY INTO COMPLEX GESTURES 338
6 COMPLETE MODEL 338
7 CONCLUSION 339
Acknowledgements 339
References 339
Chapter 34. Extracting features from faces using compression networks: Face, identity, emotion, and gender recognition using holons 341
Abstract 341
1 INTRODUCTION 341
2 COMPRESSION NETWORKS 341
3 FACE RECOGNITION USING COMPRESSION NETWORKS 342
4 GROUNDING MEANING IN PERCEPTION 346
5 CONCLUSIONS 349
References 349
Chapter 35. The Development of Topography and Ocular Dominance 351
Abstract 351
1 INTRODUCTION 351
2 COMPUTATIONAL MODELS 352
3 THE BINOCULAR NEURAL ACTIVITY MODEL 354
4 THE ELASTIC NET APPROACH 356
5 DISCUSSION 359
6 CONCLUSIONS 359
Acknowledgements 360
References 360
Chapter 36. On Modeling Some Aspects of Higher Level Vision 363
Abstract 363
1. BACKGROUND VIEWS AND ASSUMPTIONS 363
2. PERCEPTUAL PRIMING 365
3. EXPECTATION AS IMAGINATIVE SELF-PRIMING 366
4. THE IMAGE NORMALIZATION PROBLEM 367
5. IMAGE SEGMENTATION AGAIN 369
Acknowledgements 370
References 370
Part IX: Biology 374
Chapter 37. Modeling cortical area 7a using Stochastic Real-Valued (SRV) units 376
Abstract 376
1 INTRODUCTION 376
2 NETWORK STRUCTURE AND TRAINING 377
3 SIMULATION RESULTS 379
4 DISCUSSION AND CONCLUSIONS 380
Acknowledgements 380
References 381
Chapter 38. Neuronal signal strength is enhanced by rhythmic firing 382
Abstract 382
1 INTRODUCTION 382
2 SIMULATION OF A CORTICAL PYRAMIDAL NEURON 382
3 COMPETITIVE EFFECTS IN VISUAL ATTENTION 383
4 APPENDIX: SIMULATION 383
Acknowledgements 386
References 387
Part X: VLSI Implementation 390
Chapter 39. An Analog VLSI Neural Network Cocktail Party Processor 392
Abstract 392
1 INTRODUCTION 392
2 BINDING BY PHASE CORRELATION 392
3 THE MALSBURG & SCHNEIDER MODEL
4 AN ANALOG VLSI IMPLEMENTATION 395
5 PRESENT STATUS AND FUTURE DIRECTIONS 398
Acknowledgements 398
References 399
Chapter 40. A VLSI Neural Network with On-Chip Learning 400
Abstract 400
1 THE NETWORK MODEL 400
2 TRAINING 400
3 HARDWARE REALIZATION 402
4 PARALLEL RANDOM NUMBER GENERATION 405
5 CONCLUDING REMARKS 406
Acknowledgements 407
References 407
Index 414

Publication date (per publisher) 12 May 2014
Language English
Subject areas Humanities ► Psychology ► Psychological testing
Social sciences ► Politics / Public administration
ISBN-10 1-4832-1448-6 / 1483214486
ISBN-13 978-1-4832-1448-1 / 9781483214481
PDF (Adobe DRM)
Size: 54.6 MB

Copy protection: Adobe DRM
Adobe DRM is a copy-protection scheme intended to protect the eBook against misuse. During download, the eBook is authorized to your personal Adobe ID. You can then read the eBook only on devices that are also registered to your Adobe ID.

File format: PDF (Portable Document Format)
With its fixed page layout, PDF is particularly suitable for reference books with columns, tables, and figures. A PDF can be displayed on almost any device, but is only of limited use on small displays (smartphone, eReader).

System requirements:
PC/Mac: You can read this eBook on a PC or Mac. You need an Adobe ID and the free software Adobe Digital Editions. We advise against using the OverDrive Media Console, as it frequently causes problems with Adobe DRM.
eReader: This eBook can be read on (almost) all eBook readers. It is not compatible with the Amazon Kindle, however.
Smartphone/Tablet: Whether Apple or Android, you can read this eBook. You need an Adobe ID and a free app.

Buying eBooks from abroad
For tax reasons we can only sell eBooks within Germany and Switzerland. Unfortunately, we cannot fulfill eBook orders from other countries.
