
Continuous-Time Markov Decision Processes (eBook)

Theory and Applications
eBook Download: PDF
2009
XVIII, 234 pages
Springer Berlin (publisher)
978-3-642-02547-1 (ISBN)


Continuous-Time Markov Decision Processes - Xianping Guo, Onésimo Hernández-Lerma
€106.99 incl. VAT
(CHF 104.50)
eBook sales are handled by Lehmanns Media GmbH (Berlin) at the price in euros incl. VAT.
  • Download available immediately

Continuous-time Markov decision processes (MDPs), also known as controlled Markov chains, are used for modeling decision-making problems that arise in operations research (for instance, inventory, manufacturing, and queueing systems), computer science, communications engineering, control of populations (such as fisheries and epidemics), and management science, among many other fields. This volume provides a unified, systematic, self-contained presentation of recent developments on the theory and applications of continuous-time MDPs. The MDPs in this volume include most of the cases that arise in applications, because they allow unbounded transition and reward/cost rates. Much of the material appears for the first time in book form.
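The queueing-control setting mentioned above can be made concrete with a small numerical sketch. The example below is purely illustrative and not taken from the book: a birth-death continuous-time MDP over queue lengths 0..N, where each action selects a service rate, solved for the discounted-cost criterion by uniformization followed by value iteration. All rates and costs are assumed values chosen for the demonstration.

```python
import numpy as np

# Illustrative continuous-time MDP: queue-length states 0..N, actions
# choose a service rate. Solved by uniformization (reduction to an
# equivalent discrete-time MDP) plus value iteration. All parameters
# below are made-up for the sketch.
N = 10                      # queue capacity
lam = 1.0                   # arrival rate
mus = np.array([0.8, 2.0])  # action a uses service rate mus[a]
h = 1.0                     # holding cost rate per waiting customer
c = np.array([0.0, 0.5])    # extra cost rate of the fast server
alpha = 0.1                 # continuous-time discount rate

Lambda = lam + mus.max()          # uniformization constant >= all total rates
beta = Lambda / (alpha + Lambda)  # equivalent discrete-time discount factor

S, A = N + 1, len(mus)
P = np.zeros((A, S, S))   # uniformized transition kernel
C = np.zeros((S, A))      # equivalent one-step costs
for a in range(A):
    for i in range(S):
        up = lam if i < N else 0.0       # arrival (blocked at capacity)
        down = mus[a] if i > 0 else 0.0  # service completion
        if i < N:
            P[a, i, i + 1] = up / Lambda
        if i > 0:
            P[a, i, i - 1] = down / Lambda
        P[a, i, i] = 1.0 - (up + down) / Lambda  # fictitious self-loop
        C[i, a] = (h * i + c[a]) / (alpha + Lambda)

# Value iteration on the equivalent discrete-time MDP.
V = np.zeros(S)
for _ in range(5000):
    Q = C + beta * np.einsum('aij,j->ia', P, V)
    V_new = Q.min(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new
policy = Q.argmin(axis=1)  # optimal stationary policy (0 = slow, 1 = fast)
```

Note that uniformization requires bounded transition rates; a key point of the book is precisely that its framework admits unbounded transition and cost rates, where this reduction is no longer available and different techniques are needed.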



Onésimo Hernández-Lerma received the Science and Arts National Award from the Government of Mexico in 2001, an honorary doctorate from the University of Sonora in 2003, and the Scopus Prize from Elsevier in 2008. Xianping Guo received the He-Pan-Qing-Yi Best Paper Award from the 7th World Congress on Intelligent Control and Automation in 2008.


Preface 6
Contents 8
Notation 11
Abbreviations 14
Introduction and Summary 15
Introduction 15
Preliminary Examples 15
Summary of the Following Chapters 20
Continuous-Time Markov Decision Processes 23
Introduction 23
Notation 23
The Control Model 24
Evolution of the Control System 25
Continuous-Time Markov Decision Processes 27
Basic Optimality Criteria 30
The Dynamic Programming Approach 32
Average Optimality for Finite Models 33
Introduction 33
n-bias Optimality Criteria 34
Difference Formulas of n-biases 37
Characterization of n-bias Policies 43
Computation of n-bias Optimal Policies 50
Notation 50
The Policy Iteration Algorithm for Average Optimality 50
The 0-bias Policy Iteration Algorithm 53
n-bias Policy Iteration Algorithms 57
The Linear Programming Approach 60
Linear Programming for Ergodic Models 60
Linear Programming for Multichain Models 63
Notes 66
Discount Optimality for Nonnegative Costs 68
Introduction 68
The Nonnegative Model 68
Preliminaries 69
The Discounted Cost Optimality Equation 73
Existence of Optimal Policies 76
Approximation Results 76
The Policy Iteration Approach 79
Examples 81
Notes 82
Average Optimality for Nonnegative Costs 84
Introduction 84
The Average-Cost Criterion 85
The Minimum Nonnegative Solution Approach 86
The Average-Cost Optimality Inequality 89
The Average-Cost Optimality Equation 93
Examples 94
Notes 97
Discount Optimality for Unbounded Rewards 100
Introduction 100
The Discounted-Reward Optimality Equation 102
Discount Optimal Stationary Policies 108
A Value Iteration Algorithm 111
Examples 111
Notes 115
An Open Problem 115
Average Optimality for Unbounded Rewards 117
Introduction 117
Exponential Ergodicity Conditions 118
The Existence of AR Optimal Policies 121
The Policy Iteration Algorithm 125
The Bias of a Stationary Policy 125
Examples 131
Notes 136
Average Optimality for Pathwise Rewards 138
Introduction 138
The Optimal Control Problem 140
Optimality Conditions and Preliminaries 140
The Existence of PAR Optimal Policies 142
Policy and Value Iteration Algorithms 149
An Example 150
Notes 153
Advanced Optimality Criteria 154
Bias and Weakly Overtaking Optimality 154
Sensitive Discount Optimality 158
Blackwell Optimality 170
Notes 171
Variance Minimization 174
Introduction 174
Preliminaries 175
Computation of the Average Variance 175
Variance Minimization 181
Examples 182
Notes 184
Constrained Optimality for Discount Criteria 185
The Model with a Constraint 185
Preliminaries 187
Notation 190
Proof of Theorem 11.4 192
An Example 194
Notes 196
Constrained Optimality for Average Criteria 197
Average Optimality with a Constraint 197
Preliminaries 198
Proof of Theorem 12.4 202
An Example 202
Notes 204
Appendix A 205
Limit Theorems 205
Results from Measure Theory 207
Appendix B 212
Continuous-Time Markov Chains 212
Stationary Distributions and Ergodicity 215
Appendix C 218
The Construction of Transition Functions 218
Ergodicity Based on the Q-Matrix 223
Dynkin's Formula 227
References 229
Index 236

Publication date (per publisher) 18.9.2009
Series Stochastic Modelling and Applied Probability
Additional info XVIII, 234 p.
Place of publication Berlin
Language English
Subject areas Mathematics / Computer Science · Mathematics · Statistics
Technology
Economics · Business Administration / Management · Planning / Organisation
Keywords 90C40, 93E20, 90B05, 90B22, 60J27, 60K30 • controlled Markov chains • Markov chain • Markov decision process • Markov decision processes • Operations Research • stochastic control • stochastic dynamic programming
ISBN-10 3-642-02547-1 / 3642025471
ISBN-13 978-3-642-02547-1 / 9783642025471
PDF (watermarked)
Size: 2.3 MB

DRM: digital watermark
This eBook contains a digital watermark and is thus personalised to you. If the eBook is improperly passed on to third parties, it can be traced back to its source.

File format: PDF (Portable Document Format)
With its fixed page layout, PDF is particularly suited to technical books with columns, tables, and figures. A PDF can be displayed on almost any device, but is only suitable to a limited extent for small displays (smartphone, eReader).

System requirements:
PC/Mac: You can read this eBook on a PC or Mac. You need a PDF viewer, e.g. Adobe Reader or Adobe Digital Editions.
eReader: This eBook can be read on (almost) all eBook readers. However, it is not compatible with the Amazon Kindle.
Smartphone/tablet: Whether Apple or Android, you can read this eBook. You need a PDF viewer, e.g. the free Adobe Digital Editions app.

Buying eBooks from abroad
For tax law reasons, we can sell eBooks only within Germany and Switzerland. Regrettably, we cannot fulfil eBook orders from other countries.
