Reinforcement Learning (eBook)
XVI, 310 pages
Springer International Publishing (publisher)
978-3-031-28394-9 (ISBN)
This book offers a thorough introduction to the basics and the scientific and technological innovations involved in the modern study of reinforcement-learning-based feedback control. The authors address a wide variety of systems, including nonlinear, networked, multi-agent, and multi-player systems.
A concise description of classical reinforcement learning (RL), the basics of optimal control with dynamic programming and network control architectures, and a brief introduction to typical algorithms build the foundation for the remainder of the book. Extensive research on data-driven robust control for nonlinear systems with unknown dynamics and multi-player systems follows. Data-driven optimal control of networked single- and multi-player systems leads readers into the development of novel RL algorithms with increased learning efficiency. The book concludes with a treatment of how these RL algorithms can achieve optimal synchronization policies for multi-agent systems with unknown model parameters and how game RL can solve problems of optimal operation in various process industries. Illustrative numerical examples and complex process control applications emphasize the realistic usefulness of the algorithms discussed.
The combination of practical algorithms, theoretical analysis and comprehensive examples presented in Reinforcement Learning will interest researchers and practitioners studying or using optimal and adaptive control, machine learning, artificial intelligence, and operations research, whether advancing the theory or applying it in mineral-process, chemical-process, power-supply or other industries.
Professor Jinna Li received the M.S. and Ph.D. degrees from Northeastern University, Shenyang, China, in 2006 and 2009, respectively. She is an associate professor at Shenyang University of Chemical Technology, Shenyang, China. From April 2009 to April 2011, she carried out postdoctoral research at the Lab of Industrial Control Networks and Systems, Shenyang Institute of Automation, Chinese Academy of Sciences. From June 2014 to June 2015, she was a Visiting Scholar, funded by the China Scholarship Council, at the Energy Research Institute, Nanyang Technological University, Singapore. From September 2015 to June 2016, she was a Domestic Young Core Visiting Scholar, funded by the Ministry of Education of China, at the State Key Lab of Synthetical Automation for Process Industries, Northeastern University. From January 2017 to July 2017, she was a Visiting Scholar with the School of Electrical and Electronic Engineering, the University of Manchester, UK. Her current research interests include neural networks, reinforcement learning, optimal operational control, distributed optimization control, and data-based control. She has authored two P.R. China patents, more than 40 journal papers, more than 20 conference papers, and one book. Dr. Li is a Senior Member of the Institute of Electrical and Electronics Engineers (IEEE). She has led three projects funded by the National Natural Science Foundation of China and five projects funded at the provincial level in P.R. China.
Frank L. Lewis is a Distinguished Scholar Professor and Moncrief-O'Donnell Chair at the University of Texas at Arlington's Automation & Robotics Research Institute. He obtained his Bachelor's degree in Physics/EE and his MSEE at Rice University, his MS in Aeronautical Engineering from the University of West Florida, and his Ph.D. from Georgia Tech. He received the Fulbright Research Award and the Outstanding Service Award from the Dallas IEEE Section, and was selected as Engineer of the Year by the Fort Worth IEEE Section. He is an elected Guest Consulting Professor at South China University of Technology and Shanghai Jiao Tong University. He is a Fellow of the IEEE, a Fellow of IFAC, a Fellow of the U.K. Institute of Measurement & Control, and a U.K. Chartered Engineer. His current research interests include distributed control on graphs, neural and fuzzy systems, and intelligent control.
Publication date (per publisher) | 25.8.2023 |
Series | Advances in Industrial Control |
Additional information | XVI, 310 pp., 114 illustrations, 110 in color |
Language | English |
Subject areas | Natural Sciences ► Chemistry |
 | Natural Sciences ► Physics / Astronomy |
 | Engineering ► Civil Engineering |
 | Engineering ► Electrical Engineering / Energy Technology |
Keywords | Adaptive Dynamic Programming • Data-driven control • model-free control • Optimal Operational Control • Process Engineering • Reinforcement Learning for Optimal Control |
ISBN-10 | 3-031-28394-5 / 3031283945 |
ISBN-13 | 978-3-031-28394-9 / 9783031283949 |
Size: 7.0 MB
DRM: digital watermark
This eBook contains a digital watermark and is therefore personalized for you. If the eBook is improperly passed on to third parties, it can be traced back to its source.
File format: PDF (Portable Document Format)
With its fixed page layout, PDF is particularly suitable for specialist books with columns, tables, and figures. A PDF can be displayed on almost any device but is only of limited use on small screens (smartphone, eReader).
System requirements:
PC/Mac: You can read this eBook on a PC or Mac. You will need a PDF viewer, e.g. Adobe Reader or Adobe Digital Editions.
eReader: This eBook can be read on (almost) all eBook readers. However, it is not compatible with the Amazon Kindle.
Smartphone/Tablet: Whether Apple or Android, you can read this eBook. You will need a PDF viewer, e.g. the free Adobe Digital Editions app.
Buying eBooks from abroad
For tax reasons, we can sell eBooks only within Germany and Switzerland. Regrettably, we cannot fulfill eBook orders from other countries.