Reinforcement Learning
Springer International Publishing (publisher)
978-3-031-28396-3 (ISBN)
This book offers a thorough introduction to the fundamentals of reinforcement-learning-based feedback control and to the scientific and technological innovations driving its modern study. The authors address a wide variety of systems, including nonlinear, networked, multi-agent, and multi-player systems.
A concise description of classical reinforcement learning (RL), the basics of optimal control with dynamic programming and network control architectures, and a brief introduction to typical algorithms build the foundation for the remainder of the book. Extensive research on data-driven robust control for nonlinear systems with unknown dynamics and for multi-player systems follows. Data-driven optimal control of networked single- and multi-player systems then leads readers into the development of novel RL algorithms with increased learning efficiency. The book concludes with a treatment of how these RL algorithms can achieve optimal synchronization policies for multi-agent systems with unknown model parameters and how game RL can solve problems of optimal operation in various process industries. Illustrative numerical examples and complex process-control applications emphasize the practical usefulness of the algorithms discussed.
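For readers unfamiliar with the data-driven setting described above, the following minimal sketch illustrates the general flavor of model-free optimal control: a Q-learning policy iteration for a discrete-time linear-quadratic regulator, in which the controller gain is improved from measured data without using the plant model in the learning step. This is an illustrative example only, not one of the book's algorithms; the plant matrices A and B, the cost weights Qc and Rc, the exploration noise level, and the iteration counts are all hypothetical and serve only to simulate data.

```python
import numpy as np

# Purely illustrative sketch of model-free Q-learning for a discrete-time
# LQR problem: the learner fits a quadratic Q-function from data and
# improves the feedback gain, never using A or B in the learning step.

rng = np.random.default_rng(0)
A = np.array([[0.9, 0.1], [0.0, 0.8]])   # hypothetical plant, used only to simulate data
B = np.array([[0.0], [0.1]])
Qc, Rc = np.eye(2), np.eye(1)            # quadratic stage-cost weights
n, m = 2, 1

def svec(M):
    """Regressor for z' H z with H symmetric: upper triangle, off-diagonals doubled."""
    i, j = np.triu_indices(M.shape[0])
    return np.where(i == j, 1.0, 2.0) * M[i, j]

K = np.zeros((m, n))                     # initial (stabilizing) feedback gain
for _ in range(10):                      # policy-iteration loop
    Phi, y = [], []
    x = rng.standard_normal(n)
    for k in range(200):                 # collect data under the current policy
        if k % 20 == 0:
            x = rng.standard_normal(n)   # occasional resets keep the data exciting
        u = -K @ x + 0.1 * rng.standard_normal(m)   # exploration noise
        x_next = A @ x + B @ u
        z = np.concatenate([x, u])
        z_next = np.concatenate([x_next, -K @ x_next])
        # Bellman equation for the Q-function kernel H:
        #   z' H z - z_next' H z_next = x' Qc x + u' Rc u
        Phi.append(svec(np.outer(z, z)) - svec(np.outer(z_next, z_next)))
        y.append(x @ Qc @ x + u @ Rc @ u)
        x = x_next
    h, *_ = np.linalg.lstsq(np.array(Phi), np.array(y), rcond=None)
    H = np.zeros((n + m, n + m))
    H[np.triu_indices(n + m)] = h        # rebuild the symmetric kernel from its vectorization
    H = H + H.T - np.diag(np.diag(H))
    K = np.linalg.solve(H[n:, n:], H[n:, :n])   # policy improvement: u = -K x

print("learned feedback gain K:\n", K)
```

The book develops far more sophisticated variants of such ideas, including off-policy, interleaved, and game-theoretic RL schemes for nonlinear, networked, and multi-player systems.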
The combination of practical algorithms, theoretical analysis and comprehensive examples presented in Reinforcement Learning will interest researchers and practitioners studying or using optimal and adaptive control, machine learning, artificial intelligence, and operations research, whether advancing the theory or applying it in mineral-process, chemical-process, power-supply or other industries.
Professor Jinna Li received her M.S. and Ph.D. degrees from Northeastern University, Shenyang, China, in 2006 and 2009, respectively. She is an associate professor at Shenyang University of Chemical Technology, Shenyang, China. From April 2009 to April 2011, she carried out postdoctoral research at the Laboratory of Industrial Control Networks and Systems, Shenyang Institute of Automation, Chinese Academy of Sciences. From June 2014 to June 2015, she was a Visiting Scholar, supported by the China Scholarship Council, at the Energy Research Institute, Nanyang Technological University, Singapore. From September 2015 to June 2016, she was a Domestic Young Core Visiting Scholar, supported by the Ministry of Education of China, at the State Key Laboratory of Synthetical Automation for Process Industries, Northeastern University. From January 2017 to July 2017, she was a Visiting Scholar at the School of Electrical and Electronic Engineering, The University of Manchester, UK. Her current research interests include neural networks, reinforcement learning, optimal operational control, distributed optimization control, and data-based control. She holds two Chinese patents and has authored more than 40 journal papers, more than 20 conference papers, and one book. Dr. Li is a Senior Member of the Institute of Electrical and Electronics Engineers (IEEE). She has led three projects funded by the National Natural Science Foundation of China and five projects supported by provincial funding in China.
Frank L. Lewis is a Distinguished Scholar Professor and Moncrief-O'Donnell Chair at the University of Texas at Arlington's Automation & Robotics Research Institute. He obtained his Bachelor's degree in Physics/Electrical Engineering and his M.S. in Electrical Engineering at Rice University, his M.S. in Aeronautical Engineering from the University of West Florida, and his Ph.D. from the Georgia Institute of Technology. He received the Fulbright Research Award and the Outstanding Service Award from the Dallas IEEE Section, and was selected as Engineer of the Year by the Fort Worth IEEE Section. He is an elected Guest Consulting Professor at South China University of Technology and Shanghai Jiao Tong University. He is a Fellow of the IEEE, a Fellow of IFAC, a Fellow of the U.K. Institute of Measurement & Control, and a U.K. Chartered Engineer. His current research interests include distributed control on graphs, neural and fuzzy systems, and intelligent control.
Contents:
1. Background on Reinforcement Learning and Optimal Control
2. H-infinity Control Using Reinforcement Learning
3. Robust Tracking Control and Output Regulation
4. Interleaved Robust Reinforcement Learning
5. Optimal Networked Controller and Observer Design
6. Interleaved Q-Learning
7. Off-Policy Game Reinforcement Learning
8. Game Reinforcement Learning for Process Industries
Publication date | 27 July 2024 |
---|---|
Series | Advances in Industrial Control |
Additional info | XVI, 310 pages, 114 illustrations (110 in color) |
Place of publication | Cham |
Language | English |
Dimensions | 155 x 235 mm |
Subject area | Technology ► Electrical Engineering / Energy Technology |
Keywords | Adaptive Dynamic Programming • Data-driven control • model-free control • Optimal Operational Control • Process Engineering • Reinforcement Learning for Optimal Control |
ISBN-10 | 3-031-28396-1 / 3031283961 |
ISBN-13 | 978-3-031-28396-3 / 9783031283963 |
Condition | New |