Multi-sensor Fusion for Autonomous Driving (eBook)
XV, 232 pages
Springer Nature Singapore (publisher)
978-981-99-3280-1 (ISBN)
Although sensor fusion is an essential prerequisite for autonomous driving, it entails a number of challenges and potential risks. For example, the commonly used deep fusion networks lack interpretability and robustness. To address these fundamental issues, this book introduces the mechanism of deep fusion models from the perspective of uncertainty and models the initial risks in order to build a robust fusion architecture.
This book reviews the multi-sensor data fusion methods applied in autonomous driving, and its main body is divided into three parts: Basic, Method, and Advance. Starting from the mechanism of data fusion, it reviews the development of automated perception and data fusion technology and gives a comprehensive overview of the perception tasks based on multimodal data fusion. The book then proposes a series of innovative algorithms for autonomous driving perception tasks that effectively improve the accuracy and robustness of these tasks and offer ideas for addressing the open challenges in multi-sensor fusion. Furthermore, to move from technical research toward intelligent connected collaboration applications, it explores topics such as practical fusion datasets, vehicle-road collaboration, and fusion mechanisms.
In contrast to the existing literature on data fusion and autonomous driving, this book concentrates on deep fusion methods for perception-related tasks, emphasizes the theoretical explanation of these methods, and fully considers the relevant scenarios in engineering practice. By helping readers acquire an in-depth understanding of fusion methods and theories in autonomous driving, it can serve as a textbook for graduate students and scholars in related fields or as a reference guide for engineers who wish to apply deep fusion methods.
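As a purely illustrative aside (not taken from the book), the uncertainty perspective mentioned in the blurb can be previewed with classical inverse-variance weighting of two independent sensor estimates; the function name and the sensor values below are hypothetical, and this sketch stands in for, rather than reproduces, the deep fusion architectures the book develops.

```python
# Minimal sketch: fusing two noisy estimates of the same quantity by
# inverse-variance (precision-weighted) averaging, a classical precursor
# to uncertainty-aware deep fusion. All names and values are illustrative.
import numpy as np

def fuse_estimates(means, variances):
    """Precision-weighted fusion of independent scalar estimates."""
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    precisions = 1.0 / variances               # a more certain sensor gets more weight
    fused_var = 1.0 / precisions.sum()         # fused uncertainty shrinks with each sensor
    fused_mean = fused_var * (precisions * means).sum()
    return fused_mean, fused_var

# Example: a camera-based and a LiDAR-based range estimate of the same object
fused_mean, fused_var = fuse_estimates(means=[10.2, 9.8], variances=[0.5, 0.1])
print(f"fused range: {fused_mean:.2f} m, variance: {fused_var:.3f}")
```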
Prof. Xinyu Zhang is an associate professor at the School of Vehicle and Mobility, Tsinghua University. He was a research fellow at the University of Cambridge, UK, in 2008. Since 2014, he has served as Deputy Secretary General of the Chinese Association for Artificial Intelligence. As director of the Tsinghua Mengshi team, he invented the first amphibious autonomous flying car in China and proposed a new method for collaboratively fusing perception information and motion information in three-dimensional traffic. His research interests include multi-modal fusion, unmanned ground vehicles, and flying cars.
Prof. Jun Li is a professor at the School of Vehicle and Mobility, Tsinghua University. He is the President of the China Society of Automotive Engineers and an Academician of the Chinese Academy of Engineering. Based at the Intelligent Vehicle Design and Safety Technology Research Center, he leads a team working on the core technologies of intelligent driving, mainly carrying out systems engineering research on the integration of smart city, smart transportation, and smart vehicle (SCSTSV). His research focuses on cutting-edge technologies such as intelligent shared-vehicle design, safety of the intended functionality, 5G vehicle equipment, and fusion perception, aiming to solve the core problems of intelligent driving and improve the competitiveness of intelligent connected vehicles.
Dr. Zhiwei Li is a master's supervisor at Beijing University of Chemical Technology. In 2020, he began postdoctoral research with Academician Jun Li at Tsinghua University. His main research interests include computer vision, intelligent perception and autonomous driving, and robot system architecture.
Prof. Huaping Liu is a professor at the Department of Computer Science and Technology, Tsinghua University. He serves as an associate editor for various journals, including IEEE Transactions on Automation Science and Engineering, IEEE Transactions on Industrial Informatics, IEEE Robotics and Automation Letters, Neurocomputing, and Cognitive Computation. He has served as an associate editor for ICRA and IROS and on the IJCAI, RSS, and IJCNN Program Committees. His main research interests are robotic perception and learning.
Mo Zhou is currently a doctoral candidate at the School of Vehicle and Mobility, Tsinghua University, supervised by Prof. Jun Li. She received her MS degree in image and video communications and signal processing from the University of Bristol, Bristol, UK. Her research interests include intelligent vehicles, deep learning, environmental perception, and driving safety.
Dr. Li Wang is a postdoctoral fellow at the State Key Laboratory of Automotive Safety and Energy and the School of Vehicle and Mobility, Tsinghua University. He received his PhD degree in mechatronic engineering from the State Key Laboratory of Robotics and System, Harbin Institute of Technology, in 2020, and was a visiting scholar at Nanyang Technological University for two years. He is the author of more than 20 SCI/EI articles. His research interests include autonomous-driving perception, 3D robot vision, and multi-modal fusion.
Zhenhong Zou is an assistant researcher at the School of Vehicle and Mobility, Tsinghua University. He received his BS degree in Information and Computation Science from Beihang University and was subsequently a visiting student at the University of California, Los Angeles, USA, supervised by Prof. Deanna Needell. His research interests include autonomous driving and multi-sensor fusion.
Published (per publisher) | 28.8.2023 |
---|---|
Additional information | XV, 232 p. 1 illus. |
Language | English |
Subject areas | Computer Science ► Databases ► Data Warehouse / Data Mining |
| Mathematics / Computer Science ► Computer Science ► Graphics / Design |
| Computer Science ► Theory / Studies ► Artificial Intelligence / Robotics |
| Engineering ► Electrical Engineering / Energy Technology |
| Engineering ► Vehicle Construction / Shipbuilding |
Keywords | Autonomous Driving • computer vision • data fusion • machine learning • multimodal perception • Robotics • sensor management • uncertainty quantification |
ISBN-10 | 981-99-3280-7 / 9819932807 |
ISBN-13 | 978-981-99-3280-1 / 9789819932801 |
Size: 9.0 MB
DRM: digital watermark
This eBook contains a digital watermark and is therefore personalized for you. If the eBook is passed on to third parties without authorization, it can be traced back to its source.
File format: PDF (Portable Document Format)
With its fixed page layout, PDF is particularly well suited to technical books with columns, tables, and figures. A PDF can be displayed on almost any device but is only of limited use on small screens (smartphones, eReaders).
System requirements:
PC/Mac: You can read this eBook on a PC or Mac. You will need a PDF viewer such as Adobe Reader or Adobe Digital Editions.
eReader: This eBook can be read on (almost) all eBook readers. However, it is not compatible with the Amazon Kindle.
Smartphone/Tablet: Whether Apple or Android, you can read this eBook. You will need a PDF viewer such as the free Adobe Digital Editions app.
Buying eBooks from abroad
For tax law reasons, we can only sell eBooks within Germany and Switzerland. Regrettably, we cannot fulfill eBook orders from other countries.