Deep Generative Modeling
Springer International Publishing (publisher)
978-3-030-93160-5 (ISBN)
This textbook tackles the problem of formulating AI systems by combining probabilistic modeling and deep learning. Moreover, it goes beyond typical predictive modeling and brings together supervised learning and unsupervised learning. The resulting paradigm, called deep generative modeling, utilizes the generative perspective on perceiving the surrounding world. It assumes that each phenomenon is driven by an underlying generative process that defines a joint distribution over random variables and their stochastic interactions, i.e., how events occur and in what order. The adjective "deep" comes from the fact that the distribution is parameterized using deep neural networks. There are two distinct traits of deep generative modeling. First, the application of deep neural networks allows rich and flexible parameterization of distributions. Second, the principled manner of modeling stochastic dependencies using probability theory ensures rigorous formulation and prevents potential flaws in reasoning. Moreover, probability theory provides a unified framework where the likelihood function plays a crucial role in quantifying uncertainty and defining objective functions.
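As a minimal illustration of this idea (a sketch of my own, not code from the book), the snippet below parameterizes a joint distribution over a handful of binary variables with a small neural network, using an autoregressive factorization, one of the model families the book covers, and trains it by minimizing the negative log-likelihood. All class names, dimensions, and hyperparameters are illustrative assumptions.

```python
# Sketch (not from the book): a tiny autoregressive model
# p(x) = prod_d p(x_d | x_<d) over D binary variables, with each
# conditional parameterized by a small neural network. The negative
# log-likelihood serves as the training objective.
import torch
import torch.nn as nn

D = 4  # number of binary random variables (illustrative choice)

class TinyAutoregressiveModel(nn.Module):
    def __init__(self, dim=D, hidden=16):
        super().__init__()
        # One small conditional network per dimension: p(x_d | x_<d)
        self.nets = nn.ModuleList([
            nn.Sequential(nn.Linear(max(d, 1), hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for d in range(dim)
        ])
        self.dim = dim

    def log_prob(self, x):
        # log p(x) = sum_d log p(x_d | x_<d), each term a Bernoulli log-likelihood
        logp = 0.0
        for d in range(self.dim):
            context = x[:, :d] if d > 0 else torch.zeros(x.size(0), 1)
            logits = self.nets[d](context).squeeze(-1)
            logp = logp + torch.distributions.Bernoulli(logits=logits).log_prob(x[:, d])
        return logp

model = TinyAutoregressiveModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

x = torch.bernoulli(0.5 * torch.ones(32, D))  # toy data batch
loss = -model.log_prob(x).mean()              # negative log-likelihood objective
loss.backward()
optimizer.step()
```

The likelihood here plays exactly the dual role described above: it defines the training objective and assigns a probability to any observation, which is what allows the model to quantify uncertainty.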
Deep Generative Modeling is designed to appeal to curious students, engineers, and researchers with a modest mathematical background covering undergraduate calculus, linear algebra, and probability theory, together with the basics of machine learning, deep learning, and programming in Python and PyTorch (or other deep learning libraries). It will appeal to students and researchers from a variety of backgrounds, including computer science, engineering, data science, physics, and bioinformatics, who wish to become familiar with deep generative modeling. To engage the reader, the book introduces fundamental concepts with specific examples and code snippets. The full code accompanying the book is available on GitHub.
The ultimate aim of the book is to outline the most important techniques in deep generative modeling and, eventually, enable readers to formulate new models and implement them.
Jakub Tomczak has been an assistant professor of Artificial Intelligence in the Computational Intelligence group at Vrije Universiteit Amsterdam since November 2019. Before that, from October 2018 to October 2019, he was a deep learning researcher (Staff Engineer) at Qualcomm AI Research in Amsterdam. From October 2016 to September 2018, he was a Marie Sklodowska-Curie Individual Fellow in Prof. Max Welling's group at the University of Amsterdam. He obtained his Ph.D. in machine learning from the Wroclaw University of Technology. His research interests include probabilistic modeling, deep learning, approximate Bayesian modeling, and deep generative modeling (with a special focus on Variational Auto-Encoders and flow-based models).
Contents:
Why Deep Generative Modeling?
Autoregressive Models
Flow-based Models
Latent Variable Models
Hybrid Modeling
Energy-based Models
Generative Adversarial Networks
Deep Generative Modeling for Neural Compression
Useful Facts from Algebra and Calculus
Useful Facts from Probability Theory and Statistics
Index
Publication date | 23.02.2023
Additional info | XVIII, 197 p. 127 illus., 122 illus. in color.
Place of publication | Cham
Language | English
Dimensions | 155 x 235 mm
Weight | 337 g
Subject area | Computer Science ► Theory / Studies ► Artificial Intelligence / Robotics
 | Mathematics / Computer Science ► Mathematics ► Applied Mathematics
Keywords | Artificial Intelligence • autoregressive models • Deep learning • generative adversarial networks • latent variable models • probabilistic modeling
ISBN-10 | 3-030-93160-9 / 3030931609
ISBN-13 | 978-3-030-93160-5 / 9783030931605
Condition | New