Inference and Learning from Data: Volume 1
Cambridge University Press (Publisher)
978-1-009-21812-2 (ISBN)
This extraordinary three-volume work, written in an engaging and rigorous style by a world authority in the field, provides an accessible, comprehensive introduction to the full spectrum of mathematical and statistical techniques underpinning contemporary methods in data-driven learning and inference. This first volume, Foundations, introduces core topics in inference and learning, such as matrix theory, linear algebra, random variables, convex optimization, and stochastic optimization, and prepares students for studying their practical application in later volumes. A consistent structure and pedagogy are employed throughout this volume to reinforce student understanding, with over 600 end-of-chapter problems (including solutions for instructors), 100 figures, 180 solved examples, datasets, and downloadable MATLAB code. Supported by its sister volumes, Inference and Learning, and unique in its scale and depth, this textbook sequence is ideal for early-career researchers and graduate students across many courses in signal processing, machine learning, statistical analysis, data science, and inference.
Ali H. Sayed is Professor and Dean of Engineering at École Polytechnique Fédérale de Lausanne (EPFL), Switzerland. He has also served as Distinguished Professor and Chairman of Electrical Engineering at the University of California, Los Angeles, USA, and as President of the IEEE Signal Processing Society. He is a member of the US National Academy of Engineering (NAE) and The World Academy of Sciences (TWAS), and a recipient of the 2022 IEEE Fourier Award and the 2020 IEEE Norbert Wiener Society Award. He is a Fellow of the IEEE.
Contents; Preface; Notation; 1. Matrix theory; 2. Vector differentiation; 3. Random variables; 4. Gaussian distribution; 5. Exponential distributions; 6. Entropy and divergence; 7. Random processes; 8. Convex functions; 9. Convex optimization; 10. Lipschitz conditions; 11. Proximal operator; 12. Gradient descent method; 13. Conjugate gradient method; 14. Subgradient method; 15. Proximal and mirror descent methods; 16. Stochastic optimization; 17. Adaptive gradient methods; 18. Gradient noise; 19. Convergence analysis I: Stochastic gradient algorithms; 20. Convergence analysis II: Stochastic subgradient algorithms; 21. Convergence analysis III: Stochastic proximal algorithms; 22. Variance-reduced methods I: Uniform sampling; 23. Variance-reduced methods II: Random reshuffling; 24. Nonconvex optimization; 25. Decentralized optimization I: Primal methods; 26. Decentralized optimization II: Primal-dual methods; Author index; Subject index.
Publication date | 22.12.2022
---|---
Additional info | Worked examples or exercises
Place of publication | Cambridge
Language | English
Dimensions | 180 x 255 mm
Weight | 1790 g
Subject area | Mathematics / Computer Science ► Computer Science ► Theory / Studies
 | Engineering ► Communications Engineering
ISBN-10 | 1-009-21812-3 / 1009218123
ISBN-13 | 978-1-009-21812-2 / 9781009218122
Condition | New