
Inference and Learning from Data

Ali H. Sayed (Author)

Mixed media product
3370 pages
2022
Cambridge University Press
978-1-009-21810-8 (ISBN)
CHF 379.95 incl. VAT
This extraordinary three-volume work, written in an engaging and rigorous style by a world authority in the field, provides an accessible, comprehensive introduction to the full spectrum of mathematical and statistical techniques underpinning contemporary methods in data-driven learning and inference. The first volume, Foundations, establishes core topics in inference and learning, and prepares readers for studying their practical application. The second volume, Inference, introduces readers to cutting-edge techniques for inferring unknown variables and quantities. The final volume, Learning, provides a rigorous introduction to state-of-the-art learning methods. A consistent structure and pedagogy are employed throughout all three volumes to reinforce student understanding, with over 1280 end-of-chapter problems (including solutions for instructors), over 600 figures, over 470 solved examples, datasets and downloadable MATLAB code. Unique in its scale and depth, this textbook sequence is ideal for early-career researchers and graduate students across many courses in signal processing, machine learning, statistical analysis, data science and inference.

Ali H. Sayed is Professor and Dean of Engineering at École Polytechnique Fédérale de Lausanne (EPFL), Switzerland. He has also served as Distinguished Professor and Chairman of Electrical Engineering at the University of California, Los Angeles, USA, and as President of the IEEE Signal Processing Society. He is a member of the US National Academy of Engineering (NAE) and The World Academy of Sciences (TWAS), and a recipient of the 2022 IEEE Fourier Award and the 2020 IEEE Norbert Wiener Society Award. He is a Fellow of the IEEE.

Volume I. Foundations: 1. Matrix theory; 2. Vector differentiation; 3. Random variables; 4. Gaussian distribution; 5. Exponential distributions; 6. Entropy and divergence; 7. Random processes; 8. Convex functions; 9. Convex optimization; 10. Lipschitz conditions; 11. Proximal operator; 12. Gradient descent method; 13. Conjugate gradient method; 14. Subgradient method; 15. Proximal and mirror descent methods; 16. Stochastic optimization; 17. Adaptive gradient methods; 18. Gradient noise; 19. Convergence analysis I: stochastic gradient algorithms; 20. Convergence analysis II: stochastic subgradient algorithms; 21. Convergence analysis III: stochastic proximal algorithms; 22. Variance-reduced methods I: uniform sampling; 23. Variance-reduced methods II: random reshuffling; 24. Nonconvex optimization; 25. Decentralized optimization I: primal methods; 26. Decentralized optimization II: primal-dual methods; Author index; Subject index. Volume II. Inference: 27. Mean-Square-Error inference; 28. Bayesian inference; 29. Linear regression; 30. Kalman filter; 31. Maximum likelihood; 32. Expectation maximization; 33. Predictive modeling; 34. Expectation propagation; 35. Particle filters; 36. Variational inference; 37. Latent Dirichlet allocation; 38. Hidden Markov models; 39. Decoding HMMs; 40. Independent component analysis; 41. Bayesian networks; 42. Inference over graphs; 43. Undirected graphs; 44. Markov decision processes; 45. Value and policy iterations; 46. Temporal difference learning; 47. Q-learning; 48. Value function approximation; 49. Policy gradient methods; Author index; Subject index. Volume III. Learning: 50. Least-squares problems; 51. Regularization; 52. Nearest-neighbor rule; 53. Self-organizing maps; 54. Decision trees; 55. Naive Bayes classifier; 56. Linear discriminant analysis; 57. Principal component analysis; 58. Dictionary learning; 59. Logistic regression; 60. Perceptron; 61. Support vector machines; 62. Bagging and boosting; 63. Kernel methods; 64. Generalization theory; 65. Feedforward neural networks; 66. Deep belief networks; 67. Convolutional networks; 68. Generative networks; 69. Recurrent networks; 70. Explainable learning; 71. Adversarial attacks; 72. Meta learning; Author index; Subject index.

Publication date (per publisher) 31.1.2023
Additional info Worked examples or exercises
Place of publication Cambridge
Language English
Dimensions 180 x 255 mm
Weight 5420 g
Subject area Mathematics / Computer Science; Computer Science: Theory / Study
Engineering: Communications Engineering
ISBN-10 1-009-21810-7 / 1009218107
ISBN-13 978-1-009-21810-8 / 9781009218108
Condition New