Multivariate Statistics - Hemant Ishwaran

Multivariate Statistics

Classical Foundations and Modern Machine Learning

Hemant Ishwaran (Author)

Book | Hardcover
504 pages
2025
Chapman & Hall/CRC (Publisher)
978-1-032-75879-4 (ISBN)
CHF 199.95 incl. VAT
  • Not yet published (ca. March 2025)
  • Free shipping
  • Invoice payment available
This book explores multivariate statistics from both traditional and modern perspectives. The first section covers core topics like multivariate normality, MANOVA, discrimination, PCA, and canonical correlation analysis. The second section includes modern concepts such as gradient boosting, random forests, variable importance, and causal inference.

A key theme is leveraging classical multivariate statistics to explain advanced topics and prepare for contemporary methods. For example, linear models provide a foundation for understanding regularization with AIC and BIC, leading to a deeper analysis of regularization through generalization error and the VC theorem. Discriminant analysis introduces the weighted Bayes rule, which leads into modern classification techniques for class-imbalanced machine learning problems. Steepest descent serves as a precursor to matching pursuit and gradient boosting. Axis-aligned trees like CART, a classical tool, set the stage for more recent methods like super greedy trees.
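As a minimal illustration of the steepest-descent idea mentioned above (a generic sketch, not taken from the book): starting from an initial point, each iteration moves against the gradient of the objective, here a simple quadratic with a known minimizer.

```python
# Steepest-descent sketch (illustrative only; the objective and step size
# are assumptions for this example, not content from the book).
# Minimize f(w) = (w0 - 3)^2 + (w1 + 1)^2 by following the negative gradient.

def grad(w):
    # Gradient of f at w: df/dw0 = 2(w0 - 3), df/dw1 = 2(w1 + 1)
    return [2 * (w[0] - 3), 2 * (w[1] + 1)]

def steepest_descent(w, step=0.1, iters=200):
    # Repeatedly take a small step downhill along the negative gradient.
    for _ in range(iters):
        g = grad(w)
        w = [wi - step * gi for wi, gi in zip(w, g)]
    return w

w_star = steepest_descent([0.0, 0.0])
print(w_star)  # converges toward the minimizer (3, -1)
```

Gradient boosting applies the same downhill principle in function space, taking steps in the direction of the negative gradient of the loss.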

Another central theme is the concept of training error. While introductory courses often emphasize that reducing training error can lead to overfitting, training error is also known as empirical risk. In regression, it is the residual sum of squares, and reducing it leads to least squares solutions. But is this always the best approach? Empirical risk is vital in statistical learning theory and is key to determining whether learning is possible. The principle of empirical risk minimization is crucial and shows that reducing training error is not harmful when regularization is applied. This principle is further examined through discussions on penalization, matching pursuit, gradient boosting, and super greedy trees.
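The connection between training error and least squares can be sketched as follows (a hedged illustration with simulated data, not an excerpt from the book): in linear regression, the empirical risk is the residual sum of squares, and the least-squares coefficients are exactly the minimizer of that risk.

```python
# Illustrative sketch (simulated data; coefficients are assumptions for
# this example): training error as empirical risk in linear regression.
import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=50)])  # intercept + feature
beta_true = np.array([2.0, -1.5])
y = X @ beta_true + 0.1 * rng.normal(size=50)

def empirical_risk(beta):
    # Residual sum of squares (RSS) = training error for this model.
    residuals = y - X @ beta
    return float(residuals @ residuals)

# The least-squares solution minimizes the RSS over all coefficient vectors.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Any other coefficient vector, even the true one, has RSS at least as large.
assert empirical_risk(beta_hat) <= empirical_risk(beta_true)
```

The question the book raises is when chasing this minimum is safe: regularization (penalization, early stopping in boosting) controls how far empirical risk minimization is pushed.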

Key Features:

  • Covers both classical and contemporary multivariate statistics.
  • Each chapter includes a carefully selected set of exercises, both applied and theoretical, that vary in degree of difficulty.
  • Serves as a reference for researchers thanks to the breadth of topics covered, including new material on super greedy trees, rule-based variable selection, and machine learning for causal inference.
  • Extensive treatment of trees, providing a comprehensive and unified approach to understanding trees in terms of partitions and empirical risk minimization.
  • New content on random forests, including random forest quantile classifiers for class-imbalanced problems, multivariate random forests, subsampling for confidence regions, and super greedy forests. An entire chapter is dedicated to random survival forests, featuring new material on random hazard forests, which extend survival forests to time-varying covariates.

Dr. Hemant Ishwaran’s work focuses on advancing machine learning techniques for applications in public health, medicine, and informatics. His contributions include the development of open-source tools, such as R packages for his pioneering methods, including the widely-used random survival forests—a significant extension of the random forest algorithm in machine learning. His collaborations with healthcare experts have resulted in precision models for cardiovascular disease (CVD), heart transplantation, cancer staging, and resistance to gene cancer therapy.

Preface
1. Introduction
2. Properties of Random Vectors and Background Material
3. Multivariate Normal Distribution
4. Linear Regression
5. Multivariate Regression
6. Discriminant Analysis and Classification
7. Generalization Error
8. Principal Component Analysis
9. Canonical Correlation Analysis
10. Newton's Method
11. Steepest Descent
12. Gradient Boosting
13. Detailed Analysis of L2Boost
14. Coordinate Descent
15. Trees
16. Random Forests
17. Random Forests Variable Selection
18. Splitting Effect on Random Forests
19. Random Survival Forests
20. Causal Estimates using Machine Learning

Publication date (per publisher): 20 March 2025
Additional info: 36 tables, black and white; 113 line drawings, color; 16 line drawings, black and white; 3 halftones, color; 2 halftones, black and white; 116 illustrations, color; 18 illustrations, black and white
Language: English
Dimensions: 178 x 254 mm
Subject areas: Mathematics / Computer Science > Mathematics > Statistics;
Technology > Electrical Engineering / Energy Technology
ISBN-10: 1-032-75879-1 / 1032758791
ISBN-13: 978-1-032-75879-4 / 9781032758794
Condition: New
Discover more from this subject area
Der Weg zur Datenanalyse

by Ludwig Fahrmeir; Christian Heumann; Rita Künstler …

Book | Softcover (2024)
Springer Spektrum (Publisher)
CHF 69.95
Eine Einführung für Wirtschafts- und Sozialwissenschaftler

by Günter Bamberg; Franz Baur; Michael Krapp

Book | Softcover (2022)
De Gruyter Oldenbourg (Publisher)
CHF 41.90