Convex Optimization for Machine Learning
now publishers Inc (Publisher)
978-1-63828-052-1 (ISBN)
This book provides an introduction to convex optimization, a powerful and tractable class of optimization problems that can be solved efficiently on a computer. The goal of the book is to help develop a sense of what convex optimization is, and how it can be used in a widening array of practical contexts, with a particular emphasis on machine learning.

The first part of the book covers core concepts of convex sets, convex functions, and related basic definitions that aid the understanding of convex optimization and its corresponding models. The second part deals with one very useful theory, called duality, which enables us to: (1) gain algorithmic insights; and (2) obtain approximate solutions to non-convex optimization problems, which are often difficult to solve. The last part focuses on modern applications in machine learning and deep learning.

A defining feature of this book is that it succinctly relates the “story” of how convex optimization plays a role, via historical examples and trending machine learning applications. Another key feature is that it includes programming implementations of a variety of machine learning algorithms inspired by optimization fundamentals, together with a brief tutorial on the programming tools used. The implementations are based on Python, CVXPY, and TensorFlow.

This book does not follow a traditional textbook-style organization, but is streamlined via a series of intimately related lecture notes, centered around coherent themes and concepts. It serves as a textbook mainly for a senior-level undergraduate course, yet is also suitable for a first-year graduate course. Readers benefit from a good background in linear algebra, some exposure to probability, and basic familiarity with Python.
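To give a concrete flavor of the toolchain mentioned above, here is a minimal, illustrative sketch (not taken from the book) of how a regularized least-squares problem, one of the convex programs treated in the first part, can be expressed and solved in CVXPY. The data A, b and the regularization weight lam are made-up placeholders.

```python
# Illustrative sketch only (not from the book): ridge-regularized least squares in CVXPY.
import cvxpy as cp
import numpy as np

np.random.seed(0)
A = np.random.randn(50, 10)   # made-up measurement matrix
b = np.random.randn(50)       # made-up observations
lam = 0.1                     # made-up regularization weight

x = cp.Variable(10)
objective = cp.Minimize(cp.sum_squares(A @ x - b) + lam * cp.sum_squares(x))
prob = cp.Problem(objective)
prob.solve()                  # handled by a generic convex solver under the hood

print("optimal objective value:", prob.value)
print("recovered x:", x.value)
```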
Changho Suh is an Associate Professor of Electrical Engineering at KAIST and an Associate Head of the KAIST AI Institute. He received the B.S. and M.S. degrees in Electrical Engineering from KAIST in 2000 and 2002, respectively, and the Ph.D. degree in Electrical Engineering and Computer Sciences from UC Berkeley in 2011. From 2011 to 2012, he was a postdoctoral associate at the Research Laboratory of Electronics at MIT. From 2002 to 2006, he was with Samsung Electronics.
Preface
1 Convex Optimization Basics
1.1 Overview of the book
1.2 Definition of convex optimization
1.3 Tractability of convex optimization and gradient descent
1.4 Linear Program
1.5 Least Squares
1.6 Test error, regularization and CVXPY implementation
1.7 Computed tomography
1.8 Quadratic program
1.9 Second-order cone program
1.10 Semi-definite program
1.11 SDP relaxation
1.12 Problem Sets
2 Duality
2.1 Strong duality
2.2 Interior point method
2.3 Proof of strong duality theorem
2.4 Weak duality
2.5 Lagrange relaxation for Boolean problems
2.6 Lagrange relaxation for the MAXCUT problem
2.7 Problem Sets
3 Machine Learning Applications
3.1 Supervised learning and optimization
3.2 Logistic regression
3.3 Deep learning
3.4 Deep learning II
3.5 DL: TensorFlow implementation
3.6 Unsupervised Learning: Generative modeling
3.7 Generative Adversarial Networks (GANs)
3.8 GANs: TensorFlow implementation
3.9 Wasserstein GAN
3.10 Wasserstein GAN II
3.11 Wasserstein GAN: TensorFlow implementation
3.12 Fair machine learning
3.13 A fair classifier and its connection to GANs
3.14 A fair classifier: TensorFlow implementation
Appendices
Publication date | 03.10.2022
Series | NowOpen
Place of publication | Hanover
Language | English
Dimensions | 156 x 234 mm
Weight | 717 g
Subject areas | Mathematics / Computer Science ► Mathematics ► Applied Mathematics; Mathematics / Computer Science ► Mathematics ► Financial / Business Mathematics
ISBN-10 | 1-63828-052-5 / 1638280525
ISBN-13 | 978-1-63828-052-1 / 9781638280521
Condition | New