
Learning to Rank for Information Retrieval (eBook)

Tie-Yan Liu (Author)

eBook Download: PDF
2011
XVII, 285 pages
Springer Berlin (publisher)
978-3-642-14267-3 (ISBN)

€139.09 incl. VAT
(CHF 135.85)
eBook sales are handled by Lehmanns Media GmbH (Berlin) at the euro price incl. VAT.
  • Download available immediately

Due to the fast growth of the Web and the difficulties in finding desired information, efficient and effective information retrieval systems have become more important than ever, and the search engine has become an essential tool for many people.

The ranker, a central component of every search engine, is responsible for matching processed queries against indexed documents. Because of this central role, great attention has been paid to the research and development of ranking technologies. Ranking is also pivotal for many other information retrieval applications, such as collaborative filtering, definition ranking, question answering, multimedia retrieval, text summarization, and online advertising. Leveraging machine learning technologies in the ranking process has led to innovative and more effective ranking models, and eventually to a completely new research area called 'learning to rank'.

Liu first gives a comprehensive review of the major approaches to learning to rank. For each approach he presents the basic framework, with example algorithms, and discusses its advantages and disadvantages. He continues with recent advances in learning to rank that cannot be simply categorized into the three major approaches, including relational ranking, query-dependent ranking, transfer ranking, and semi-supervised ranking. His presentation is completed by several examples that apply these technologies to solve real information retrieval problems, and by theoretical discussions on guarantees for ranking performance.
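As an illustrative aside on the pairwise approach surveyed in Chapter 3, a minimal RankNet-style training step might look like the following sketch. The toy data, variable names, and hyperparameters here are assumptions for illustration only, not taken from the book:

```python
# Sketch of pairwise learning to rank: given feature vectors x_i, x_j where
# document i should rank above document j, a linear model f(x) = w.x is
# trained to minimize the logistic pairwise loss log(1 + exp(-(f(x_i) - f(x_j)))),
# in the spirit of RankNet. Toy data below is purely illustrative.
import math

def pairwise_logistic_loss(w, x_i, x_j):
    """Loss for one preference pair (document i preferred over document j)."""
    s = sum(wk * (a - b) for wk, a, b in zip(w, x_i, x_j))
    return math.log(1.0 + math.exp(-s))

def gradient(w, x_i, x_j):
    """Gradient of the pairwise logistic loss with respect to w."""
    s = sum(wk * (a - b) for wk, a, b in zip(w, x_i, x_j))
    coeff = -1.0 / (1.0 + math.exp(s))
    return [coeff * (a - b) for a, b in zip(x_i, x_j)]

# Toy training pairs: (features of preferred doc, features of other doc).
pairs = [([3.0, 1.0], [1.0, 1.0]), ([2.0, 0.0], [0.5, 2.0])]
w = [0.0, 0.0]
lr = 0.1
for _ in range(100):  # plain gradient descent over all pairs
    for x_i, x_j in pairs:
        g = gradient(w, x_i, x_j)
        w = [wk - lr * gk for wk, gk in zip(w, g)]

def score(x):
    return sum(wk * xk for wk, xk in zip(w, x))

# After training, preferred documents should score higher than their partners.
assert score([3.0, 1.0]) > score([1.0, 1.0])
```

Sorting documents by the learned score then yields the ranked list; Chapter 3 discusses RankNet, Ranking SVM, RankBoost, and other instantiations of this idea with different loss functions.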

This book is written for researchers and graduate students in both information retrieval and machine learning. They will find here the only comprehensive description of the state of the art in a field that has driven the recent advances in search engine development.



Tie-Yan Liu is a lead researcher at Microsoft Research Asia, where he leads a team working on learning to rank for information retrieval and graph-based machine learning. He has published more than 70 papers in refereed conferences and journals, including SIGIR (9), WWW (3), ICML (3), KDD, NIPS, ACM MM, IEEE TKDE, and SIGKDD Explorations. He has about 40 filed or pending US and international patents on learning to rank, general Web search, and multimedia signal processing. He is a co-author of the Best Student Paper of SIGIR 2008 and of the Most Cited Paper of the Journal of Visual Communication and Image Representation (2004-2006). He is an Area Chair of SIGIR 2009, a Senior Program Committee member of SIGIR 2008, and a Program Committee member for many other international conferences, such as WWW, ICML, ACL, and ICIP. He is the co-chair of the SIGIR workshop on learning to rank for information retrieval (LR4IR) in 2007 and 2008. He has been on the Editorial Board of the Information Retrieval Journal (IRJ) since 2008, and is the guest editor of the IRJ special issue on learning to rank. He has given tutorials on learning to rank at WWW 2008 and SIGIR 2008. Prior to joining Microsoft, he obtained his Ph.D. from Tsinghua University, where his research was devoted to video content analysis.


Preface 5
Acknowledgements 8
Contents 9
Part I: Overview of Learning to Rank 16
Chapter 1: Introduction 17
1.1 Overview 17
1.2 Ranking in Information Retrieval 21
1.2.1 Conventional Ranking Models 21
1.2.1.1 Relevance Ranking Models 21
1.2.1.2 Importance Ranking Models 23
1.2.2 Query-Level Position-Based Evaluations 25
Mean Reciprocal Rank (MRR) 26
Mean Average Precision (MAP) 27
Discounted Cumulative Gain (DCG) 27
Rank Correlation (RC) 28
1.3 Learning to Rank 29
1.3.1 Machine Learning Framework 30
1.3.2 Definition of Learning to Rank 31
Feature Based 31
Discriminative Training 32
1.3.3 Learning-to-Rank Framework 32
1.3.3.1 The Pointwise Approach 34
1.3.3.2 The Pairwise Approach 34
1.3.3.3 The Listwise Approach 35
1.4 Book Overview 37
1.5 Exercises 38
References 39
Part II: Major Approaches to Learning to Rank 45
Chapter 2: The Pointwise Approach 46
2.1 Overview 46
2.2 Regression-Based Algorithms 46
2.2.1 Subset Ranking with Regression 47
2.3 Classification-Based Algorithms 48
2.3.1 Binary Classification for Ranking 48
SVM-Based Method 48
Logistic Regression-Based Method 49
2.3.2 Multi-class Classification for Ranking 50
Boosting Tree-Based Method 50
Association Rule Mining-Based Method 51
2.4 Ordinal Regression-Based Algorithms 52
2.4.1 Perceptron-Based Ranking (PRanking) 52
2.4.2 Ranking with Large Margin Principles 53
2.4.3 Ordinal Regression with Threshold-Based Loss Functions 55
2.5 Discussions 55
2.5.1 Relationship with Relevance Feedback 56
2.5.2 Problems with the Pointwise Approach 57
2.5.3 Improved Algorithms 57
2.6 Summary 58
2.7 Exercises 58
References 59
Chapter 3: The Pairwise Approach 61
3.1 Overview 61
3.2 Example Algorithms 61
3.2.1 Ordering with Preference Function 61
3.2.2 SortNet: Neural Network-Based Sorting Algorithm 63
3.2.3 RankNet: Learning to Rank with Gradient Descent 64
3.2.4 FRank: Ranking with a Fidelity Loss 65
3.2.5 RankBoost 66
3.2.6 Ranking SVM 68
3.2.7 GBRank 70
3.3 Improved Algorithms 71
3.3.1 Multiple Hyperplane Ranker 71
3.3.2 Magnitude-Preserving Ranking 72
3.3.3 IR-SVM 73
3.3.4 Robust Pairwise Ranking with Sigmoid Functions 74
3.3.5 P-norm Push 75
3.3.6 Ordered Weighted Average for Ranking 76
3.3.7 LambdaRank 77
3.3.8 Robust Sparse Ranker 78
3.4 Summary 79
3.5 Exercises 79
References 80
Chapter 4: The Listwise Approach 83
4.1 Overview 83
4.2 Minimization of Measure-Specific Loss 84
4.2.1 Measure Approximation 84
4.2.1.1 SoftRank 84
4.2.1.2 Decision Theoretic Framework for Ranking 85
4.2.1.3 Approximate Rank 86
4.2.1.4 SmoothRank 88
4.2.2 Bound Optimization 89
4.2.2.1 SVMmap 89
4.2.3 Non-smooth Optimization 90
4.2.3.1 AdaRank 90
4.2.3.2 Genetic Programming Based Algorithms 91
4.2.4 Discussions 92
4.3 Minimization of Non-measure-Specific Loss 92
4.3.1 ListNet 93
4.3.2 ListMLE 94
4.3.3 Ranking Using Cumulative Distribution Networks 95
4.3.4 BoltzRank 96
4.4 Summary 97
4.5 Exercises 98
References 99
Chapter 5: Analysis of the Approaches 101
5.1 Overview 101
5.2 The Pointwise Approach 101
5.3 The Pairwise Approach 103
5.4 The Listwise Approach 106
5.4.1 Non-measure-Specific Loss 106
5.4.2 Measure-Specific Loss 107
5.5 Summary 110
5.6 Exercises 110
References 111
Part III: Advanced Topics in Learning to Rank 112
Chapter 6: Relational Ranking 113
6.1 General Relational Ranking Framework 114
6.1.1 Relational Ranking SVM 114
6.1.2 Continuous Conditional Random Fields 116
6.2 Learning Diverse Ranking 117
6.3 Discussions 120
References 121
Chapter 7: Query-Dependent Ranking 122
7.1 Query-Dependent Loss Function 122
7.2 Query-Dependent Ranking Function 124
7.2.1 Query Classification-Based Approach 124
7.2.2 K Nearest Neighbor-Based Approach 125
7.2.3 Query Clustering-Based Approach 127
7.2.4 Two-Layer Learning Approach 128
7.3 Discussions 129
References 130
Chapter 8: Semi-supervised Ranking 131
8.1 Inductive Approach 131
8.2 Transductive Approach 132
8.3 Discussions 133
References 133
Chapter 9: Transfer Ranking 135
9.1 Feature-Level Transfer Ranking 136
9.2 Instance-Level Transfer Ranking 136
9.3 Discussions 138
References 138
Part IV: Benchmark Datasets for Learning to Rank 139
Chapter 10: The LETOR Datasets 140
10.1 Overview 140
10.2 Document Corpora 140
10.2.1 The "Gov" Corpus and Six Query Sets 141
10.2.2 The OHSUMED Corpus 141
10.2.3 The "Gov2" Corpus and Two Query Sets 142
10.3 Document Sampling 142
10.4 Feature Extraction 143
10.5 Meta Information 143
10.6 Learning Tasks 145
10.7 Discussions 149
References 149
Chapter 11: Experimental Results on LETOR 151
11.1 Experimental Settings 151
11.2 Experimental Results on LETOR 3.0 152
11.3 Experimental Results on LETOR 4.0 155
11.4 Discussions 156
11.5 Exercises 157
References 157
Chapter 12: Other Datasets 159
12.1 Yahoo! Learning-to-Rank Challenge Datasets 159
12.2 Microsoft Learning-to-Rank Datasets 160
12.3 Discussions 161
References 161
Part V: Practical Issues in Learning to Rank 162
Chapter 13: Data Preprocessing for Learning to Rank 163
13.1 Overview 163
13.2 Ground Truth Mining from Logs 164
13.2.1 User Click Models 164
13.2.1.1 Dependent Click Model 165
13.2.1.2 Bayesian Browsing Model 166
13.2.1.3 Dynamic Bayesian Network Click Model 168
13.2.2 Click Data Enhancement 170
13.2.2.1 Learning a User Interaction Model 170
13.2.2.2 Smoothing Click-Through Data 171
13.3 Training Data Selection 172
13.3.1 Document and Query Selection for Labeling 173
13.3.1.1 Deep Versus Shallow Judgments 173
13.3.1.2 Actively Learning for Labeling 174
13.3.2 Document and Query Selection for Training 175
13.3.2.1 Document Selection Strategies 176
13.3.2.2 Data Selection by Optimizing PPC 177
13.3.3 Feature Selection for Training 179
13.4 Summary 180
13.5 Exercises 180
References 181
Chapter 14: Applications of Learning to Rank 184
14.1 Overview 184
14.2 Question Answering 184
14.2.1 Definitional QA 185
14.2.2 Quantity Consensus QA 186
14.2.3 Non-factoid QA 187
14.2.4 Why QA 188
14.3 Multimedia Retrieval 189
14.4 Text Summarization 190
14.5 Online Advertising 191
14.6 Summary 192
14.7 Exercises 193
References 193
Part VI: Theories in Learning to Rank 195
Chapter 15: Statistical Learning Theory for Ranking 196
15.1 Overview 196
15.2 Statistical Learning Theory 196
15.3 Learning Theory for Ranking 198
15.3.1 Statistical Ranking Framework 198
15.3.2 Generalization Analysis for Ranking 199
15.3.3 Statistical Consistency for Ranking 199
15.4 Exercises 200
References 200
Chapter 16: Statistical Ranking Framework 202
16.1 Document Ranking Framework 203
16.1.1 The Pointwise Approach 203
16.1.2 The Pairwise Approach 203
(1) The U-statistics View 204
(2) The Average View 204
16.1.3 The Listwise Approach 205
16.2 Subset Ranking Framework 205
16.2.1 The Pointwise Approach 206
16.2.2 The Pairwise Approach 206
16.2.3 The Listwise Approach 207
16.3 Two-Layer Ranking Framework 207
16.3.1 The Pointwise Approach 207
16.3.2 The Pairwise Approach 208
(1) The U-statistics View 208
(2) The Average View 208
16.3.3 The Listwise Approach 209
16.4 Summary 209
16.5 Exercises 209
References 210
Chapter 17: Generalization Analysis for Ranking 211
17.1 Overview 211
17.2 Uniform Generalization Bounds for Ranking 212
17.2.1 For Document Ranking 212
17.2.2 For Subset Ranking 214
17.2.3 For Two-Layer Ranking 216
17.3 Algorithm-Dependent Generalization Bound 217
17.3.1 For Document Ranking 218
17.3.2 For Subset Ranking 219
17.3.3 For Two-Layer Ranking 220
17.4 Summary 220
17.5 Exercises 221
References 221
Chapter 18: Statistical Consistency for Ranking 223
18.1 Overview 223
18.2 Consistency Analysis for Document Ranking 224
18.2.1 Regarding Pairwise 0-1 Loss 224
18.3 Consistency Analysis for Subset Ranking 224
18.3.1 Regarding DCG-Based Ranking Error 225
18.3.2 Regarding Permutation-Level 0-1 Loss 225
18.3.3 Regarding Top-k True Loss 226
18.3.4 Regarding Weighted Kendall's tau 227
18.4 Consistency Analysis for Two-Layer Ranking 229
18.5 Summary 229
18.6 Exercises 230
References 230
Part VII: Summary and Outlook 232
Chapter 19: Summary 233
References 236
Chapter 20: Future Work 239
20.1 Sample Selection Bias 239
20.2 Direct Learning from Logs 240
20.3 Feature Engineering 241
20.4 Advanced Ranking Models 241
20.5 Large-Scale Learning to Rank 242
20.6 Online Complexity Versus Accuracy 243
20.7 Robust Learning to Rank 243
20.8 Online Learning to Rank 244
20.9 Beyond Ranking 245
References 245
Part VIII: Appendix 247
Chapter 21: Mathematical Background 248
21.1 Probability Theory 248
21.1.1 Probability Space and Random Variables 248
21.1.2 Probability Distributions 249
21.1.2.1 Discrete Distribution 249
21.1.2.2 Continuous Distribution 250
21.1.2.3 Popular Distributions 251
21.1.3 Expectations and Variances 251
21.2 Linear Algebra and Matrix Computation 252
21.2.1 Notations 252
21.2.2 Basic Matrix Operations and Properties 253
21.2.2.1 Matrix Multiplication 253
21.2.2.2 Identity Matrix 254
21.2.2.3 Diagonal Matrix 254
21.2.2.4 Transpose 254
21.2.2.5 Symmetric Matrix 255
21.2.2.6 Trace 255
21.2.2.7 Norm 255
21.2.2.8 Inverse 256
21.2.2.9 Orthogonal Matrix 256
21.2.2.10 Determinant 257
21.2.2.11 Quadratic Form 257
21.2.3 Eigenvalues and Eigenvectors 258
21.3 Convex Optimization 259
21.3.1 Convex Set and Convex Function 259
21.3.2 Conditions for Convexity 260
21.3.2.1 First-Order Condition 260
21.3.2.2 Second-Order Condition 260
21.3.3 Convex Optimization Problem 260
21.3.4 Lagrangian Duality 261
21.3.5 KKT Conditions 262
References 263
Chapter 22: Machine Learning 264
22.1 Regression 264
22.1.1 Linear Regression 264
22.1.2 Probabilistic Explanation 265
22.2 Classification 266
22.2.1 Neural Networks 267
22.2.2 Support Vector Machines 268
22.2.3 Boosting 270
22.2.4 K Nearest Neighbor (KNN) 271
22.3 Statistical Learning Theory 271
22.3.1 Formalization 272
22.3.2 Bounds for |R(g) - R̂(g)| 274
22.3.2.1 Hoeffding's Inequality 274
22.3.2.2 Uniform Bounds in Finite Case 275
22.3.2.3 Uniform Bounds in Infinite Case 276
22.3.2.4 Uniform Bounds Based on Covering Number 277
22.3.2.5 Uniform Bounds Based on Rademacher Average 278
References 279
Index 280
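As a further illustrative aside, the query-level evaluation measures listed in Chapter 1 (MRR, MAP, DCG) are simple to compute. The following minimal DCG/NDCG sketch uses the common log2 position discount, one of several variants; the toy relevance data is purely an assumption for illustration:

```python
# Sketch of Discounted Cumulative Gain (DCG) and its normalized form (NDCG),
# using the common log2 position discount. Real systems also use the
# (2^gain - 1) gain variant; this is a minimal illustrative version.
import math

def dcg(gains):
    """DCG of a ranked list of relevance gains, positions counted from 1."""
    return sum(g / math.log2(i + 2) for i, g in enumerate(gains))

def ndcg(gains):
    """DCG normalized by the ideal (descending-gain) ordering of the same list."""
    ideal = dcg(sorted(gains, reverse=True))
    return dcg(gains) / ideal if ideal > 0 else 0.0

# Toy example: relevance gains of four documents in the order they were ranked.
ranked = [3, 2, 0, 1]
print(round(ndcg(ranked), 4))  # → 0.9854
```

Swapping the last two documents into their ideal order would raise the NDCG to 1.0, which is why listwise methods such as SoftRank (Chapter 4) optimize smoothed surrogates of exactly this kind of measure.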

Published (per publisher) 29.4.2011
Additional info XVII, 285 p.
Place of publication Berlin
Language English
Subject area Computer Science • Theory / Studies • Artificial Intelligence / Robotics
Keywords Information Retrieval • machine learning • Ranking Algorithms • Statistical Learning
ISBN-10 3-642-14267-2 / 3642142672
ISBN-13 978-3-642-14267-3 / 9783642142673
PDF (watermark)

DRM: digital watermark
This eBook contains a digital watermark and is thus personalized for you. If the eBook is improperly passed on to third parties, it can be traced back to its source.

File format: PDF (Portable Document Format)
With its fixed page layout, PDF is particularly well suited to technical books with columns, tables, and figures. A PDF can be displayed on almost all devices, but is only of limited use on small displays (smartphone, eReader).

System requirements:
PC/Mac: You can read this eBook on a PC or Mac. You need a PDF viewer, e.g. Adobe Reader or Adobe Digital Editions.
eReader: This eBook can be read on (almost) all eBook readers. However, it is not compatible with the Amazon Kindle.
Smartphone/Tablet: Whether Apple or Android, you can read this eBook. You need a PDF viewer, e.g. the free Adobe Digital Editions app.

Buying eBooks from abroad
For tax law reasons we can only sell eBooks within Germany and Switzerland. Regrettably, we cannot fulfil eBook orders from other countries.
