Data Science Using Python and R
John Wiley & Sons Inc (Publisher)
978-1-119-52681-0 (ISBN)
Data Science Using Python and R will get you plugged into the world’s two most widespread open-source platforms for data science: Python and R.
Data science is hot. Bloomberg called data scientist “the hottest job in America.” Python and R are the top two open-source data science tools in the world. In Data Science Using Python and R, you will learn step-by-step how to produce hands-on solutions to real-world business problems, using state-of-the-art techniques.
Data Science Using Python and R is written for the general reader with no previous analytics or programming experience. An entire chapter is dedicated to learning the basics of Python and R. Then, each chapter presents step-by-step instructions and walkthroughs for solving data science problems using Python and R.
Those with analytics experience will appreciate having a one-stop shop for learning how to do data science using Python and R. Topics covered include data preparation, exploratory data analysis, preparing to model the data, decision trees, model evaluation, misclassification costs, naïve Bayes classification, neural networks, clustering, regression modeling, dimension reduction, and association rules mining.
Further, exciting new topics such as random forests and generalized linear models are also included. The book emphasizes data-driven error costs to enhance profitability, an approach that avoids the common pitfalls that can cost a company millions of dollars.
Data Science Using Python and R provides exercises at the end of every chapter, more than 500 in all, giving readers plenty of opportunity to test their newfound data science skills. In the Hands-on Analysis exercises, readers are challenged to solve interesting business problems using real-world data sets.
CHANTAL D. LAROSE, PHD, is an Assistant Professor of Statistics & Data Science at Eastern Connecticut State University (ECSU). She has co-authored three books on data science and predictive analytics and helped develop data science programs at ECSU and SUNY New Paltz. Her PhD dissertation, Model-Based Clustering of Incomplete Data, tackles the persistent problem of doing data science with incomplete data.

DANIEL T. LAROSE, PHD, is a Professor of Data Science and Statistics and Director of the Data Science programs at Central Connecticut State University. He has published many books on data science, data mining, predictive analytics, and statistics. His consulting clients include The Economist, Forbes, the CIT Group, and Microsoft.
Preface xi
About the Authors xv
Acknowledgements xvii
Chapter 1 Introduction to Data Science 1
1.1 Why Data Science? 1
1.2 What is Data Science? 1
1.3 The Data Science Methodology 2
1.4 Data Science Tasks 5
1.4.1 Description 6
1.4.2 Estimation 6
1.4.3 Classification 6
1.4.4 Clustering 7
1.4.5 Prediction 7
1.4.6 Association 7
Exercises 8
Chapter 2 The Basics of Python and R 9
2.1 Downloading Python 9
2.2 Basics of Coding in Python 9
2.2.1 Using Comments in Python 9
2.2.2 Executing Commands in Python 10
2.2.3 Importing Packages in Python 11
2.2.4 Getting Data into Python 12
2.2.5 Saving Output in Python 13
2.2.6 Accessing Records and Variables in Python 14
2.2.7 Setting Up Graphics in Python 15
2.3 Downloading R and RStudio 17
2.4 Basics of Coding in R 19
2.4.1 Using Comments in R 19
2.4.2 Executing Commands in R 20
2.4.3 Importing Packages in R 20
2.4.4 Getting Data into R 21
2.4.5 Saving Output in R 23
2.4.6 Accessing Records and Variables in R 24
References 26
Exercises 26
Chapter 3 Data Preparation 29
3.1 The Bank Marketing Data Set 29
3.2 The Problem Understanding Phase 29
3.2.1 Clearly Enunciate the Project Objectives 29
3.2.2 Translate These Objectives into a Data Science Problem 30
3.3 Data Preparation Phase 31
3.4 Adding an Index Field 31
3.4.1 How to Add an Index Field Using Python 31
3.4.2 How to Add an Index Field Using R 32
3.5 Changing Misleading Field Values 33
3.5.1 How to Change Misleading Field Values Using Python 34
3.5.2 How to Change Misleading Field Values Using R 34
3.6 Reexpression of Categorical Data as Numeric 36
3.6.1 How to Reexpress Categorical Field Values Using Python 36
3.6.2 How to Reexpress Categorical Field Values Using R 38
3.7 Standardizing the Numeric Fields 39
3.7.1 How to Standardize Numeric Fields Using Python 40
3.7.2 How to Standardize Numeric Fields Using R 40
3.8 Identifying Outliers 40
3.8.1 How to Identify Outliers Using Python 41
3.8.2 How to Identify Outliers Using R 42
References 43
Exercises 44
Chapter 4 Exploratory Data Analysis 47
4.1 EDA Versus HT 47
4.2 Bar Graphs with Response Overlay 47
4.2.1 How to Construct a Bar Graph with Overlay Using Python 49
4.2.2 How to Construct a Bar Graph with Overlay Using R 50
4.3 Contingency Tables 51
4.3.1 How to Construct Contingency Tables Using Python 52
4.3.2 How to Construct Contingency Tables Using R 53
4.4 Histograms with Response Overlay 53
4.4.1 How to Construct Histograms with Overlay Using Python 55
4.4.2 How to Construct Histograms with Overlay Using R 58
4.5 Binning Based on Predictive Value 58
4.5.1 How to Perform Binning Based on Predictive Value Using Python 59
4.5.2 How to Perform Binning Based on Predictive Value Using R 62
References 63
Exercises 63
Chapter 5 Preparing to Model the Data 69
5.1 The Story So Far 69
5.2 Partitioning the Data 69
5.2.1 How to Partition the Data in Python 70
5.2.2 How to Partition the Data in R 71
5.3 Validating Your Partition 72
5.4 Balancing the Training Data Set 73
5.4.1 How to Balance the Training Data Set in Python 74
5.4.2 How to Balance the Training Data Set in R 75
5.5 Establishing Baseline Model Performance 77
References 78
Exercises 78
Chapter 6 Decision Trees 81
6.1 Introduction to Decision Trees 81
6.2 Classification and Regression Trees 83
6.2.1 How to Build CART Decision Trees Using Python 84
6.2.2 How to Build CART Decision Trees Using R 86
6.3 The C5.0 Algorithm for Building Decision Trees 88
6.3.1 How to Build C5.0 Decision Trees Using Python 89
6.3.2 How to Build C5.0 Decision Trees Using R 90
6.4 Random Forests 91
6.4.1 How to Build Random Forests in Python 92
6.4.2 How to Build Random Forests in R 92
References 93
Exercises 93
Chapter 7 Model Evaluation 97
7.1 Introduction to Model Evaluation 97
7.2 Classification Evaluation Measures 97
7.3 Sensitivity and Specificity 99
7.4 Precision, Recall, and Fβ Scores 99
7.5 Method for Model Evaluation 100
7.6 An Application of Model Evaluation 100
7.6.1 How to Perform Model Evaluation Using R 103
7.7 Accounting for Unequal Error Costs 104
7.7.1 Accounting for Unequal Error Costs Using R 105
7.8 Comparing Models with and without Unequal Error Costs 106
7.9 Data‐Driven Error Costs 107
Exercises 109
Chapter 8 Naïve Bayes Classification 113
8.1 Introduction to Naive Bayes 113
8.2 Bayes Theorem 113
8.3 Maximum a Posteriori Hypothesis 114
8.4 Class Conditional Independence 114
8.5 Application of Naive Bayes Classification 115
8.5.1 Naive Bayes in Python 121
8.5.2 Naive Bayes in R 123
References 125
Exercises 126
Chapter 9 Neural Networks 129
9.1 Introduction to Neural Networks 129
9.2 The Neural Network Structure 129
9.3 Connection Weights and the Combination Function 131
9.4 The Sigmoid Activation Function 133
9.5 Backpropagation 134
9.6 An Application of a Neural Network Model 134
9.7 Interpreting the Weights in a Neural Network Model 136
9.8 How to Use Neural Networks in R 137
References 138
Exercises 138
Chapter 10 Clustering 141
10.1 What is Clustering? 141
10.2 Introduction to the K‐Means Clustering Algorithm 142
10.3 An Application of K‐Means Clustering 143
10.4 Cluster Validation 144
10.5 How to Perform K‐Means Clustering Using Python 145
10.6 How to Perform K‐Means Clustering Using R 147
Exercises 149
Chapter 11 Regression Modeling 151
11.1 The Estimation Task 151
11.2 Descriptive Regression Modeling 151
11.3 An Application of Multiple Regression Modeling 152
11.4 How to Perform Multiple Regression Modeling Using Python 154
11.5 How to Perform Multiple Regression Modeling Using R 156
11.6 Model Evaluation for Estimation 157
11.6.1 How to Perform Estimation Model Evaluation Using Python 159
11.6.2 How to Perform Estimation Model Evaluation Using R 160
11.7 Stepwise Regression 161
11.7.1 How to Perform Stepwise Regression Using R 162
11.8 Baseline Models for Regression 162
References 163
Exercises 164
Chapter 12 Dimension Reduction 167
12.1 The Need for Dimension Reduction 167
12.2 Multicollinearity 168
12.3 Identifying Multicollinearity Using Variance Inflation Factors 171
12.3.1 How to Identify Multicollinearity Using Python 172
12.3.2 How to Identify Multicollinearity in R 173
12.4 Principal Components Analysis 175
12.5 An Application of Principal Components Analysis 175
12.6 How Many Components Should We Extract? 176
12.6.1 The Eigenvalue Criterion 176
12.6.2 The Proportion of Variance Explained Criterion 177
12.7 Performing PCA with K = 4 178
12.8 Validation of the Principal Components 178
12.9 How to Perform Principal Components Analysis Using Python 179
12.10 How to Perform Principal Components Analysis Using R 181
12.11 When is Multicollinearity Not a Problem? 183
References 184
Exercises 184
Chapter 13 Generalized Linear Models 187
13.1 An Overview of General Linear Models 187
13.2 Linear Regression as a General Linear Model 188
13.3 Logistic Regression as a General Linear Model 188
13.4 An Application of Logistic Regression Modeling 189
13.4.1 How to Perform Logistic Regression Using Python 190
13.4.2 How to Perform Logistic Regression Using R 191
13.5 Poisson Regression 192
13.6 An Application of Poisson Regression Modeling 192
13.6.1 How to Perform Poisson Regression Using Python 193
13.6.2 How to Perform Poisson Regression Using R 194
Reference 195
Exercises 195
Chapter 14 Association Rules 199
14.1 Introduction to Association Rules 199
14.2 A Simple Example of Association Rule Mining 200
14.3 Support, Confidence, and Lift 200
14.4 Mining Association Rules 202
14.4.1 How to Mine Association Rules Using R 203
14.5 Confirming Our Metrics 207
14.6 The Confidence Difference Criterion 208
14.6.1 How to Apply the Confidence Difference Criterion Using R 208
14.7 The Confidence Quotient Criterion 209
14.7.1 How to Apply the Confidence Quotient Criterion Using R 210
References 211
Exercises 211
Appendix Data Summarization and Visualization 215
Part 1: Summarization 1: Building Blocks of Data Analysis 215
Part 2: Visualization: Graphs and Tables for Summarizing and Organizing Data 217
Part 3: Summarization 2: Measures of Center, Variability, and Position 222
Part 4: Summarization and Visualization of Bivariate Relationships 225
Index 231
Publication date | 24.04.2019
---|---
Series | Wiley Series on Methods and Applications in Data Mining
Place of publication | New York
Language | English
Dimensions | 152 x 231 mm
Weight | 522 g
Subject area | Computer Science ► Databases ► Data Warehouse / Data Mining
ISBN-10 | 1-119-52681-7 / 1119526817
ISBN-13 | 978-1-119-52681-0 / 9781119526810
Condition | New