Mathematical Statistics with Applications in R -  Kandethody M. Ramachandran,  Chris P. Tsokos

Mathematical Statistics with Applications in R (eBook)

eBook Download: PDF | EPUB
2014 | 2nd edition
826 pages
Elsevier Science (publisher)
978-0-12-417132-9 (ISBN)
Mathematical Statistics with Applications in R, Second Edition, offers a modern calculus-based theoretical introduction to mathematical statistics and its applications. The book covers many modern statistical, computational, and simulation concepts that are not covered in other texts, such as the jackknife, bootstrap methods, the EM algorithm, and Markov chain Monte Carlo (MCMC) methods such as the Metropolis algorithm, the Metropolis-Hastings algorithm, and the Gibbs sampler. By combining the discussion of statistical theory with a wealth of real-world applications, the book helps students approach statistical problem solving in a logical manner.

The book provides a step-by-step procedure for solving real problems, making the topic more accessible. It includes goodness-of-fit methods to identify the probability distribution that characterizes the probabilistic behavior of a given set of data. Exercises as well as practical, real-world chapter projects are included, and each chapter has an optional section on using Minitab, SPSS, and SAS commands. The text also boasts wide coverage of ANOVA, nonparametric, MCMC, Bayesian, and empirical methods; solutions to selected problems; data sets; and an image bank for students. Advanced undergraduate and graduate students taking a one- or two-semester mathematical statistics course will find this book extremely useful in their studies.

- Step-by-step procedure for solving real problems, making the topic more accessible
- Exercises blend theory and modern applications
- Practical, real-world chapter projects
- An optional section in each chapter on using Minitab, SPSS, and SAS commands
- Wide coverage of ANOVA, nonparametric, MCMC, Bayesian, and empirical methods

Kandethody M. Ramachandran is Professor of Mathematics and Statistics at the University of South Florida. His research interests are concentrated in the areas of applied probability, statistics, machine learning, and generative AI. His research publications span a variety of areas such as control of heavy-traffic queues, stochastic delay systems, and machine learning methods applied to game theory, finance, cybersecurity, health sciences, and other emerging areas. He is also co-author of three books. He is the founding director of the Interdisciplinary Data Sciences Consortium (IDSC). He is extensively involved in activities to improve statistics and mathematics education. He is a recipient of the Teaching Incentive Program award at the University of South Florida. He is also the PI of a $2 million grant from NSF, and a co-PI of a $1.4 million grant from HHMI to improve STEM education at USF.

Front Cover 1
Mathematical Statistics with Applications in R 4
Copyright 5
Dedication 6
Contents 8
Acknowledgments 16
About the Authors 18
Preface to Second Edition 20
Preface to First Edition 20
Aim and Objective of the Textbook 21
Features 22
Flow Chart 24
Chapter 1: Descriptive Statistics 26
1.1. Introduction 27
1.1.1. Data Collection 28
1.2. Basic Concepts 29
1.2.1. Types of Data 30
Exercises 1.2 32
1.3. Sampling Schemes 33
1.3.1. Errors in Sample Data 37
1.3.2. Sample Size 37
Exercise 1.3 38
1.4. Graphical Representation of Data 38
Exercises 1.4 45
1.5. Numerical Description of Data 51
1.5.1. Numerical Measures for Grouped Data 55
1.5.2. Box Plots 58
Exercises 1.5 60
1.6. Computers and Statistics 64
1.7. Chapter Summary 64
1.8. Computer Examples 66
1.8.1. R Introduction and Examples 66
1.8.2. Minitab Examples 69
1.8.3. SPSS Examples 71
1.8.4. SAS Examples 73
Exercises 1.8 75
Projects for Chapter 1 76
1A. World Wide Web and Data Collection 76
1B. Preparing a List of Useful Internet Sites 76
1C. Dot Plots and Descriptive Statistics 76
1D. Importance of Statistics in our Society 77
1E. Uses and Misuses of Statistics 77
Chapter 2: Basic Concepts from Probability Theory 78
2.1. Introduction 79
2.2. Random Events and Probability 80
Exercises 2.2 85
2.3. Counting Techniques and Calculation of Probabilities 88
Exercises 2.3 93
2.4. The Conditional Probability, Independence, and Bayes Rule 95
Exercises 2.4 102
2.5. Random Variables and Probability Distributions 107
Exercises 2.5 114
2.6. Moments and Moment-Generating Functions 116
2.6.1. Skewness and Kurtosis 121
Exercises 2.6 127
2.7. Chapter Summary 129
2.8. Computer Examples (Optional) 130
2.8.1. Examples Using R 130
2.8.2. Minitab Computations 131
2.8.3. SPSS Examples 132
2.8.4. SAS Examples 132
Projects for Chapter 2 133
2A. The Birthday Problem 133
2B. The Hardy-Weinberg Law 134
Chapter 3: Additional Topics in Probability 136
3.1. Introduction 137
3.2. Special Distribution Functions 137
3.2.1. The Binomial Probability Distribution 138
3.2.2. Poisson Probability Distribution 142
3.2.3. Uniform Probability Distribution 145
3.2.4. Normal Probability Distribution 147
3.2.5. Gamma Probability Distribution 154
Exercises 3.2 159
3.3. Joint Probability Distributions 164
3.3.1. Covariance and Correlation 172
Exercises 3.3 174
3.4. Functions of Random Variables 177
3.4.1. Method of Distribution Functions 177
3.4.2. The pdf of Y = g(X), Where g is Differentiable and Monotone Increasing or Decreasing 179
3.4.3. Probability Integral Transformation 179
3.4.4. Functions of Several Random Variables: Method of Distribution Functions 180
3.4.5. Transformation Method 180
Exercises 3.4 183
3.5. Limit Theorems 184
Exercises 3.5 191
3.6. Chapter Summary 193
3.7. Computer Examples (Optional) 195
3.7.1. The R-Examples 195
3.7.2. Minitab Examples 196
3.7.3. SPSS Examples 198
3.7.4. SAS Examples 198
Projects for Chapter 3 200
3A. Mixture Distribution 200
3B. Generating Samples from Exponential and Poisson Probability Distribution 200
3C. Coupon Collector's Problem 201
3D. Recursive Calculation of Binomial and Poisson Probabilities 201
3E. Simulation of Poisson Approximation of Binomial 201
Chapter 4: Sampling Distributions 202
4.1. Introduction 203
4.1.1. Finite Population 206
Exercises 4.1 208
4.2. Sampling Distributions Associated with Normal Populations 209
4.2.1. Chi-Square Distribution 211
4.2.2. Student t-Distribution 216
4.2.3. F-Distribution 220
Exercises 4.2 223
4.3. Order Statistics 225
Exercises 4.3 228
4.4. Large Sample Approximations 230
4.4.1. The Normal Approximation to the Binomial Distribution 231
Exercises 4.4 234
4.5. Chapter Summary 235
4.6. Computer Examples 236
4.6.1. Examples Using R 236
4.6.2. Minitab Examples 238
4.6.3. SPSS Examples 239
4.6.4. SAS Examples 239
Projects for Chapter 4 240
4A. A Method to Obtain Random Samples from Different Distributions 240
4B. Simulation Experiments 241
4C. A Test for Normality 242
Exercises 242
Chapter 5: Statistical Estimation 244
5.1. Introduction 245
5.2. The Methods of Finding Point Estimators 246
5.2.1. The Method of Moments 247
5.2.2. The Method of Maximum Likelihood 252
Some Additional Probability Distributions 259
Exercises 5.2 265
5.3. Some Desirable Properties of Point Estimators 270
5.3.1. Unbiased Estimators 270
5.3.2. Sufficiency* 275
Exercises 5.3 283
5.4. A Method of Finding the Confidence Interval: Pivotal Method 286
Exercises 5.4 293
5.5. One Sample Confidence Intervals 294
5.5.1. Large Sample Confidence Intervals 294
5.5.2. Confidence Interval for Proportion, p 297
5.5.2.1. Margin of Error and Sample Size 297
5.5.3. Small Sample Confidence Intervals for µ 300
Exercises 5.5 303
5.6. A Confidence Interval for the Population Variance 309
Exercises 5.6 312
5.7. Confidence Interval Concerning Two Population Parameters 314
Exercises 5.7 320
5.8. Chapter Summary 323
5.9. Computer Examples 324
5.9.1. Examples Using R 324
5.9.2. Minitab Examples 326
5.9.3. SPSS Examples 327
5.9.4. SAS Examples 328
Exercises 5.9 328
Projects for Chapter 5 328
5A. Asymptotic Properties 328
5B. Robust Estimation 329
5C. Numerical Unbiasedness and Consistency 330
5D. Averaged Squared Errors (ASEs) 330
5E. Alternate Method of Estimating the Mean and Variance 331
5F. Newton-Raphson in One Dimension 331
5G. The Empirical Distribution Function 332
5H. Simulation of Coverage of the Small Confidence Intervals for µ 332
5I. Confidence Intervals Based on Sampling Distributions 333
5J. Large Sample Confidence Intervals: General Case 333
5K. Prediction Interval for an Observation from a Normal Population 334
Chapter 6: Hypothesis Testing 336
6.1. Introduction 337
6.1.1. Sample Size 345
Exercises 6.1 346
6.2. The Neyman-Pearson Lemma 348
Exercises 6.2 353
6.3. Likelihood Ratio Tests 353
Exercises 6.3 358
6.4. Hypotheses for a Single Parameter 358
6.4.1. The p-Value 358
6.4.2. Hypothesis Testing for a Single Parameter 361
Exercises 6.4 367
6.5. Testing of Hypotheses for Two Samples 370
6.5.1. Independent Samples 370
6.5.2. Dependent Samples 378
Exercises 6.5 381
6.6. Chapter Summary 384
6.7. Computer Examples 385
6.7.1. R-Examples 385
6.7.2. Minitab Examples 388
6.7.3. SPSS Examples 390
6.7.4. SAS Examples 391
Projects for Chapter 6 393
6A. Testing on Computer-Generated Samples 393
6B. Conducting a Statistical Test with Confidence Interval 394
Chapter 7: Goodness-of-Fit Tests Applications 396
7.1. Introduction 397
7.2. The Chi-Square Tests for Count Data 397
7.2.1. Testing the Parameters of a Multinomial Distribution: Goodness-of-Fit Test 399
7.2.2. Contingency Table: Test for Independence 401
Exercises 7.2 404
7.3. Goodness-of-Fit Tests to Identify the Probability Distribution 406
7.3.1. Pearson's Chi-Square Test 406
7.3.2. The Kolmogorov-Smirnov Test (One Population) 409
7.3.3. The Anderson-Darling Test 412
7.3.4. Shapiro-Wilk Normality Test 413
7.3.5. The P-P Plots and Q-Q Plots 414
Exercises 7.3 416
7.4. Applications: Parametric Analysis 417
7.4.1. Global Warming 417
7.4.2. Hurricane Katrina 418
7.4.3. National Unemployment 421
7.4.4. Brain Cancer 422
7.4.5. Rainfall 424
7.4.6. Prostate Cancer 426
Exercises 7.5 427
7.6. Chapter Summary 431
7.7. Computer Examples 431
7.7.1. R-Commands 431
7.7.2. Minitab Examples 432
Projects for Chapter 7 433
7A. Fitting a Distribution to Data 433
Chapter 8: Linear Regression Models 434
8.1. Introduction 435
8.2. The Simple Linear Regression Model 436
8.2.1. The Method of Least-Squares 438
8.2.2. Derivation of β0 and β1 439
8.2.3. Quality of the Regression 443
8.2.4. Properties of the Least-Squares Estimators for the Model Y = β0 + β1x + ε 445
8.2.5. Estimation of Error Variance σ2 447
Exercises 8.2 447
8.3. Inferences on the Least-Squares Estimators 450
8.3.1. Analysis of Variance (ANOVA) Approach to Regression 455
Exercises 8.3 457
8.4. Predicting a Particular Value of Y 458
Exercises 8.4 461
8.5. Correlation Analysis 461
Exercises 8.5 464
8.6. Matrix Notation for Linear Regression 465
8.6.1. ANOVA for Multiple Regression 469
Exercises 8.6 470
8.7. Regression Diagnostics 471
8.8. Chapter Summary 474
8.9. Computer Examples 475
8.9.1. Examples Using R 475
8.9.2. Minitab Examples 478
8.9.3. SPSS Examples 479
8.9.4. SAS Examples 479
Project for Chapter 8 481
8A. Checking the Adequacy of the Model by Scatterplots 481
8B. The Coefficient of Determination 482
8C. Outliers and High Leverage Points 482
Chapter 9: Design of Experiments 484
9.1. Introduction 485
9.2. Concepts from Experimental Design 486
9.2.1. Basic Terminology 486
9.2.2. Fundamental Principles: Replication, Randomization, and Blocking 491
9.2.3. Some Specific Designs 494
Exercises 9.2 501
9.3. Factorial Design 502
9.3.1. One-Factor-at-a-Time Design 503
9.3.2. Full Factorial Design 505
9.3.3. Fractional Factorial Design 505
Exercises 9.3 506
9.4. Optimal Design 506
9.4.1. Choice of Optimal Sample Size 507
Exercises 9.4 508
9.5. The Taguchi Methods 509
Exercises 9.5 512
9.6. Chapter Summary 513
9.7. Computer Examples 514
9.7.1. Examples Using R 514
9.7.2. Minitab Examples 516
9.7.3. SAS Examples 516
Projects for Chapter 9 518
9A. Sample Size and Power 518
9B. Effect of Temperature on Spoilage of Milk 518
Chapter 10: Analysis of Variance 520
10.1. Introduction 521
10.2. ANOVA Method for Two Treatments (Optional) 523
Exercises 10.2 528
10.3. ANOVA for Completely Randomized Design 530
10.3.1. The p-Value Approach 535
10.3.2. Testing the Assumptions for One-Way ANOVA 537
10.3.3. Model for One-Way ANOVA (Optional) 542
Exercises 10.3 542
10.4. Two-Way ANOVA, Randomized Complete Block Design 546
Exercises 10.4 552
10.5. Multiple Comparisons 553
Exercises 10.5 558
10.6. Chapter Summary 560
10.7. Computer Examples 560
10.7.1. Examples Using R 561
10.7.2. Minitab Examples 562
10.7.3. SPSS Examples 565
10.7.4. SAS Examples 565
Exercises 10.7 569
Projects for Chapter 10 570
10A. Transformations 570
10B. ANOVA with Missing Observations 571
10C. ANOVA in Linear Models 571
Chapter 11: Bayesian Estimation and Inference 574
11.1. Introduction 575
11.2. Bayesian Point Estimation 577
11.2.1. Criteria for Finding the Bayesian Estimate 583
Exercises 11.2 591
11.3. Bayesian Confidence Interval or Credible Intervals 593
Exercises 11.3 597
11.4. Bayesian Hypothesis Testing 598
Exercises 11.4 600
11.5. Bayesian Decision Theory 601
Exercises 11.5 607
11.6. Chapter Summary 608
11.7. Computer Examples 609
11.7.1. Examples with R 609
Projects for Chapter 11 612
11A. Predicting Future Observations 612
Chapter 12: Nonparametric Tests 614
12.1. Introduction 615
12.2. Nonparametric Confidence Interval 617
Exercises 12.2 620
12.3. Nonparametric Hypothesis Tests for One Sample 622
12.3.1. The Sign Test 622
12.3.2. Wilcoxon Signed Rank Test 626
12.3.3. Dependent Samples: Paired Comparison Tests 631
Exercises 12.3 632
12.4. Nonparametric Hypothesis Tests for Two Independent Samples 634
12.4.1. Median Test 634
12.4.2. The Wilcoxon Rank Sum Test 638
Exercises 12.4 641
12.5. Nonparametric Hypothesis Tests for k ≥ 2 Samples 643
12.5.1. The Kruskal-Wallis Test 643
12.5.2. The Friedman Test 646
Exercises 12.5 649
12.6. Chapter Summary 652
12.7. Computer Examples 652
12.7.1. Examples Using R 653
12.7.2. Minitab Examples 655
12.7.3. SPSS Examples 657
12.7.4. SAS Examples 658
Projects for Chapter 12 660
12A. Comparison of Wilcoxon Tests with Normal Approximation 660
12B. Randomness Test (Wald-Wolfowitz Test) 660
Chapter 13: Empirical Methods 664
13.1. Introduction 665
13.2. The Jackknife Method 665
Exercises 13.2 668
13.3. An Introduction to Bootstrap Methods 670
13.3.1. Bootstrap Confidence Intervals 675
Exercises 13.3 676
13.4. The Expectation Maximization Algorithm 676
Exercises 13.4 686
13.5. Introduction to Markov Chain Monte Carlo 687
13.5.1. Metropolis Algorithm 691
13.5.2. The Metropolis-Hastings Algorithm 695
13.5.3. Gibbs Algorithm 698
13.5.4. MCMC Issues 701
Exercises 13.5 701
13.6. Chapter Summary 703
13.7. Computer Examples 704
13.7.1. Examples Using R 704
13.7.2. Examples with Minitab 710
13.7.3. SAS Examples 711
Projects for Chapter 13 711
13A. Bootstrap Computation 711
Chapter 14: Some Issues in Statistical Applications: An Overview 712
14.1. Introduction 713
14.2. Graphical Methods 714
Exercises 14.2 718
14.3. Outliers 719
Exercises 14.3 723
14.4. Checking Assumptions 724
14.4.1. Checking the Assumption of Normality 724
14.4.2. Data Transformation 728
14.4.3. Test for Equality of Variances 731
14.4.4. Test of Independence 734
Exercises 14.4 735
14.5. Modeling Issues 737
14.5.1. A Simple Model for Univariate Data 738
14.5.2. Modeling Bivariate Data 740
Exercises 14.5 743
14.6. Parametric Versus Nonparametric Analysis 744
Exercises 14.6 746
14.7. Tying it All Together 746
Exercises 14.7 754
14.8. Conclusion 756
Appendix A: Set Theory 758
A.1. Set Operations 759
Appendix B: Review of Markov Chains 762
Appendix C: Common Probability Distributions 768
Appendix D: What is R? 770
Appendix E: Probability Tables 772
References 810
Index 816

Chapter 1

Descriptive Statistics


Abstract


In today’s society, decisions are made on the basis of data. Most scientific or industrial studies and experiments produce data, and the analysis of these data and drawing useful conclusions from them become one of the central issues. Statistics is an integral part of the quantitative approach to knowledge. The field of statistics is concerned with the scientific study of collecting, organizing, analyzing, and drawing conclusions from data. Statistics benefits all of us because of its ability to predict the future based on data we have previously gathered. Statistical methods help us to transform data into information and knowledge. Statistical concepts enable us to solve problems in a diversity of contexts, add substance to decisions, and reduce guesswork. The discipline of statistics stemmed from the need to place knowledge management on a systematic evidence base. Earlier works on statistics dealt only with the collection, organization, and presentation of data in the form of tables and charts. In order to place statistical knowledge on a systematic evidence base, we require a study of the laws of probability. In mathematical statistics we create a probabilistic model and view the data as a set of random outcomes from that model. Advances in probability theory enable us to draw valid conclusions and to make reasonable decisions on the basis of data. In the present chapter we briefly review some of the basic concepts of descriptive statistics. Such concepts will give us a visual and descriptive presentation of the problem under investigation.

Keywords

Data collection

Elementary statistics

Probabilistic model

Data representation

Numerical summary

Chapter Contents

Objective

Review the basic concepts of elementary statistics.

1.1 Introduction


In today’s society, decisions are made on the basis of data. Most scientific or industrial studies and experiments produce data, and the analysis of these data and drawing useful conclusions from them become one of the central issues. Statistics is an integral part of the quantitative approach to knowledge. The field of statistics is concerned with the scientific study of collecting, organizing, analyzing, and drawing conclusions from data. Statistics benefits all of us because of its ability to predict the future based on data we have previously gathered. Statistical methods help us to transform data into information and knowledge. Statistical concepts enable us to solve problems in a diversity of contexts, add substance to decisions, and reduce guesswork. The discipline of statistics stemmed from the need to place knowledge management on a systematic evidence base. Earlier works on statistics dealt only with the collection, organization, and presentation of data in the form of tables and charts. In order to place statistical knowledge on a systematic evidence base, we require a study of the laws of probability. In mathematical statistics we create a probabilistic model and view the data as a set of random outcomes from that model. Advances in probability theory enable us to draw valid conclusions and to make reasonable decisions on the basis of data.

Statistical methods are used in almost every discipline, including agriculture, astronomy, biology, business, communications, economics, education, electronics, geology, health sciences, and many other fields of science and engineering. Modern applications of statistical techniques include statistical communication theory and signal processing, information theory, network security and denial-of-service problems, clinical trials, artificial and biological intelligence, quality control of manufactured items, software reliability, and survival analysis. Statistics can aid us in several ways. The first of these is to assist us in designing experiments and surveys. We want our experiment to yield adequate answers to the questions that prompted it, and we would like those answers to have good precision without involving great expense. Statistically designed experiments facilitate the development of robust products that are insensitive to changes in the environment and to internal component variation. A second way that statistics assists us is in organizing, describing, summarizing, and displaying experimental data; this is termed descriptive statistics. A third use of statistics is in drawing inferences and making decisions based on data. For example, scientists may collect experimental data to prove or disprove an intuitive conjecture or hypothesis. Through the proper use of statistics we can conclude whether the hypothesis is valid or not.

In the process of solving a real-life problem using statistics, the following three basic steps may be identified. First, consistent with the objective of the problem, we identify the model, that is, the appropriate statistical method. Then, we justify the applicability of the selected model to the problem at hand. Last, we properly apply the related model to analyze the data and make the necessary decisions, which results in answering the question of our problem with minimum risk.
Starting with Chapter 2, we will study the necessary background material to proceed with the development of statistical methods for solving real-world problems.

In the present chapter we briefly review some of the basic concepts of descriptive statistics. Such concepts will give us a visual and descriptive presentation of the problem under investigation. Now, we proceed with some basic definitions.

1.1.1 Data Collection


One of the first problems that a statistician faces is obtaining data. The inferences that we make depend critically on the data that we collect and use. Data collection involves the following important steps.

General Procedure for Data Collection

1. Define the objectives of the problem and proceed to develop the experiment or survey.

2. Define the variables or parameters of interest.

3. Define the procedures of data collection and measuring techniques. This includes sampling procedures, sample size, and data-measuring devices (questionnaires, telephone interviews, etc.).

Example 1.1.1

We may be interested in estimating the average household income in a certain community. In this case, the parameter of interest is the average income of a typical household in the community. To acquire the data, we may send out a questionnaire or conduct a telephone interview. Once we have the data, we may first want to represent the data in graphical or tabular form to better understand its distributional behavior. Then we will use appropriate analytical techniques to estimate the parameter(s) of interest, in this case the average household income.
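The workflow in this example can be sketched in R. The numbers below are simulated, not data from the text; the distribution and its parameters are assumptions chosen only to produce plausible-looking incomes.

```r
# Hypothetical sketch of Example 1.1.1 (simulated data, not from the text):
# represent the sample graphically and in tabular form, then estimate the
# parameter of interest, the average household income.
set.seed(2014)                                    # reproducible simulation
income <- rlnorm(50, meanlog = 11, sdlog = 0.4)   # 50 simulated household incomes

summary(income)   # tabular view of the data's distributional behavior
hist(income)      # graphical representation of the sample
mean(income)      # point estimate of the average household income
```

In a real survey the vector `income` would hold the questionnaire or interview responses rather than simulated values; the summary, plot, and estimate are computed the same way.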

Very often a statistician is confined to data that have already been collected, possibly even collected for other purposes. This makes it very difficult to determine the quality of data. Planned collection of data, using proper techniques, is much preferred.

1.2 Basic Concepts


Statistics is the science of data. This involves collecting, classifying, summarizing, organizing, analyzing, and interpreting data. It also involves model building. Suppose we wish to study household incomes in a certain neighborhood. We may decide to randomly select, say, 50 families and examine their household incomes. As another example, suppose we wish to determine the diameter of a rod, and we take 10 measurements of the diameter. When we consider these two examples, we note that in the first case the population (the household incomes of all families in the neighborhood) really exists, whereas in the second, the population (set of all possible measurements of the diameter) is only conceptual. In either case we can visualize the totality of the population values, of which our sample data are only a small part. Thus, we define a population to be the set of all measurements or objects that are of interest and a sample to be a subset of that population. The population acts as the sampling frame from which a sample is selected. Now we introduce some basic notions commonly used in statistics.
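The distinction between a population and a sample drawn from it can be illustrated in R. Again the numbers are simulated stand-ins, with an assumed distribution, meant only to mirror the household-income example above.

```r
# Hypothetical sketch (simulated values): the incomes of all households in
# the neighborhood form the population; the 50 randomly selected families
# form the sample, a subset of that population.
set.seed(1)
population <- rlnorm(2000, meanlog = 11, sdlog = 0.4)  # every household's income
sample50   <- sample(population, size = 50)            # a simple random sample

mean(sample50)    # sample statistic, computable from the data we collect
mean(population)  # population parameter, usually unknown in practice
```

In practice we observe only `sample50`; the whole point of statistical inference is to say something about `mean(population)` from it.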

Definition 1.2.1

A population is the collection or set of all objects or measurements that are of interest to the collector.

Example 1.2.1

Suppose we wish to study the heights of all female students at a certain university. The population will be the...

Publication date (per publisher): September 14, 2014
Language: English
ISBN-10: 0-12-417132-X / 012417132X
ISBN-13: 978-0-12-417132-9 / 9780124171329