
Statistical Reasoning in the Behavioral Sciences

Book | Softcover
496 pages
2021 | 7th edition
John Wiley & Sons Inc (publisher)
978-1-119-37973-7 (ISBN)
CHF 133.25 incl. VAT
Cited by more than 300 scholars, Statistical Reasoning in the Behavioral Sciences continues to provide streamlined resources and easy-to-understand information on statistics in the behavioral sciences and related fields, including psychology, education, human resources management, and sociology. Students and professionals in the behavioral sciences will develop an understanding of statistical logic and procedures, the properties of statistical devices, and the importance of the assumptions underlying statistical tools.

This revised and updated edition continues to follow the recommendations of the APA Task Force on Statistical Inference and greatly expands the information on testing hypotheses about single means. The Seventh Edition moves from a focus on the use of computers in statistics to a more precise look at statistical software. The “Point of Controversy” feature embedded throughout the text provides current discussions of exciting and hotly debated topics in the field. Readers will appreciate how the comprehensive graphs, tables, cartoons and photographs lend vibrancy to all of the material covered in the text.

PREFACE vii

ABOUT THE BOOK AND AUTHORS x

1 INTRODUCTION 1

1.1 Descriptive Statistics, 3

1.2 Inferential Statistics, 3

1.3 Our Concern: Applied Statistics, 4

1.4 Variables and Constants, 5

1.5 Scales of Measurement, 6

1.6 Scales of Measurement and Problems of Statistical Treatment, 8

1.7 Do Statistics Lie?, 9

Point of Controversy: Are Statistical Procedures Necessary?, 11

1.8 Some Tips on Studying Statistics, 12

1.9 Statistics and Computers, 12

1.10 Summary, 13

2 FREQUENCY DISTRIBUTIONS, PERCENTILES, AND PERCENTILE RANKS 16

2.1 Organizing Qualitative Data, 16

2.2 Grouped Scores, 18

2.3 How to Construct a Grouped Frequency Distribution, 19

2.4 Apparent versus Real Limits, 21

2.5 The Relative Frequency Distribution, 21

2.6 The Cumulative Frequency Distribution, 22

2.7 Percentiles and Percentile Ranks, 24

2.8 Computing Percentiles from Grouped Data, 25

2.9 Computation of Percentile Rank, 28

2.10 Summary, 28

3 GRAPHIC REPRESENTATION OF FREQUENCY DISTRIBUTIONS 32

3.1 Basic Procedures, 32

3.2 The Histogram, 33

3.3 The Frequency Polygon, 34

3.4 Choosing between a Histogram and a Polygon, 35

3.5 The Bar Diagram and the Pie Chart, 37

3.6 The Cumulative Percentage Curve, 39

3.7 Factors Affecting the Shape of Graphs, 40

3.8 Shape of Frequency Distributions, 42

3.9 Summary, 43

4 CENTRAL TENDENCY 46

4.1 The Mode, 46

4.2 The Median, 47

4.3 The Mean, 48

4.4 Properties of the Mode, 49

4.5 Properties of the Mean, 50

Point of Controversy: Is It Permissible to Calculate the Mean for Tests in the Behavioral Sciences?, 51

4.6 Properties of the Median, 52

4.7 Measures of Central Tendency in Symmetrical and Asymmetrical Distributions, 53

4.8 The Effects of Score Transformations, 54

4.9 Summary, 55

5 VARIABILITY AND STANDARD (z) SCORES 58

5.1 The Range and Semi-Interquartile Range, 58

5.2 Deviation Scores, 60

5.3 Deviational Measures: The Variance, 61

5.4 Deviational Measures: The Standard Deviation, 62

5.5 Calculation of the Variance and Standard Deviation: Raw-Score Method, 63

5.6 Calculation of the Standard Deviation with SPSS, 64

Point of Controversy: Calculating the Sample Variance: Should We Divide by n or (n − 1)?, 67

5.7 Properties of the Range and Semi-Interquartile Range, 68

5.8 Properties of the Standard Deviation, 68

5.9 How Big Is a Standard Deviation?, 69

5.10 Score Transformations and Measures of Variability, 69

5.11 Standard Scores (z Scores), 70

5.12 A Comparison of z Scores and Percentile Ranks, 73

5.13 Summary, 74

6 STANDARD SCORES AND THE NORMAL CURVE 78

6.1 Historical Aspects of the Normal Curve, 78

6.2 The Nature of the Normal Curve, 81

6.3 Standard Scores and the Normal Curve, 81

6.4 The Standard Normal Curve: Finding Areas When the Score Is Known, 83

6.5 The Standard Normal Curve: Finding Scores When the Area Is Known, 86

6.6 The Normal Curve as a Model for Real Variables, 88

6.7 The Normal Curve as a Model for Sampling Distributions, 88

Point of Controversy: How Normal Is the Normal Curve?, 89

6.8 Summary, 89

7 CORRELATION 92

7.1 Some History, 93

7.2 Graphing Bivariate Distributions: The Scatter Diagram, 95

7.3 Correlation: A Matter of Direction, 96

7.4 Correlation: A Matter of Degree, 98

7.5 Understanding the Meaning of Degree of Correlation, 99

7.6 Formulas for Pearson’s Coefficient of Correlation, 100

7.7 Calculating r from Raw Scores, 101

7.8 Calculating r with SPSS, 103

7.9 Spearman’s Rank-Order Correlation Coefficient, 106

7.10 Correlation Does Not Prove Causation, 107

7.11 The Effects of Score Transformations, 110

7.12 Cautions Concerning Correlation Coefficients, 110

7.13 Summary, 114

8 PREDICTION 118

8.1 The Problem of Prediction, 118

8.2 The Criterion of Best Fit, 120

Point of Controversy: Least-Squares Regression versus the Resistant Line, 121

8.3 The Regression Equation: Standard-Score Form, 122

8.4 The Regression Equation: Raw-Score Form, 123

8.5 Error of Prediction: The Standard Error of Estimate, 125

8.6 An Alternative (and Preferred) Formula for SYX, 127

8.7 Calculating the “Raw-Score” Regression Equation and Standard Error of Estimate with SPSS, 128

8.8 Error in Estimating Y from X, 130

8.9 Cautions Concerning Estimation of Predictive Error, 132

8.10 Prediction Does Not Prove Causation, 133

8.11 Summary, 133

9 INTERPRETIVE ASPECTS OF CORRELATION AND REGRESSION 136

9.1 Factors Influencing r: Degree of Variability in Each Variable, 136

9.2 Interpretation of r: The Regression Equation I, 137

9.3 Interpretation of r: The Regression Equation II, 139

9.4 Interpretation of r: Proportion of Variation in Y Not Associated with Variation in X, 140

9.5 Interpretation of r: Proportion of Variance in Y Associated with Variation in X, 142

9.6 Interpretation of r: Proportion of Correct Placements, 144

9.7 Summary, 145

10 PROBABILITY 147

10.1 Defining Probability, 148

10.2 A Mathematical Model of Probability, 149

10.3 Two Theorems in Probability, 150

10.4 An Example of a Probability Distribution: The Binomial, 151

10.5 Applying the Binomial, 153

10.6 Probability and Odds, 155

10.7 Are Amazing Coincidences Really That Amazing?, 155

10.8 Summary, 156

11 RANDOM SAMPLING AND SAMPLING DISTRIBUTIONS 160

11.1 Random Sampling, 161

11.2 Using a Table of Random Numbers, 163

11.3 The Random Sampling Distribution of the Mean: An Introduction, 164

11.4 Characteristics of the Random Sampling Distribution of the Mean, 166

11.5 Using the Sampling Distribution of X̄ to Determine the Probability for Different Ranges of Values of X̄, 168

11.6 Random Sampling without Replacement, 173

11.7 Summary, 173

12 INTRODUCTION TO STATISTICAL INFERENCE: TESTING HYPOTHESES ABOUT A SINGLE MEAN (z) 175

12.1 Testing a Hypothesis about a Single Mean, 176

12.2 The Null and Alternative Hypotheses, 176

12.3 When Do We Retain and When Do We Reject the Null Hypothesis?, 178

12.4 Review of the Procedure for Hypothesis Testing, 178

12.5 Dr. Brown’s Problem: Conclusion, 178

12.6 The Statistical Decision, 180

12.7 Choice of HA: One-Tailed and Two-Tailed Tests, 182

12.8 Review of Assumptions in Testing Hypotheses about a Single Mean, 183

Point of Controversy: The Single-Subject Research Design, 184

12.9 Summary, 185

13 TESTING HYPOTHESES ABOUT A SINGLE MEAN WHEN 𝜎 IS UNKNOWN (t) 187

13.1 Estimating the Standard Error of the Mean When 𝜎 Is Unknown, 187

13.2 The t Distribution, 189

13.3 Characteristics of Student’s Distribution of t, 191

13.4 Degrees of Freedom and Student’s Distribution of t, 192

13.5 An Example: Has the Violent Content of Television Programs Increased?, 193

13.6 Calculating t from Raw Scores, 196

13.7 Calculating t with SPSS, 198

13.8 Levels of Significance versus p-Values, 200

13.9 Summary, 202

14 INTERPRETING THE RESULTS OF HYPOTHESIS TESTING: EFFECT SIZE, TYPE I AND TYPE II ERRORS, AND POWER 205

14.1 A Statistically Significant Difference versus a Practically Important Difference, 205

Point of Controversy: The Failure to Publish “Nonsignificant” Results, 206

14.2 Effect Size, 207

14.3 Errors in Hypothesis Testing, 210

14.4 The Power of a Test, 212

14.5 Factors Affecting Power: Difference between the True Population Mean and the Hypothesized Mean (Size of Effect), 212

14.6 Factors Affecting Power: Sample Size, 213

14.7 Factors Affecting Power: Variability of the Measure, 214

14.8 Factors Affecting Power: Level of Significance (𝛼), 214

14.9 Factors Affecting Power: One-Tailed versus Two-Tailed Tests, 214

14.10 Calculating the Power of a Test, 216

Point of Controversy: Meta-Analysis, 217

14.11 Estimating Power and Sample Size for Tests of Hypotheses about Means, 218

14.12 Problems in Selecting a Random Sample and in Drawing Conclusions, 220

14.13 Summary, 221

15 TESTING HYPOTHESES ABOUT THE DIFFERENCE BETWEEN TWO INDEPENDENT GROUPS 224

15.1 The Null and Alternative Hypotheses, 224

15.2 The Random Sampling Distribution of the Difference between Two Sample Means, 225

15.3 Properties of the Sampling Distribution of the Difference between Means, 228

15.4 Determining a Formula for t, 228

15.5 Testing the Hypothesis of No Difference between Two Independent Means: The Dyslexic Children Experiment, 231

15.6 Use of a One-Tailed Test, 234

15.7 Calculation of t with SPSS, 234

15.8 Sample Size in Inference about Two Means, 237

15.9 Effect Size, 237

15.10 Estimating Power and Sample Size for Tests of Hypotheses about the Difference between Two Independent Means, 241

15.11 Assumptions Associated with Inference about the Difference between Two Independent Means, 242

15.12 The Random-Sampling Model versus the Random-Assignment Model, 243

15.13 Random Sampling and Random Assignment as Experimental Controls, 244

15.14 Summary, 245

16 TESTING FOR A DIFFERENCE BETWEEN TWO DEPENDENT (CORRELATED) GROUPS 249

16.1 Determining a Formula for t, 250

16.2 Degrees of Freedom for Tests of No Difference between Dependent Means, 251

16.3 An Alternative Approach to the Problem of Two Dependent Means, 251

16.4 Testing a Hypothesis about Two Dependent Means: Does Text Messaging Impair Driving?, 252

16.5 Calculating t with SPSS, 254

16.6 Effect Size, 257

16.7 Power, 258

16.8 Assumptions When Testing a Hypothesis about the Difference between Two Dependent Means, 259

16.9 Problems with Using the Dependent-Samples Design, 259

16.10 Summary, 261

17 INFERENCE ABOUT CORRELATION COEFFICIENTS 264

17.1 The Random Sampling Distribution of r, 264

17.2 Testing the Hypothesis That 𝜌 = 0, 265

17.3 Fisher’s z′ Transformation, 267

17.4 Strength of Relationship, 268

17.5 A Note about Assumptions, 268

17.6 Inference When Using Spearman’s rS, 269

17.7 Summary, 269

18 AN ALTERNATIVE TO HYPOTHESIS TESTING: CONFIDENCE INTERVALS 271

18.1 Examples of Estimation, 272

18.2 Confidence Intervals for 𝜇X, 273

18.3 The Relation between Confidence Intervals and Hypothesis Testing, 276

18.4 The Advantages of Confidence Intervals, 276

18.5 Random Sampling and Generalizing Results, 277

18.6 Evaluating a Confidence Interval, 278

Point of Controversy: Objectivity and Subjectivity in Inferential Statistics: Bayesian Statistics, 279

18.7 Confidence Intervals for 𝜇X − 𝜇Y, 280

18.8 Sample Size Required for Confidence Intervals of 𝜇X and 𝜇X − 𝜇Y, 283

18.9 Confidence Intervals for 𝜌, 285

18.10 Where Are We in Statistical Reform?, 286

18.11 Summary, 287

19 TESTING FOR DIFFERENCES AMONG THREE OR MORE GROUPS: ONE-WAY ANALYSIS OF VARIANCE (AND SOME ALTERNATIVES) 289

19.1 The Null Hypothesis, 291

19.2 The Basis of One-Way Analysis of Variance: Variation within and Between Groups, 291

19.3 Partition of the Sums of Squares, 293

19.4 Degrees of Freedom, 295

19.5 Variance Estimates and the F Ratio, 296

19.6 The Summary Table, 297

19.7 Example: Does Playing Violent Video Games Desensitize People to Real-Life Aggression?, 298

19.8 Comparison of t and F, 301

19.9 Raw-Score Formulas for Analysis of Variance, 302

19.10 Calculation of ANOVA for Independent Measures with SPSS, 303

19.11 Assumptions Associated with ANOVA, 306

19.12 Effect Size, 306

19.13 ANOVA and Power, 307

19.14 Post Hoc Comparisons, 308

19.15 Some Concerns about Post Hoc Comparisons, 310

19.16 An Alternative to the F Test: Planned Comparisons, 310

19.17 How to Construct Planned Comparisons, 311

19.18 Analysis of Variance for Repeated Measures, 314

19.19 Calculation of ANOVA for Repeated Measures with SPSS, 319

19.20 Summary, 321

20 FACTORIAL ANALYSIS OF VARIANCE: THE TWO-FACTOR DESIGN 326

20.1 Main Effects, 327

20.2 Interaction, 329

20.3 The Importance of Interaction, 331

20.4 Partition of the Sums of Squares for Two-Way ANOVA, 332

20.5 Degrees of Freedom, 336

20.6 Variance Estimates and F Tests, 337

20.7 Studying the Outcome of Two-Factor Analysis of Variance, 338

20.8 Effect Size, 340

20.9 Calculation of Two-Factor ANOVA with SPSS, 341

20.10 Planned Comparisons, 342

20.11 Assumptions of the Two-Factor Design and the Problem of Unequal Numbers of Scores, 343

20.12 Mixed Two-Factor Within-Subjects Design, 344

20.13 Calculation of the Mixed Two-Factor Within-Subjects Design with SPSS, 348

20.14 Summary, 349

21 CHI-SQUARE AND INFERENCE ABOUT FREQUENCIES 353

21.1 The Chi-Square Test for Goodness of Fit, 353

21.2 Chi-Square (𝜒²) as a Measure of the Difference between Observed and Expected Frequencies, 355

21.3 The Logic of the Chi-Square Test, 356

21.4 Interpretation of the Outcome of a Chi-Square Test, 358

21.5 Different Hypothesized Proportions in the Test for Goodness of Fit, 358

21.6 Effect Size for Goodness-of-Fit Problems, 359

21.7 Assumptions in the Use of the Theoretical Distribution of Chi-Square, 360

21.8 Chi-Square as a Test for Independence between Two Variables, 360

21.9 Finding Expected Frequencies in a Contingency Table, 362

21.10 Calculation of 𝜒² and Determination of Significance in a Contingency Table, 363

21.11 Measures of Effect Size (Strength of Association) for Tests of Independence, 364

Point of Controversy: Yates’ Correction for Continuity, 365

21.12 Power and the Chi-Square Test of Independence, 367

21.13 Summary, 368

22 SOME (ALMOST) ASSUMPTION-FREE TESTS 371

22.1 The Null Hypothesis in Assumption-Freer Tests, 372

22.2 Randomization Tests, 372

22.3 Rank-Order Tests, 374

22.4 The Bootstrap Method of Statistical Inference, 375

22.5 An Assumption-Freer Alternative to the t Test of a Difference between Two Independent Groups: The Mann–Whitney U Test, 376

Point of Controversy: A Comparison of the t Test and the Mann–Whitney U Test with Real-World Distributions, 379

22.6 An Assumption-Freer Alternative to the t Test of a Difference Between Two Dependent Groups: The Sign Test, 380

22.7 Another Assumption-Freer Alternative to the t Test of a Difference Between Two Dependent Groups: The Wilcoxon Signed-Ranks Test, 382

22.8 An Assumption-Freer Alternative to the One-Way ANOVA for Independent Groups: The Kruskal–Wallis Test, 384

22.9 An Assumption-Freer Alternative to ANOVA for Repeated Measures: Friedman’s Rank Test for Correlated Samples, 387

22.10 Summary, 389

EPILOGUE 392

APPENDIX A REVIEW OF BASIC MATHEMATICS 396

APPENDIX B LIST OF SYMBOLS 405

APPENDIX C ANSWERS TO PROBLEMS 408

APPENDIX D STATISTICAL TABLES 424

Table A: Areas under the Normal Curve Corresponding to Given Values of z, 424

Table B: The Binomial Distribution, 429

Table C: Random Numbers, 432

Table D: Student’s t Distribution, 434

Table E: The F Distribution, 436

Table F: The Studentized Range Statistic, 440

Table G: Values of the Correlation Coefficient Required for Different Levels of Significance When H0: 𝜌 = 0, 441

Table H: Values of Fisher’s z′ for Values of r, 443

Table I: The 𝜒2 Distribution, 444

Table J: Critical One-Tail Values of ΣRX for the Mann–Whitney U Test, 445

Table K: Critical Values for the Smaller of R+ or R− for the Wilcoxon Signed-Ranks Test, 447

REFERENCES 448

INDEX 454

Publication date
Place of publication New York
Language English
Dimensions 201 x 252 mm
Weight 839 g
Subject areas Humanities > Psychology
Social Sciences > Sociology > General / Reference
Social Sciences > Sociology > Empirical Social Research
ISBN-10 1-119-37973-3 / 1119379733
ISBN-13 978-1-119-37973-7 / 9781119379737
Condition New