Journal Article

Severe Testing as a Basic Concept in a Neyman–Pearson Philosophy of Induction

Deborah G. Mayo and Aris Spanos

in The British Journal for the Philosophy of Science

Published on behalf of the British Society for the Philosophy of Science

Volume 57, issue 2, pages 323–357
Published in print June 2006 | ISSN: 0007-0882
Published online April 2006 | e-ISSN: 1464-3537 | DOI:



Abstract

Despite the widespread use of key concepts of the Neyman–Pearson (N–P) statistical paradigm—type I and II errors, significance levels, power, confidence levels—they have been the subject of philosophical controversy and debate for over 60 years. Both current and long-standing problems of N–P tests stem from unclarity and confusion, even among N–P adherents, as to how a test's (pre-data) error probabilities are to be used for (post-data) inductive inference as opposed to inductive behavior. We argue that the relevance of error probabilities is to ensure that only statistical hypotheses that have passed severe or probative tests are inferred from the data. The severity criterion supplies a meta-statistical principle for evaluating proposed statistical inferences, avoiding classic fallacies from tests that are overly sensitive, as well as those not sensitive enough to particular errors and discrepancies.
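To make the severity criterion concrete: for the paper's running example—a one-sided test T(α) of H0: μ ≤ μ0 against H1: μ > μ0 for the mean of a Normal distribution with known σ—post-data severity is the probability that the test statistic d(X) would have been worse for the inferred claim than what was observed, computed under a specific discrepancy μ1. The sketch below is illustrative only (function names and the numerical inputs are mine, not the authors'), assuming this one-sided Normal setup:

```python
from math import erf, sqrt

def Phi(z: float) -> float:
    """Standard Normal CDF, via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def severity_accept(xbar: float, mu0: float, mu1: float,
                    sigma: float, n: int) -> float:
    """Severity of the claim mu <= mu1 after T(alpha) fails to reject
    H0: mu <= mu0, i.e. SEV = P(d(X) > d(x0); mu = mu1)."""
    se = sigma / sqrt(n)
    d0 = (xbar - mu0) / se       # observed test statistic
    delta1 = (mu1 - mu0) / se    # discrepancy mu1 in standardized units
    return 1.0 - Phi(d0 - delta1)

def severity_reject(xbar: float, mu0: float, mu1: float,
                    sigma: float, n: int) -> float:
    """Severity of the claim mu > mu1 after T(alpha) rejects H0,
    i.e. SEV = P(d(X) <= d(x0); mu = mu1)."""
    se = sigma / sqrt(n)
    d0 = (xbar - mu0) / se
    delta1 = (mu1 - mu0) / se
    return Phi(d0 - delta1)

# A non-significant result (d0 = 0.5) passes "mu <= 0.2" with severity ~0.69:
print(round(severity_accept(xbar=0.1, mu0=0.0, mu1=0.2, sigma=2.0, n=100), 3))
# A significant result (d0 = 2.0) passes "mu > 0.2" with severity ~0.84:
print(round(severity_reject(xbar=0.4, mu0=0.0, mu1=0.2, sigma=2.0, n=100), 3))
```

Note how severity varies with the discrepancy μ1 while the test's pre-data error probabilities stay fixed—this is the post-data, inference-oriented use of error probabilities the abstract contrasts with the behavioristic rationale.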

1 Introduction and overview

1.1 Behavioristic and inferential rationales for Neyman–Pearson (N–P) tests

1.2 Severity rationale: induction as severe testing

1.3 Severity as a meta-statistical concept: three required restrictions on the N–P paradigm

2 Error statistical tests from the severity perspective

2.1 N–P test T(α): type I, II error probabilities and power

2.2 Specifying test T(α) using p-values

3 Neyman's post-data use of power

3.1 Neyman: does failure to reject H warrant confirming H?

4 Severe testing as a basic concept for an adequate post-data inference

4.1 The severity interpretation of acceptance (SIA) for test T(α)

4.2 The fallacy of acceptance (i.e., an insignificant difference): Ms Rosy

4.3 Severity and power

5 Fallacy of rejection: statistical vs. substantive significance

5.1 Taking a rejection of H0 as evidence for a substantive claim or theory

5.2 A statistically significant difference from H0 may fail to indicate a substantively important magnitude

5.3 Principle for the severity interpretation of a rejection (SIR)

5.4 Comparing significant results with different sample sizes in T(α): large n problem

5.5 General testing rules for T(α), using the severe testing concept

6 The severe testing concept and confidence intervals

6.1 Dualities between one-sided and two-sided intervals and tests

6.2 Avoiding shortcomings of confidence intervals

7 Beyond the N–P paradigm: pure significance and misspecification tests

8 Concluding comments: have we shown severity to be a basic concept in an N–P philosophy of induction?

Journal Article.  14072 words.  Illustrated.

Subjects: Philosophy of Science ; Science and Mathematics

Full text: subscription required
