Journal Article

Bayes not Bust! Why Simplicity is no Problem for Bayesians

David L. Dowe, Steve Gardner and Graham Oppy

in The British Journal for the Philosophy of Science

Published on behalf of the British Society for the Philosophy of Science

Volume 58, Issue 4, pages 709–754
Published in print December 2007 | ISSN: 0007-0882
Published online September 2007 | e-ISSN: 1464-3537 | DOI: http://dx.doi.org/10.1093/bjps/axm033


Abstract

The advent of formal definitions of the simplicity of a theory has important implications for model selection. But what is the best way to define simplicity? Forster and Sober ([1994]) advocate the use of Akaike's Information Criterion (AIC), a non-Bayesian formalisation of the notion of simplicity. This forms an important part of their wider attack on Bayesianism in the philosophy of science. We defend a Bayesian alternative: the simplicity of a theory is to be characterised in terms of Wallace's Minimum Message Length (MML). We show that AIC is inadequate for many statistical problems where MML performs well. Whereas MML is always defined, AIC can be undefined. Whereas MML is not known ever to be statistically inconsistent, AIC can be. Even when defined and consistent, AIC performs worse than MML on small sample sizes. MML is statistically invariant under 1-to-1 re-parametrisation, thus avoiding a common criticism of Bayesian approaches. We also show that MML provides answers to many of Forster's objections to Bayesianism. Hence an important part of the attack on Bayesianism fails.
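For readers who want the two criteria compared in the abstract stated explicitly, the following is a minimal sketch in LaTeX, not drawn from the paper itself: it gives the standard AIC formula and the widely used Wallace–Freeman (MML87) approximation to the two-part message length. The symbols k (number of free parameters), h (prior density), F (Fisher information determinant), f (likelihood) and \kappa_k (a lattice quantisation constant) are introduced here for illustration; the paper's own treatment is in Section 5 and Appendix A.

% Akaike's Information Criterion: minus twice the maximised log-likelihood
% plus a penalty of two per free parameter; the candidate model with the
% smallest AIC is selected.
\mathrm{AIC}(M) = -2 \ln f\bigl(x \mid \hat{\theta}_{\mathrm{ML}}\bigr) + 2k

% Wallace-Freeman (MML87) two-part message length: the length of a message
% that first encodes the estimate \theta (to an optimal precision governed
% by the Fisher information F) and then the data x given \theta.
\mathrm{MsgLen}(\theta, x) \approx
    -\ln h(\theta) + \tfrac{1}{2}\ln F(\theta)
    - \ln f(x \mid \theta) + \tfrac{k}{2}\bigl(1 + \ln \kappa_k\bigr)

Minimising the message length trades goodness of fit against the precision to which \theta is stated, which is how MML formalises simplicity within a Bayesian framework; AIC, by contrast, involves no prior.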

1 Introduction

2 The Curve Fitting Problem

2.1 Curves and families of curves

2.2 Noise

2.3 The method of Maximum Likelihood

2.4 ML and over-fitting

3 Akaike's Information Criterion (AIC)

4 The Predictive Accuracy Framework

5 The Minimum Message Length (MML) Principle

5.1 The Strict MML estimator

5.2 An example: The binomial distribution

5.3 Properties of the SMML estimator

5.3.1  Bayesianism

5.3.2  Language invariance

5.3.3  Generality

5.3.4  Consistency and efficiency

5.4 Similarity to false oracles

5.5 Approximations to SMML

6 Criticisms of AIC

6.1 Problems with ML

6.1.1  Small sample bias in a Gaussian distribution

6.1.2  The von Mises circular and von Mises–Fisher spherical distributions

6.1.3  The Neyman–Scott problem

6.1.4  Neyman–Scott, predictive accuracy and minimum expected KL distance

6.2 Other problems with AIC

6.2.1  Univariate polynomial regression

6.2.2  Autoregressive econometric time series

6.2.3  Multivariate second-order polynomial model selection

6.2.4  Gap or no gap: a clustering-like problem for AIC

6.3 Conclusions from the comparison of MML and AIC

7 Meeting Forster's Objections to Bayesianism

7.1 The sub-family problem

7.2 The problem of approximation, or, which framework for statistics?

8 Conclusion

Appendix A: Details of the derivation of the Strict MML estimator

Appendix B: MML, AIC and the Gap vs. No Gap Problem

B.1 Expected size of the largest gap

B.2 Performance of AIC on the gap vs. no gap problem

B.3 Performance of MML on the gap vs. no gap problem

Journal Article. 17,809 words. Illustrated.

Subjects: Philosophy of Science; Science and Mathematics
