Statistical Modelling 13 (5&6) (2013), 541–551

Undesirable optimality results in multiple testing?

Charles Lewis
Fordham University and Educational Testing Service
e-mail: CLewis@Fordham.edu

Dorothy T Thayer
Educational Testing Service


Abstract:

A number of authors have considered the problem of making multiple comparisons among level-one parameters in multilevel models. This is a setting in which Bayesian procedures have a natural sampling-theory interpretation and in which a natural justification can be found for methods that control a directional version of the false discovery rate. However, a basic desirable characteristic of multiple comparison procedures, namely that they should be more conservative than the corresponding 'per-comparison' procedures, appears to be violated by some optimal procedures that have been developed in a multilevel setting. This concern is illustrated in the context of a very simple multilevel model: one-way, random-effects analysis of variance.
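For concreteness, the model named in the abstract can be sketched in what is presumably its standard form (the paper's own notation and parameterization may differ):

\[
y_{ij} = \mu + \theta_j + \epsilon_{ij}, \qquad
\theta_j \sim N(0, \tau^2), \qquad
\epsilon_{ij} \sim N(0, \sigma^2),
\]

for observations \(i = 1, \ldots, n_j\) within groups \(j = 1, \ldots, J\), with the \(\theta_j\) and \(\epsilon_{ij}\) mutually independent. The multiple comparisons at issue are presumably those among the group effects \(\theta_j\) (equivalently, among the group means \(\mu + \theta_j\)).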

Keywords:

Analysis of variance; Bayesian decision theory; false discovery rate; multilevel models; multiple comparisons; random effects