
Abstract


Research in judgment and decision making often requires comparison of multiple competing models. Researchers invoke global measures such as the rate of correct predictions or the sum of squared (or absolute) deviations of the various models as part of this evaluation process. Reliance on such measures hides the (often very high) level of agreement between the predictions of the various models and does not properly highlight the relative performance of the competing models in those critical cases where they make distinct predictions. To address this important problem, we propose the use of pair-wise comparisons of models to produce more informative and targeted comparisons of their performance, and we illustrate this procedure with data from two recently published papers. We use Multidimensional Scaling of these comparisons to map the competing models. We also demonstrate how intransitive cycles of pair-wise model performance can signal that certain models perform better for a given subset of decision problems.
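The sketch below illustrates the general idea described in the abstract, not the authors' actual procedure: pair-wise comparison of models restricted to the items where their predictions differ, a Multidimensional Scaling map of the models, and a check for intransitive cycles. The synthetic data, model names, and the choice of disagreement rate as the MDS dissimilarity are all assumptions made for illustration.

```python
# Illustrative sketch only: synthetic data and an assumed dissimilarity
# measure, not the procedure used in the paper.
from itertools import combinations

import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)

# Hypothetical setup: 200 binary decision problems, observed choices, and
# binary predictions from four competing models with different accuracies.
n_items = 200
observed = rng.integers(0, 2, size=n_items)
models = {
    name: np.where(rng.random(n_items) < acc, observed, 1 - observed)
    for name, acc in [("A", 0.80), ("B", 0.78), ("C", 0.75), ("D", 0.70)]
}

def pairwise_record(pred_a, pred_b, obs):
    """Compare two models only on the critical items where they disagree."""
    distinct = pred_a != pred_b
    wins_a = int(np.sum((pred_a == obs) & distinct))
    wins_b = int(np.sum((pred_b == obs) & distinct))
    return wins_a, wins_b, int(distinct.sum())

names = list(models)
beats = {}  # beats[(a, b)] is True if a outperforms b on their distinct items
for a, b in combinations(names, 2):
    wa, wb, n_distinct = pairwise_record(models[a], models[b], observed)
    beats[(a, b)] = wa > wb
    beats[(b, a)] = wb > wa
    print(f"{a} vs {b}: {wa}-{wb} on {n_distinct} distinct predictions")

# Map the models with Multidimensional Scaling of a pair-wise dissimilarity;
# here the (assumed) dissimilarity is the rate of distinct predictions.
D = np.zeros((len(names), len(names)))
for i, a in enumerate(names):
    for j, b in enumerate(names):
        if i != j:
            D[i, j] = float(np.mean(models[a] != models[b]))
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(D)
for name, (x, y) in zip(names, coords):
    print(f"{name}: ({x:+.3f}, {y:+.3f})")

# Flag intransitive cycles (A beats B, B beats C, C beats A), which may signal
# that different models excel on different subsets of decision problems.
for a, b, c in combinations(names, 3):
    for x, y, z in [(a, b, c), (a, c, b)]:
        if beats.get((x, y)) and beats.get((y, z)) and beats.get((z, x)):
            print(f"Intransitive cycle: {x} > {y} > {z} > {x}")
```

A global score such as overall accuracy would average over the many items where the models agree; restricting the comparison to distinct predictions, as above, is what makes the pair-wise contrasts informative.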
