
On the Appropriateness of Chance-Correction Terms in Agreement Measurement

Study on appropriateness of interrater chance-corrected agreement coefficients

Advisor: 陳宏

Abstract


In behavioural research, quantifying the equivalence of ratings given by different raters or by different measurement devices is an important research problem. For a given object, different raters may produce different ratings, so interrater reliability becomes a central issue; in practice, investigators want to know whether all raters behave consistently when rating. Cohen (1960) proposed the kappa coefficient (κ), an interrater agreement coefficient obtained by correcting for the agreement between two raters that arises purely by chance, and κ has since been widely used to quantify agreement between two raters on a nominal scale. In the literature, however, κ has been criticized for being affected by the base rate (the proportions of the latent classes) and for failing to properly correct for raters whose rating behaviour differs across latent classes. Gwet (2008) proposed a new model of interrater rating behaviour (Gwet's model) together with an agreement coefficient, the AC1 statistic (γ1). De Mast (2007) argued that an appropriate agreement coefficient κ* should be obtained by correcting for the agreement expected by chance (chance-corrected agreement). In this thesis we consider two models of rater behaviour: the random rating model and a partially random rating model (Gwet's model). Under each model, we use asymptotic analysis to examine whether the agreement coefficients κ and γ1 are consistent estimates of κ*, and we compare their performance with κ*.
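For reference, the two coefficients discussed above share a common chance-corrected form and differ only in how the chance-agreement term is modelled. For two raters and K nominal categories, the standard sample definitions can be written as follows (the notation p_a, p_{k+}, p_{+k} is ours, introduced here for exposition and not taken from the thesis):

\kappa = \frac{p_a - p_e^{(\kappa)}}{1 - p_e^{(\kappa)}}, \qquad p_e^{(\kappa)} = \sum_{k=1}^{K} p_{k+}\, p_{+k} \quad \text{(Cohen, 1960)}

\gamma_1 = \frac{p_a - p_e^{(\gamma)}}{1 - p_e^{(\gamma)}}, \qquad p_e^{(\gamma)} = \frac{1}{K-1}\sum_{k=1}^{K} \pi_k\,(1 - \pi_k), \qquad \pi_k = \frac{p_{k+} + p_{+k}}{2} \quad \text{(Gwet, 2008)}

Here p_a is the observed proportion of agreement and p_{k+}, p_{+k} are the two raters' marginal proportions for category k. Both coefficients take the generic form (p_a - p_e)/(1 - p_e); whether either one is a consistent estimate of κ* therefore hinges on whether its chance-agreement term p_e matches the chance agreement implied by the true rating model.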

Abstract (English)


In behavioural research applications, one often needs to quantify the homogeneity of agreement between responses given by two (or more) raters or by two (or more) measurement devices. A given object can receive different ratings from different raters, so reliability among raters becomes an important issue. In particular, investigators would like to know whether all raters classify objects in a consistent manner. Cohen (1960) proposed the kappa coefficient, κ, which corrects for chance agreement between two raters; κ is widely used in the literature for quantifying agreement among raters on a nominal scale. However, Cohen's kappa has been criticized for its dependence on the prevalence or base rate of the latent classes in the population under study, and for not properly reflecting the raters' classification abilities across latent classes. Gwet (2008) proposed an alternative interrater agreement coefficient based on a model of rater behaviour, the AC1 statistic, γ1. De Mast (2007) suggested that an appropriate chance-corrected interrater agreement coefficient κ* should be obtained by correcting for the agreement due to chance. In this thesis, we use asymptotic analysis to evaluate whether κ or γ1 is a consistent estimate of κ* when both raters follow the random rating model or Gwet's (2008) model, and we compare the performance of κ and γ1 with κ*.
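A minimal computational sketch of the two coefficients from a two-rater contingency table is given below; the function name kappa_and_ac1 and the example counts are hypothetical illustrations, not taken from the thesis.

import numpy as np

def kappa_and_ac1(table):
    """Cohen's kappa and Gwet's AC1 from a K x K two-rater contingency table
    (rows: rater 1, columns: rater 2); both are chance-corrected agreement
    coefficients of the form (p_a - p_e) / (1 - p_e)."""
    p = np.asarray(table, dtype=float)
    p = p / p.sum()                               # joint proportions
    p_a = np.trace(p)                             # observed agreement
    row, col = p.sum(axis=1), p.sum(axis=0)       # marginal proportions
    K = p.shape[0]
    pe_kappa = float(np.dot(row, col))            # chance agreement, Cohen (1960)
    pi = (row + col) / 2                          # average category prevalence
    pe_ac1 = float(np.sum(pi * (1 - pi)) / (K - 1))  # chance agreement, Gwet (2008)
    kappa = (p_a - pe_kappa) / (1 - pe_kappa)
    ac1 = (p_a - pe_ac1) / (1 - pe_ac1)
    return kappa, ac1

# Hypothetical counts with a skewed base rate: most objects fall in one category.
counts = [[80, 5],
          [7, 8]]
print(kappa_and_ac1(counts))

Under these hypothetical counts the raw agreement is 0.88, yet kappa is roughly 0.50 because its chance-agreement term is large when one category dominates, while AC1 stays near 0.84; this numerically illustrates the base-rate sensitivity of κ discussed above.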

References


[1] AGRESTI, A. (1989). An agreement model with kappa as parameter. Statistics and Probability Letters 7 271-273.
[2] AGRESTI, A. (2002). Categorical Data Analysis, 2nd ed. Wiley, New York.
[3] AICKIN, M. (1990). Maximum likelihood estimation of agreement in the constant predictive probability model, and its relation to Cohen's kappa. Biometrics 46 293-302.
[6] BISHOP, Y. M. M., FIENBERG, S. E. and HOLLAND, P. W. (2007). Discrete Multivariate Analysis: Theory and Practice. Springer, New York.
[7] CHERNOFF, H. (1956). Large-sample theory: parametric case. Ann. Math. Statist. 27 1-22.
