Expert judgment is widely applied in decision making and risk assessment, but when judgments exhibit overconfidence, decision quality can be seriously degraded. This study therefore examines the different sources of overconfidence in experts' probability interval estimates. To address the instability caused by the binary calibration measures used in previous research, we define a continuous calibration measure: the ratio of the EAD (Expected Absolute Deviation) of an expert's subjective probability distribution to the MAD (Mean Absolute Deviation) of the realizations. We then analyze and interpret the data with a simpler linear mixed model. Under the new calibration measure, we find that the variance among experts is smaller than the random variance among questions and realizations, a result that contradicts the findings obtained with binary calibration measures. Consequently, in practical applications of expert judgment, using seed questions to select a more professional or better-calibrated expert may be of limited benefit.
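For intuition, the following is a minimal Python sketch of how such a continuous calibration ratio and the mixed-model variance comparison might be computed. The abstract does not give the exact definitions of EAD and MAD, so the column names, the data file, the use of the expert's median estimate as the reference point, and the per-row MAD are all illustrative assumptions, not the paper's actual procedure.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per (expert, question) pair with the
# expert's median estimate `m`, the expert's expected absolute
# deviation `ead` (E|X - m| under the subjective distribution),
# and the realized value of the quantity -- assumed layout.
df = pd.read_csv("expert_judgments.csv")

# Continuous calibration measure: ratio of the expert's EAD to the
# absolute deviation of the realization from the expert's median.
# A ratio well below 1 would suggest overconfident (too-narrow)
# subjective distributions -- an illustrative reading of the measure.
df["ratio"] = df["ead"] / (df["realization"] - df["m"]).abs()

# Linear mixed model: fixed intercept, random intercept per expert.
# Comparing the expert-level random variance with the residual
# (question/realization) variance mirrors the abstract's comparison.
model = smf.mixedlm("ratio ~ 1", df, groups=df["expert"])
result = model.fit()
print(result.summary())
print("expert variance:  ", float(result.cov_re.iloc[0, 0]))
print("residual variance:", result.scale)
```

If the expert-level variance printed here is small relative to the residual variance, most of the variation in calibration is attributable to questions and realizations rather than to differences between experts, which is the pattern the abstract reports.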