
Combining Stochastic Approximation Monte Carlo and Parallel Tempering Algorithms in Sampling Multimodal Distributions

Advisor: 張淑惠

Abstract


The Metropolis-Hastings algorithm constructs a Markov chain to generate a sequence of random samples from a multivariate distribution. When the distribution is rugged, or a multimodal distribution is high-dimensional, the Metropolis-Hastings algorithm is easily trapped in a single local mode. Several algorithms have been proposed in the literature to improve on Metropolis-Hastings. For example, parallel tempering is a simulation method that uses auxiliary variables to modify the Metropolis-Hastings algorithm, while the stochastic approximation Monte Carlo (SAMC) algorithm exploits information from past samples to adapt it. In this study, a new algorithm is proposed by combining these two methods, so that information from both auxiliary variables and past samples is used. A simulation study is conducted to compare the performance of the new algorithm with the two algorithms above. The simulation results show that SAMC performs poorly when the modes of the multimodal distribution overlap but performs well when the modes do not overlap; parallel tempering performs well in both situations, while the performance of the combined method depends on the swap rate and the proposal function.
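The parallel tempering scheme described in the abstract — Metropolis-Hastings updates within each tempered chain plus state swaps between adjacent temperatures — can be sketched as follows. This is a minimal illustration only, not the thesis's actual simulation setup: the bimodal one-dimensional target, the temperature ladder, and the random-walk step size are all assumptions chosen for demonstration.

```python
import math
import random

def log_target(x):
    # Illustrative bimodal target: equal mixture of N(-3, 1) and N(3, 1).
    return math.log(0.5 * math.exp(-0.5 * (x + 3) ** 2)
                    + 0.5 * math.exp(-0.5 * (x - 3) ** 2))

def parallel_tempering(n_iter=5000, temps=(1.0, 2.0, 4.0, 8.0), step=1.0, seed=0):
    rng = random.Random(seed)
    xs = [0.0] * len(temps)   # one chain per temperature
    samples = []              # output: states of the cold (temperature 1) chain
    for _ in range(n_iter):
        # Metropolis-Hastings random-walk update within each tempered chain;
        # chain i targets pi(x)^(1/T_i), i.e. log_target(x) / temps[i].
        for i, t in enumerate(temps):
            prop = xs[i] + rng.gauss(0.0, step)
            log_acc = (log_target(prop) - log_target(xs[i])) / t
            if rng.random() < math.exp(min(0.0, log_acc)):
                xs[i] = prop
        # Propose swapping the states of a random adjacent temperature pair;
        # the acceptance ratio uses the inverse-temperature difference.
        i = rng.randrange(len(temps) - 1)
        log_swap = (1.0 / temps[i] - 1.0 / temps[i + 1]) * \
                   (log_target(xs[i + 1]) - log_target(xs[i]))
        if rng.random() < math.exp(min(0.0, log_swap)):
            xs[i], xs[i + 1] = xs[i + 1], xs[i]
        samples.append(xs[0])
    return samples
```

The hotter chains see a flattened version of the target and cross the valley between modes easily; accepted swaps then let the cold chain inherit those mode jumps, which is why parallel tempering handles well-separated modes that trap plain Metropolis-Hastings.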

References


[1] Liang, F., Liu, C., Carroll, R.J. (2010). Advanced Markov Chain Monte Carlo Methods: Learning from Past Samples. John Wiley & Sons.
[2] Carlo, M. (2004). "Markov Chain Monte Carlo and Gibbs Sampling." Notes, April.
[3] Liang, F., Liu, C., Carroll, R.J. (2007). "Stochastic Approximation in Monte Carlo Computation." Journal of the American Statistical Association, 102, 305-320.
[4] Earl, D.J., Deem, M.W. (2005). "Parallel Tempering: Theory, Applications, and New Perspectives." Physical Chemistry Chemical Physics, 7, 3910-3916.
[5] Goswami, G., Liu, J.S. (2007). "On Learning Strategies for Evolutionary Monte Carlo." Statistics and Computing, 17, 23-38.
