
擬人思考代理人建模 : 自我覺察代理人與自我感知代理人模擬於人工社會

Thinking Humanly Agent Modeling: Self-awareness Agents and Self-perception Agents in Artificial Simulation Society

Advisor: 孫春在

Abstract


Building agents that think like humans and possess mental models is a lofty goal of artificial intelligence research, yet we remain far from reaching it. Traditional agents direct their learning focus (attention) exclusively at the external environment: the world model an agent gradually constructs during learning is a miniature of the environment it inhabits, and that model is built entirely around the coupling between environmental stimuli and the agent's behavioral responses. Facing its environment, which includes both the physical world and other agents, an agent applies specific learning methods such as artificial neural networks, genetic algorithms, or fuzzy rule-base systems to continually adjust its internal knowledge base or rule set, acquiring skills or finding problem-solving strategies that satisfy user needs or pre-assigned tasks. Clearly, the idea of directing learning attention toward the self, that is, the human capacities of self-awareness and self-perception, has long been neglected in agent research. We argue that introducing self-awareness and self-perception mechanisms into the existing architecture of learning agents not only opens a new research direction, making agents' thinking, behavior, and interaction patterns more human-like, but also brings agent-based computational social science simulations closer to the way real societies operate, lending greater credibility to the resulting experiments and conclusions.

This thesis proposes: (1) an agent model equipped with a self-awareness mechanism that improves the learning performance of traditional agents. Taking as the experimental problem the conflict between public and private interest that arises between the agents' shared environment and each agent's survival goals, we examine how the self-awareness mechanism affects individual behavior, group-level cooperation, and overall social benefit. (2) A self-perception agent model in which agents are aware of the difference between their own attitudes and the opinions they express on specific issues. The model combines self-perception theory and cognitive dissonance theory, enabling agents to self-regulate the discomfort caused by inconsistency between inner attitudes and expressed opinions. Our results show that incorporating a self-reputation mechanism into a self-learning framework makes agents' thinking more human-like and brings agent-based artificial societies closer to their real-world counterparts. In opinion dynamics simulations, a series of experiments shows that the model captures the gap between agents' inner attitudes and outwardly expressed opinions, explaining the phenomena of private acceptance and public conformity. Finally, we use self-perception agents to demonstrate the social phenomenon of pluralistic ignorance and how it can be broken.
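To make the public-versus-private interest setting above concrete, here is a minimal Python sketch of an agent that weighs its private payoff against a self-reputation term when deciding whether to cooperate. It is an illustration only, not the model implemented in this thesis; the class name SelfAwareAgent, the reputation_weight parameter, and all payoff values are assumptions made for this example.

import random

class SelfAwareAgent:
    """Toy agent that balances private payoff against self-reputation."""

    def __init__(self, reputation_weight):
        # How strongly the agent attends to its own reputation (illustrative).
        self.reputation_weight = reputation_weight
        self.self_reputation = 0.5  # running self-evaluation in [0, 1]

    def decide(self):
        # Defecting pays more privately; cooperating is favored by the
        # self-reputation term (all payoff numbers are illustrative).
        w = self.reputation_weight
        value_defect = (1 - w) * 1.0
        value_cooperate = (1 - w) * 0.6 + w * 1.0
        return "cooperate" if value_cooperate >= value_defect else "defect"

    def update_self_reputation(self, action):
        # Self-awareness step: the agent evaluates its own behavior and
        # nudges its self-reputation toward 1 (cooperate) or 0 (defect).
        target = 1.0 if action == "cooperate" else 0.0
        self.self_reputation += 0.1 * (target - self.self_reputation)

if __name__ == "__main__":
    agents = [SelfAwareAgent(reputation_weight=random.random()) for _ in range(100)]
    actions = [agent.decide() for agent in agents]
    for agent, action in zip(agents, actions):
        agent.update_self_reputation(action)
    print("cooperation rate:", actions.count("cooperate") / len(actions))

Sweeping reputation_weight from 0 to 1 in such a toy setting shows cooperation rising as agents attend more to their self-reputation, which is the kind of effect the thesis examines at the levels of individual behavior, group cooperation, and overall social benefit.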

Abstract (English)


The goal of "Thinking Humanly," described as "the exciting new effort to make computers think … machines with minds, in the full and literal sense," is among the most noble in the field of AI. However, throughout the history of artificial intelligence (AI), agent research has focused primarily on external environments, outside incentives, and behavioral responses. Internal operating mechanisms (i.e., attending to the self in the same manner as human self-awareness) have never been a concern in AI agent research. In this thesis, we discuss how to build a "Thinking Humanly" agent model from several aspects: (a) we present a novel learning agent model with self-reputation awareness capabilities. Agents built with our proposed model are capable of evaluating their own behaviors based on a mix of public- and private-interest considerations, and of testing various solutions aimed at fulfilling social standards. (b) We describe our proposal for a self-perception model in which agents are aware of differences between their attitudes and their expressed opinions on specific issues. Our agents are based on a mix of self-perception theory and cognitive dissonance theory that allows them to self-adjust the discomfort caused by inconsistencies between inner attitudes and expressed opinions. Our results show promise for integrating a self-reputation mechanism into self-learning frameworks in a manner that makes agents more human-like and brings agent-based artificial societies closer to their real-world counterparts. In opinion dynamics propagation simulations, results from a series of experiments indicate that our model captures the gap between inner attitude and expressed opinion, explaining the private acceptance/public conformity phenomenon. We conclude with a demonstration of how our proposed model can be used in sociological studies of pluralistic ignorance.
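As a rough illustration of the attitude/opinion distinction described above, the sketch below simulates agents whose expressed opinions drift toward the publicly visible norm (public conformity) while their inner attitudes adjust only slowly toward what they themselves keep expressing (private acceptance). It is a hedged toy model, not the thesis's actual algorithm; the parameters CONFORMITY and SELF_PERCEPTION and the initial distributions are assumptions for this example.

import random

N = 200                 # number of agents (illustrative)
CONFORMITY = 0.3        # pull of expressed opinion toward the visible norm
SELF_PERCEPTION = 0.01  # slow pull of inner attitude toward one's own expression
STEPS = 50

# Most agents privately lean negative, yet the publicly visible norm starts
# positive: a simple setup for a pluralistic-ignorance-like situation.
attitude = [random.uniform(-1.0, -0.2) for _ in range(N)]
opinion = [random.uniform(0.2, 1.0) for _ in range(N)]

for _ in range(STEPS):
    norm = sum(opinion) / N  # the norm each agent infers from public expressions
    for i in range(N):
        # Public conformity: expressed opinion drifts toward the perceived norm.
        opinion[i] += CONFORMITY * (norm - opinion[i])
        # Self-perception / dissonance reduction: the inner attitude slowly
        # follows what the agent keeps expressing (private acceptance).
        attitude[i] += SELF_PERCEPTION * (opinion[i] - attitude[i])

gap = sum(abs(o - a) for o, a in zip(opinion, attitude)) / N
print(f"mean expressed opinion: {sum(opinion) / N:+.2f}")
print(f"mean inner attitude:    {sum(attitude) / N:+.2f}")
print(f"mean attitude-opinion gap: {gap:.2f}")

In this toy run the expressed opinions converge on the positive norm while most inner attitudes remain negative, reproducing in miniature the attitude-opinion gap that underlies private acceptance, public conformity, and pluralistic ignorance.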

