This thesis adopts a binaural-microphone technique for sound source localization. Head-related Transfer Functions (HRTFs) are first used to generate the left- and right-ear sound signals in an anechoic environment. The Auditory Peripheral Module of the IPEM Toolbox, a MATLAB library, then simulates the cochlea, converting the input sound into tonotopically arranged neural signals in 40 frequency channels. Finally, the neural simulation tool Nengo is used to implement the principal modules of the human auditory pathway: models of the Medial Superior Olive (MSO) and the Lateral Superior Olive (LSO) compute the interaural time difference (ITD) and the interaural level difference (ILD), respectively, and an Inferior Colliculus (IC) model integrates the MSO and LSO outputs to estimate the azimuth of the sound source, completing a bio-inspired auditory sound source localization system. Numerical experiments show that the system reaches an average accuracy of 82% in an anechoic environment; in echoic environments with sufficiently long delays, its localization accuracy degrades. Adding echo-processing mechanisms of the inferior colliculus in future work should make the system better suited to more realistic auditory environments.
In this thesis, we use a binaural-microphone technique for sound source localization and apply Head-related Transfer Functions (HRTFs) to generate the sound signals reaching the left and right ears in an anechoic environment. The auditory sound source localization system is built on bio-inspired structures and mechanisms, such as tonotopic organization and the operating principles of the biological nervous system. The Auditory Peripheral Module in the IPEM Toolbox, a MATLAB library, is then applied to simulate the cochlea and convert the sound signals into neural firing rates in 40 frequency channels. After the cochlear model, Nengo, a software package for simulating neural systems, is used to model the Medial Superior Olive (MSO) and the Lateral Superior Olive (LSO), which compute the interaural time difference (ITD) and the interaural level difference (ILD), respectively. The Inferior Colliculus (IC) is finally added to estimate the location of the sound source by integrating the outputs of the MSO and LSO. Numerical experiments show 82% accuracy in an anechoic environment, but the system becomes less accurate when echoes with sufficient delay are also received. Nevertheless, we believe that our system can achieve better accuracy if the echo processing performed in the IC is simulated as well; with such a revision, it should be better adapted to realistic auditory environments.
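To make the MSO/LSO/IC structure described above concrete, the following is a minimal, illustrative Nengo sketch, not the implementation used in this thesis. The input signals (pure tones with a small assumed delay and level difference, standing in for the 40-channel IPEM Toolbox output), the ensemble sizes, and the decoded cue functions are all placeholder assumptions chosen only to show how the three populations could be wired together.

    # Illustrative sketch only: placeholder tones stand in for the cochlear
    # model output, and the decoded functions are crude ITD/ILD proxies.
    import numpy as np
    import nengo

    ITD = 0.0005  # assumed interaural time difference (s)
    ILD = 0.8     # assumed right-ear attenuation standing in for a level difference

    model = nengo.Network(label="binaural localization sketch")
    with model:
        # Placeholder left/right "cochlear channel" drives.
        left = nengo.Node(lambda t: np.sin(2 * np.pi * 500 * t))
        right = nengo.Node(lambda t: ILD * np.sin(2 * np.pi * 500 * (t - ITD)))

        # MSO-like population: represents both ear signals so a timing cue can be decoded.
        mso = nengo.Ensemble(n_neurons=200, dimensions=2, radius=1.5)
        # LSO-like population: represents both ear signals so a level cue can be decoded.
        lso = nengo.Ensemble(n_neurons=200, dimensions=2, radius=1.5)
        nengo.Connection(left, mso[0])
        nengo.Connection(right, mso[1])
        nengo.Connection(left, lso[0])
        nengo.Connection(right, lso[1])

        # IC-like population: integrates the two cues into a single azimuth-like value.
        ic = nengo.Ensemble(n_neurons=300, dimensions=1)
        nengo.Connection(mso, ic, function=lambda x: x[0] - x[1])            # crude timing cue
        nengo.Connection(lso, ic, function=lambda x: x[0] ** 2 - x[1] ** 2)  # crude level cue

        azimuth_probe = nengo.Probe(ic, synapse=0.01)

    with nengo.Simulator(model) as sim:
        sim.run(0.1)
    print(sim.data[azimuth_probe][-10:])  # decoded IC output near the end of the run

In the actual system, the decoded IC value would be mapped to an azimuth estimate per frequency channel and combined across the 40 channels, whereas this sketch collapses everything into a single pair of signals for readability.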