Music information retrieval is an important topic in the field of information retrieval. One query paradigm in particular lets the user hum a short melodic fragment as the query and retrieves the matching music from a database; we call such a system a Query-by-Humming system. This thesis proposes a new Query-by-Humming system that uses a fuzzy inference system to extract the pitch contour of the user's humming, together with a new content-based method for extracting repeating patterns from songs. The songs in the music database are first indexed at the bar level, and the originally polyphonic MIDI files are converted into monophonic main melodies, which are then compared against the user's hummed query fragment to find matching songs. To verify the accuracy of the system, we invited 15 subjects to record 60 hummed query fragments as the system's input; after the processing described above, the Longest Common Subsequence algorithm ranks the most similar candidates as the system's output. Experimental results show that the system achieves 70% accuracy within the top 5 retrieved results.
Music Information Retrieval (MIR) is an important topic within the domain of information retrieval. In particular, Query-by-Humming (QBH) retrieves music whose melody matches a hummed query. This work proposes a novel Query-by-Humming system that uses a fuzzy inference model to extract pitch contour information from the hummed query. In addition, a new content-based model for extracting repeating patterns in music is proposed. The proposed bar-indexing method extracts the main melody, identifies repeating patterns, and handles polyphonic MIDI files. To verify the effectiveness of the presented work, 15 volunteers recorded queries that were fed to the system as input; the Longest Common Subsequence (LCS) algorithm then ranks the top-N most similar matches as the system's output. Experimental results show that the proposed system achieves 70% accuracy among the top 5 retrievals.
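The final matching step named above, Longest Common Subsequence ranking, can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the `'U'`/`'D'`/`'S'` (up/down/same) contour encoding, the song titles, and the `top_n_matches` helper are all assumptions introduced here for the example.

```python
# Hypothetical sketch: rank database melodies against a hummed query by
# Longest Common Subsequence (LCS) length over pitch-contour strings.
# The 'U'/'D'/'S' (up/down/same) encoding is an assumed representation,
# not necessarily the one used in the thesis.

def lcs_length(a: str, b: str) -> int:
    """Classic O(len(a) * len(b)) dynamic-programming LCS length."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a, 1):
        for j, cb in enumerate(b, 1):
            if ca == cb:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def top_n_matches(query: str, database: dict, n: int = 5) -> list:
    """Return the titles of the n melodies with the longest LCS score."""
    scored = sorted(
        ((lcs_length(query, contour), title)
         for title, contour in database.items()),
        reverse=True,
    )
    return [title for _, title in scored[:n]]

# Toy database of contour strings (illustrative only).
db = {
    "song_a": "UUDSUD",
    "song_b": "DDSSU",
    "song_c": "UUDUSD",
}
print(top_n_matches("UUDSD", db, n=2))
```

In the full system, each database entry would be the monophonic main-melody contour obtained from the bar-indexed, polyphonic-to-monophonic conversion step, and the top-5 list from this ranking is what the reported 70% accuracy is measured over.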