With the growth of computing power, we are now able to organize, edit, and summarize our own digital music collections. The tools available, however, are still based on traditional timeline cut-and-paste operations. In this thesis we present an automatic procedure that analyzes the segment structure of a given musical piece and visualizes it, aiding both music understanding and editing. The procedure can also generate a representative clip of a song without sacrificing auditory continuity: the clip is assembled from segments that actually occur in the original recording, rearranged so that no obvious audible artifact appears at the segment boundaries.