Content-based video retrieval is one of the more suitable approaches for handling video data. However, as video collections grow larger and more complex, finding a way to index video data so that similarity retrieval can be carried out efficiently has become an important issue. This study proposes a more efficient index structure to improve the efficiency of video retrieval. We index video data based on the spatial relations between pairs of objects and represent the index data as graphs; this graph representation makes it easier to analyze how the spatial relations in two video segments change and differ. We then build an index structure over these graphs using an R-Tree and apply window queries to filter the video data, further improving retrieval efficiency. For video similarity matching, we measure the similarity between videos using substring matching and the largest complete subgraph (LCS).
With recent advances in multimedia technologies, digital TV, and information highways, more and more video data are being captured, produced, and stored. However, without appropriate techniques that make video content more accessible, these data are hardly usable, so the management of video data has become an active research field. Moreover, the differences between multimedia and textual data in continuity and dimensionality make traditional database technology unsuitable for handling multimedia information; consequently, content-based access and retrieval have become a proper solution. In this paper, we propose a new spatial index structure that alleviates these limitations and supports efficient video retrieval. We index the spatial relations between objects in a video and represent this spatial information as graphs, which makes it easier to compare any two video segments. We then organize these graphs into an index structure using an R-Tree. Based on the R-Tree structure, we can use window queries to filter video data and make video retrieval more efficient. For video similarity retrieval, we use substring matching and the largest complete subgraph (LCS) to measure video similarity.
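To make the idea of indexing pairwise spatial relations concrete, the following is a minimal sketch, not the authors' implementation: it encodes the pairwise spatial relations of the objects in one frame as a labelled graph and counts the relation edges two frames have in common as a crude similarity proxy. The object names, the coarse four-way relation labelling, the y-up coordinate convention, and the edge-overlap score are illustrative assumptions only; the paper's actual substring matching, largest-complete-subgraph computation, and R-Tree window-query filtering are not reproduced here.

from itertools import combinations

def spatial_relation(p, q):
    """Classify the relation of object p to object q from centroid positions.
    A deliberately coarse 4-way labelling, assuming y grows upward."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    if abs(dx) >= abs(dy):
        return "left-of" if dx > 0 else "right-of"
    return "below" if dy > 0 else "above"

def relation_graph(frame):
    """Build the pairwise spatial-relation graph of one frame:
    edge (a, b) -> relation label, for every unordered object pair."""
    return {
        (a, b): spatial_relation(frame[a], frame[b])
        for a, b in combinations(sorted(frame), 2)
    }

def common_relation_edges(g1, g2):
    """Edges carrying the same relation label in both graphs; their count is
    used here only as a simple stand-in for a graph-based similarity score."""
    return {e for e in g1.keys() & g2.keys() if g1[e] == g2[e]}

# Two frames (object name -> centroid) from two hypothetical video segments.
f1 = {"car": (10, 50), "tree": (80, 40), "person": (30, 90)}
f2 = {"car": (15, 55), "tree": (85, 35), "person": (70, 20)}

g1, g2 = relation_graph(f1), relation_graph(f2)
shared = common_relation_edges(g1, g2)
print(f"{len(shared)} of {len(g1)} pairwise relations preserved: {shared}")

In this toy comparison, a sequence of such per-frame graphs could be summarized as a string of relation labels for substring matching, and the overlap between two graphs could be refined into the largest complete subgraph they share, which is the direction the paper takes.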