Hyperspectral images combine imagery and spectra: each pixel carries information from hundreds of spectral bands, which greatly improves land-cover classification accuracy, but the volume of data across those bands slows processing. Conventional pixel-based classification must handle this large data volume and does not readily capture the spatial relationships among ground objects. This study proposes an object-based classification workflow combined with a deep neural network (DNN) for hyperspectral images. The research data are the Indian Pines and Salinas scenes acquired by AVIRIS (Airborne Visible/Infrared Imaging Spectrometer), together with their ground-truth data. First, the minimum noise fraction (MNF) transform is applied to separate the noise from the data and extract the useful information, reducing the computational load of subsequent processing. Object-based image analysis (OBIA) then exploits spectral values while accounting for spatial correlation; because each object contains many pixels, classification is fast and well suited to high-resolution and complex scenes. The simple linear iterative clustering (SLIC) method segments the image into objects of different sizes and compactness, the spatial and spectral information of each object is computed, training samples are selected and assigned class labels, and finally a deep neural network performs the object-based classification. Object-based DNN classification of hyperspectral images effectively removes the salt-and-pepper effect, and its computation time is consistently shorter than that of pixel-based classification; as the image area grows, the classification accuracy and kappa value also increase and exceed those of pixel-based classification. Experiments verify that the proposed workflow achieves a classification accuracy of 94.62% on the Salinas scene while also speeding up classification.
In hyperspectral images, each pixel contains information from hundreds of spectral bands, which helps improve classification accuracy; however, the resulting data volume also slows processing. Conventional pixel-based classification rarely captures the spatial relationships among ground objects. Thus, an object-based deep neural network (DNN) workflow was developed to classify hyperspectral images. The research data included images of Indian Pines (Indiana, USA) and Salinas (California, USA) recorded by the airborne visible/infrared imaging spectrometer (AVIRIS), along with the associated ground truths. First, the minimum noise fraction transform was applied to separate the noise from the images, retaining the useful information and reducing the subsequent computational load. Then, object-based image analysis was implemented to exploit spatial correlation. Simple linear iterative clustering was used to segment the image into objects of varying size and compactness, after which the spatial and spectral information of each object was computed and training samples were selected and labelled. Finally, a DNN classified the objects. The object-based DNN classification removes the salt-and-pepper effect and requires less computing time than pixel-based classification, especially for large areas. The classification accuracy and kappa of the object-based classification were also higher than those of the pixel-based method. The proposed workflow was verified on the Salinas scene, achieving a classification accuracy of 94.62% with reduced computation time.
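To make the described pipeline concrete, the following is a minimal Python sketch of the object-based classification steps, under several stated assumptions: the hyperspectral cube is a NumPy array of shape (H, W, B); scikit-learn's PCA stands in for the MNF transform (the paper's MNF implementation is not specified here); scikit-image's slic performs the superpixel segmentation; and a small MLPClassifier stands in for the paper's DNN. All parameter values are illustrative, not those used in the study.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from skimage.segmentation import slic

def classify_objects(cube, gt_labels, n_components=10, n_segments=2000, compactness=0.1):
    """Object-based classification sketch: reduce bands, segment, aggregate, classify."""
    H, W, B = cube.shape

    # 1) Dimensionality reduction (PCA as a stand-in for the MNF transform).
    reduced = PCA(n_components=n_components).fit_transform(
        cube.reshape(-1, B)).reshape(H, W, n_components)

    # 2) SLIC superpixel segmentation on the reduced bands.
    segments = slic(reduced, n_segments=n_segments,
                    compactness=compactness, channel_axis=-1)

    # 3) Per-object spectral feature: mean reduced spectrum of each superpixel.
    seg_ids = np.unique(segments)
    feats = np.array([reduced[segments == s].mean(axis=0) for s in seg_ids])

    # 4) Label each object with the majority ground-truth class of its pixels
    #    (class 0 is treated as unlabelled background).
    obj_labels = np.array([np.bincount(gt_labels[segments == s]).argmax()
                           for s in seg_ids])
    train = obj_labels > 0

    # 5) Train a small fully connected network on the labelled objects
    #    (stand-in for the paper's DNN architecture).
    clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)
    clf.fit(feats[train], obj_labels[train])

    # Map the predicted object labels back to a per-pixel classification map.
    pred = clf.predict(feats)
    return pred[np.searchsorted(seg_ids, segments)]

Because every pixel inside a superpixel receives the same predicted class, this object-level assignment is what suppresses the salt-and-pepper effect noted in the abstract, and classifying a few thousand objects instead of tens of thousands of pixels is what reduces the computation time.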