
Detailed Record

Author (Chinese): 李御國
Author (English): Li, Yu-Guo
Title (Chinese): 混合人聲之聲音場景辨識
Title (English): Classification of Acoustic Scenes with Mixtures of Human Voice and Background Audio
Advisor (Chinese): 廖文宏
Advisor (English): Liao, Wen-Hung
Committee Members (Chinese): 李建興、紀明德
Committee Members (English): Lee, Chang-Hsing; Chi, Ming-Te
Degree: Master
Institution: National Chengchi University (國立政治大學)
Department: Master's In-service Program, Department of Computer Science
Year of Publication: 2020
Academic Year of Graduation: 108 (2019-2020)
Language: Chinese
Number of Pages: 60
Keywords (Chinese): 卷積神經網路; DCASE 音訊資料集; 聲音場景辨識; 線上身份驗證
Keywords (English): Voice-based Online Identity Verification; Convolutional Neural Network; DCASE Dataset; Acoustic Scene Classification
DOI: http://doi.org/10.6814/NCCU202001422
Abstract:
The sounds in daily-life environments are never isolated events; they consist of overlapping audio sources, which makes environmental sound recognition a challenging problem. This research uses the audio data provided by Task 1 of the Detection and Classification of Acoustic Scenes and Events 2016 (DCASE2016) challenge, comprising environmental recordings of 15 scene classes such as beach and tram. These recordings are mixed with 16 human voices to create a new dataset, and the resulting voice-mixed scenes are analyzed and classified. Acoustic features are extracted as log-Mel spectrograms, a representation commonly used in audio recognition because it retains most of the salient acoustic properties. A convolutional neural network (CNN) is then employed to distinguish the overlapping acoustic scenes, achieving an overall accuracy of 79%, with 93% accuracy for the 'car' class. We expect the outcome to serve as a preprocessing stage for voice-based online identity verification.
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Thesis Organization 4
Chapter 2 Related Work 5
2.1 Literature Review 5
2.2 Survey of Tools 9
Chapter 3 Methodology 12
3.1 Basic Concept 12
3.2 Preliminary Study 13
3.2.1 Input Signal 13
3.2.2 Short-Time Fourier Transform 14
3.2.3 Mel Spectrogram 15
3.2.4 Log-Mel Spectrogram 16
3.3 Research Framework Design 17
3.3.1 Problem Statement 17
3.3.2 Research Framework 18
3.3.3 Research Tools 20
3.3.4 Preliminary Tests 21
3.3.4.1 Audio Data Preprocessing 22
3.3.4.2 Feature Descriptor Settings 22
3.3.4.3 Model Settings 22
3.3.4.4 Initial Results and Feature Descriptor Selection 23
3.3.4.5 Model Selection 26
3.3.4.6 Data Length Selection 27
3.4 Goal Setting 28
Chapter 4 Research Process and Analysis of Results 29
4.1 Research Process 30
4.1.1 Audio Preprocessing 30
4.1.2 Audio Volume Normalization 30
4.1.3 Audio Synthesis 31
4.1.4 Feature Description 31
4.1.5 Model Training 34
4.2 Prediction Scenarios 37
4.2.1 Scenario 1: Scene audio only (-20 dB) 37
4.2.2 Scenario 2: Scene volume (-20 dB) lower than voice volume (-13 dB) 39
4.2.3 Scenario 3: Scene volume (-20 dB) lower than voice volume (-13 dB) 40
4.2.4 Scenario 4: Scene volume (-20 dB) equal to voice volume (-20 dB) 42
4.2.5 Scenario 5: Scene volume (-20 dB) higher than voice volume (-35 dB) 43
4.3 Analysis and Discussion of Results 45
4.3.1 Examining Predictions by Overall Accuracy 45
4.3.2 Examining Predictions by Confusion Matrix 46
4.4 Extended Discussion 49
Chapter 5 Conclusions and Future Work 51
5.1 Conclusions 51
5.2 Future Directions 51
References 53
Appendix 56
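The prediction scenarios of Section 4.2 mix scene audio and voice after normalizing each clip to a target level. The thesis lists pydub's AudioSegment (ref. [26]) among its tools; the level-normalization arithmetic can be sketched in plain NumPy as below. The function names and the RMS-based definition of level are assumptions for illustration, not the author's implementation.

```python
import numpy as np


def rms_db(x):
    """RMS level of a signal in dB relative to full scale (samples in [-1, 1])."""
    return 20.0 * np.log10(np.sqrt(np.mean(x ** 2)))


def set_level(x, target_db):
    """Scale a signal so its RMS level equals target_db."""
    return x * 10.0 ** ((target_db - rms_db(x)) / 20.0)


def mix(scene, voice, scene_db=-20.0, voice_db=-13.0):
    """Overlay voice on scene audio at given levels.

    The defaults correspond to Scenario 2 of Section 4.2:
    scene at -20 dB, voice at -13 dB.
    """
    return set_level(scene, scene_db) + set_level(voice, voice_db)
```

Changing `voice_db` to -20.0 or -35.0 reproduces the equal-volume and scene-dominant scenarios, respectively.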
[1] ESC Dataset https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/YDEPUT
[2] UrbanSound8K
https://urbansounddataset.weebly.com/urbansound8k.html
[3] DCASE Challenge
http://dcase.community/
[4] Liao, Wen-Hung, Jin-Yao Wen, and Jen-Ho Kuo. "Streaming audio classification in smart home environments." The First Asian Conference on Pattern Recognition. IEEE, 2011.
[5] Nordby, Jon Opedal. Environmental sound classification on microcontrollers using Convolutional Neural Networks. MS thesis. Norwegian University of Life Sciences, Ås, 2019.
[6] Wu, Yuzhong, and Tan Lee. "Enhancing sound texture in CNN-based acoustic scene classification." ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019.
[7] Salamon, Justin, and Juan Pablo Bello. "Deep convolutional neural networks and data augmentation for environmental sound classification." IEEE Signal Processing Letters 24.3 (2017): 279-283.
[8] Dai, Wei, Juncheng Li, et al. "Acoustic scene recognition with deep neural networks (DCASE challenge 2016)." Robert Bosch Research and Technology Center 3 (2016).
[9] Hussain, Khalid, Mazhar Hussain, and Muhammad Gufran Khan. "An Improved Acoustic Scene Classification Method Using Convolutional Neural Networks (CNNs)." American Scientific Research Journal for Engineering, Technology, and Sciences (ASRJETS) 44.1 (2018): 68-76.
[10] Han, Yoonchang, and Kyogu Lee. "Acoustic scene classification using convolutional neural network and multiple-width frequency-delta data augmentation." arXiv preprint arXiv:1607.02383 (2016).
[11] Kim, Jaehun, and Kyogu Lee. "Empirical study on ensemble method of deep neural networks for acoustic scene classification." Proc. of IEEE AASP Challenge on Detection and Classification of Acoustic Scenes and Events (DCASE) (2016).
[12] Santoso, Andri, Chien-Yao Wang, and Jia-Ching Wang. Acoustic scene classification using network-in-network based convolutional neural network. DCASE2016 Challenge, Tech. Rep, 2016.
[13] Becker, Sören, et al. "Interpreting and explaining deep neural networks for classification of audio signals." arXiv preprint arXiv:1807.03418 (2018).
[14] Keren, Gil, and Björn Schuller. "Convolutional RNN: an enhanced model for extracting features from sequential data." 2016 International Joint Conference on Neural Networks (IJCNN). IEEE, 2016.
[15] CH.Tseng, "A First Look at Convolutional Neural Networks" (in Chinese)
https://chtseng.wordpress.com/2017/09/12/%E5%88%9D%E6%8E%A2%E5%8D%B7%E7%A9%8D%E7%A5%9E%E7%B6%93%E7%B6%B2%E8%B7%AF/
[16] Lin, Min, Qiang Chen, and Shuicheng Yan. "Network in network." arXiv preprint arXiv:1312.4400 (2013).
[17] LeCun, Yann, Yoshua Bengio, and Geoffrey Hinton. "Deep learning." Nature 521.7553 (2015): 436-444.
[18] NVIDIA DIGITS
https://developer.nvidia.com/digits
[19] Keras
https://keras.io/
[20] François Chollet, Deep Learning with Python (Traditional Chinese edition), Flag Publishing, ISBN 9789863125501, 2019.
[21] 郭秋田 et al., Introduction to Multimedia and Applications, 3rd ed. (in Chinese), Flag Publishing, ISBN 9574426246, 2008.
[22] 丁建均 (Jian-Jiun Ding), "Recent Developments in Time-Frequency Analysis" (in Chinese)
http://www.ancad.com.tw/Training/ppt_download/%E4%B8%81%E5%BB%BA%E5%9D%87%E6%95%99%E6%8E%880628.pdf
[23] Pu Sun, "Comparison of STFT and Wavelet Transform in Time-Frequency Analysis," 2014.
[24] Solovyev, Roman A., et al. "Deep Learning Approaches for Understanding Simple Speech Commands." arXiv preprint arXiv:1810.02364 (2018).
[25] Librosa
https://librosa.github.io/librosa/feature.html
[26] Pydub, AudioSegment
https://github.com/jiaaro/pydub
[27] Sklearn.preprocessing.StandardScaler
https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html
[28] Description of acoustic scene classes in the TUT Acoustic Scenes 2016 dataset
http://www.cs.tut.fi/sgn/arg/dcase2016/acoustic-scenes
 
 
 
 