Detailed Record
Author (Chinese): 蕭宇翔
Author (English): Hsiao, Yu-Hsiang
Title (Chinese): 馬氏-田口系統:理論及其應用
Title (English): Mahalanobis-Taguchi System: Theory and Applications
Advisor (Chinese): 蘇朝墩
Advisor (English): Su, Chao-Ton
Degree: Doctoral
University: National Tsing Hua University
Department: Industrial Engineering and Engineering Management
Student ID: 947815
Publication Year (R.O.C.): 98 (2009)
Academic Year of Graduation: 97
Language: English
Number of Pages: 121
Keywords (Chinese): 馬氏-田口系統、分類、資料類別不平衡問題、閾值、特徵選取、音訊辨識、特徵萃取、音色
Keywords (English): Mahalanobis-Taguchi System (MTS), Classification, Class imbalance problem, Threshold, Feature selection, Sound signal recognition, Feature extraction, Timbre
Statistics:
  • Recommendations: 0
  • Hits: 693
  • Rating: *****
  • Downloads: 42
  • Bookmarks: 0
In recent years, owing to the wide availability of massive amounts of data and the pressing need to turn existing data into usable information and knowledge, data mining has received considerable attention in the information industry. The Mahalanobis-Taguchi System (MTS), developed by Dr. Genichi Taguchi, is a relatively new data mining tool: a multivariate data analysis technique applicable to diagnosis, forecasting, binary classification, and feature selection, and it has been successfully applied to a variety of real-world problems.
This dissertation focuses on exploring and extending the theory of MTS in order to overcome its major theoretical and practical limitations and shortcomings and to strengthen its reliability and practicality. Finally, several real-world cases are solved to demonstrate concretely the benefits of the proposed work. The main contents of this study are summarized as follows:
On the theoretical side, this study investigates the robustness of MTS in handling the class imbalance problem, in which the number of samples in one class greatly exceeds that in the other. Class imbalance usually degrades the sensitivity of existing classification techniques to the minority class; yet the minority class is often the more important one in practice, so misclassifying it can incur a heavy loss for the whole system. In addition, determining the classification threshold of MTS has long been an open practical issue; this study therefore proposes a "probabilistic thresholding method" based on Chebyshev's theorem to obtain an appropriate threshold for binary classification. Furthermore, because multi-class problems are common in practice, a new multi-class classification and feature selection method, called the "multi-class Mahalanobis-Taguchi System", is developed on the basis of existing MTS theory. These theoretical investigations and extensions are validated on several datasets.
On the application side, three real cases demonstrate the practicality of the proposed methods. Case 1 reduces the number of radio-frequency inspection attributes for mobile phones. Because far more phones pass the test than fail it, the data are typically imbalanced. Using MTS with the probabilistic thresholding method, unnecessary inspection attributes are removed from the test process while 100% inspection accuracy is retained. Case 2 predicts the development of type 2 diabetes mellitus from gestational diabetes mellitus and identifies the associated risk factors. The data fall into three classes: no development of type 2 diabetes, signs of type 2 diabetes, and confirmed type 2 diabetes. The proposed multi-class Mahalanobis-Taguchi System effectively predicts whether a woman with gestational diabetes will develop type 2 diabetes after pregnancy, and the identified risk factors support disease prevention and health education. Finally, Case 3 builds a multi-class automatic timbre inspection system for saxophone manufacturing, aiming to reduce the instability and bias of timbre judgment by human hearing. The saxophone timbres are classified as unqualified, ordinary quality, and high quality. Besides the proposed multi-class Mahalanobis-Taguchi System, a waveform shape-based feature extraction method, designed for one-dimensional signals such as sound and vibration, is also proposed and used to extract features from saxophone sound signals. The results show that the system achieves 100% inspection accuracy.
In recent years, data mining has attracted a great deal of attention in the information industry because of the wide availability of huge amounts of data and the pressing need to turn such data into useful information and knowledge. The information and knowledge gained can be applied to business management, production control, engineering design, and so on. The Mahalanobis-Taguchi System (MTS), developed by Dr. Taguchi, is a relatively new data mining tool: a collection of methods for diagnosis, forecasting, binary classification, and feature selection using multivariate data, and it has been successfully used in various applications.
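As a concrete illustration of the core computation: MTS constructs a "Mahalanobis space" from the normal group and scores each new observation by its scaled Mahalanobis distance. The sketch below, in plain Python for a hypothetical two-variable case, is illustrative only; the full MTS procedure also involves orthogonal arrays and signal-to-noise ratios for feature selection, which are omitted here.

```python
# Minimal sketch of the scaled Mahalanobis distance used in MTS
# (two variables, pure Python; variable names are hypothetical).
def mean(xs):
    return sum(xs) / len(xs)

def std(xs, m):
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

def mts_distance(normal, obs):
    """MD = (1/k) * z^T C^{-1} z for k = 2 standardized variables,
    where the statistics come from the normal group only."""
    k = 2
    m = [mean(col) for col in normal]                    # per-variable mean
    s = [std(col, mi) for col, mi in zip(normal, m)]     # per-variable std
    # Standardize the normal group and the observation.
    z_norm = [[(x - mi) / si for x in col]
              for col, mi, si in zip(normal, m, s)]
    z = [(obs[i] - m[i]) / s[i] for i in range(k)]
    # Correlation (= covariance of standardized variables).
    n = len(normal[0])
    r = sum(a * b for a, b in zip(z_norm[0], z_norm[1])) / n
    # Inverse of the 2x2 correlation matrix [[1, r], [r, 1]].
    det = 1 - r * r
    inv = [[1 / det, -r / det], [-r / det, 1 / det]]
    quad = sum(z[i] * inv[i][j] * z[j]
               for i in range(k) for j in range(k))
    return quad / k
```

An observation at the normal-group mean gets distance 0, and the average distance of normal-group members is about 1; abnormal observations score much higher, which is what makes the distance usable for diagnosis and classification.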
This study aims to explore and extend the theory of MTS and to remedy its existing limitations and drawbacks in both the theoretical and practical domains, thereby reinforcing the reliability and practicality of MTS. Finally, several real-world case problems are solved to demonstrate concretely the benefits of implementing the above-mentioned work. The contents of this study are described as follows:
On the theoretical side, this study investigates the reliability and robustness of MTS in dealing with the "class imbalance problem". In a class-imbalanced problem, one class may be represented by a large number of examples, while the other class, usually the more important one, is represented by only a few. Class imbalance typically diminishes the performance of classification algorithms and causes classification bias: the classifier achieves high predictive accuracy on the majority class but predicts poorly on the minority class, which may lead to a great loss for the whole system. In addition, to resolve the pending practical issue of determining the classification threshold for MTS, we develop a "probabilistic thresholding method" on the basis of Chebyshev's theorem to derive an appropriate threshold for binary classification. Furthermore, because multi-class problems occur frequently in real applications, a novel multi-class classification and feature selection method, namely the multi-class Mahalanobis-Taguchi System (MMTS), is developed on the basis of MTS theory. By establishing an individual Mahalanobis space for each of the multiple classes and applying the proposed "weighted Mahalanobis distance" as the distance metric for classification, MMTS extends MTS to multi-class applications. To validate these arguments and the proposed methodologies, several datasets are used in numerical experiments and comparisons.
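One plausible form of the Chebyshev-based threshold can be sketched as follows; the exact formulation of the thesis's probabilistic thresholding method may differ in detail, and the function and variable names are hypothetical. The idea is that Chebyshev's theorem bounds the tail probability of any distribution, so a cutoff placed k standard deviations above the mean of the normal group's Mahalanobis distances caps the false-alarm rate at 1/k² without assuming normality.

```python
# Hedged sketch of a Chebyshev-based classification threshold.
def chebyshev_threshold(mds, false_alarm):
    """Return a threshold T on Mahalanobis distance such that, by
    Chebyshev's inequality  P(|X - mu| >= k*sigma) <= 1/k^2,
    the probability that a normal sample exceeds T is at most
    `false_alarm`, for any MD distribution."""
    n = len(mds)
    mu = sum(mds) / n
    var = sum((d - mu) ** 2 for d in mds) / n
    k = (1.0 / false_alarm) ** 0.5      # 1/k^2 = false_alarm
    return mu + k * var ** 0.5
```

For example, with normal-group distances averaging 1.0 and a 5% acceptable false-alarm rate, the threshold lands at mu + sqrt(20)·sigma; samples whose distance exceeds it are declared abnormal.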
On the application side, three real cases are solved using MTS and the proposed MMTS. The purpose of the first case is to reduce the number of radio-frequency inspection attributes in the mobile phone manufacturing process. In this case there are two inspection outcomes, pass and fail, and the collected data are typically imbalanced. Thus, MTS with the proposed probabilistic thresholding method is employed to detect and remove redundant inspection attributes. The results show that the number of attributes is significantly reduced without any loss of inspection accuracy. The second case concerns predicting the development of type 2 diabetes mellitus from gestational diabetes mellitus. This is a multi-class application, so the proposed MMTS is used to identify the significant risk factors for developing type 2 diabetes mellitus from gestational diabetes mellitus and to predict its occurrence. MMTS yields good prediction accuracy and uncovers the risk factors. By monitoring these risk factors, medical personnel can care effectively for women with gestational diabetes mellitus and thus help prevent the occurrence of type 2 diabetes mellitus. The final case establishes an automatic multi-class timbre classification system (AMTCS) to avoid the timbre-judgment bias caused by human hearing and to increase the accuracy and reliability of timbre quality inspection in alto saxophone manufacture. For this purpose, in addition to employing MMTS, a feature extraction method called the "waveform shape-based feature extraction method (WFEM)", designed for one-dimensional signals such as vibration and sound, is developed and used to extract the saxophone sound features. The AMTCS provides strong assistance in the final timbre inspection of alto saxophones; the results show that it achieves 100% saxophone timbre inspection accuracy.
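The MMTS classification rule described above (one Mahalanobis space per class, assignment by minimum weighted distance) can be sketched as follows. For brevity this sketch uses a diagonal (independent-feature) approximation and caller-supplied weights, both of which are simplifying assumptions; the actual MMTS uses the full correlation matrix and weights derived from Taguchi-style signal-to-noise analysis.

```python
# Sketch of the MMTS rule: build one "Mahalanobis space" per class,
# then assign a sample to the class with the smallest weighted distance.
# Diagonal approximation; features are assumed non-constant within a class.
def fit_space(samples):
    """Per-feature mean and std for one class's samples."""
    n, k = len(samples), len(samples[0])
    mu = [sum(s[j] for s in samples) / n for j in range(k)]
    sd = [(sum((s[j] - mu[j]) ** 2 for s in samples) / n) ** 0.5
          for j in range(k)]
    return mu, sd

def weighted_md(x, space, w):
    """Weighted, scaled Mahalanobis distance under the diagonal model."""
    mu, sd = space
    k = len(x)
    return sum(w[j] * ((x[j] - mu[j]) / sd[j]) ** 2 for j in range(k)) / k

def classify(x, spaces, w):
    """Pick the class whose space gives the minimum weighted distance."""
    return min(spaces, key=lambda c: weighted_md(x, spaces[c], w))
```

This mirrors how the saxophone timbre system would route a sound-feature vector to "unqualified", "ordinary", or "high quality": each quality grade owns its own reference space, and the nearest one wins.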
ABSTRACT (CHINESE) i

ABSTRACT iii

ACKNOWLEDGEMENTS vi

CONTENTS vii

TABLES x

FIGURES xii


1 INTRODUCTION 1
1.1 Research Background and Motivation 1
1.2 Research Objectives 8
1.3 Research Framework and Organization 10
2 MAHALANOBIS-TAGUCHI SYSTEM AND QUADRATIC LOSS FUNCTION THRESHOLDING APPROACH 12
2.1 Mahalanobis-Taguchi System 12
2.2 Quadratic Loss Function Thresholding Approach 16
3 PROPOSED METHODOLOGIES 19
3.1 Probabilistic Thresholding Method 20
3.2 Multi-class Mahalanobis-Taguchi System 24
4 NUMERICAL EXPERIMENTS AND PERFORMANCE EVALUATION 34
4.1 Robustness Evaluation of Mahalanobis-Taguchi System 34
4.1.1 Mahalanobis-Taguchi System vs. Prevalent Classification Techniques 35
4.1.2 Mahalanobis-Taguchi System vs. Modified Support Vector Machines 40
4.1.3 Discussion and Conclusions 42
4.2 Performance Evaluation of Multi-class Mahalanobis-Taguchi System 43
4.2.1 Performance Indices and Data Sets 43
4.2.2 Numerical Experiments 46
4.2.3 Discussion and Conclusions 53
5 MAHALANOBIS-TAGUCHI SYSTEM FOR IMPROVING MOBILE PHONE TEST PROCESS USING PROBABILISTIC THRESHOLDING METHOD 57
5.1 Case Description 57
5.2 Data Collection and Analysis 58
5.3 The Benefit and Concluding Remarks 66
6 MULTI-CLASS MAHALANOBIS-TAGUCHI SYSTEM FOR DIABETES MELLITUS PREDICTION 69
6.1 Case Description 69
6.2 Data Collection and Analysis 70
6.3 The Benefit and Concluding Remarks 73
7 MULTI-CLASS MAHALANOBIS-TAGUCHI SYSTEM FOR SAXOPHONE TIMBRE QUALITY INSPECTION USING WAVEFORM SHAPE-BASED FEATURE 74
7.1 Introduction 74
7.2 Proposed Waveform Shape-based Feature Extraction Method 79
7.3 Verification of Proposed Automatic Multi-class Timbre Classification System 85
7.3.1 Published Results 85
7.3.2 Experiments on Proposed Automatic Multi-class Timbre Classification System 89
7.3.3 Discussion 96
7.4 Case Study 98
7.4.1 Case Description 98
7.4.2 Data Collection and Analysis 99
7.5 The Benefit and Concluding Remarks 104
8 CONCLUSIONS 107
REFERENCES 112
 
 
 
 