[1] J. C. W. Debuse and V. J. Rayward-Smith, “Feature subset selection within a simulated annealing data mining algorithm,” Journal of Intelligent Information Systems, vol. 9, pp. 57-81, 1997.
[2] B. Walczak and D. L. Massart, “Rough sets theory,” Chemometrics and Intelligent Laboratory Systems, vol. 47, pp. 1-16, 1999.
[3] R. A. Johnson and D. W. Wichern, Applied Multivariate Statistical Analysis, Prentice-Hall, 1998.
[4] H. Kim and G. J. Koehler, “Theory and practice of decision tree induction,” Omega, vol. 23(6), pp. 637-652, 1995.
[5] N. Japkowicz, “Learning from imbalanced data sets: a comparison of various strategies,” Learning from Imbalanced Data Sets: The AAAI Workshop, pp. 10-15, 2000.
[6] N. Japkowicz and S. Stephen, “The class imbalance problem: a systematic study,” Intelligent Data Analysis, vol. 6(5), pp. 429-450, 2002.
[7] C. Phua, D. Alahakoon, and V. Lee, “Minority report in fraud detection: classification of skewed data,” SIGKDD Explorations, vol. 6(1), pp. 50-59, 2004.
[8] N. V. Chawla, K. Bowyer, L. Hall, and W. Kegelmeyer, “SMOTE: synthetic minority over-sampling technique,” Journal of Artificial Intelligence Research, vol. 16, pp. 321-357, 2002.
[9] J. W. Grzymala-Busse, J. Stefanowski, and S. Wilk, “A comparison of two approaches to data mining from imbalanced data,” Lecture Notes in Computer Science, vol. 3213, pp. 757-763, 2004.
[10] P. C. Pendharkar, J. A. Rodger, G. J. Yaverbaum, N. Herman, and M. Benner, “Association, statistical, mathematical and neural approaches for mining breast cancer patterns,” Expert Systems with Applications, vol. 17, pp. 223-232, 1999.
[11] M. A. Maloof, “Learning when data sets are imbalanced and when costs are unequal and unknown,” ICML-2003 Workshop on Learning from Imbalanced Data Sets II, 2003.
[12] G. Batista, R. C. Prati, and M. C. Monard, “A study of the behavior of several methods for balancing machine learning training data,” SIGKDD Explorations, vol. 6(1), pp. 20-29, 2004.
[13] H. Guo and H. L. Viktor, “Learning from imbalanced data sets with boosting and data generation: the DataBoost-IM approach,” SIGKDD Explorations, vol. 6(1), pp. 30-39, 2004.
[14] N. V. Chawla, N. Japkowicz, and A. Kolcz, “Editorial: special issue on learning from imbalanced data sets,” SIGKDD Explorations, vol. 6(1), pp. 1-6, 2004.
[15] K. Huang, H. Yang, I. King, and M. Lyu, “Learning classifiers from imbalanced data based on biased minimax probability machine,” Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’04), pp. 558-563, 2004.
[16] G. Wu and E. Y. Chang, “KBA: kernel boundary alignment considering imbalanced data distribution,” IEEE Transactions on Knowledge and Data Engineering, vol. 17(6), pp. 786-794, 2005.
[17] G. Wu and E. Chang, “Adaptive feature-space conformal transformation for imbalanced data learning,” Proceedings of the 20th International Conference on Machine Learning, pp. 816-823, 2003.
[18] Y. Freund and R. E. Schapire, “A decision-theoretic generalization of on-line learning and an application to boosting,” Journal of Computer and System Sciences, vol. 55(1), pp. 119-139, 1997.
[19] N. V. Chawla, A. Lazarevic, L. O. Hall, and K. W. Bowyer, “SMOTEBoost: improving prediction of the minority class in boosting,” Proceedings of Principles of Knowledge Discovery in Databases, pp. 107-119, 2003.
[20] H. Guo and H. L. Viktor, “Learning from imbalanced data sets with boosting and data generation: the DataBoost-IM approach,” ACM SIGKDD Explorations Newsletter, vol. 6(1), pp. 30-39, 2004.
[21] D. Mease, A. J. Wyner, and A. Buja, “Boosted classification trees and class probability/quantile estimation,” Journal of Machine Learning Research, vol. 8, pp. 409-439, 2007.
[22] Y. Sun, M. S. Kamel, A. K. C. Wong, and Y. Wang, “Cost-sensitive boosting for classification of imbalanced data,” Pattern Recognition, vol. 40(12), pp. 3358-3378, 2007.
[23] W. Fan, S. J. Stolfo, J. Zhang, and P. K. Chan, “AdaCost: misclassification cost-sensitive boosting,” Proceedings of the International Conference on Machine Learning, pp. 97-105, 1999.
[24] L. Breiman, “Bagging predictors,” Machine Learning, vol. 24(2), pp. 123-140, 1996.
[25] S. Hido and H. Kashima, “Roughly balanced bagging for imbalanced data,” http://www.siam.org/proceedings/datamining/2008/dm08_13_hido.pdf.
[26] X. Zhu, “Lazy bagging for classifying imbalanced data,” 7th IEEE International Conference on Data Mining, pp. 763-768, 2007.
[27] D. Tao, X. Tang, X. Li, and X. Wu, “Asymmetric bagging and random subspace for support vector machines-based relevance feedback in image retrieval,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28(7), pp. 1088-1099, 2006.
[28] J. Zhang and I. Mani, “kNN approach to unbalanced data distributions: a case study involving information extraction,” Workshop on Learning from Imbalanced Datasets (ICML’03), 2003.
[29] G. Taguchi, S. Chowdhury, and Y. Wu, The Mahalanobis-Taguchi System, McGraw-Hill, New York, NY, 2001.
[30] G. Taguchi and R. Jugulum, The Mahalanobis-Taguchi Strategy, John Wiley & Sons, New York, NY, 2002.
[31] W. H. Woodall, R. Koudelik, K. L. Tsui, S. B. Kim, Z. G. Stoumbos, and C. P. Carvounis, “A review and analysis of the Mahalanobis-Taguchi System,” Technometrics, vol. 45(1), pp. 1-15, 2003.
[32] B. Abraham and A. M. Variyath, “Discussion - a review and analysis of the Mahalanobis-Taguchi System,” Technometrics, vol. 45(1), pp. 22-25, 2003.
[33] D. M. Hawkins, “Discussion - a review and analysis of the Mahalanobis-Taguchi System,” Technometrics, vol. 45(1), pp. 25-29, 2003.
[34] J. Rajesh, G. Taguchi, and S. Taguchi, “Discussion - a review and analysis of the Mahalanobis-Taguchi System,” Technometrics, vol. 45(1), pp. 16-21, 2003.
[35] J. Srinivasaraghavan and V. Allada, “Application of Mahalanobis distance as a lean assessment metric,” International Journal of Advanced Manufacturing Technology, vol. 29, pp. 1159-1168, 2006.
[36] T. Riho, A. Suzuki, J. Oro, K. Ohmi, and H. Tanaka, “The yield enhancement methodology for invisible defects using the MTS+ method,” IEEE Transactions on Semiconductor Manufacturing, vol. 18(4), pp. 561-568, 2005.
[37] P. Das and S. Datta, “Exploring the effects of chemical composition in hot rolled steel product using Mahalanobis distance scale under Mahalanobis-Taguchi System,” Computational Materials Science, vol. 38(4), pp. 671-677, 2007.
[38] G. Taguchi, S. Chowdhury, and Y. Wu, Taguchi’s Quality Engineering Handbook, John Wiley & Sons, Hoboken, NJ, 2005.
[39] B. Schölkopf and A. J. Smola, Learning with Kernels, The MIT Press, Cambridge, MA, 2002.
[40] C. W. Hsu and C. J. Lin, “A comparison of methods for multiclass support vector machines,” IEEE Transactions on Neural Networks, vol. 13(2), pp. 415-425, 2002.
[41] M. E. Tipping, “Sparse Bayesian learning and the relevance vector machine,” Journal of Machine Learning Research, vol. 1, pp. 211-244, 2001.
[42] I. W. Tsang, J. T. Kwok, and P. M. Cheung, “Core vector machines: fast SVM training on very large data sets,” Journal of Machine Learning Research, vol. 6, pp. 363-392, 2005.
[43] D. W. Patterson, Artificial Neural Networks: Theory and Applications, Prentice Hall, New York, NY, 1996.
[44] Y. Le Cun, B. Boser, J. Denker, D. Henderson, R. Howard, W. Hubbard, and L. Jackel, “Backpropagation applied to handwritten zip code recognition,” Neural Computation, vol. 1, pp. 541-551, 1989.
[45] O. Chapelle, P. Haffner, and V. N. Vapnik, “Support vector machines for histogram-based image classification,” IEEE Transactions on Neural Networks, vol. 10(5), pp. 1055-1064, 1999.
[46] M. Shami and W. Verhelst, “An evaluation of the robustness of existing supervised machine learning approaches to the classification of emotions in speech,” Speech Communication, vol. 49(3), pp. 201-212, 2007.
[47] S. Thamarai Selvi, S. Arumugam, and L. Ganesan, “BIONET: an artificial neural network model for diagnosis of diseases,” Pattern Recognition Letters, vol. 21(8), pp. 721-740, 2000.
[48] W. Lam, M. Ruiz, and P. Srinivasan, “Automatic text categorization and its application to text retrieval,” IEEE Transactions on Knowledge and Data Engineering, vol. 11(6), pp. 865-879, 1999.
[49] J. Weston and C. Watkins, “Multi-class support vector machines,” Technical Report CSD-TR-98-04, Royal Holloway, University of London, Egham, UK, 1998.
[50] V. N. Vapnik, Statistical Learning Theory, John Wiley & Sons, New York, NY, 1998.
[51] S. Asharaf, M. N. Murty, and S. K. Shevade, “Multiclass core vector machine,” Proceedings of the 24th International Conference on Machine Learning, Corvallis, OR, 2007.
[52] H. Zhang and J. Malik, “Selecting shape features using multi-class relevance vector machine,” Technical Report UCB/EECS-2005-6, EECS Department, University of California, Berkeley, 2005.
[53] U. H.-G. Kreßel, “Pairwise classification and support vector machines,” Advances in Kernel Methods: Support Vector Learning, pp. 255-268, The MIT Press, Cambridge, MA, 1999.
[54] G. Ou and Y. L. Murphey, “Multi-class pattern classification using neural networks,” Pattern Recognition, vol. 40(1), pp. 4-18, 2007.
[55] T. G. Dietterich and G. Bakiri, “Solving multiclass learning problems via error-correcting output codes,” Journal of Artificial Intelligence Research, vol. 2, pp. 263-286, 1995.
[56] J. Wu, J. G. Zhou, and P. L. Yan, “Incremental proximal support vector classifier for multi-class classification,” Proceedings of the International Conference on Machine Learning and Cybernetics, vol. 5, pp. 3201-3206, 2004.
[57] Y. Tian, Z. Qi, and N. Deng, “A new support vector machine for multi-class classification,” The Fifth International Conference on Computer and Information Technology, pp. 18-22, 2005.
[58] R. Anand, K. Mehrotra, C. K. Mohan, and S. Ranka, “Efficient classification for multiclass problems using modular neural networks,” IEEE Transactions on Neural Networks, vol. 6(1), pp. 117-124, 1995.
[59] R. Duda, P. Hart, and D. Stork, Pattern Classification, John Wiley & Sons, New York, NY, 2001.
[60] F. Masulli and G. Valentini, “Effectiveness of error correcting output codes in multiclass learning problems,” Lecture Notes in Computer Science, vol. 1857, pp. 107-116, 2000.
[61] Y. H. Hsiao, “An evaluation of the robustness of MTS for imbalanced data - a case study of the mobile phone test process,” M.S. Thesis, Department of Industrial Engineering and Management, National Chiao Tung University, Taiwan, 2005.
[62] M. Kubat and S. Matwin, “Addressing the curse of imbalanced training sets: one-sided selection,” Proceedings of the 14th International Conference on Machine Learning, 1997.
[63] R. Akbani, S. Kwek, and N. Japkowicz, “Applying support vector machines to imbalanced datasets,” Machine Learning: ECML 2004, vol. 3201, pp. 39-50, 2004.
[64] C. C. Chang and C. J. Lin, LIBSVM: A Library for Support Vector Machines, 2001. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[65] T. K. Ho and M. Basu, “Complexity measures of supervised classification problems,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24(3), pp. 289-300, 2002.
[66] T. Evgeniou, M. Pontil, C. Papageorgiou, and T. Poggio, “Image representations and feature selection for multimedia database search,” IEEE Transactions on Knowledge and Data Engineering, vol. 15(4), pp. 911-920, 2003.
[67] A. Sun, E. P. Lim, W. K. Ng, and J. Srivastava, “Blocking reduction strategies in hierarchical text classification,” IEEE Transactions on Knowledge and Data Engineering, vol. 16(10), pp. 1305-1308, 2004.
[68] G. Wu and E. Chang, “Class-boundary alignment for imbalanced dataset learning,” ICML 2003 Workshop on Learning from Imbalanced Data Sets II, Washington, DC, 2003.
[69] A. Kalousis, J. Prados, and M. Hilario, “Stability of feature selection algorithms,” Fifth IEEE International Conference on Data Mining, 2005.
[70] R. Tibshirani, “Regression shrinkage and selection via the lasso,” Journal of the Royal Statistical Society, Series B, vol. 58(1), pp. 267-288, 1996.
[71] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani, “Least angle regression,” The Annals of Statistics, vol. 32, pp. 407-451, 2004.
[72] I. Guyon, J. Weston, S. Barnhill, and V. Vapnik, “Gene selection for cancer classification using support vector machines,” Machine Learning, vol. 46, pp. 389-422, 2002.
[73] A. Rakotomamonjy, “Variable selection using SVM-based criteria,” Journal of Machine Learning Research, vol. 3, pp. 1357-1370, 2003.
[74] L. Shih, J. D. M. Rennie, Y. H. Chang, and D. R. Karger, “Text bundling: statistics-based data reduction,” Proceedings of the Twentieth International Conference on Machine Learning, Washington, DC, 2003.
[75] X. B. Li, “Data reduction via adaptive sampling,” Communications in Information and Systems, vol. 2(1), pp. 53-68, 2002.
[76] H. Liu and H. Motoda, Instance Selection and Construction for Data Mining, Kluwer Academic Publishers, Boston, MA, 2001.
[77] C. T. Su, L. S. Chen, and T. L. Chiang, “A neural network based information granulation approach to shorten the cellular phone test process,” Computers in Industry, vol. 57(5), pp. 379-390, 2006.
[78] N. H. Cho, H. C. Jang, H. K. Park, and Y. W. Cho, “Waist circumference is the key risk factor for diabetes in Korean women with history of gestational diabetes,” Diabetes Research and Clinical Practice, vol. 71(2), pp. 177-183, 2006.
[79] M. K. Barger and M. Bidgood-Wilson, “Caring for a woman at high risk for type 2 diabetes,” Journal of Midwifery and Women’s Health, vol. 51(3), pp. 222-226, 2006.
[80] B. E. Metzger, N. H. Cho, S. M. Roston, and R. Radvany, “Prepregnancy weight and antepartum insulin secretion predict glucose tolerance five years after gestational diabetes mellitus,” Diabetes Care, vol. 16, pp. 1598-1605, 1993.
[81] S. L. Kjos, R. K. Peters, A. Xiang, O. A. Henry, M. Montoro, and T. A. Buchanan, “Predicting future diabetes in Latino women with gestational diabetes,” Diabetes, vol. 44, pp. 586-591, 1995.
[82] M. S. Sanders and E. J. McCormick, Human Factors in Engineering and Design, McGraw-Hill, New York, NY, 1993.
[83] American National Standards Institute, American National Psychoacoustical Terminology, S3.20, American Standards Association, New York, NY, 1973.
[84] S. Ando and K. Yamaguchi, “Statistical study of spectral parameters in musical instrument tones,” Journal of the Acoustical Society of America, vol. 94(1), pp. 37-45, 1993.
[85] J. G. Proakis and D. G. Manolakis, Digital Signal Processing: Principles, Algorithms, and Applications, Prentice Hall, New York, NY, 2006.
[86] J. C. Brown, “Calculation of a constant Q spectral transform,” Journal of the Acoustical Society of America, vol. 89(1), pp. 425-434, 1991.
[87] A. Wieczorkowska, “Musical sound classification based on wavelet analysis,” Fundamenta Informaticae, vol. 47(1-2), pp. 175-188, 2001.
[88] W. J. Pielemeier, G. H. Wakefield, and M. H. Simoni, “Time-frequency analysis of musical signals,” Proceedings of the IEEE, vol. 84(9), pp. 1216-1230, 1996.
[89] J. C. Brown, “Computer identification of musical instruments using pattern recognition with cepstral coefficients as features,” Journal of the Acoustical Society of America, vol. 105(3), pp. 1933-1941, 1999.
[90] P. J. Ponce de León and J. M. Iñesta, “Pattern recognition approach for music style identification using shallow statistical descriptors,” IEEE Transactions on Systems, Man, and Cybernetics - Part C: Applications and Reviews, vol. 37(2), 2007.
[91] R. Ramirez, E. Maestre, A. Pertusa, E. Gómez, and X. Serra, “Performance-based interpreter identification in saxophone audio recordings,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 17(3), pp. 356-364, 2007.
[92] D. Fragoulis, C. Papaodysseus, M. Exarhos, G. Roussopoulos, T. Panagopoulos, and D. Kamarotos, “Automated classification of piano–guitar notes,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 14(3), 2006.
[93] S. Essid, G. Richard, and B. David, “Musical instrument recognition by pairwise classification strategies,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 14(4), 2006.
[94] A. A. Wieczorkowska, J. Wróblewski, P. Synak, and D. Ślęzak, “Application of temporal descriptors to musical instrument sound recognition,” Journal of Intelligent Information Systems, vol. 21(1), pp. 71-93, 2003.
[95] G. D. Poli and P. Prandoni, “Sonological models for timbre characterization,” Journal of New Music Research, vol. 26(2), pp. 170-197, 1997.
[96] J. D. Deng, C. Simmermacher, and S. Cranefield, “A study on feature analysis for musical instrument classification,” IEEE Transactions on Systems, Man, and Cybernetics - Part B: Cybernetics, vol. 38(2), pp. 429-438, 2008.
[97] E. Benetos, M. Kotti, and C. Kotropoulos, “Musical instrument classification using non-negative matrix factorization algorithms and subset feature selection,” IEEE International Conference on Acoustics, Speech and Signal Processing, vol. 5, pp. 221-224, 2006.
[98] P. Herrera, A. Yeterian, and F. Gouyon, “Automatic classification of drum sounds: a comparison of feature selection methods and classification techniques,” Lecture Notes in Computer Science, vol. 2445, pp. 69-80, 2002.
[99] G. Agostini, M. Longari, and E. Pollastri, “Musical instrument timbres classification with spectral features,” EURASIP Journal on Applied Signal Processing, vol. 1, pp. 5-14, 2003.
[100] I. Kaminskyj and T. Czaszejko, “Automatic recognition of isolated monophonic musical instrument sounds using kNNC,” Journal of Intelligent Information Systems, vol. 24(2/3), pp. 199-221, 2005.
[101] A. M. Fanelli, G. Castellano, and C. A. Buscicchio, “A modular neuro-fuzzy network for musical instruments classification,” Lecture Notes in Computer Science, vol. 1857, pp. 372-382, 2000.
[102] B. Kostek, “Musical instrument classification and duet analysis employing music information retrieval techniques,” Proceedings of the IEEE, vol. 92(4), pp. 712-729, 2004.
[103] A. Wieczorkowska and A. Czyzewski, “Rough set based automatic classification of musical instrument sounds,” Electronic Notes in Theoretical Computer Science, vol. 82(4), pp. 298-309, 2003.
[104] J. C. Brown, O. Houix, and S. McAdams, “Feature dependence in the automatic identification of musical woodwind instruments,” Journal of the Acoustical Society of America, vol. 109(3), pp. 1064-1072, 2001.
[105] K. W. Berger, “Some factors in the recognition of timbre,” Journal of the Acoustical Society of America, vol. 36(10), pp. 1888-1891, 1964.
[106] P. Herrera, X. Amatriain, E. Batlle, and X. Serra, “Towards instrument segmentation for music content description: a critical review of instrument classification techniques,” Proceedings of ISMIR, 2000.
[107] R. L. Burden and J. D. Faires, Numerical Analysis, 7th Edition, Brooks/Cole, 2000.
[108] University of Iowa Musical Instrument Sample Database, http://theremin.music.uiowa.edu/index.html
[109] McGill University Master Samples CDs, http://www.music.mcgill.ca/resources/mums/html/mums.html
[110] M. Slaney, Auditory Toolbox, 1998. Software available at http://cobweb.ecn.purdue.edu/~malcolm/interval/1998-010/
[111] W. H. Press, B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling, Numerical Recipes in C, Cambridge University Press, Cambridge, UK, 1988.