Wrapping the naive Bayes classifier to relax the effect of dependences
Publisher
Springer
Abstract
The Naive Bayes classifier is based on the (unrealistic) assumption of independence among the attribute values given the class value. Consequently, its effectiveness may decrease in the presence of interdependent attributes. In spite of this, in recent years the Naive Bayes classifier has earned a privileged position for several reasons. We present DGW (Dependency Guided Wrapper), a wrapper that uses information about dependencies to transform the data representation and improve Naive Bayes classification. This paper presents experiments comparing the performance and execution time of 12 DGW variations against 12 previous approaches, such as constructive induction of Cartesian product attributes and wrappers that search for optimal attribute subsets. Experimental results show that DGW generates a new data representation that allows Naive Bayes to obtain better accuracy more often than any other wrapper tested. The DGW variations also reach the best possible accuracy more often than the state-of-the-art wrappers, while often spending less time in the attribute subset search.
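The core idea behind constructive induction of Cartesian product attributes — replacing two interdependent attributes with a single joined attribute before training Naive Bayes — can be illustrated with a minimal sketch. This is not the authors' DGW implementation; the helper names, the simplified Laplace smoothing, and the toy XOR data are illustrative assumptions.

```python
from collections import Counter, defaultdict

def join_attributes(rows, i, j):
    """Replace attributes i and j with their Cartesian-product attribute
    (a hypothetical helper, not the paper's DGW algorithm)."""
    out = []
    for r in rows:
        merged = (r[i], r[j])
        out.append([v for k, v in enumerate(r) if k not in (i, j)] + [merged])
    return out

def nb_train(rows, labels):
    """Count class priors and per-attribute conditional frequencies."""
    priors = Counter(labels)
    cond = defaultdict(Counter)  # keyed by (attribute index, class)
    for r, c in zip(rows, labels):
        for k, v in enumerate(r):
            cond[(k, c)][v] += 1
    return priors, cond

def nb_predict(model, row):
    """Pick the class maximizing P(class) * prod_k P(value_k | class),
    with a simple add-one smoothing."""
    priors, cond = model
    n = sum(priors.values())
    best, best_p = None, -1.0
    for c, pc in priors.items():
        p = pc / n
        for k, v in enumerate(row):
            counts = cond[(k, c)]
            p *= (counts[v] + 1) / (sum(counts.values()) + len(counts) + 1)
        if p > best_p:
            best, best_p = c, p
    return best

# Toy XOR data: the class depends on the *interaction* of the two
# attributes, so plain Naive Bayes sees each attribute as uninformative.
rows = [[0, 0], [0, 1], [1, 0], [1, 1]]
labels = [0, 1, 1, 0]

# After joining the dependent attributes, the single product attribute
# (a, b) determines the class exactly, and Naive Bayes classifies it.
joined = join_attributes(rows, 0, 1)
model = nb_train(joined, labels)
```

On the raw XOR representation every per-attribute likelihood is identical across classes, so Naive Bayes cannot separate them; on the joined representation each Cartesian-product value maps to one class. DGW's contribution, per the abstract, is using dependency information to guide which such transformations to apply.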
Bibliographic reference
Cortizo, J. C., Giráldez, I., & Gaya, M. C. (2007). Wrapping the naive bayes classifier to relax the effect of dependences. In X. Yao, H. Yin, P. Tino, E. Corchado & W. Byrne (Eds.), International Conference on Intelligent Data Engineering and Automated Learning: IDEAL 2007 (pp. 229-239). Berlin: Springer. DOI: 10.1007/978-3-540-77226-2_24