A simple statistical model and association rule filtering for classification

György J. Simon, Vipin Kumar, Peter W. Li

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution



Associative classification is a predictive modeling technique that constructs a classifier based on class association rules, also known as predictive association rules (PARs). PARs are association rules whose consequent is a class label. Associative classification has gained substantial research attention because it successfully combines the benefits of association rule mining with classification. These benefits include the inherent ability of association rule mining to extract high-order interactions among the predictors (an ability that many modern classifiers lack) and the natural interpretability of the individual PARs. Associative classification is not without its caveats. Association rule mining often discovers a combinatorially large number of association rules, eroding the interpretability of the rule set. Extensive effort has been directed towards developing interestingness measures, which filter (predictive) association rules after they have been generated. These interestingness measures, albeit very successful at selecting interesting rules, lack two features that are highly valuable in the context of classification. First, only a few of the interestingness measures are rooted in a statistical model. Given the distinction between a training and a test data set in the classification setting, the ability to make statistical inferences about the performance of the predictive association rules on the test set is highly desirable. Second, the unfiltered set of predictive association rules (PARs) is often redundant: we can prove that certain PARs will not be used to construct a classification model given the presence of other PARs. In this paper, we propose a simple statistical model for making inferences on the test set about the various performance metrics of predictive association rules.
We also derive three filtering criteria based on hypothesis testing, which are very selective (they reduce the number of PARs the classifier must consider by several orders of magnitude) yet do not adversely affect classification performance. In the case where the classification model is constructed as a logistic model on top of the PARs, we can mathematically prove that the filtering criteria do not significantly affect the classifier's performance. We also demonstrate empirically on three publicly available data sets that the vast reduction in the number of PARs does not come at the cost of reduced predictive performance.
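To make the notion of a predictive association rule concrete, the following is a minimal sketch of mining PARs (rules whose antecedent is an itemset over the predictors and whose consequent is a class label) with simple support and confidence thresholds. The toy data, threshold values, and the helper name `mine_pars` are all illustrative assumptions, not taken from the paper.

```python
from itertools import combinations

# Toy transactions: each record is a set of predictor items plus a class label.
# Data and thresholds are illustrative only.
records = [
    ({"a", "b"}, 1), ({"a", "b"}, 1), ({"a"}, 0),
    ({"b", "c"}, 0), ({"a", "b", "c"}, 1), ({"c"}, 0),
]

def mine_pars(records, min_support=0.3, min_confidence=0.7, max_len=2):
    """Enumerate antecedent itemsets up to max_len and keep rules
    antecedent -> class whose support and confidence clear the thresholds."""
    n = len(records)
    items = sorted({i for itemset, _ in records for i in itemset})
    pars = []
    for k in range(1, max_len + 1):
        for antecedent in combinations(items, k):
            a = set(antecedent)
            covered = [label for itemset, label in records if a <= itemset]
            if len(covered) / n < min_support:
                continue  # antecedent too rare
            for cls in set(covered):
                conf = covered.count(cls) / len(covered)
                if conf >= min_confidence:
                    pars.append((antecedent, cls, len(covered) / n, conf))
    return pars

for rule in mine_pars(records):
    print(rule)
```

Even on this six-record toy set the enumeration yields several overlapping rules (e.g. both `a -> 1` and `(a, b) -> 1`), illustrating the redundancy in unfiltered PAR sets that the paper's filtering criteria target.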
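The flavor of hypothesis-test-based filtering can be sketched as follows. This is not one of the paper's three criteria; it is an assumed stand-in: a one-sided exact binomial test asking whether a rule's observed confidence significantly exceeds the baseline class rate, with the helper names `binom_sf` and `keep_rule` invented for illustration.

```python
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def keep_rule(n_covered, n_correct, baseline_rate, alpha=0.05):
    """One-sided test: keep the rule only if its confidence is
    significantly above the baseline class rate at level alpha."""
    return binom_sf(n_correct, n_covered, baseline_rate) < alpha

# A rule covering 40 records with 32 correct, against a 0.5 baseline,
# clears the test; a rule covering 6 records with 4 correct does not,
# even though both have confidence 0.8 vs. 0.67.
print(keep_rule(40, 32, 0.5))
print(keep_rule(6, 4, 0.5))
```

A test of this kind is selective in the way the abstract describes: the many low-coverage rules produced by exhaustive enumeration rarely reach significance, so the surviving rule set can shrink by orders of magnitude while well-supported rules are retained.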

Original language: English (US)
Title of host publication: Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD'11
Publisher: Association for Computing Machinery
Number of pages: 9
ISBN (Print): 9781450308137
State: Published - 2011
Event: 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD 2011 - San Diego, United States
Duration: Aug 21, 2011 – Aug 24, 2011

Publication series

Name: Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining


Conference: 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD 2011
Country/Territory: United States
City: San Diego

ASJC Scopus subject areas

  • Software
  • Information Systems


