java.lang.Object
  weka.classifiers.Classifier
      weka.classifiers.rules.JRip
public class JRip
This class implements a propositional rule learner, Repeated Incremental Pruning to Produce Error Reduction (RIPPER), which was proposed by William W. Cohen as an optimized version of IREP.
The algorithm is briefly described as follows:
Initialize RS = {}, and for each class, from the least prevalent to the most frequent, DO:
1. Building stage:
Repeat 1.1 and 1.2 until the description length (DL) of the ruleset and examples is 64 bits greater than the smallest DL met so far, or there are no positive examples, or the error rate >= 50%.
1.1. Grow phase:
Grow one rule by greedily adding antecedents (or conditions) to the rule until the rule is perfect (i.e. 100% accurate). The procedure tries every possible value of each attribute and selects the condition with highest information gain: p(log(p/t)-log(P/T)).
1.2. Prune phase:
Incrementally prune each rule and allow the pruning of any final sequence of antecedents. The pruning metric is (p-n)/(p+n); since this equals 2p/(p+n) - 1, this implementation simply uses p/(p+n) (actually (p+1)/(p+n+2), so that it evaluates to 0.5 when p+n is 0). Both this metric and the grow-phase gain are sketched in code after this list.
2. Optimization stage:
After generating the initial ruleset {Ri}, generate and prune two variants of each rule Ri from randomized data using procedures 1.1 and 1.2. One variant is generated from an empty rule, while the other is generated by greedily adding antecedents to the original rule. Moreover, the pruning metric used here is (TP+TN)/(P+N). Then the smallest possible DL for each variant and for the original rule is computed, and the variant with the minimal DL is selected as the final representative of Ri in the ruleset. After all the rules in {Ri} have been examined, if there are still residual positive examples, more rules are generated from them using the building stage again.
3. Delete any rule from the ruleset whose presence would increase the DL of the whole ruleset, and add the resulting ruleset to RS.
ENDDO
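The two metrics used in the grow and prune phases are easy to misread in plain text, so here is a minimal sketch of how they can be computed. The helper class and method names are illustrative only and are not part of the JRip API; p/t are the positives/total covered by the candidate rule, P/T the positives/total covered before the candidate antecedent is added, and n the covered negatives.

    // Illustrative helpers only; not part of weka.classifiers.rules.JRip.
    public class RipperMetrics {

        /** Grow-phase information gain: p * (log(p/t) - log(P/T)). */
        static double infoGain(double p, double t, double P, double T) {
            if (p <= 0 || t <= 0 || P <= 0 || T <= 0) {
                return 0.0; // degenerate coverage: treat as no measurable gain in this sketch
            }
            // The logarithm base only rescales the gain, so it does not change the ranking.
            return p * (Math.log(p / t) - Math.log(P / T));
        }

        /** Prune-phase worth of a rule: (p+1)/(p+n+2), the smoothed form of p/(p+n)
         *  used so that an empty coverage (p+n == 0) yields 0.5. */
        static double pruneWorth(double p, double n) {
            return (p + 1.0) / (p + n + 2.0);
        }
    }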
Note that there seem to be two bugs in the original RIPPER program that would slightly affect the ruleset size and accuracy. This implementation avoids these bugs and is therefore a little different from Cohen's original implementation. Even after fixing the bugs, since the order of classes with the same frequency is not defined in RIPPER, there still seems to be some trivial difference between this implementation and the original RIPPER, especially for the audiology data in the UCI repository, where there are many classes with few instances.
For more details, please see:
William W. Cohen: Fast Effective Rule Induction. In: Twelfth International Conference on Machine Learning, 115-123, 1995.
P.S.: We have compared this implementation with the original RIPPER implementation with respect to accuracy, ruleset size, and running time on both the artificial data "ab+bcd+defg" and UCI datasets. In all these respects it appears to be quite comparable to the original RIPPER implementation. However, memory consumption was not optimized in this implementation.
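For orientation, here is a minimal usage sketch (not part of the original documentation); it assumes a nominal-class ARFF file at the placeholder path /some/where/data.arff and standard Weka classes:

    import weka.classifiers.rules.JRip;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class JRipDemo {
        public static void main(String[] args) throws Exception {
            // Load a dataset; the path is a placeholder.
            Instances data = DataSource.read("/some/where/data.arff");
            data.setClassIndex(data.numAttributes() - 1); // class is the last attribute

            JRip ripper = new JRip();        // default settings: 3 folds, 2 optimization runs
            ripper.buildClassifier(data);    // runs the building and optimization stages

            System.out.println(ripper);      // toString() prints the learned rules
        }
    }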
BibTeX:

    @inproceedings{Cohen1995,
      author    = {William W. Cohen},
      booktitle = {Twelfth International Conference on Machine Learning},
      pages     = {115-123},
      publisher = {Morgan Kaufmann},
      title     = {Fast Effective Rule Induction},
      year      = {1995}
    }

Valid options are:
-F <number of folds> Set the number of folds for REP. One fold is used as the pruning set. (default 3)
-N <min. weights> Set the minimal weights of instances within a split. (default 2.0)
-O <number of runs> Set the number of optimization runs. (Default: 2)
-D Set whether to turn on debug mode. (Default: false)
-S <seed> The seed for randomization. (Default: 1)
-E Whether NOT to check the error rate >= 0.5 in the stopping criterion. (default: check)
-P Whether NOT to use pruning. (default: use pruning)
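The same options can be set programmatically, either through the individual setters listed below or via setOptions. A brief sketch (the option string is only an example, and the code is assumed to sit inside a method that declares throws Exception):

    JRip ripper = new JRip();
    // Parse a command-line style option string into an option array.
    ripper.setOptions(weka.core.Utils.splitOptions("-F 4 -N 2.0 -O 3 -S 42"));
    // Equivalent to calling the setters directly:
    // ripper.setFolds(4); ripper.setMinNo(2.0); ripper.setOptimizations(3); ripper.setSeed(42);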
Nested Class Summary

class JRip.Antd
    The single antecedent in the rule, which is composed of an attribute and the corresponding value.
class JRip.NominalAntd
    The antecedent with a nominal attribute.
class JRip.NumericAntd
    The antecedent with a numeric attribute.
class JRip.RipperRule
    This class implements a single rule that predicts the specified class.
Constructor Summary

JRip()
Method Summary

void buildClassifier(Instances instances)
    Builds Ripper in the order of class frequencies.
java.lang.String checkErrorRateTipText()
    Returns the tip text for this property.
java.lang.String debugTipText()
    Returns the tip text for this property.
double[] distributionForInstance(Instance datum)
    Classify the test instance with the rule learner and provide the class distributions.
java.util.Enumeration enumerateMeasures()
    Returns an enumeration of the additional measure names.
java.lang.String foldsTipText()
    Returns the tip text for this property.
Capabilities getCapabilities()
    Returns default capabilities of the classifier.
boolean getCheckErrorRate()
    Gets whether the error rate is checked in the stopping criterion.
boolean getDebug()
    Gets whether debug information is output to the console.
int getFolds()
    Gets the number of folds.
double getMeasure(java.lang.String additionalMeasureName)
    Returns the value of the named measure.
double getMinNo()
    Gets the minimum total weight of the instances in a rule.
int getOptimizations()
    Gets the number of optimization runs.
java.lang.String[] getOptions()
    Gets the current settings of the Classifier.
java.lang.String getRevision()
    Returns the revision string.
FastVector getRuleset()
    Get the ruleset generated by Ripper.
RuleStats getRuleStats(int pos)
    Get the statistics of the ruleset in the given position.
long getSeed()
    Gets the current seed value to use in randomizing the data.
TechnicalInformation getTechnicalInformation()
    Returns an instance of a TechnicalInformation object, containing detailed information about the technical background of this class, e.g., paper reference or book this class is based on.
boolean getUsePruning()
    Gets whether pruning is performed.
java.lang.String globalInfo()
    Returns a string describing the classifier.
java.util.Enumeration listOptions()
    Returns an enumeration describing the available options.
static void main(java.lang.String[] args)
    Main method.
java.lang.String minNoTipText()
    Returns the tip text for this property.
java.lang.String optimizationsTipText()
    Returns the tip text for this property.
java.lang.String seedTipText()
    Returns the tip text for this property.
void setCheckErrorRate(boolean d)
    Sets whether to check the error rate in the stopping criterion.
void setDebug(boolean d)
    Sets whether debug information is output to the console.
void setFolds(int fold)
    Sets the number of folds to use.
void setMinNo(double m)
    Sets the minimum total weight of the instances in a rule.
void setOptimizations(int run)
    Sets the number of optimization runs.
void setOptions(java.lang.String[] options)
    Parses a given list of options.
void setSeed(long s)
    Sets the seed value to use in randomizing the data.
void setUsePruning(boolean d)
    Sets whether pruning is performed.
java.lang.String toString()
    Prints all the rules of the rule learner.
java.lang.String usePruningTipText()
    Returns the tip text for this property.
Methods inherited from class weka.classifiers.Classifier
    classifyInstance, forName, makeCopies, makeCopy

Methods inherited from class java.lang.Object
    equals, getClass, hashCode, notify, notifyAll, wait, wait, wait
Constructor Detail

public JRip()

Method Detail
public java.lang.String globalInfo()
    Returns a string describing the classifier.

public TechnicalInformation getTechnicalInformation()
    Specified by: getTechnicalInformation in interface TechnicalInformationHandler
public java.util.Enumeration listOptions()
    Returns an enumeration describing the available options. Valid options are:

    -F number
        The number of folds for reduced error pruning. One fold is
        used as the pruning set. (Default: 3)
    -N number
        The minimal weights of instances within a split.
        (Default: 2)
    -O number
        Set the number of optimization runs. (Default: 2)
    -D
        Whether to turn on debug mode.
    -S number
        The seed of randomization used in Ripper. (Default: 1)
    -E
        Whether NOT to check the error rate >= 0.5 in the stopping criterion.
        (default: check)
    -P
        Whether NOT to use pruning. (default: use pruning)

    Specified by: listOptions in interface OptionHandler
    Overrides: listOptions in class Classifier
public void setOptions(java.lang.String[] options) throws java.lang.Exception
    Parses a given list of options. Valid options are:

    -F <number of folds> Set the number of folds for REP. One fold is used as the pruning set. (default 3)
    -N <min. weights> Set the minimal weights of instances within a split. (default 2.0)
    -O <number of runs> Set the number of optimization runs. (Default: 2)
    -D Set whether to turn on debug mode. (Default: false)
    -S <seed> The seed for randomization. (Default: 1)
    -E Whether NOT to check the error rate >= 0.5 in the stopping criterion. (default: check)
    -P Whether NOT to use pruning. (default: use pruning)

    Specified by: setOptions in interface OptionHandler
    Overrides: setOptions in class Classifier
    Parameters: options - the list of options as an array of strings
    Throws: java.lang.Exception - if an option is not supported

public java.lang.String[] getOptions()
    Specified by: getOptions in interface OptionHandler
    Overrides: getOptions in class Classifier
public java.util.Enumeration enumerateMeasures()
    Specified by: enumerateMeasures in interface AdditionalMeasureProducer

public double getMeasure(java.lang.String additionalMeasureName)
    Specified by: getMeasure in interface AdditionalMeasureProducer
    Parameters: additionalMeasureName - the name of the measure to query for its value
    Throws: java.lang.IllegalArgumentException - if the named measure is not supported

public java.lang.String foldsTipText()
public void setFolds(int fold)
    Parameters: fold - the number of folds

public int getFolds()

public java.lang.String minNoTipText()

public void setMinNo(double m)
    Parameters: m - the minimum total weight of the instances in a rule

public double getMinNo()

public java.lang.String seedTipText()

public void setSeed(long s)
    Parameters: s - the new seed value

public long getSeed()

public java.lang.String optimizationsTipText()

public void setOptimizations(int run)
    Parameters: run - the number of optimization runs

public int getOptimizations()
public java.lang.String debugTipText()
    Overrides: debugTipText in class Classifier

public void setDebug(boolean d)
    Overrides: setDebug in class Classifier
    Parameters: d - whether debug information is output to the console

public boolean getDebug()
    Overrides: getDebug in class Classifier

public java.lang.String checkErrorRateTipText()

public void setCheckErrorRate(boolean d)
    Parameters: d - whether to check the error rate in the stopping criterion

public boolean getCheckErrorRate()

public java.lang.String usePruningTipText()

public void setUsePruning(boolean d)
    Parameters: d - whether pruning is performed

public boolean getUsePruning()

public FastVector getRuleset()
    Get the ruleset generated by Ripper.

public RuleStats getRuleStats(int pos)
    Get the statistics of the ruleset in the given position.
    Parameters: pos - the position of the stats, assuming correct
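A hedged sketch of inspecting the learned model through these accessors; it assumes the classifier has already been built on an Instances object named data:

    JRip ripper = new JRip();
    ripper.buildClassifier(data);               // 'data' is an already-loaded Instances object
    FastVector rules = ripper.getRuleset();     // rules learned by Ripper
    System.out.println("Rules learned: " + rules.size());
    RuleStats stats = ripper.getRuleStats(0);   // statistics for the ruleset at position 0
    System.out.println(ripper);                 // human-readable rule listing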
public Capabilities getCapabilities()
    Returns default capabilities of the classifier.
    Specified by: getCapabilities in interface CapabilitiesHandler
    Overrides: getCapabilities in class Classifier
    Returns: the capabilities of this classifier

public void buildClassifier(Instances instances) throws java.lang.Exception
    Builds Ripper in the order of class frequencies.
    Specified by: buildClassifier in class Classifier
    Parameters: instances - the training data
    Throws: java.lang.Exception - if the classifier can't be built successfully

public double[] distributionForInstance(Instance datum)
    Classify the test instance with the rule learner and provide the class distributions.
    Overrides: distributionForInstance in class Classifier
    Parameters: datum - the instance to be classified
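For illustration, a brief sketch of classifying a single instance once the model has been built; the dataset and instance index are placeholders, and classifyInstance is inherited from Classifier:

    double[] dist = ripper.distributionForInstance(data.instance(0)); // class distribution for the first instance
    double predicted = ripper.classifyInstance(data.instance(0));     // index of the predicted class
    System.out.println("Predicted class: " + data.classAttribute().value((int) predicted));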
public java.lang.String toString()
    Prints all the rules of the rule learner.
    Overrides: toString in class java.lang.Object

public java.lang.String getRevision()
    Returns the revision string.
    Specified by: getRevision in interface RevisionHandler
    Overrides: getRevision in class Classifier

public static void main(java.lang.String[] args)
    Main method.
    Parameters: args - the options for the classifier