skmultiflow.meta.LeveragingBaggingClassifier
Leveraging Bagging ensemble classifier.
Parameters
base_estimator: skmultiflow.core.BaseSKMObject or sklearn.BaseEstimator
    Each member of the ensemble is an instance of the base estimator.
n_estimators: int (default=10)
    The size of the ensemble, in other words, how many classifiers to train.
w: int (default=6)
    The Poisson distribution's parameter, which is used to simulate re-sampling.
delta: float (default=0.002)
    The delta parameter for the ADWIN change detector.
enable_code_matrix: bool (default=False)
    If set, enables Leveraging Bagging MC using Random Output Codes.
leverage_algorithm: string (default='leveraging_bag')
    The bagging algorithm to use. Can be one of the following:
    'leveraging_bag' - Leveraging Bagging using ADWIN.
    'leveraging_bag_me' - Assigns weight=1 to misclassified samples; otherwise weight=error/(1-error).
    'leveraging_bag_half' - Uses resampling without replacement for half of the instances.
    'leveraging_bag_wt' - Resamples without taking out all instances.
    'leveraging_subag' - Uses resampling without replacement.
random_state: int, RandomState instance or None (default=None)
    If int, random_state is the seed used by the random number generator; if RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by np.random.
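For instance, a minimal instantiation sketch spelling out the parameters above (the values shown are the documented defaults, except for random_state):

from skmultiflow.meta import LeveragingBaggingClassifier
from skmultiflow.lazy import KNNClassifier

clf = LeveragingBaggingClassifier(base_estimator=KNNClassifier(),
                                  n_estimators=10,
                                  w=6,
                                  delta=0.002,
                                  enable_code_matrix=False,
                                  leverage_algorithm='leveraging_bag',
                                  random_state=1)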
Notes
An ensemble method that improves on the online Oza Bagging algorithm. The complete description of this method can be found in [1].
The bagging performance is leveraged by increasing the re-sampling and by using output detection codes. We use a Poisson distribution to simulate the re-sampling process. To increase re-sampling we use a higher value of the Poisson distribution's w parameter, which is 6 by default. With this value we increase the input space diversity by attributing a different range of weights to the samples, as sketched below.
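For illustration, a minimal numpy sketch of the weight draw (the variable names are ours, not the library's):

import numpy as np

rng = np.random.RandomState(112)
w = 6  # the classifier's default; plain online bagging corresponds to w=1

# One Poisson(w) weight per incoming sample: a sample with weight k is
# presented to a given ensemble member as if it had arrived k times.
weights = rng.poisson(lam=w, size=10)
print(weights)  # larger w spreads the weights over a wider range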
The second improvement is the use of output detection codes. Each label is coded with an n-bit binary code, and n classifiers are associated with it, one per bit of the code. Each time a new sample is analyzed, each classifier is trained on its own bit. This allows, to some extent, the correction of errors; see the sketch after this paragraph.
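A minimal sketch of the idea (the code matrix here is fixed by hand for reproducibility, whereas the classifier draws random codes; all names are ours):

import numpy as np

# One 6-bit code word per class; the minimum pairwise Hamming distance
# is 3, so any single bit error can be corrected.
code_matrix = np.array([[0, 0, 0, 0, 0, 0],
                        [1, 1, 1, 0, 0, 0],
                        [0, 0, 1, 1, 1, 1]])

def encode(y, member):
    # The bit that ensemble member `member` learns for class label y.
    return code_matrix[y, member]

def decode(bit_predictions):
    # Pick the class whose code word is closest in Hamming distance
    # to the concatenated per-member bit predictions.
    distances = np.abs(code_matrix - bit_predictions).sum(axis=1)
    return int(np.argmin(distances))

bits = code_matrix[2].copy()
bits[0] ^= 1                # one member predicts its bit wrongly...
print(decode(bits))         # ...but decoding still recovers class 2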
To deal with concept drift we use the ADWIN algorithm, one instance per classifier. Each time a concept drift is detected, we reset the ensemble's worst classifier, which is identified by comparing the ADWIN detectors' window sizes.
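A rough sketch of this mechanism using skmultiflow's own ADWIN detector (the scaffolding and helper name are ours, not the class's internal code; here the "worst" member is taken to be the one whose detector reports the highest estimated error):

from skmultiflow.drift_detection import ADWIN
from skmultiflow.lazy import KNNClassifier

members = [KNNClassifier() for _ in range(3)]
detectors = [ADWIN(delta=0.002) for _ in members]  # one ADWIN per member

def update_drift(errors):
    """errors[i] is 1.0 if member i misclassified the current sample, else 0.0."""
    drift = False
    for detector, error in zip(detectors, errors):
        detector.add_element(error)
        if detector.detected_change():
            drift = True
    if drift:
        # Reset the member whose detector reports the highest estimated
        # error, and give it a fresh detector.
        worst = max(range(len(members)), key=lambda i: detectors[i].estimation)
        members[worst].reset()
        detectors[worst] = ADWIN(delta=0.002)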
References
A. Bifet, G. Holmes, and B. Pfahringer, “Leveraging Bagging for Evolving Data Streams,” in Joint European Conference on Machine Learning and Knowledge Discovery in Databases, 2010, no. 1, pp. 135–150.
Examples
>>> # Imports
>>> from skmultiflow.meta import LeveragingBaggingClassifier
>>> from skmultiflow.lazy import KNNClassifier
>>> from skmultiflow.data import SEAGenerator
>>> # Setting up the stream
>>> stream = SEAGenerator(1, noise_percentage=.067)
>>> # Setting up the LeveragingBagging classifier to work with KNN classifiers
>>> clf = LeveragingBaggingClassifier(base_estimator=KNNClassifier(n_neighbors=8,
...                                                                max_window_size=2000,
...                                                                leaf_size=30),
...                                   n_estimators=2)
>>> # Keeping track of sample count and correct prediction count
>>> sample_count = 0
>>> corrects = 0
>>> for i in range(2000):
...     X, y = stream.next_sample()
...     pred = clf.predict(X)
...     clf = clf.partial_fit(X, y, classes=stream.target_values)
...     if pred is not None:
...         if y[0] == pred[0]:
...             corrects += 1
...     sample_count += 1
>>> # Displaying the results
>>> print(str(sample_count) + ' samples analyzed.')
2000 samples analyzed.
>>> print('LeveragingBagging classifier performance: ' + str(corrects / sample_count))
LeveragingBagging classifier performance: 0.843
Methods
fit(self, X, y[, classes, sample_weight])
    Fit the model.
get_info(self)
    Collects and returns the information about the configuration of the estimator.
get_params(self[, deep])
    Get parameters for this estimator.
partial_fit(self, X, y[, classes, sample_weight])
    Partially (incrementally) fit the model.
predict(self, X)
    Predict classes for the passed data.
predict_binary_proba(self, X)
    Calculates the probability of a sample belonging to each coded label.
predict_proba(self, X)
    Estimate the probability of X belonging to each class label.
reset(self)
    Resets all the estimators, as well as all the ADWIN change detectors.
score(self, X, y[, sample_weight])
    Returns the mean accuracy on the given test data and labels.
set_params(self, **params)
    Set the parameters of this estimator.
fit(self, X, y, classes=None, sample_weight=None)
    X: The features to train the model.
    y: An array-like with the class labels of all samples in X.
    classes: Contains all possible/known class labels. Usage varies depending on the learning method.
    sample_weight: Samples weight. If not provided, uniform weights are assumed. Usage varies depending on the learning method.
    Returns: self

get_info(self)
    Returns: Configuration of the estimator.

get_params(self, deep=True)
    deep: If True, will return the parameters for this estimator and contained subobjects that are estimators.
    Returns: Parameter names mapped to their values.

partial_fit(self, X, y, classes=None, sample_weight=None)
    classes: Array with all possible/known class labels.
    Returns: self

predict(self, X)
    X: The set of data samples to predict the class labels for.

predict_binary_proba(self, X)
    This will only be used if matrix codes are enabled; otherwise the method will use the normal predict_proba function.
    X: All the samples we want to predict the label for.
    Returns: A list of lists, in which each outer entry is associated with the X entry of the same index, and where the list at index [i] contains len(self.target_values) elements, each of which represents the probability that the i-th sample of X belongs to a certain label.

predict_proba(self, X)
    Calculates the probability of each sample in X belonging to each of the labels, based on the base estimator. This is done by predicting the class probability for each one of the ensemble's classifiers, and then taking the absolute probability from the ensemble itself.
    X: The matrix of samples to predict the class probabilities for.

score(self, X, y, sample_weight=None)
    In multi-label classification, this is the subset accuracy, which is a harsh metric since it requires that each label set be correctly predicted for each sample.
    X: Test samples.
    y: True labels for X.
    sample_weight: Sample weights.
    Returns: Mean accuracy of self.predict(X) with respect to y.

set_params(self, **params)
    The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter>, so that it is possible to update each component of a nested object.
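To make the predict_proba description above concrete, here is a small standalone sketch (our own illustration, not the class's internals) of combining per-member class probabilities into a single ensemble estimate by summing and renormalizing:

import numpy as np

# Class probabilities from 3 ensemble members for one sample,
# over 2 classes (the values are made up).
member_probas = np.array([[0.9, 0.1],
                          [0.6, 0.4],
                          [0.7, 0.3]])

# Sum the members' votes, then renormalize to a distribution.
combined = member_probas.sum(axis=0)
combined /= combined.sum()
print(combined)  # [0.73333333 0.26666667]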