skmultiflow.meta.LeveragingBaggingClassifier

class skmultiflow.meta.LeveragingBaggingClassifier(base_estimator=KNNClassifier(leaf_size=30, max_window_size=1000, metric='euclidean', n_neighbors=5), n_estimators=10, w=6, delta=0.002, enable_code_matrix=False, leverage_algorithm='leveraging_bag', random_state=None)[source]

Leveraging Bagging ensemble classifier.

Parameters
base_estimator: skmultiflow.core.BaseSKMObject or sklearn.BaseEstimator (default=KNNClassifier)

Each member of the ensemble is an instance of the base estimator.

n_estimators: int (default=10)

The size of the ensemble, in other words, how many classifiers to train.

w: int (default=6)

The Poisson distribution's parameter, used to simulate the re-sampling process.

delta: float (default=0.002)

The delta parameter for the ADWIN change detector.

enable_code_matrix: bool (default=False)

If set, enables Leveraging Bagging MC using Random Output Codes.

leverage_algorithm: string (default='leveraging_bag')

The bagging algorithm to use. Can be one of the following:

'leveraging_bag' - Leveraging Bagging using ADWIN.
'leveraging_bag_me' - Assigns weight=1 to a sample if it is misclassified, otherwise weight=error/(1-error).
'leveraging_bag_half' - Uses resampling without replacement for half of the instances.
'leveraging_bag_wt' - Resampling without taking out any instance.
'leveraging_subag' - Uses resampling without replacement.
random_state: int, RandomState instance or None, optional (default=None)

If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.

Raises
ValueError: A ValueError is raised if the classes parameter is not passed in the first partial_fit call.

Notes

An ensemble method that improves on the online Oza Bagging algorithm. The complete description of this method can be found in [1].

The bagging performance is leveraged by increasing the re-sampling and by using output detection codes. Re-sampling is simulated by drawing each sample's weight from a Poisson distribution; using a higher value of the distribution's parameter w (6 by default) increases input-space diversity by assigning a wider range of weights to the samples.
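As a rough sketch of this weighting idea (illustrative only, not the classifier's internals; Oza Bagging corresponds to Poisson(1)):

>>> import numpy as np
>>> rng = np.random.default_rng(42)
>>> # Poisson(1) weights, as in Oza Bagging: often zero, so a sample is
>>> # frequently skipped by a given ensemble member
>>> oza_weights = rng.poisson(lam=1, size=10)
>>> # Poisson(6) weights, as in Leveraging Bagging: rarely zero and much
>>> # more spread out, diversifying each member's effective input
>>> leveraging_weights = rng.poisson(lam=6, size=10)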

The second improvement is the use of output detection codes. Each label is coded as an n-bit binary string, and n classifiers are trained, one associated with each bit of the code. For each new sample, each classifier is trained on its own bit. This allows, to some extent, the correction of errors.
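A minimal sketch of the output-codes idea (sizes and names here are illustrative, not the library internals):

>>> import numpy as np
>>> rng = np.random.default_rng(0)
>>> # each of 3 labels gets a random 10-bit code, one bit per ensemble member
>>> code_matrix = rng.integers(0, 2, size=(3, 10))
>>> y = 2                          # label of an incoming sample
>>> bit_targets = code_matrix[y]   # binary training target for each member
>>> # member j is trained on (X, bit_targets[j]); at prediction time the
>>> # predicted bit vector is matched to the closest row of code_matrix,
>>> # so a few wrongly predicted bits can still be corrected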

To deal with concept drift we use the ADWIN algorithm, one instance per classifier. Each time a concept drift is detected, the worst classifier in the ensemble is reset; the worst classifier is identified by comparing the ADWIN window sizes.
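The loop below sketches this mechanism, assuming ensemble members that expose reset(); the exact criterion used to pick the worst member is an illustrative choice, not the library's internals:

>>> from skmultiflow.drift_detection import ADWIN
>>> detectors = [ADWIN(delta=0.002) for _ in range(3)]  # one per member
>>> def handle_correctness(detectors, ensemble, correct):
>>>     # correct[i] is 1.0 if member i classified the last sample correctly
>>>     drift = False
>>>     for det, ok in zip(detectors, correct):
>>>         det.add_element(ok)
>>>         if det.detected_change():
>>>             drift = True
>>>     if drift:
>>>         # compare ADWIN window sizes (.width) to pick the member to
>>>         # reset, then restart its change detector
>>>         worst = min(range(len(detectors)), key=lambda i: detectors[i].width)
>>>         ensemble[worst].reset()
>>>         detectors[worst] = ADWIN(delta=0.002)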

References

[1] A. Bifet, G. Holmes, and B. Pfahringer, “Leveraging Bagging for Evolving Data Streams,” in Joint European Conference on Machine Learning and Knowledge Discovery in Databases, 2010, no. 1, pp. 135–150.

Examples

>>> # Imports
>>> from skmultiflow.meta import LeveragingBaggingClassifier
>>> from skmultiflow.lazy import KNNClassifier
>>> from skmultiflow.data import SEAGenerator
>>> # Setting up the stream
>>> stream = SEAGenerator(1, noise_percentage=.067)
>>> # Setting up the LeveragingBagging classifier to work with KNN classifiers
>>> clf = LeveragingBaggingClassifier(base_estimator=KNNClassifier(n_neighbors=8,
>>>                                                                max_window_size=2000,
>>>                                                                leaf_size=30),
>>>                                   n_estimators=2)
>>> # Keeping track of sample count and correct prediction count
>>> sample_count = 0
>>> corrects = 0
>>> for i in range(2000):
>>>     X, y = stream.next_sample()
>>>     pred = clf.predict(X)
>>>     clf = clf.partial_fit(X, y, classes=stream.target_values)
>>>     if pred is not None:
>>>         if y[0] == pred[0]:
>>>             corrects += 1
>>>     sample_count += 1
>>> # Displaying the results
>>> print(str(sample_count) + ' samples analyzed.')
2000 samples analyzed.
>>> print('LeveragingBaggingClassifier performance: ' + str(corrects / sample_count))
LeveragingBaggingClassifier performance: 0.843

Methods

fit(self, X, y[, classes, sample_weight])

Fit the model.

get_info(self)

Collects and returns the information about the configuration of the estimator

get_params(self[, deep])

Get parameters for this estimator.

partial_fit(self, X, y[, classes, sample_weight])

Partially (incrementally) fit the model.

predict(self, X)

Predict classes for the passed data.

predict_binary_proba(self, X)

Calculates the probability of a sample belonging to each coded label.

predict_proba(self, X)

Estimate the probability of X belonging to each class-label.

reset(self)

Resets all the estimators, as well as all the ADWIN change detectors.

score(self, X, y[, sample_weight])

Returns the mean accuracy on the given test data and labels.

set_params(self, **params)

Set the parameters of this estimator.

fit(self, X, y, classes=None, sample_weight=None)[source]

Fit the model.

Parameters
X: numpy.ndarray of shape (n_samples, n_features)

The features to train the model.

y: numpy.ndarray of shape (n_samples, n_targets)

An array-like with the class labels of all samples in X.

classes: numpy.ndarray, optional (default=None)

Contains all possible/known class labels. Usage varies depending on the learning method.

sample_weight: numpy.ndarray, optional (default=None)

Samples weight. If not provided, uniform weights are assumed. Usage varies depending on the learning method.

Returns
self
get_info(self)[source]

Collects and returns the information about the configuration of the estimator

Returns
string

Configuration of the estimator.

get_params(self, deep=True)[source]

Get parameters for this estimator.

Parameters
deep: boolean, optional

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns
params: mapping of string to any

Parameter names mapped to their values.

partial_fit(self, X, y, classes=None, sample_weight=None)[source]

Partially (incrementally) fit the model.

Parameters
X: numpy.ndarray of shape (n_samples, n_features)

The features to train the model.

y: numpy.ndarray of shape (n_samples)

An array-like with the class labels of all samples in X.

classes: numpy.ndarray, optional (default=None)

Array with all possible/known class labels.

sample_weight: not used (default=None)

Returns
LeveragingBaggingClassifier

self

Raises
ValueError: A ValueError is raised if the 'classes' parameter is not passed in the first partial_fit call, or if it is passed in later calls but differs from the initial list of classes.
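A minimal usage sketch (assuming a SEAGenerator stream, as in the Examples section): the known classes must be supplied on the first call.

>>> from skmultiflow.meta import LeveragingBaggingClassifier
>>> from skmultiflow.data import SEAGenerator
>>> stream = SEAGenerator(1)
>>> clf = LeveragingBaggingClassifier(n_estimators=2)
>>> X, y = stream.next_sample()
>>> clf = clf.partial_fit(X, y, classes=stream.target_values)  # classes required here
>>> X, y = stream.next_sample()
>>> clf = clf.partial_fit(X, y)  # classes may be omitted on later calls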
predict(self, X)[source]

Predict classes for the passed data.

Parameters
X: numpy.ndarray of shape (n_samples, n_features)

The set of data samples to predict the class labels for.

Returns
A numpy.ndarray with all the predictions for the samples in X.

predict_binary_proba(self, X)[source]

Calculates the probability of a sample belonging to each coded label.

This will only be used if matrix codes are enabled. Otherwise the method will use the normal predict_proba function.

Parameters
X: numpy.ndarray of shape (n_samples, n_features)

All the samples we want to predict the label for.

Returns
list

A list of lists in which each outer entry is associated with the X entry of the same index. The list at index [i] contains len(self.target_values) elements, each representing the probability that the i-th sample of X belongs to a certain label.

predict_proba(self, X)[source]

Estimate the probability of X belonging to each class-label.

Parameters
X: numpy.ndarray of shape (n_samples, n_features)

The matrix of samples to predict the class probabilities for.

Returns
A numpy.ndarray of shape (n_samples, n_labels) in which each outer entry is associated with the X entry of the same index. The entry at index [i] contains len(self.target_values) elements, each representing the probability that the i-th sample of X belongs to a certain class-label.

Raises
ValueError: A ValueError is raised if the number of classes in the base learner exceeds that of the ensemble learner.

Notes

Calculates the probability of each sample in X belonging to each of the labels, based on the base estimators. This is done by collecting the class-probability predictions of each of the ensemble's classifiers and aggregating them into a single, normalized probability per class.
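As a sketch of this aggregation (illustrative only, not the library internals):

>>> import numpy as np
>>> member_probas = [np.array([[0.7, 0.3]]),   # member 1: one sample, two classes
>>>                  np.array([[0.5, 0.5]])]   # member 2
>>> votes = np.sum(member_probas, axis=0)      # shape (n_samples, n_classes)
>>> ensemble_proba = votes / votes.sum(axis=1, keepdims=True)
>>> print(ensemble_proba)
[[0.6 0.4]]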

reset(self)[source]

Resets all the estimators, as well as all the ADWIN change detectors.

Returns
LeveragingBaggingClassifier

self

score(self, X, y, sample_weight=None)[source]

Returns the mean accuracy on the given test data and labels.

In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.

Parameters
X: array-like, shape = (n_samples, n_features)

Test samples.

y: array-like, shape = (n_samples) or (n_samples, n_outputs)

True labels for X.

sample_weight: array-like, shape = [n_samples], optional

Sample weights.

Returns
score: float

Mean accuracy of self.predict(X) wrt. y.

set_params(self, **params)[source]

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns
self
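
A brief usage sketch (parameter values here are illustrative): nested parameters of the base estimator are reachable through the base_estimator__<parameter> form.

>>> from skmultiflow.meta import LeveragingBaggingClassifier
>>> from skmultiflow.lazy import KNNClassifier
>>> clf = LeveragingBaggingClassifier(base_estimator=KNNClassifier())
>>> # a top-level parameter and a nested base-estimator parameter in one call
>>> clf = clf.set_params(n_estimators=5, base_estimator__n_neighbors=3)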