skmultiflow.meta.OzaBaggingADWINClassifier

class skmultiflow.meta.OzaBaggingADWINClassifier(base_estimator=KNNADWINClassifier(leaf_size=30, max_window_size=1000, metric='euclidean', n_neighbors=5), n_estimators=10, random_state=None)[source]

Oza Bagging ensemble classifier with ADWIN change detector.

Parameters
base_estimator: skmultiflow.core.BaseSKMObject or sklearn.BaseEstimator (default=KNNADWINClassifier)

Each member of the ensemble is an instance of the base estimator.

n_estimators: int (default=10)

The size of the ensemble, in other words, how many classifiers to train.

random_state: int, RandomState instance or None, optional (default=None)

If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.

Raises
ValueError: A ValueError is raised if the ‘classes’ parameter is not passed in the first partial_fit call.

Notes

This online ensemble learner is an improvement over the Online Bagging algorithm [1]. The improvement comes from the addition of an ADWIN change detector.

ADWIN stands for ADaptive WINdowing. It works by keeping updated statistics over a variable-sized window, so it can detect changes and cut its window to let the learning algorithm adapt.
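The windowing idea can be illustrated with a simplified pure-Python sketch. This is not the actual ADWIN algorithm (which uses exponential histograms and a Hoeffding-style statistical bound); it is a toy version that compares the means of prefix/suffix splits of the window and drops old data when they diverge:

```python
# Simplified sketch of the adaptive-windowing idea behind ADWIN.
# The real ADWIN uses exponential histograms and a Hoeffding-style
# bound; this toy version compares the means of every prefix/suffix
# split against a fixed threshold.

def adaptive_window_update(window, value, threshold=0.3, min_split=5):
    """Append `value`; shrink the window if an old/new split disagrees."""
    window = window + [value]
    for cut in range(min_split, len(window) - min_split):
        old, new = window[:cut], window[cut:]
        mean_old = sum(old) / len(old)
        mean_new = sum(new) / len(new)
        if abs(mean_old - mean_new) > threshold:
            # Change detected: keep only the recent sub-window.
            return new, True
    return window, False

# Feed a stream whose mean jumps from 0.0 to 1.0.
window, drift_seen = [], False
for x in [0.0] * 20 + [1.0] * 20:
    window, changed = adaptive_window_update(window, x)
    drift_seen = drift_seen or changed
```

After the mean shift, the sketch repeatedly cuts away the stale prefix, so the surviving window reflects only the new concept.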

References

[1] N. C. Oza, “Online Bagging and Boosting,” in 2005 IEEE International Conference on Systems, Man and Cybernetics, 2005, vol. 3, no. 3, pp. 2340–2345.

Examples

>>> # Imports
>>> from skmultiflow.meta import OzaBaggingADWINClassifier
>>> from skmultiflow.lazy import KNNClassifier
>>> from skmultiflow.data.sea_generator import SEAGenerator
>>> # Setting up the stream
>>> stream = SEAGenerator(1, noise_percentage=0.067)
>>> # Setting up the OzaBaggingADWINClassifier to work with KNN as base estimator
>>> clf = OzaBaggingADWINClassifier(base_estimator=KNNClassifier(n_neighbors=8, max_window_size=2000, leaf_size=30),
...                                 n_estimators=2)
>>> # Keeping track of sample count and correct prediction count
>>> sample_count = 0
>>> corrects = 0
>>> # Pre training the classifier with 200 samples
>>> X, y = stream.next_sample(200)
>>> clf = clf.partial_fit(X, y, classes=stream.target_values)
>>> for i in range(2000):
...     X, y = stream.next_sample()
...     pred = clf.predict(X)
...     clf = clf.partial_fit(X, y)
...     if pred is not None:
...         if y[0] == pred[0]:
...             corrects += 1
...     sample_count += 1
>>> 
>>> # Displaying the results
>>> print(str(sample_count) + ' samples analyzed.')
2000 samples analyzed.
>>> print('OzaBaggingADWINClassifier performance: ' + str(corrects / sample_count))
OzaBaggingADWINClassifier performance: 0.9645

Methods

fit(self, X, y[, classes, sample_weight])

Fit the model.

get_info(self)

Collects and returns information about the configuration of the estimator.

get_params(self[, deep])

Get parameters for this estimator.

partial_fit(self, X, y[, classes, sample_weight])

Partially (incrementally) fit the model.

predict(self, X)

Predict classes for the passed data.

predict_proba(self, X)

Estimates the probability of each sample in X belonging to each of the class-labels.

reset(self)

Resets the estimator to its initial state.

score(self, X, y[, sample_weight])

Returns the mean accuracy on the given test data and labels.

set_params(self, **params)

Set the parameters of this estimator.

fit(self, X, y, classes=None, sample_weight=None)[source]

Fit the model.

Parameters
X: numpy.ndarray of shape (n_samples, n_features)

The features to train the model.

y: numpy.ndarray of shape (n_samples, n_targets)

An array-like with the class labels of all samples in X.

classes: numpy.ndarray, optional (default=None)

Contains all possible/known class labels. Usage varies depending on the learning method.

sample_weight: numpy.ndarray, optional (default=None)

Samples weight. If not provided, uniform weights are assumed. Usage varies depending on the learning method.

Returns
self

get_info(self)[source]

Collects and returns information about the configuration of the estimator.

Returns
string

Configuration of the estimator.

get_params(self, deep=True)[source]

Get parameters for this estimator.

Parameters
deep: boolean, optional (default=True)

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns
params: mapping of string to any

Parameter names mapped to their values.

partial_fit(self, X, y, classes=None, sample_weight=None)[source]

Partially (incrementally) fit the model.

Parameters
X: numpy.ndarray of shape (n_samples, n_features)

The features to train the model.

y: numpy.ndarray of shape (n_samples)

An array-like with the class labels of all samples in X.

classes: numpy.ndarray, optional (default=None)

Array with all possible/known class labels. This is an optional parameter, except for the first partial_fit call where it is compulsory.

sample_weight: numpy.ndarray of shape (n_samples), optional (default=None)

Samples weight. If not provided, uniform weights are assumed. Usage varies depending on the base estimator.

Returns
OzaBaggingADWINClassifier

self

Raises
ValueError

A ValueError is raised if the ‘classes’ parameter is not passed in the first partial_fit call, or if they are passed in further calls but differ from the initial classes list passed.

Notes

Since this is an ensemble learner, if X and y matrices with more than one sample are passed, the algorithm will partial fit the model one sample at a time.

Each sample is used to train each classifier a total of K times, where K is drawn from a Poisson(1) distribution.

Alongside updating the model, the learner also updates ADWIN’s statistics with the new samples, so the change detector can evaluate whether a concept drift occurred. If drift is detected, the bagging algorithm finds the worst-performing classifier and resets its statistics and window.
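The Poisson(1) update rule above can be sketched in a few lines. The `CountingLearner` stand-in below is illustrative, not part of skmultiflow; it only counts how often `partial_fit` is invoked, to show that each ensemble member sees each sample about once on average:

```python
import numpy as np

# Sketch of the Oza online-bagging update: instead of bootstrap
# resampling, each incoming sample is shown to each ensemble member
# k times, with k drawn from a Poisson(1) distribution.

rng = np.random.RandomState(42)

class CountingLearner:
    """Stand-in base estimator that just counts training presentations."""
    def __init__(self):
        self.times_trained = 0
    def partial_fit(self, X, y):
        self.times_trained += 1

ensemble = [CountingLearner() for _ in range(10)]

X, y = np.array([[0.0, 1.0]]), np.array([0])
for _ in range(1000):          # simulate 1000 stream samples
    for learner in ensemble:
        k = rng.poisson(1)     # weight for this sample/learner pair
        for _ in range(k):
            learner.partial_fit(X, y)

# Poisson(1) has mean 1, so each learner sees each sample ~once on average.
avg = sum(l.times_trained for l in ensemble) / (len(ensemble) * 1000)
```

Because Poisson(1) approximates the Binomial(n, 1/n) resampling weight of offline bagging as n grows, this reproduces bootstrap-style diversity in a single streaming pass.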

predict(self, X)[source]

Predict classes for the passed data.

Parameters
X: numpy.ndarray of shape (n_samples, n_features)

The set of data samples to predict the class labels for.

Returns
A numpy.ndarray with all the predictions for the samples in X.

Notes

The predict function averages the predictions from all its learners to find the most likely class for each sample in X.
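One common way to implement such averaging (a sketch, not necessarily identical to skmultiflow's internals) is to stack each learner's class-probability estimates, take the mean across learners, and predict the argmax:

```python
import numpy as np

# Sketch of probability-averaging aggregation: each ensemble member
# produces a class-probability row per sample; the ensemble averages
# them across learners and predicts the class with the highest mean.

def aggregate_predict(per_learner_proba):
    """per_learner_proba: array of shape (n_learners, n_samples, n_classes)."""
    mean_proba = np.mean(per_learner_proba, axis=0)
    return np.argmax(mean_proba, axis=1)

# Three learners, two samples, three classes (made-up probabilities).
proba = np.array([
    [[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]],
    [[0.6, 0.3, 0.1], [0.2, 0.2, 0.6]],
    [[0.2, 0.5, 0.3], [0.1, 0.7, 0.2]],
])
preds = aggregate_predict(proba)  # -> array([0, 2])
```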

predict_proba(self, X)[source]

Estimates the probability of each sample in X belonging to each of the class-labels.

Parameters
X: numpy.ndarray of shape (n_samples, n_features)

The matrix of samples one wants to predict the class probabilities for.

Returns
A numpy.ndarray of shape (n_samples, n_labels), in which each outer entry is associated with the X entry of the same index; the list at index [i] contains len(self.target_values) elements, each of which represents the probability that the i-th sample of X belongs to a certain class-label.

Raises
ValueError: A ValueError is raised if the number of classes in the base_estimator learner differs from that of the ensemble learner.

reset(self)[source]

Resets the estimator to its initial state.

Returns
self

score(self, X, y, sample_weight=None)[source]

Returns the mean accuracy on the given test data and labels.

In multi-label classification, this is the subset accuracy, which is a harsh metric since it requires that each sample’s entire label set be predicted correctly.
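A minimal sketch of subset accuracy (names here are illustrative, not the library's internal implementation): a sample only counts as correct when every one of its labels matches, which is why it is harsher than per-label accuracy.

```python
import numpy as np

# Subset accuracy for multi-label targets: a sample counts as correct
# only when its *entire* predicted label row matches the true row.

def subset_accuracy(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.mean(np.all(y_true == y_pred, axis=1))

y_true = [[1, 0, 1], [0, 1, 0], [1, 1, 0]]
y_pred = [[1, 0, 1], [0, 1, 1], [1, 1, 0]]  # second sample has one wrong label

acc = subset_accuracy(y_true, y_pred)  # 2 of 3 rows match exactly -> 2/3
```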

Parameters
X: array-like, shape = (n_samples, n_features)

Test samples.

y: array-like, shape = (n_samples) or (n_samples, n_outputs)

True labels for X.

sample_weight: array-like, shape = (n_samples), optional

Sample weights.

Returns
score: float

Mean accuracy of self.predict(X) w.r.t. y.

set_params(self, **params)[source]

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
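The `<component>__<parameter>` routing can be sketched with a tiny stand-alone example. The classes below are hypothetical stand-ins, not skmultiflow classes; they only demonstrate how the double-underscore name is split and delegated to the nested sub-estimator:

```python
# Sketch of scikit-learn-style nested parameter routing:
# "base_estimator__n_neighbors" means "set n_neighbors on the
# base_estimator sub-object". Class names here are illustrative.

class Estimator:
    def set_params(self, **params):
        for key, value in params.items():
            name, _, sub_key = key.partition("__")
            if sub_key:                      # nested: delegate downward
                getattr(self, name).set_params(**{sub_key: value})
            else:                            # simple: set directly
                setattr(self, name, value)
        return self

class KNN(Estimator):
    def __init__(self, n_neighbors=5):
        self.n_neighbors = n_neighbors

class Bagging(Estimator):
    def __init__(self, base_estimator, n_estimators=10):
        self.base_estimator = base_estimator
        self.n_estimators = n_estimators

clf = Bagging(base_estimator=KNN())
clf.set_params(n_estimators=2, base_estimator__n_neighbors=8)
```

After the call, `clf.n_estimators` is 2 and `clf.base_estimator.n_neighbors` is 8, without ever touching the nested object directly.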

Returns
self