skmultiflow.meta.OzaBaggingADWINClassifier
Oza Bagging ensemble classifier with ADWIN change detector.
Parameters
base_estimator
    Each member of the ensemble is an instance of the base estimator.
n_estimators: int
    The size of the ensemble; in other words, how many classifiers to train.
random_state: int, RandomState instance or None, optional (default=None)
    If int, random_state is the seed used by the random number generator; if RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by np.random.
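This seed-resolution convention is the same one scikit-learn follows. A minimal sketch of how such a value can be resolved into a usable generator (the helper name is illustrative, not skmultiflow's internal code):

```python
import numpy as np

def resolve_random_state(seed):
    """Resolve the random_state convention described above
    (mirrors sklearn's check_random_state helper; a sketch only)."""
    if seed is None:
        return np.random.mtrand._rand          # the global RandomState used by np.random
    if isinstance(seed, (int, np.integer)):
        return np.random.RandomState(seed)     # a fresh generator seeded with the int
    if isinstance(seed, np.random.RandomState):
        return seed                            # already a generator; use it as-is
    raise ValueError("%r cannot be used to seed a RandomState" % seed)
```

Passing the same int therefore reproduces the same ensemble, while passing a shared RandomState instance lets several estimators draw from one stream of randomness.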
Notes
This online ensemble learner is an improvement over the Online Bagging algorithm [1]. The improvement comes from the addition of an ADWIN change detector.
ADWIN stands for ADaptive WINdowing. It keeps updated statistics over a variable-sized window, so it can detect changes and perform cuts in its window so the learning algorithm can better adapt.
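As an illustration of the windowing idea, here is a much-simplified, pure-Python sketch of an ADWIN-style detector: it compares the means of each split of the current window and drops the older part when they differ by more than a Hoeffding-style bound. The real ADWIN uses an exponential histogram to do this in logarithmic memory; `adwin_step` and its constants are illustrative, not skmultiflow's implementation.

```python
import math
from collections import deque

def adwin_step(window, x, delta=0.002):
    """One update of a simplified ADWIN-style detector.

    Appends x to the window, then checks every cut point: if the means
    of the two sub-windows differ by more than a Hoeffding-style bound,
    the older sub-window is dropped and a change is flagged.
    """
    window.append(x)
    n = len(window)
    if n < 10:
        return False
    data = list(window)
    total = sum(data)
    left = 0.0
    for i in range(1, n):
        left += data[i - 1]
        n0, n1 = i, n - i
        if n0 < 5 or n1 < 5:
            continue  # skip sub-windows too small to compare
        mu0, mu1 = left / n0, (total - left) / n1
        m = 1.0 / (1.0 / n0 + 1.0 / n1)  # harmonic mean of the two sizes
        eps = math.sqrt(math.log(4.0 / delta) / (2.0 * m))
        if abs(mu0 - mu1) > eps:
            for _ in range(i):  # drop the stale sub-window
                window.popleft()
            return True
    return False
```

Feeding it a stream whose mean jumps (for instance zeros followed by ones) flags a change shortly after the jump and shrinks the window down to recent data; this per-step scan is O(n), whereas the real algorithm amortizes the work over its histogram buckets.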
References
N. C. Oza, “Online Bagging and Boosting,” in 2005 IEEE International Conference on Systems, Man and Cybernetics, 2005, vol. 3, no. 3, pp. 2340–2345.
Examples
>>> # Imports
>>> from skmultiflow.meta import OzaBaggingADWINClassifier
>>> from skmultiflow.lazy import KNNClassifier
>>> from skmultiflow.data.sea_generator import SEAGenerator
>>> # Setting up the stream
>>> stream = SEAGenerator(1, noise_percentage=6.7)
>>> # Setting up the OzaBaggingADWINClassifier to work with KNN as base estimator
>>> clf = OzaBaggingADWINClassifier(base_estimator=KNNClassifier(n_neighbors=8, max_window_size=2000, leaf_size=30),
...                                 n_estimators=2)
>>> # Keeping track of sample count and correct prediction count
>>> sample_count = 0
>>> corrects = 0
>>> # Pre training the classifier with 200 samples
>>> X, y = stream.next_sample(200)
>>> clf = clf.partial_fit(X, y, classes=stream.target_values)
>>> for i in range(2000):
...     X, y = stream.next_sample()
...     pred = clf.predict(X)
...     clf = clf.partial_fit(X, y)
...     if pred is not None:
...         if y[0] == pred[0]:
...             corrects += 1
...     sample_count += 1
>>>
>>> # Displaying the results
>>> print(str(sample_count) + ' samples analyzed.')
2000 samples analyzed.
>>> print('OzaBaggingADWINClassifier performance: ' + str(corrects / sample_count))
OzaBaggingADWINClassifier performance: 0.9645
Methods
fit(self, X, y[, classes, sample_weight])
    Fit the model.
get_info(self)
    Collects and returns the information about the configuration of the estimator.
get_params(self[, deep])
    Get parameters for this estimator.
partial_fit(self, X, y[, classes, sample_weight])
    Partially (incrementally) fit the model.
predict(self, X)
    Predict classes for the passed data.
predict_proba(self, X)
    Estimates the probability of each sample in X belonging to each of the class labels.
reset(self)
    Resets the estimator to its initial state.
score(self, X, y[, sample_weight])
    Returns the mean accuracy on the given test data and labels.
set_params(self, **params)
    Set the parameters of this estimator.
X: The features of the samples used to train the model.
y: An array-like with the class labels of all samples in X.
classes: Contains all possible/known class labels. Usage varies depending on the learning method.
sample_weight: Sample weights. If not provided, uniform weights are assumed. Usage varies depending on the learning method.
Configuration of the estimator.
deep: If True, will return the parameters for this estimator and contained subobjects that are estimators.
Parameter names mapped to their values.
classes: Array with all possible/known class labels. This is an optional parameter, except for the first partial_fit call, where it is compulsory.
sample_weight: Sample weights. If not provided, uniform weights are assumed. Usage varies depending on the base estimator.
self
A ValueError is raised if the 'classes' parameter is not passed in the first partial_fit call, or if it is passed in later calls but differs from the class list passed initially.
Since this is an ensemble learner, if X and y contain more than one sample, the algorithm will partial-fit the model one sample at a time.
Each sample is used to train each classifier a total of K times, where K is drawn from a Poisson(1) distribution.
Alongside updating the model, the learner also updates ADWIN's statistics with the new samples, so that the change detector can evaluate whether a concept drift has occurred. If drift is detected, the bagging algorithm finds the worst-performing classifier and resets its statistics and window.
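The per-sample update described above can be sketched as follows, using a toy nearest-centroid learner in place of a real base estimator (names like `oza_partial_fit` and `learn_one` are illustrative, not the skmultiflow API):

```python
import numpy as np

class NearestCentroid:
    """Toy incremental learner used only to illustrate the update rule
    (hypothetical; skmultiflow base estimators expose partial_fit instead)."""
    def __init__(self):
        self.sums, self.counts = {}, {}

    def learn_one(self, x, y):
        # Maintain a running sum and count per class label.
        self.sums[y] = self.sums.get(y, np.zeros_like(x, dtype=float)) + x
        self.counts[y] = self.counts.get(y, 0) + 1

    def predict_one(self, x):
        # Predict the class whose centroid is nearest to x.
        return min(self.sums,
                   key=lambda c: np.linalg.norm(x - self.sums[c] / self.counts[c]))

def oza_partial_fit(ensemble, x, y, rng):
    # Each member sees the sample k times, with k ~ Poisson(1),
    # simulating a bootstrap resample in the online setting.
    for learner in ensemble:
        for _ in range(rng.poisson(1.0)):
            learner.learn_one(x, y)

def oza_predict(ensemble, x):
    # Majority vote over the ensemble members.
    votes = [learner.predict_one(x) for learner in ensemble]
    return max(set(votes), key=votes.count)
```

Because k can be 0, 1, or more, each member effectively trains on a slightly different resampling of the stream, which is what gives the online ensemble its diversity.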
The set of data samples to predict the class labels for.
The predict function will average the predictions from all its learners to find the most likely prediction for the sample matrix X.
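The averaging step can be sketched as follows, assuming the per-member class-probability estimates are already stacked into one array (the function name and the stacked-array layout are assumptions for illustration):

```python
import numpy as np

def ensemble_predict(probas):
    """Average the members' class-probability estimates and take the
    most likely class. probas has shape (n_members, n_samples, n_classes)."""
    return np.argmax(np.mean(probas, axis=0), axis=1)
```

For example, two members voting [0.6, 0.4] and [0.1, 0.9] on one sample average to [0.35, 0.65], so class 1 is predicted.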
The matrix of samples one wants to predict the class probabilities for.
In multi-label classification, this is the subset accuracy, which is a harsh metric since it requires that the entire label set of each sample be predicted correctly.
Test samples.
True labels for X.
Sample weights.
Mean accuracy of self.predict(X) with respect to y.
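For hard label predictions, the returned value is equivalent to the following one-line computation (a sketch; the real method also supports sample weights):

```python
import numpy as np

def mean_accuracy(y_true, y_pred):
    # Fraction of samples whose predicted label matches the true label.
    return float(np.mean(np.asarray(y_true) == np.asarray(y_pred)))
```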
The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.