skmultiflow.meta.OzaBaggingClassifier
Oza Bagging ensemble classifier.
Parameters

base_estimator: skmultiflow.core.BaseSKMObject or sklearn.BaseEstimator
    Each member of the ensemble is an instance of the base estimator.
n_estimators: int
    The size of the ensemble, i.e., how many classifiers to train.
random_state: int, RandomState instance or None, optional (default=None)
    If int, random_state is the seed used by the random number generator; if RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by np.random.
Notes
Oza Bagging [1] is an ensemble learning method first introduced in Oza and Russell's 'Online Bagging and Boosting'. It is an improvement of the well-known Bagging ensemble method for the batch setting, adapted in this version so that it can effectively handle data streams.
In a traditional Bagging algorithm for the batch setting, we would have M classifiers, each trained on a different dataset created by drawing N samples with replacement from the N-sized training set.
In the online context there is no fixed training dataset, only a stream of samples, so drawing samples with replacement cannot be executed trivially. The strategy adopted by the Online Bagging algorithm is to simulate this by training each classifier on each arriving sample K times, where K is drawn from a binomial distribution. Since the data stream can be considered infinite, and with infinite samples the binomial distribution tends to a Poisson(1) distribution, Oza and Russell found Poisson(1) sampling to be a good equivalent of drawing with replacement.
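The update can be sketched in a few lines. This is a minimal illustration assuming ensemble members expose a scikit-learn-style partial_fit; the function and variable names are illustrative, not skmultiflow internals:

import numpy as np

rng = np.random.RandomState(1)

def online_bagging_step(ensemble, x, y, classes):
    # One Oza Bagging update for a single sample (x, y): each ensemble
    # member trains on the sample k times, with k drawn from Poisson(1),
    # simulating 'drawing with replacement' on an infinite stream.
    for member in ensemble:
        k = rng.poisson(1.0)
        for _ in range(k):
            member.partial_fit(np.asarray([x]), np.asarray([y]), classes=classes)
    return ensemble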
References
N. C. Oza, “Online Bagging and Boosting,” in 2005 IEEE International Conference on Systems, Man and Cybernetics, 2005, vol. 3, no. 3, pp. 2340–2345.
Examples
>>> # Imports
>>> from skmultiflow.meta import OzaBaggingClassifier
>>> from skmultiflow.lazy import KNNClassifier
>>> from skmultiflow.data import SEAGenerator
>>> # Setting up the stream
>>> stream = SEAGenerator(1, noise_percentage=0.07)
>>> # Setting up the OzaBagging classifier to work with KNN as base estimator
>>> clf = OzaBaggingClassifier(base_estimator=KNNClassifier(n_neighbors=8,
...                                                         max_window_size=2000,
...                                                         leaf_size=30),
...                            n_estimators=2)
>>> # Keeping track of sample count and correct prediction count
>>> sample_count = 0
>>> corrects = 0
>>> # Pre-training the classifier with 200 samples
>>> X, y = stream.next_sample(200)
>>> clf = clf.partial_fit(X, y, classes=stream.target_values)
>>> for i in range(2000):
...     X, y = stream.next_sample()
...     pred = clf.predict(X)
...     clf = clf.partial_fit(X, y)
...     if pred is not None:
...         if y[0] == pred[0]:
...             corrects += 1
...     sample_count += 1
>>> # Displaying the results
>>> print(str(sample_count) + ' samples analyzed.')
2000 samples analyzed.
>>> print('OzaBaggingClassifier performance: ' + str(corrects / sample_count))
OzaBaggingClassifier performance: 0.9095
Methods
fit(self, X, y[, classes, sample_weight])
    Fit the model.
get_info(self)
    Collects and returns the information about the configuration of the estimator.
get_params(self[, deep])
    Get parameters for this estimator.
partial_fit(self, X, y[, classes, sample_weight])
    Partially (incrementally) fit the model.
predict(self, X)
    Predict classes for the passed data.
predict_proba(self, X)
    Estimate the probability of each sample in X belonging to each of the class labels.
reset(self)
    Resets the estimator to its initial state.
score(self, X, y[, sample_weight])
    Returns the mean accuracy on the given test data and labels.
set_params(self, **params)
    Set the parameters of this estimator.
fit(self, X, y, classes=None, sample_weight=None)

Parameters
X: numpy.ndarray of shape (n_samples, n_features)
    The features to train the model.
y: array-like
    An array-like with the class labels of all samples in X.
classes: array-like, optional (default=None)
    Contains all possible/known class labels. Usage varies depending on the learning method.
sample_weight: array-like, optional (default=None)
    Samples weight. If not provided, uniform weights are assumed. Usage varies depending on the learning method.
get_info(self)

Returns
string
    Configuration of the estimator.
get_params(self, deep=True)

Parameters
deep: bool, optional (default=True)
    If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns
params: dict
    Parameter names mapped to their values.
partial_fit(self, X, y, classes=None, sample_weight=None)

Parameters
X: numpy.ndarray of shape (n_samples, n_features)
    The features to train the model.
y: array-like
    An array-like with the class labels of all samples in X.
classes: array-like, optional (default=None)
    Array with all possible/known class labels. This parameter is optional, except for the first partial_fit call, where it is compulsory.
sample_weight: array-like, optional (default=None)
    Samples weight. If not provided, uniform weights are assumed. Usage varies depending on the base estimator.

Returns
self

Raises
ValueError
    Raised if the 'classes' parameter is not passed in the first partial_fit call, or if it is passed in later calls but differs from the initial class list.

Notes
Since this is an ensemble learner, if X and y contain more than one sample, the algorithm partial fits the model one sample at a time. Each classifier trains on each sample a total of K times, where K is drawn from a Poisson(1) distribution.
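For instance, mirroring the Examples section above (with stream and clf as defined there), the first call must receive the class list while later calls may omit it:

>>> X, y = stream.next_sample(200)
>>> clf = clf.partial_fit(X, y, classes=stream.target_values)  # first call: classes is compulsory
>>> X, y = stream.next_sample()
>>> clf = clf.partial_fit(X, y)  # later calls: classes may be omitted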
predict(self, X)

Parameters
X: numpy.ndarray of shape (n_samples, n_features)
    The set of data samples to predict the class labels for.

Notes
The predict function averages the predictions from all its learners to find the most likely prediction for the sample matrix X.
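A minimal sketch of that aggregation, assuming each member exposes a scikit-learn-style predict_proba (the helper name is illustrative, not the library's internal implementation):

import numpy as np

def averaged_prediction(ensemble, X, classes):
    # Average the class-probability estimates of all members and
    # return, per sample, the class with the highest mean probability.
    mean_proba = np.mean([member.predict_proba(X) for member in ensemble], axis=0)
    return np.asarray(classes)[np.argmax(mean_proba, axis=1)]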
predict_proba(self, X)

Parameters
X: numpy.ndarray of shape (n_samples, n_features)
    The matrix of samples for which to predict the class probabilities.
score(self, X, y, sample_weight=None)

Parameters
X: Test samples.
y: True labels for X.
sample_weight: Sample weights.

Returns
score: float
    Mean accuracy of self.predict(X) with respect to y.

Notes
In multi-label classification, this is the subset accuracy, which is a harsh metric since it requires, for each sample, that each label set be correctly predicted.
set_params(self, **params)

Notes
The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter>, so it is possible to update each component of a nested object.
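For instance, with the ensemble configured as in the Examples section, the nested syntax could reach into the base KNN estimator (assuming that parameter exists on the chosen base estimator):

>>> clf = clf.set_params(base_estimator__n_neighbors=5)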