skmultiflow.meta.LearnPPClassifier

Learn++ ensemble classifier.
Learn++ [1] does not require access to previously used data during subsequent incremental learning steps. At the same time, it does not forget previously acquired knowledge. Learn++ utilizes an ensemble of classifiers by generating multiple hypotheses using training data sampled according to carefully tailored distributions.
Parameters

base_estimator (default=DecisionTreeClassifier)
Each member of the ensemble is an instance of the base estimator.

n_estimators
The number of classifiers per ensemble.

n_ensembles
The number of ensembles to keep.

window_size
The size of the training window (batch), in other words, how many instances are kept for training.

error_threshold
Only keep learners whose error is smaller than error_threshold.

random_state
If int, random_state is the seed used by the random number generator; if RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by np.random.
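The three accepted forms of random_state are typically resolved with a small scikit-learn-style helper. A minimal sketch of that resolution logic (the resolve_random_state name is hypothetical, not part of the library):

```python
import numpy as np

def resolve_random_state(random_state):
    """Return a np.random.RandomState for int, RandomState, or None input."""
    if random_state is None:
        # fall back to the (private) global RandomState behind np.random
        return np.random.mtrand._rand
    if isinstance(random_state, (int, np.integer)):
        # use the integer as a seed for a fresh, reproducible generator
        return np.random.RandomState(random_state)
    if isinstance(random_state, np.random.RandomState):
        # already a generator: use it as-is
        return random_state
    raise ValueError("random_state must be an int, a RandomState, or None")
```

Passing an int makes runs reproducible, while passing a shared RandomState instance lets several estimators draw from one stream.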
Raises

A RuntimeError is raised if the base_estimator is too weak, in other words, if its accuracy on the dataset is too low. A RuntimeError is also raised if the ‘classes’ parameter is not passed in the first partial_fit call, or if the classes passed in later calls differ from the initial ones.
Notes
Originally, Learn++ is designed to train all of its members and combine their predictions considering the observed normalized errors. However, when training the base estimators, if the observed prediction error shrinks to zero before all estimators are trained, the error normalization is ill-defined, i.e., the instance error-based weight normalization factor (the sum of the errors) is zero. This implementation adds an ‘early stop’ mechanism to circumvent this corner case: LearnPPClassifier stops adding members to the ensemble if all instances are correctly classified. Otherwise (its normal behavior), the model uses as many ensemble members as defined via the n_estimators parameter.
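The per-batch boosting loop and the ‘early stop’ guard described above can be sketched in pure Python. This is an illustrative outline under simplifying assumptions, not the library's actual code: the toy train_stump learner works on a single 1-D feature, and all names are hypothetical.

```python
import random

def train_stump(X, y):
    """Pick a threshold/orientation on a 1-D feature minimizing error."""
    best = None
    for t in set(X):
        for sign in (1, -1):
            err = sum(1 for x, yi in zip(X, y)
                      if (1 if sign * (x - t) >= 0 else 0) != yi)
            if best is None or err < best[0]:
                best = (err, t, sign)
    _, t, sign = best
    return lambda x, t=t, sign=sign: 1 if sign * (x - t) >= 0 else 0

def learn_pp_batch(X, y, n_estimators, rng):
    """One Learn++ batch: boost up to n_estimators weak learners."""
    n = len(X)
    D = [1.0 / n] * n                       # instance sampling distribution
    hypotheses, betas = [], []
    for _ in range(n_estimators):
        # draw a training subset according to the current distribution
        idx = rng.choices(range(n), weights=D, k=n)
        h = train_stump([X[i] for i in idx], [y[i] for i in idx])
        wrong = [h(X[i]) != y[i] for i in range(n)]
        eps = sum(D[i] for i in range(n) if wrong[i])
        if eps > 0.5:
            continue                        # too weak on this batch: discard
        hypotheses.append(h)
        betas.append(eps / (1.0 - eps))     # normalized error in [0, 1]
        if eps == 0.0:
            # 'early stop': every instance is classified correctly, so the
            # error-based weight normalization below would divide by zero
            break
        # shrink the weight of correctly classified instances, renormalize
        D = [d * betas[-1] if not w else d for d, w in zip(D, wrong)]
        s = sum(D)
        D = [d / s for d in D]
    return hypotheses, betas
```

The multiplicative down-weighting concentrates the sampling distribution on hard instances, which is what the description above means by "carefully tailored distributions".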
References
[1] Polikar, Robi; Upda, Lalita; Upda, Satish S.; Honavar, Vasant. "Learn++: An Incremental Learning Algorithm for Supervised Neural Networks." IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 2001.
Examples
>>> # Imports
>>> import numpy as np
>>> from skmultiflow.meta.learn_pp import LearnPPClassifier
>>> from skmultiflow.lazy import KNNClassifier
>>> from skmultiflow.data.sea_generator import SEAGenerator
>>> # Setting up the stream
>>> stream = SEAGenerator(1)
>>> # Setting up the Learn++ classifier to work with KNN classifiers
>>> clf = LearnPPClassifier(base_estimator=KNNClassifier(n_neighbors=8, max_window_size=2000,
...                         leaf_size=30), n_estimators=30)
>>> # Keeping track of sample count and correct prediction count
>>> sample_count = 0
>>> corrects = 0
>>> m = 200
>>> # Pre training the classifier with 200 samples
>>> X, y = stream.next_sample(m)
>>> clf = clf.partial_fit(X, y, classes=stream.target_values)
>>> for i in range(3):
...     X, y = stream.next_sample(m)
...     pred = clf.predict(X)
...     clf = clf.partial_fit(X, y)
...     if pred is not None:
...         corrects += np.sum(y == pred)
...     sample_count += m
>>>
>>> # Displaying the results
>>> print('Learn++ classifier performance: ' + str(corrects / sample_count))
Learn++ classifier performance: 0.9555
Methods
fit(X, y[, classes, sample_weight])
fit
Fit the model.
get_info()
get_info
Collects and returns information about the configuration of the estimator.
get_params([deep])
get_params
Get parameters for this estimator.
partial_fit(X, y[, classes, sample_weight])
partial_fit
Partially (incrementally) fit the model.
predict(X)
predict
Predict classes for the passed data.
predict_proba(X)
predict_proba
Predicts the probability of each sample belonging to each one of the known classes.
reset()
reset
Resets the estimator to its initial state.
score(X, y[, sample_weight])
score
Returns the mean accuracy on the given test data and labels.
set_params(**params)
set_params
Set the parameters of this estimator.
The features to train the model.
An array-like with the class labels of all samples in X.
Contains all possible/known class labels. Usage varies depending on the learning method.
Samples weight. If not provided, uniform weights are assumed. Usage varies depending on the learning method.
Configuration of the estimator.
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Parameter names mapped to their values.
Array with all possible/known class labels. This is an optional parameter, except for the first partial_fit call where it is compulsory.
self
A RuntimeError is raised if the ‘classes’ parameter is not passed in the first partial_fit call, or if the classes passed in later calls differ from the initial list. A RuntimeError is also raised if the base_estimator is too weak, in other words, if its accuracy on the dataset is too low.
The set of data samples to predict the labels for.
The predict function uses a weighted majority vote over all ensemble members to determine the most likely class for each sample in X.
A matrix of the samples we want to predict.
An array of shape (n_samples, n_classes), in which each row is associated with the X entry of the same index and contains len(self.target_values) elements, each representing the probability that the i-th sample of X belongs to the corresponding class label.
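Turning the members' weighted votes into per-class probabilities and a final prediction can be sketched as follows; this is an illustrative outline with hypothetical names, not the library's implementation. Each hypothesis votes with weight log(1/beta), so members with smaller normalized error count for more:

```python
import math

def predict_proba_sketch(hypotheses, betas, X, classes):
    """Weight each member's vote by log(1/beta) and normalize per sample."""
    proba = []
    for x in X:
        votes = {c: 0.0 for c in classes}
        for h, beta in zip(hypotheses, betas):
            # smaller normalized error (beta) => larger voting weight
            votes[h(x)] += math.log(1.0 / max(beta, 1e-10))
        total = sum(votes.values()) or 1.0
        proba.append([votes[c] / total for c in classes])
    return proba

def predict_sketch(hypotheses, betas, X, classes):
    """Return the class with the highest weighted-vote share per sample."""
    proba = predict_proba_sketch(hypotheses, betas, X, classes)
    return [classes[max(range(len(classes)), key=row.__getitem__)]
            for row in proba]
```

Each probability row sums to one by construction, and predict simply takes the argmax of the corresponding row.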
In multi-label classification, this is the subset accuracy, which is a harsh metric since it requires that each sample's entire label set be predicted correctly.
Test samples.
True labels for X.
Sample weights.
Mean accuracy of self.predict(X) with respect to y.
The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
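The routing behind the <component>__<parameter> form can be sketched with a small recursive helper; this is a hypothetical illustration of the convention, not the library's implementation, and the toy Ensemble/Stump classes exist only for the demo:

```python
def set_nested_params(estimator, **params):
    """Route 'name__sub' keys to nested components, set plain keys directly."""
    for key, value in params.items():
        name, _, sub_key = key.partition("__")
        if sub_key:
            # recurse: everything after the first '__' belongs to the component
            set_nested_params(getattr(estimator, name), **{sub_key: value})
        else:
            setattr(estimator, name, value)
    return estimator

class Stump:
    """Toy nested component for the demo."""
    def __init__(self):
        self.max_depth = 1

class Ensemble:
    """Toy estimator holding a nested component."""
    def __init__(self):
        self.n_estimators = 30
        self.base_estimator = Stump()
```

For example, set_nested_params(Ensemble(), base_estimator__max_depth=3) updates the max_depth of the nested Stump, mirroring how set_params reaches into pipelines.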