skmultiflow.meta.LearnPPNSEClassifier
Learn++.NSE ensemble classifier.
Learn++.NSE [1] is an ensemble of classifiers for incremental learning in non-stationary environments (NSEs), where the underlying data distributions change over time. It learns from consecutive batches of data that experience constant or variable rates of drift, the addition or deletion of concept classes, as well as cyclical drift.
Parameters

base_estimator
    Each member of the ensemble is an instance of the base estimator.
n_estimators
    The number of base estimators in the ensemble.
window_size
    The size of the training window (batch), in other words, how many instances are kept for training.
crossing_point
    Halfway crossing point of the sigmoid function controlling the number of previous periods taken into account during weighting.
slope
    Slope of the sigmoid function controlling the number of previous periods taken into account during weighting.
pruning
    Classifier pruning strategy to be used:
    pruning=None: don't prune classifiers
    pruning='age': age-based pruning
    pruning='error': error-based pruning
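To make the roles of slope and crossing_point concrete, the sketch below shows a sigmoid discount of past periods: recent periods get weights close to 1, the period at the crossing point gets exactly half weight, and older periods fade toward 0. The exact formula here is an illustrative assumption (a logistic curve halving at the crossing point), not the library's internal implementation; see the reference [1] for the precise weighting used by Learn++.NSE.

```python
import math

def period_weight(age, slope=0.5, crossing_point=10):
    # Sigmoid discount for an error observed `age` periods ago:
    # close to 1 for recent periods, exactly 0.5 at the crossing
    # point, and close to 0 far in the past. `slope` controls how
    # sharply the transition between "recent" and "old" happens.
    return 1.0 / (1.0 + math.exp(slope * (age - crossing_point)))

# With the defaults, the most recent period counts almost fully,
# a 10-period-old one counts half, a 20-period-old one almost nothing.
recent, halfway, old = period_weight(0), period_weight(10), period_weight(20)
```

A steeper slope makes the cutoff between recent and old periods sharper, while a larger crossing_point extends how far back errors still influence the weighting.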
References
Ryan Elwell and Robi Polikar. Incremental learning of concept drift in non-stationary environments. IEEE Transactions on Neural Networks, 22(10):1517-1531, October 2011. ISSN 1045-9227. URL http://dx.doi.org/10.1109/TNN.2011.2160459
Examples
>>> # Imports
>>> from skmultiflow.data import SEAGenerator
>>> from skmultiflow.meta import LearnPPNSEClassifier
>>>
>>> # Setup a data stream
>>> stream = SEAGenerator(random_state=1)
>>>
>>> # Setup Learn++.NSE Classifier
>>> learn_pp_nse = LearnPPNSEClassifier()
>>>
>>> # Setup variables to control loop and track performance
>>> n_samples = 0
>>> correct_cnt = 0
>>> max_samples = 200
>>>
>>> # Train the classifier with the samples provided by the data stream
>>> while n_samples < max_samples and stream.has_more_samples():
...     X, y = stream.next_sample()
...     y_pred = learn_pp_nse.predict(X)
...     if y[0] == y_pred[0]:
...         correct_cnt += 1
...     learn_pp_nse.partial_fit(X, y, classes=stream.target_values)
...     n_samples += 1
>>>
>>> # Display results
>>> print('{} samples analyzed.'.format(n_samples))
>>> print('Learn++.NSE classifier accuracy: {}'.format(correct_cnt / n_samples))
Methods

fit(self, X, y[, classes, sample_weight])
    Fit the model.
get_info(self)
    Collects and returns the information about the configuration of the estimator.
get_params(self[, deep])
    Get parameters for this estimator.
partial_fit(self, X[, y, classes, sample_weight])
    Partially fits the model, based on the X and y matrices.
predict(self, X)
    Predicts the class for a given sample by majority vote from all the members of the ensemble.
predict_proba(self, X)
    Predicts the probability of each sample belonging to each one of the known classes.
reset(self)
    Resets the estimator to its initial state.
score(self, X, y[, sample_weight])
    Returns the mean accuracy on the given test data and labels.
set_params(self, **params)
    Set the parameters of this estimator.
fit(self, X, y[, classes, sample_weight])
    X: The features to train the model.
    y: An array-like with the class labels of all samples in X.
    classes: Contains all possible/known class labels. Usage varies depending on the learning method.
    sample_weight: Sample weights. If not provided, uniform weights are assumed. Usage varies depending on the learning method.
get_info(self)
    Returns: Configuration of the estimator.
get_params(self[, deep])
    deep: If True, will return the parameters for this estimator and contained subobjects that are estimators.
    Returns: Parameter names mapped to their values.
partial_fit(self, X[, y, classes, sample_weight])
    X: Features matrix used for partially updating the model.
    y: An array-like of all the class labels for the samples in X.
    classes: Array with all possible/known class labels. This is an optional parameter, except for the first partial_fit call, where it is compulsory.
    Returns: self
    Raises: RuntimeError if the 'classes' parameter is not passed in the first partial_fit call, or if the classes passed in later calls differ from the initial list. A RuntimeError is also raised if the base_estimator is too weak, in other words, if its accuracy on the dataset is too low.
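The 'classes' contract above can be illustrated with a small stand-in (BatchLearner below is hypothetical and only mimics the error behavior described; it is not part of skmultiflow):

```python
class BatchLearner:
    """Minimal stand-in mimicking partial_fit's `classes` contract."""

    def __init__(self):
        self.classes = None

    def partial_fit(self, X, y, classes=None):
        if self.classes is None:
            # First call: the full list of classes is compulsory.
            if classes is None:
                raise RuntimeError(
                    "'classes' must be passed on the first partial_fit call")
            self.classes = list(classes)
        elif classes is not None and list(classes) != self.classes:
            # Later calls: classes may be omitted, but must not change.
            raise RuntimeError("'classes' differs from the initial list")
        # ... incremental training on (X, y) would happen here ...
        return self
```

This is why the doctest above passes classes=stream.target_values on every partial_fit call: it is required on the first call and harmless, but checked, on subsequent ones.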
predict(self, X)
    X: A matrix of the samples we want to predict.
    Returns: A numpy.ndarray with the label prediction for all the samples in X.
predict_proba(self, X)
    Returns: An array of shape (n_samples, n_classes), in which each outer entry is associated with the X entry of the same index, and where the list at index [i] contains len(self.target_values) elements, each representing the probability that the i-th sample of X belongs to a certain class.
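As an illustration of this output shape, the sketch below turns ensemble member votes for one sample into per-class probabilities with one entry per known class. Note it uses a plain, unweighted vote; Learn++.NSE itself combines members with time-adjusted weights, and vote_proba is a hypothetical helper, not part of the API.

```python
from collections import Counter

def vote_proba(votes, target_values):
    # One probability per known class, in the order of target_values,
    # so every output row has len(target_values) elements.
    counts = Counter(votes)
    return [counts[c] / len(votes) for c in target_values]

# Three ensemble members voting over three known classes:
row = vote_proba(votes=[0, 0, 1], target_values=[0, 1, 2])
```

Classes that received no votes still get an entry (with probability 0), which is why every row has exactly len(self.target_values) elements.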
score(self, X, y[, sample_weight])
    X: Test samples.
    y: True labels for X.
    sample_weight: Sample weights.
    Returns: Mean accuracy of self.predict(X) w.r.t. y.
    Notes: In multi-label classification, this is the subset accuracy, which is a harsh metric since it requires that each label set be correctly predicted for each sample.
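The mean-accuracy computation can be sketched as follows; mean_accuracy is an illustrative helper showing the effect of sample_weight, not the library's actual implementation:

```python
def mean_accuracy(y_true, y_pred, sample_weight=None):
    # Fraction of correctly predicted labels; with sample_weight,
    # the weighted fraction of correct predictions.
    if sample_weight is None:
        sample_weight = [1.0] * len(y_true)
    correct = sum(w for t, p, w in zip(y_true, y_pred, sample_weight) if t == p)
    return correct / sum(sample_weight)
```

With uniform weights this is the usual accuracy; with non-uniform weights, samples with larger weight contribute proportionally more to the score.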
set_params(self, **params)
    The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter>, so that it is possible to update each component of a nested object.
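A minimal sketch of the <component>__<parameter> routing convention follows. This illustrates only the naming scheme, not scikit-learn's actual set_params implementation (which also validates parameter names); the Ensemble and Base classes are hypothetical stand-ins.

```python
def set_params(obj, **params):
    # Keys of the form "component__parameter" are routed to the nested
    # component; plain keys are set on the object itself.
    for key, value in params.items():
        if "__" in key:
            component, _, sub_key = key.partition("__")
            setattr(getattr(obj, component), sub_key, value)
        else:
            setattr(obj, key, value)
    return obj

class Base:          # hypothetical base estimator
    max_depth = 5

class Ensemble:      # hypothetical ensemble wrapping a base estimator
    n_estimators = 15
    base_estimator = Base()

# Update both the ensemble itself and its nested component in one call:
model = set_params(Ensemble(), n_estimators=30, base_estimator__max_depth=3)
```

For this estimator, that means a call like set_params(n_estimators=30, base_estimator__max_depth=3) would update the ensemble size and a parameter of the contained base estimator at once.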