skmultiflow.meta.StreamingRandomPatchesClassifier
Streaming Random Patches ensemble classifier.
Parameters

base_estimator
    The base estimator.
n_estimators
    Number of members in the ensemble.
subspace_mode
    Indicates how m, defined by subspace_size, is interpreted, with M denoting the total number of features: m can be taken as an absolute value, as a percentage of M, as sqrt(M)+1, or as M-(sqrt(M)+1). For example, with M = 100 features, sqrt(M)+1 yields subspaces of 11 features.
subspace_size
    Number of features per subset for each classifier. A negative value means total_features - subspace_size.
lam
    Lambda value for bagging.
drift_detection_method
    Drift detection method.
warning_detection_method
    Warning detection method.
disable_weighted_vote
    If True, disables weighted voting.
disable_drift_detection
    If True, disables drift detection and the background learner.
disable_background_learner
    If True, disables the background learner; trees are reset immediately when drift is detected.
nominal_attributes
    List of nominal attributes. If empty, all attributes are assumed to be numerical.
random_state
    If int, random_state is the seed used by the random number generator; if RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by np.random.
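A minimal construction sketch is shown below. The parameter values are illustrative choices, not necessarily the library defaults, and ADWIN is just one of the drift detectors available in skmultiflow:

>>> from skmultiflow.drift_detection import ADWIN
>>> from skmultiflow.meta import StreamingRandomPatchesClassifier
>>>
>>> # Ensemble of 10 learners, each trained on a random patch of the data;
>>> # separate ADWIN instances monitor each member for warnings and drifts.
>>> srp = StreamingRandomPatchesClassifier(n_estimators=10,
>>>                                        subspace_size=60,
>>>                                        drift_detection_method=ADWIN(delta=1e-5),
>>>                                        warning_detection_method=ADWIN(delta=1e-4),
>>>                                        random_state=1)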
Notes
The Streaming Random Patches (SRP) [1] ensemble method simulates bagging or random subspaces. The default algorithm uses both bagging and random subspaces, namely Random Patches. The default base estimator is a Hoeffding Tree, but SRP can be used with any other base estimator (unlike random forest variations, which are tied to tree learners), as sketched below.
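As a sketch of swapping the base estimator, the snippet below plugs in skmultiflow's NaiveBayes in place of the default Hoeffding Tree; the choice of NaiveBayes here is purely illustrative:

>>> from skmultiflow.bayes import NaiveBayes
>>> from skmultiflow.meta import StreamingRandomPatchesClassifier
>>>
>>> # Any incremental classifier implementing partial_fit can serve as the
>>> # base estimator of the ensemble.
>>> srp_nb = StreamingRandomPatchesClassifier(base_estimator=NaiveBayes(),
>>>                                           n_estimators=10)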
References
[1] Heitor Murilo Gomes, Jesse Read, Albert Bifet. Streaming Random Patches for Evolving Data Stream Classification. IEEE International Conference on Data Mining (ICDM), 2019.
Examples
>>> from skmultiflow.data import AGRAWALGenerator
>>> from skmultiflow.meta import StreamingRandomPatchesClassifier
>>>
>>> stream = AGRAWALGenerator(random_state=1)
>>> srp = StreamingRandomPatchesClassifier(random_state=1,
>>>                                        n_estimators=3)
>>>
>>> # Variables to control loop and track performance
>>> n_samples = 0
>>> correct_cnt = 0
>>> max_samples = 200
>>>
>>> # Run test-then-train loop for max_samples
>>> # or while there is data in the stream
>>> while n_samples < max_samples and stream.has_more_samples():
>>>     X, y = stream.next_sample()
>>>     y_pred = srp.predict(X)
>>>     if y[0] == y_pred[0]:
>>>         correct_cnt += 1
>>>     srp.partial_fit(X, y)
>>>     n_samples += 1
>>>
>>> print('{} samples analyzed.'.format(n_samples))
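The loop above tracks raw accuracy; as a hypothetical continuation, predict_proba can be used to inspect the per-class probability estimates for a further sample:

>>> # Probability estimates for one more sample from the stream;
>>> # one estimate per known class label.
>>> X, y = stream.next_sample()
>>> print(srp.predict_proba(X))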
Methods
fit(self, X, y[, classes, sample_weight])
    Fit the model.
get_info(self)
    Collects and returns the information about the configuration of the estimator.
get_params(self[, deep])
    Get parameters for this estimator.
partial_fit(self, X, y[, classes, sample_weight])
    Partially (incrementally) fit the model.
predict(self, X)
    Predict classes for the passed data.
predict_proba(self, X)
    Estimate the probability of X belonging to each class label.
reset(self)
    Resets the estimator to its initial state.
score(self, X, y[, sample_weight])
    Returns the mean accuracy on the given test data and labels.
set_params(self, **params)
    Set the parameters of this estimator.
fit(X, y[, classes, sample_weight])

X
    The features to train the model.
y
    An array-like with the class labels of all samples in X.
classes
    Contains all possible/known class labels. Usage varies depending on the learning method.
sample_weight
    Samples weight. If not provided, uniform weights are assumed. Usage varies depending on the learning method.
get_info()

Returns the configuration of the estimator.

get_params([deep])

deep
    If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns the parameter names mapped to their values.
partial_fit(X, y[, classes, sample_weight])

X, y and classes are as in fit.
sample_weight
    Not used.
predict(X)

X
    The set of data samples to predict the class labels for.

predict_proba(X)

X
    The samples to predict the class probabilities for.
score(X, y[, sample_weight])

X
    Test samples.
y
    True labels for X.
sample_weight
    Sample weights.

Returns the mean accuracy of self.predict(X) with respect to y. In multi-label classification this is the subset accuracy, a harsh metric since it requires that the full label set of each sample be predicted correctly.
set_params(**params)

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter>, so that it is possible to update each component of a nested object.
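To make the nested-parameter form concrete, here is a small sketch; the name base_estimator__grace_period assumes the default Hoeffding Tree base learner and is illustrative only:

>>> from skmultiflow.meta import StreamingRandomPatchesClassifier
>>>
>>> srp = StreamingRandomPatchesClassifier(n_estimators=3)
>>> # Flat parameter on the ensemble itself:
>>> srp.set_params(n_estimators=5)
>>> # Nested <component>__<parameter> form, reaching into the base estimator
>>> # (assumes the default HoeffdingTreeClassifier, which has a grace_period
>>> # parameter):
>>> srp.set_params(base_estimator__grace_period=100)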