skmultiflow.meta.BatchIncrementalClassifier

class skmultiflow.meta.BatchIncrementalClassifier(base_estimator=DecisionTreeClassifier(), window_size=100, n_estimators=100)[source]

Batch Incremental ensemble classifier.

This is a wrapper that allows the application of any batch model to a stream by incrementally building an ensemble of instances of the batch model. A window of examples is collected, then used to train a new model, which is added to the ensemble. A maximum number of models ensures memory use is finite (the oldest model is deleted when this number is exceeded).
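The window/ensemble mechanism can be sketched in plain Python. This is a toy illustration of the idea, not skmultiflow's implementation; `MajorityClassModel` is a hypothetical stand-in for the batch base estimator:

```python
from collections import Counter, deque

class MajorityClassModel:
    """Hypothetical stand-in for a batch model: predicts the most
    common label seen in its training window."""
    def fit(self, X, y):
        self.label = Counter(y).most_common(1)[0][0]
        return self

    def predict(self, X):
        return [self.label for _ in X]

class ToyBatchIncremental:
    """Toy sketch of the batch-incremental idea: buffer a window of
    samples, fit a new batch model on each full window, keep at most
    n_estimators models (oldest evicted first), predict by vote."""
    def __init__(self, window_size=3, n_estimators=2):
        self.window_size = window_size
        self.ensemble = deque(maxlen=n_estimators)  # FIFO eviction
        self.X_buf, self.y_buf = [], []

    def partial_fit(self, X, y):
        for xi, yi in zip(X, y):
            self.X_buf.append(xi)
            self.y_buf.append(yi)
            if len(self.X_buf) == self.window_size:
                # A full window triggers training of a new batch model
                self.ensemble.append(
                    MajorityClassModel().fit(self.X_buf, self.y_buf))
                self.X_buf, self.y_buf = [], []
        return self

    def predict(self, X):
        # Majority vote across the ensemble members
        votes = [m.predict(X) for m in self.ensemble]
        return [Counter(col).most_common(1)[0][0] for col in zip(*votes)]
```

The `deque(maxlen=...)` gives the finite-memory guarantee: appending a model beyond `n_estimators` silently drops the oldest one.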

Parameters
base_estimator: skmultiflow.core.BaseSKMObject or sklearn.BaseEstimator (default=DecisionTreeClassifier)

Each member of the ensemble is an instance of the base estimator.

window_size: int (default=100)

The size of the training window (batch), in other words, how many instances are kept for training.

n_estimators: int (default=100)

Number of estimators in the ensemble.

Notes

Not yet multi-label capable.

Examples

>>> # Imports
>>> from skmultiflow.data import SEAGenerator
>>> from skmultiflow.meta import BatchIncrementalClassifier
>>>
>>> # Setup a data stream
>>> stream = SEAGenerator(random_state=1)
>>>
>>> # Pre-training the classifier with 200 samples
>>> X, y = stream.next_sample(200)
>>> batch_incremental_cfier = BatchIncrementalClassifier()
>>> batch_incremental_cfier.partial_fit(X, y)
>>>
>>> # Preparing the processing of 5000 samples and correct prediction count
>>> n_samples = 0
>>> correct_cnt = 0
>>> while n_samples < 5000 and stream.has_more_samples():
>>>     X, y = stream.next_sample()
>>>     y_pred = batch_incremental_cfier.predict(X)
>>>     if y[0] == y_pred[0]:
>>>         correct_cnt += 1
>>>     batch_incremental_cfier.partial_fit(X, y)
>>>     n_samples += 1
>>>
>>> # Display results
>>> print('Batch Incremental ensemble classifier example')
>>> print('{} samples analyzed'.format(n_samples))
>>> print('Performance: {}'.format(correct_cnt / n_samples))

Methods

fit(self, X, y[, classes, sample_weight])

Fit the model.

get_info(self)

Collects and returns the information about the configuration of the estimator

get_params(self[, deep])

Get parameters for this estimator.

partial_fit(self, X[, y, classes, sample_weight])

Partially (incrementally) fit the model.

predict(self, X)

Predict classes for the passed data.

predict_proba(self, X)

Estimates the probability of each sample in X belonging to each of the class-labels.

reset(self)

Resets the estimator to its initial state.

score(self, X, y[, sample_weight])

Returns the mean accuracy on the given test data and labels.

set_params(self, **params)

Set the parameters of this estimator.

fit(self, X, y, classes=None, sample_weight=None)[source]

Fit the model.

Parameters
X: numpy.ndarray of shape (n_samples, n_features)

The features to train the model.

y: numpy.ndarray of shape (n_samples, n_targets)

An array-like with the class labels of all samples in X.

classes: numpy.ndarray, optional (default=None)

Contains all possible/known class labels. Usage varies depending on the learning method.

sample_weight: numpy.ndarray, optional (default=None)

Samples weight. If not provided, uniform weights are assumed. Usage varies depending on the learning method.

Returns
self
get_info(self)[source]

Collects and returns the information about the configuration of the estimator

Returns
string

Configuration of the estimator.

get_params(self, deep=True)[source]

Get parameters for this estimator.

Parameters
deep: boolean, optional (default=True)

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns
params: mapping of string to any

Parameter names mapped to their values.

partial_fit(self, X, y=None, classes=None, sample_weight=None)[source]

Partially (incrementally) fit the model.

Parameters
X: numpy.ndarray of shape (n_samples, n_features)

The features to train the model.

y: numpy.ndarray of shape (n_samples)

An array-like with the labels of all samples in X.

classes: Not used (default=None)
sample_weight: numpy.ndarray of shape (n_samples), optional (default=None)

Samples weight. If not provided, uniform weights are assumed.

Returns
self
predict(self, X)[source]

Predict classes for the passed data.

Parameters
X: numpy.ndarray of shape (n_samples, n_features)

The set of data samples to predict the labels for.

Returns
A numpy.ndarray with all the predictions for the samples in X.
predict_proba(self, X)[source]

Estimates the probability of each sample in X belonging to each of the class-labels.

Parameters
X: numpy.ndarray of shape (n_samples, n_features)

The matrix of samples one wants to predict the class probabilities for.

Returns
A numpy.ndarray of shape (n_samples, n_labels), in which row i corresponds to the i-th sample of X and contains len(self.target_values) entries, each giving the estimated probability that the sample belongs to the corresponding class label.
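A small sketch of how such a probability matrix relates to predict: taking the argmax over each row recovers the predicted label index. The matrix below is made up for illustration; it is not output from the classifier:

```python
# Hypothetical predict_proba-style output for 3 samples, 2 class labels
proba = [
    [0.8, 0.2],  # sample 0: P(label 0) = 0.8, P(label 1) = 0.2
    [0.1, 0.9],  # sample 1
    [0.5, 0.5],  # sample 2 (tie; argmax picks the first maximum)
]

# predict corresponds to the per-row argmax over the class columns
preds = [max(range(len(row)), key=row.__getitem__) for row in proba]
```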
reset(self)[source]

Resets the estimator to its initial state.

Returns
self
score(self, X, y, sample_weight=None)[source]

Returns the mean accuracy on the given test data and labels.

In multi-label classification, this is the subset accuracy, which is a harsh metric since it requires the label set of each sample to be predicted exactly.

Parameters
X: array-like, shape = (n_samples, n_features)

Test samples.

y: array-like, shape = (n_samples) or (n_samples, n_outputs)

True labels for X.

sample_weight: array-like, shape = (n_samples), optional

Sample weights.

Returns
score: float

Mean accuracy of self.predict(X) wrt. y.

set_params(self, **params)[source]

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns
self
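The <component>__<parameter> convention can be illustrated with a toy routing function. This is a sketch of the naming scheme only, not scikit-learn's actual implementation; for this class, a key such as `base_estimator__max_depth` (assuming the default DecisionTreeClassifier base estimator) would be passed down to the base estimator:

```python
def route_params(params):
    """Toy sketch of the <component>__<parameter> convention:
    split each key on the first '__'; the prefix names the nested
    component, the remainder is the parameter handed down to it."""
    top, nested = {}, {}
    for key, value in params.items():
        if "__" in key:
            component, sub_key = key.split("__", 1)
            nested.setdefault(component, {})[sub_key] = value
        else:
            top[key] = value  # parameter of the outer estimator itself
    return top, nested

top, nested = route_params(
    {"window_size": 50, "base_estimator__max_depth": 3})
```

Here `window_size` stays on the wrapper, while `max_depth` is routed to the `base_estimator` component.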