skmultiflow.anomaly_detection.HalfSpaceTrees

class skmultiflow.anomaly_detection.HalfSpaceTrees(window_size=250, depth=15, n_estimators=25, size_limit=50, anomaly_threshold=0.5, random_state=None)[source]

Half–Space Trees.

Implementation of the Streaming Half–Space–Trees (HS–Trees) [1], a fast one-class anomaly detector for evolving data streams. It requires only normal data for training and works well when anomalous data are rare. The model features an ensemble of random HS–Trees, and the tree structure is constructed without any data. This makes the method highly efficient because it requires no model restructuring when adapting to evolving data streams.

Parameters
n_estimators: int, optional (default=25)

Number of trees in the ensemble. ‘t’ in the original paper.

window_size: int, optional (default=250)

The window size of the stream. ‘Psi’ in the original paper.

depth: int, optional (default=15)

The maximum depth of the trees in the ensemble. ‘maxDepth’ in the original paper.

size_limit: int, optional (default=50)

The minimum mass required in a node to calculate the anomaly score. ‘sizeLimit’ in the original paper. A good setting is 0.1 * window_size.

anomaly_threshold: double, optional (default=0.5)

The threshold for declaring anomalies. Any instance whose predicted anomaly probability is above this threshold is declared an anomaly.

random_state: int, RandomState instance or None, optional (default=None)

If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.
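
A minimal construction sketch (not part of the original docstring); the parameter values below are illustrative only, with size_limit chosen to follow the 0.1 * window_size guideline above, and hst is just a local variable name.

>>> from skmultiflow.anomaly_detection import HalfSpaceTrees
>>> # Illustrative settings only; size_limit = 0.1 * window_size
>>> hst = HalfSpaceTrees(n_estimators=25,
>>>                      window_size=250,
>>>                      depth=15,
>>>                      size_limit=25,
>>>                      anomaly_threshold=0.5,
>>>                      random_state=42)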

References

[1] S. C. Tan, K. M. Ting, and T. F. Liu, “Fast anomaly detection for streaming data,” in IJCAI Proceedings - International Joint Conference on Artificial Intelligence, 2011, vol. 22, no. 1, pp. 1511–1516.

Examples

>>> # Imports
>>> from skmultiflow.data import AnomalySineGenerator
>>> from skmultiflow.anomaly_detection import HalfSpaceTrees
>>> # Setup a data stream
>>> stream = AnomalySineGenerator(random_state=1, n_samples=1000, n_anomalies=250)
>>> # Setup Half-Space Trees estimator
>>> half_space_trees = HalfSpaceTrees(random_state=1)
>>> # Setup variables to control loop and track performance
>>> max_samples = 1000
>>> n_samples = 0
>>> n_anomalies = 0
>>> detected_anomalies = 0
>>> # Train the estimator(s) with the samples provided by the data stream
>>> while n_samples < max_samples and stream.has_more_samples():
>>>     X, y = stream.next_sample()
>>>     y_pred = half_space_trees.predict(X)
>>>     if y[0] == 1:
>>>         n_anomalies += 1
>>>         if y_pred[0] == 1:
>>>             detected_anomalies += 1
>>>     half_space_trees.partial_fit(X, y)
>>>     n_samples += 1
>>> print('{} samples analyzed.'.format(n_samples))
>>> print('Half-Space Trees correctly detected {} out of {} anomalies'.
>>>       format(detected_anomalies, n_anomalies))
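
As a hedged follow-up to the example above, the raw anomaly scores can be inspected with predict_proba. This sketch reuses the X left over from the loop and assumes the second column of the returned array holds the anomaly probability, i.e. the quantity that predict compares against anomaly_threshold.

>>> # Inspect the raw score for the last sample seen in the loop above
>>> proba = half_space_trees.predict_proba(X)
>>> print('Anomaly probability of the last sample: {:.3f}'.format(proba[0, 1]))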

Methods

build_trees(self)

Initialises ensemble.

fit(self, X, y[, classes, sample_weight])

Fit the model.

get_info(self)

Collects and returns information about the configuration of the estimator.

get_params(self[, deep])

Get parameters for this estimator.

initialise_work_space(self)

Initialises work spaces.

partial_fit(self, X[, y, classes, sample_weight])

Partially (incrementally) fit the model.

predict(self, X)

Predict classes for the passed data.

predict_proba(self, X)

Estimate the probability of a sample being normal or abnormal.

reset(self)

Resets the estimator to its initial state.

score(self, X, y[, sample_weight])

Returns the mean accuracy on the given test data and labels.

set_is_learning_phase_on(self, boolean)

Sets learning phase in each tree defined in the ensemble.

set_params(self, **params)

Set the parameters of this estimator.

update_mass(self, X, boolean)

Populates mass profiles for every tree defined in the ensemble.

update_models(self)

Updates the mass profile of every tree in the ensemble.

build_trees(self)[source]

Initialises ensemble.

fit(self, X, y, classes=None, sample_weight=None)[source]

Fit the model.

Parameters
X: numpy.ndarray of shape (n_samples, n_features)

The features to train the model.

y: numpy.ndarray of shape (n_samples, n_targets)

An array-like with the class labels of all samples in X.

classes: numpy.ndarray, optional (default=None)

Contains all possible/known class labels. Usage varies depending on the learning method.

sample_weight: numpy.ndarray, optional (default=None)

Samples weight. If not provided, uniform weights are assumed. Usage varies depending on the learning method.

Returns
self
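
A hedged usage sketch, not taken from the official docstring: fit called on a batch of samples drawn from AnomalySineGenerator. The generator settings, the batch size and the names model, X_batch and y_batch are illustrative only.

>>> from skmultiflow.data import AnomalySineGenerator
>>> from skmultiflow.anomaly_detection import HalfSpaceTrees
>>> stream = AnomalySineGenerator(random_state=1, n_samples=500, n_anomalies=50)
>>> X_batch, y_batch = stream.next_sample(250)   # draw a batch of 250 samples
>>> model = HalfSpaceTrees(random_state=1)
>>> model = model.fit(X_batch, y_batch)          # trains the ensemble and returns self
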
get_info(self)[source]

Collects and returns information about the configuration of the estimator.

Returns
string

Configuration of the estimator.

get_params(self, deep=True)[source]

Get parameters for this estimator.

Parameters
deep: boolean, optional

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns
params: mapping of string to any

Parameter names mapped to their values.

initialise_work_space(self)[source]

Initialises work spaces.

For every dimension in the feature space, creates a minimum and a maximum work range.

partial_fit(self, X, y=None, classes=None, sample_weight=None)[source]

Partially (incrementally) fit the model.

Parameters
X: numpy.ndarray of shape (n_samples, n_features)

The features to train the model.

y: Not used

Kept in the signature for compatibility with parent class.

classes: None

Not used by this method.

sample_weight: None

Not used by this method.

Returns
self
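
A hedged sketch of incremental training, with illustrative generator settings and variable names: samples are fed one at a time and the label argument is omitted, since this method does not use it. In practice the training stream should consist mostly of normal data, as noted in the class description.

>>> from skmultiflow.data import AnomalySineGenerator
>>> from skmultiflow.anomaly_detection import HalfSpaceTrees
>>> stream = AnomalySineGenerator(random_state=1, n_samples=500, n_anomalies=50)
>>> hst = HalfSpaceTrees(random_state=1)
>>> while stream.has_more_samples():
>>>     X, y = stream.next_sample()
>>>     hst = hst.partial_fit(X)    # y is not required by this method
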
predict(self, X)[source]

Predict classes for the passed data.

Parameters
X: numpy.ndarray of shape (n_samples, n_features)

The set of data samples to predict the class labels for.

Returns
A numpy.ndarray with all the predictions for the samples in X.
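
A hedged sketch continuing from the fit sketch above (model, X_batch); as in the Examples section, a prediction of 1 marks an anomaly and 0 a normal sample.

>>> y_pred = model.predict(X_batch)   # array of 0/1 anomaly decisions
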
predict_proba(self, X)[source]

Estimate the probability of a sample being normal or abnormal.

Class probabilities are calculated as the mean predicted class probabilities per base estimator.

Parameters
X: numpy.ndarray of shape (n_samples, n_features)

Samples for which we want to predict the class probabilities.

Returns
numpy.ndarray of shape (n_samples, n_classes)

Predicted class probabilities for all instances in X. Class probabilities for a sample will sum to 1 as long as at least one estimator has non-zero predictions. If no estimator can predict probabilities, probabilities of 0 are returned.
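
A hedged sketch of custom thresholding, continuing from the fit sketch above (model, X_batch). It assumes the second column of the returned array is the anomaly probability, i.e. the quantity predict compares against anomaly_threshold.

>>> proba = model.predict_proba(X_batch)
>>> custom_threshold = 0.7                                # stricter than the default 0.5
>>> y_custom = (proba[:, 1] > custom_threshold).astype(int)
>>> y_default = model.predict(X_batch)                    # uses anomaly_threshold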

reset(self)[source]

Resets the estimator to its initial state.

Returns
self
score(self, X, y, sample_weight=None)[source]

Returns the mean accuracy on the given test data and labels.

In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.

Parameters
X: array-like, shape = (n_samples, n_features)

Test samples.

y: array-like, shape = (n_samples) or (n_samples, n_outputs)

True labels for X.

sample_weight: array-like, shape = [n_samples], optional

Sample weights.

Returns
score: float

Mean accuracy of self.predict(X) wrt. y.
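
A hedged sketch continuing from the fit sketch above (model, X_batch, y_batch):

>>> acc = model.score(X_batch, y_batch)   # mean accuracy of model.predict(X_batch) wrt. y_batch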

set_is_learning_phase_on(self, boolean)[source]

Sets learning phase in each tree defined in the ensemble.

Parameters
boolean: boolean

Value of the learning phase flag to set in every tree of the ensemble.

set_params(self, **params)[source]

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns
self
update_mass(self, X, boolean)[source]

Populates mass profiles for every tree defined in the ensemble.

Parameters
X: numpy.ndarray of shape (1, n_features)

Instance attributes.

boolean: boolean, True or False

True to update the reference mass, False to update the latest mass.

update_models(self)[source]

Updates the mass profile of every tree in the ensemble.