skmultiflow.lazy.SAMKNNClassifier
Self Adjusting Memory coupled with the kNN classifier.
Parameters

n_neighbors
    Number of evaluated nearest neighbors.
weighting
    Type of weighting of the nearest neighbors. It must be either ‘distance’ or ‘uniform’ (majority voting).
max_window_size
    Maximum number of overall stored data points.
ltm_size
    Proportion of the overall instances that may be used for the LTM. This is only relevant once the maximum number of stored instances (max_window_size) is reached.
stm_size_option
    Type of STM size adaption. ‘maxACC’ calculates the interleaved test-train error exactly for each of the evaluated window sizes, which means it often has to be recalculated from scratch. ‘maxACCApprox’ approximates the interleaved test-train error and is significantly faster than the exact version. If set to None, the STM is not adapted at all. If additionally use_ltm=False, the algorithm is simply a kNN with a fixed sliding window size.
min_stm_size
    Minimum STM size which is evaluated during the STM size adaption.
use_ltm
    Specifies whether the LTM should be used at all.
Notes
The Self Adjusting Memory (SAM) [1] model builds an ensemble with models targeting current or former concepts. SAM is built using two memories: STM for the current concept, and the LTM to retain information about past concepts. A cleaning process is in charge of controlling the size of the STM while keeping the information in the LTM consistent with the STM.
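The two-memory design described above can be sketched schematically in plain Python. This is an illustrative toy, not the library’s implementation (the real STM is resized adaptively and the LTM is cleaned and compressed); the class and parameter names here are invented for the sketch:

```python
from collections import deque

class DualMemorySketch:
    """Toy illustration of SAM's two memories (NOT the library's algorithm)."""

    def __init__(self, max_stm=5):
        self.stm = deque()   # short-term memory: recent samples (current concept)
        self.ltm = []        # long-term memory: retained past samples
        self.max_stm = max_stm

    def add(self, x, y):
        # New samples always enter the STM.
        self.stm.append((x, y))
        # When the STM overflows, the oldest samples are moved to the LTM
        # instead of being discarded, preserving information on past concepts.
        while len(self.stm) > self.max_stm:
            self.ltm.append(self.stm.popleft())
```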
This module uses libNearestNeighbor, a C++ library used to speed up some of the algorithm’s computations. When invoking the library’s functions it is important to pass the right argument types. Although most of this framework’s functionality works with Python standard types, the C++ library expects 8-bit labels. This conversion is already handled by the SAMKNN class, but may be missing in custom classes that use SAMKNN static methods, or in other custom functions that call the C++ library.
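As an illustration of the label-type caveat above, casting labels down to 8-bit integers with NumPy before handing them to the C++ backend might look like the following sketch (it assumes fewer than 128 distinct classes, so the cast is lossless):

```python
import numpy as np

# Labels as produced by a typical stream are 64-bit integers.
labels = np.array([0, 1, 2, 1], dtype=np.int64)

# The C++ backend expects 8-bit labels; SAMKNNClassifier performs this
# conversion internally, but code calling its static methods directly
# should cast explicitly. Safe while the number of classes is < 128.
labels8 = labels.astype(np.int8)
```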
References
[1] Losing, Viktor, Barbara Hammer, and Heiko Wersing. “Knn classifier with self adjusting memory for heterogeneous concept drift.” In Data Mining (ICDM), 2016 IEEE 16th International Conference on, pp. 291-300. IEEE, 2016.
Examples
>>> from skmultiflow.lazy import SAMKNNClassifier
>>> from skmultiflow.data import FileStream
>>> from skmultiflow.evaluation import EvaluatePrequential
>>> # Setup the File Stream
>>> stream = FileStream("https://raw.githubusercontent.com/scikit-multiflow/"
...                     "streaming-datasets/master/moving_squares.csv")
>>> # Setup the classifier
>>> classifier = SAMKNNClassifier(n_neighbors=5, weighting='distance', max_window_size=1000,
...                               stm_size_option='maxACCApprox', use_ltm=False)
>>> # Setup the evaluator
>>> evaluator = EvaluatePrequential(pretrain_size=0, max_samples=100000, batch_size=1,
...                                 n_wait=100, max_time=1000, output_file=None,
...                                 show_plot=True, metrics=['accuracy', 'kappa_t'])
>>> # Evaluate
>>> evaluator.evaluate(stream=stream, model=classifier)
Methods
clean_samples(self, samplesCl, labelsCl[, …])
    Removes, based on distance, all instances from the input samples that contradict those in the STM.
cluster_down(self, samples, labels)
    Performs classwise kMeans++ clustering for given samples with corresponding labels.
fit(self, X, y[, classes, sample_weight])
    Fit the model.
get_complexity(self)
get_complexity_num_parameter_metric(self)
get_distance_weighted_label(distances, …)
    Returns the distance-weighted label of the k nearest neighbors.
get_distances(sample, samples)
    Calculates the distances from sample to all samples.
get_info(self)
    Collects and returns the information about the configuration of the estimator.
get_maj_label(distances, labels, numNeighbours)
    Returns the majority label of the k nearest neighbors.
get_params(self[, deep])
    Get parameters for this estimator.
partial_fit(self, X, y[, classes, sample_weight])
    Partially (incrementally) fit the model.
predict(self, X)
    Predict classes for the passed data.
predict_proba(self, X)
    Estimates the probability of each sample in X belonging to each of the class labels.
reset(self)
    Resets the estimator to its initial state.
score(self, X, y[, sample_weight])
    Returns the mean accuracy on the given test data and labels.
set_params(self, **params)
    Set the parameters of this estimator.
size_check_STMLTM(self)
    Makes sure that the STM and LTM combined do not surpass the maximum size; only used when use_ltm=True.
size_check_fade_out(self)
    Makes sure that the STM does not surpass the maximum size; only used when use_ltm=False.
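To make the ‘uniform’ and ‘distance’ weighting schemes behind get_maj_label and get_distance_weighted_label concrete, here is an illustrative NumPy sketch of the two voting rules. It mirrors the described behavior but is not the library’s C++ implementation, and the helper names are invented for the example:

```python
import numpy as np

def maj_label(distances, labels, k):
    """'uniform' weighting: plain majority vote among the k nearest neighbors."""
    nearest = np.argsort(distances)[:k]
    votes = np.bincount(labels[nearest])
    return int(np.argmax(votes))

def distance_weighted_label(distances, labels, k):
    """'distance' weighting: each of the k nearest neighbors votes with weight 1/distance."""
    nearest = np.argsort(distances)[:k]
    weights = 1.0 / (distances[nearest] + 1e-12)  # epsilon guards against zero distance
    classes = np.unique(labels[nearest])
    scores = [weights[labels[nearest] == c].sum() for c in classes]
    return int(classes[np.argmax(scores)])
```

With two very close neighbors of one class and three distant neighbors of another, the two rules can disagree: the majority vote follows the distant class, while distance weighting follows the close one.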
Attributes
LTMLabels
LTMSamples
STMLabels
STMSamples
cluster_down(self, samples, labels)
    Performs classwise kMeans++ clustering for given samples with corresponding labels. The number of samples is halved per class.
fit(self, X, y[, classes, sample_weight])
    X : The features to train the model.
    y : An array-like with the class labels of all samples in X.
    classes : Contains all possible/known class labels. Usage varies depending on the learning method.
    sample_weight : Samples weight. If not provided, uniform weights are assumed. Usage varies depending on the learning method.
get_info(self)
    Returns : Configuration of the estimator.
get_params(self[, deep])
    deep : If True, will return the parameters for this estimator and contained subobjects that are estimators.
    Returns : Parameter names mapped to their values.
partial_fit(self, X, y[, classes, sample_weight])
    y : An array-like with the labels of all samples in X.
    classes : Array with all possible/known classes. Usage varies depending on the learning method.
predict(self, X)
    X : The set of data samples to predict the class labels for.
predict_proba(self, X)
    X : The matrix of samples one wants to predict the class probabilities for.
score(self, X, y[, sample_weight])
    In multi-label classification, this is the subset accuracy, which is a harsh metric since it requires that each label set be correctly predicted for each sample.
    X : Test samples.
    y : True labels for X.
    sample_weight : Sample weights.
    Returns : Mean accuracy of self.predict(X) w.r.t. y.
set_params(self, **params)
    The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it is possible to update each component of a nested object.
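The <component>__<parameter> convention splits a key on the first double underscore: the prefix names the nested sub-estimator, the suffix the parameter to set on it. A minimal sketch of that parsing (the key used here is a made-up example, not a parameter of this class):

```python
# Split a nested-parameter key into its component and parameter parts.
key = "classifier__n_neighbors"
component, param = key.split("__", 1)
# component -> "classifier", param -> "n_neighbors"
```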