estimator_checks
Utilities for unit testing and sanity-checking estimators.
check_disappearing_features(model, dataset)
The model should work fine when features disappear.
check_emerging_features(model, dataset)
The model should work fine when new features appear.
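The two checks above exercise the same property from opposite directions: a streaming model must cope with feature dicts whose keys shrink or grow between calls. A hypothetical toy regressor (not river code) that stores weights in a `defaultdict` illustrates why this works naturally:

```python
import collections

class ToyLinearModel:
    """A toy regressor whose weights grow lazily, so the feature set may
    shrink or grow between calls. Hypothetical, for illustration only."""

    def __init__(self):
        self.weights = collections.defaultdict(float)

    def predict_one(self, x):
        # Only the features present in x contribute to the prediction;
        # unseen features get a zero weight on first access.
        return sum(self.weights[name] * value for name, value in x.items())

    def learn_one(self, x, y, lr=0.1):
        error = self.predict_one(x) - y
        for name, value in x.items():
            self.weights[name] -= lr * error * value
        return self

model = ToyLinearModel()
model.learn_one({"a": 1.0, "b": 2.0}, y=1.0)
# "a" disappears and "c" appears; neither raises an error.
prediction = model.predict_one({"b": 2.0, "c": 3.0})
```

A model that instead indexed a fixed weight vector by position would fail both checks.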
check_estimator(model)
Check if a model adheres to river's conventions. This will run a series of unit tests; the nature of the tests depends on the type of model.
PARAMETER | DESCRIPTION |
---|---|
model | The estimator to check. |
check_init_default_params_are_not_mutable(model)
Mutable default parameters in signatures are discouraged, as explained in https://docs.python-guide.org/writing/gotchas/#mutable-default-arguments. We enforce immutable parameters by only allowing a certain list of basic types as defaults.
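Such a check can be sketched with `inspect.signature`: walk the `__init__` parameters and reject any default that is not an immutable basic type. The helper name and allow-list below are assumptions for illustration, not river's actual implementation:

```python
import inspect

# Hypothetical allow-list; the real check may permit other types.
ALLOWED_DEFAULT_TYPES = (int, float, str, bool, tuple, frozenset, type(None))

def init_default_params_are_not_mutable(cls):
    """Return True if every default value in cls.__init__ is an
    immutable basic type (illustrative helper)."""
    for param in inspect.signature(cls.__init__).parameters.values():
        if param.default is inspect.Parameter.empty:
            continue
        if not isinstance(param.default, ALLOWED_DEFAULT_TYPES):
            return False
    return True

class Good:
    def __init__(self, n=10, name="model"):
        self.n, self.name = n, name

class Bad:
    def __init__(self, items=[]):  # the classic mutable-default gotcha
        self.items = items
```

The `Bad` class shares one list instance across every construction that relies on the default, which is exactly the bug the linked article describes.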
check_learn_one(model, dataset)
learn_one should return the calling model and be pure.
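"Pure" here means the call must not mutate its inputs. The property can be sketched as follows; the checker function and dummy model are illustrative, not river's actual code:

```python
import copy

def check_learn_one_is_pure(model, dataset):
    """Assert learn_one returns the calling model and leaves its
    inputs untouched (a sketch of the property being tested)."""
    for x, y in dataset:
        x_before, y_before = copy.deepcopy(x), copy.deepcopy(y)
        result = model.learn_one(x, y)
        assert result is model   # returns the calling model
        assert x == x_before     # the features were not mutated
        assert y == y_before     # the target was not mutated

class CountingModel:
    """Dummy estimator that only counts observations."""
    def __init__(self):
        self.n_seen = 0
    def learn_one(self, x, y):
        self.n_seen += 1
        return self

model = CountingModel()
check_learn_one_is_pure(model, [({"a": 1}, True), ({"b": 2}, False)])
```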
check_predict_proba_one(classifier, dataset)
predict_proba_one should return a valid probability distribution and be pure.
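A valid probability distribution is a dict whose values lie in [0, 1] and sum to 1. The invariant can be expressed as a small predicate (an illustrative sketch, not river's implementation):

```python
import math

def is_probability_distribution(proba):
    """Return True if proba maps labels to probabilities in [0, 1]
    that sum to 1 (up to floating-point tolerance)."""
    return (
        all(0.0 <= p <= 1.0 for p in proba.values())
        and math.isclose(sum(proba.values()), 1.0)
    )
```

For example, `{"cat": 0.25, "dog": 0.75}` passes, while `{"cat": 0.2, "dog": 0.2}` fails because its mass does not sum to 1.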
check_predict_proba_one_binary(classifier, dataset)
predict_proba_one should return a dict with True and False keys.
check_shuffle_features_no_impact(model, dataset)
Changing the order of the features between calls should have no effect on a model.
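Since features are passed as dicts, "order" means insertion order (which Python 3.7+ dicts preserve). A sketch of the shuffle, with a toy prediction function to show the expected invariance (both are illustrative, not river code):

```python
import random

def shuffle_features(x, rng):
    """Return a copy of x with its keys inserted in a random order."""
    names = list(x)
    rng.shuffle(names)
    return {name: x[name] for name in names}

def predict(x):
    # A well-behaved model keys on feature *names*, so iteration
    # order cannot affect the result (toy stand-in for a model).
    return sum(x.values())

x = {"a": 1, "b": 2, "c": 3}
shuffled = shuffle_features(x, random.Random(42))
assert predict(x) == predict(shuffled)
```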
check_tags(model)
Checks that the _tags property works.
seed_params(params, seed)
Looks for "seed" keys and sets their value.
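Assuming the parameters form a (possibly nested) dict, the behaviour can be sketched as a recursive walk that rewrites every `"seed"` key. This is a minimal illustration, not river's actual implementation:

```python
def seed_params(params, seed):
    """Return a copy of params with every "seed" key set to seed,
    recursing into nested dicts (illustrative sketch)."""
    return {
        name: seed if name == "seed"
        else seed_params(value, seed) if isinstance(value, dict)
        else value
        for name, value in params.items()
    }

params = {"seed": None, "optimizer": {"lr": 0.01, "seed": None}}
seeded = seed_params(params, 42)
```

Seeding every level at once is what makes randomized checks reproducible end to end.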
wrapped_partial(func, *args, **kwargs)
Taken from http://louistiao.me/posts/adding-name-and-doc-attributes-to-functoolspartial-objects/
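The trick described in that post is to combine `functools.partial` with `functools.update_wrapper`, so the partial object carries the wrapped function's `__name__` and `__doc__` (useful when naming generated tests). A sketch, with a hypothetical check function as the wrapped target:

```python
import functools

def wrapped_partial(func, *args, **kwargs):
    """Like functools.partial, but copies metadata such as __name__
    and __doc__ from the wrapped function onto the partial object."""
    partial = functools.partial(func, *args, **kwargs)
    functools.update_wrapper(partial, func)
    return partial

def check_something(model, n_repeats):
    """Run a hypothetical check n_repeats times."""
    return f"checked {model} {n_repeats} times"

check_10 = wrapped_partial(check_something, n_repeats=10)
```

A plain `functools.partial` has no `__name__` at all, which makes test reports unreadable; this wrapper fixes that.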
yield_checks(model)
Generates unit tests for a given model.
PARAMETER | DESCRIPTION |
---|---|
model | An instance of base.Estimator. |
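Because the applicable tests depend on the type of model, a generator is a natural fit: yield the universal checks unconditionally and the capability-specific ones based on what the model exposes. A simplified sketch of that dispatch (the dummy classifier and the dispatch rule are assumptions, not river's actual logic):

```python
def yield_checks(model):
    """Yield the unit tests that apply to a model, based on the
    methods it exposes (simplified illustrative sketch)."""

    def check_learn_one(model):
        assert model.learn_one({"x": 1.0}, 0) is model

    def check_predict_proba_one(model):
        proba = model.predict_proba_one({"x": 1.0})
        assert abs(sum(proba.values()) - 1.0) < 1e-9

    yield check_learn_one  # applies to every estimator
    if hasattr(model, "predict_proba_one"):  # classifiers only
        yield check_predict_proba_one

class DummyClassifier:
    """Minimal stand-in for a base.Estimator subclass."""
    def learn_one(self, x, y):
        return self
    def predict_proba_one(self, x):
        return {True: 0.5, False: 0.5}

checks = list(yield_checks(DummyClassifier()))
for check in checks:
    check(DummyClassifier())
```

This is also why `wrapped_partial` matters: each yielded check keeps a readable `__name__` for test output.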