RFE
Module with tools to perform feature selection.
This module contains the following classes:
- FeliminationRFECV: base class for feature selection.
- PermutationImportanceRFECV: recursive feature elimination with cross-validation based on permutation importance.
FeliminationRFECV(estimator, *, step=1, n_features_to_select=1, cv=None, scoring=None, random_state=None, verbose=0, n_jobs=None, importance_getter='auto', callbacks=None)
Bases: RFE
Perform recursive feature elimination with cross-validation following scikit-learn standards.
It has the following differences from RFECV in scikit-learn:
- It supports an importance_getter function that also uses a validation set to compute the feature importances. This allows the use of importance measures like permutation importance or SHAP.
- Instead of using cross-validation to select the number of features, it uses cross-validation to get a more accurate estimate of the feature importances. This means that the number of features to select has to be set during initialization, similarly to RFE.
- When step is a float value, it removes a percentage of the number of remaining features, rather than of the total number as in RFE/RFECV. This allows dropping big chunks of features at the beginning of the RFE process and slowing down towards the end of the process.
- It has a plotting function.
- It adds information about the number of features selected at each step in the attribute cv_results_.
- It allows changing the number of features to be selected after fitting.
Other than that, it is a copy-paste of RFE, so credit goes to scikit-learn.
The feature selection algorithm goes as follows:
while n_features > n_features_to_select:
- The estimator is trained on the selected features and the score is computed using cross-validation.
- Feature importance is computed for each validation fold on the validation set and then averaged.
- The least important features are pruned.
- The pruned features are removed from the dataset.
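For orientation, the loop can be sketched in a few lines of Python. This is an illustrative sketch, not the library's actual implementation: the use of the magnitude of coef_ as the importance measure and all helper names are assumptions.

import numpy as np
from sklearn.base import clone
from sklearn.model_selection import KFold

def rfe_cv_sketch(estimator, X, y, n_features_to_select, step=1, n_splits=5):
    selected = np.arange(X.shape[1])  # start with all features in the running
    while len(selected) > n_features_to_select:
        fold_importances = []
        for train_idx, val_idx in KFold(n_splits=n_splits).split(X):
            est = clone(estimator).fit(X[train_idx][:, selected], y[train_idx])
            # felimination can compute importance on the validation fold
            # (X[val_idx]); |coef_| is used here only as a simple stand-in
            # importance measure.
            fold_importances.append(np.abs(est.coef_).ravel())
        mean_importance = np.mean(fold_importances, axis=0)
        # Prune the `step` least important remaining features, without
        # dropping below n_features_to_select.
        n_remove = min(step, len(selected) - n_features_to_select)
        worst = np.argsort(mean_importance)[:n_remove]
        selected = np.delete(selected, worst)
    return selected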
Parameters:
- estimator (``Estimator`` instance) – A supervised learning estimator with a fit method.
- step (int or float, default: 1) – If greater than or equal to 1, then step corresponds to the (integer) number of features to remove at each iteration. If within (0.0, 1.0), then step corresponds to the percentage (rounded down) of remaining features to remove at each iteration. Note that the last iteration may remove fewer than step features in order to reach min_features_to_select.
- n_features_to_select (int or float, default: None) – The number of features to select. If None, half of the features are selected. If integer, the parameter is the absolute number of features to select. If float between 0 and 1, it is the fraction of the features to select.
- cv (int, cross-validation generator or an iterable, default: None) – Determines the cross-validation splitting strategy. Possible inputs for cv are:
  - None, to use the default 5-fold cross-validation,
  - integer, to specify the number of folds,
  - a CV splitter,
  - an iterable yielding (train, test) splits as arrays of indices.
  For integer/None inputs, if the estimator is a classifier and y is binary or multiclass, sklearn.model_selection.StratifiedKFold is used. In all other cases, sklearn.model_selection.KFold is used. Refer to the scikit-learn User Guide on cross-validation for the various strategies that can be used here.
- scoring (str, callable or None, default: None) – A string (see the scikit-learn model evaluation documentation) or a scorer callable object/function with signature scorer(estimator, X, y).
- verbose (int, default: 0) – Controls verbosity of output.
- n_jobs (int or None, default: None) – Number of cores to run in parallel while fitting across folds. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors.
- importance_getter (str or callable, default: 'auto') – If 'auto', uses the feature importance through either the coef_ or feature_importances_ attribute of the estimator. Also accepts a string that specifies an attribute name/path for extracting feature importance. For example, give regressor_.coef_ in case of sklearn.compose.TransformedTargetRegressor, or named_steps.clf.feature_importances_ in case of a sklearn.pipeline.Pipeline with its last step named clf. If callable, overrides the default feature importance getter. The callable is passed the fitted estimator and the validation set (X_val, y_val) and should return an importance for each feature.
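As a concrete illustration of a callable importance getter, the sketch below computes permutation importance on the validation fold. The call order (estimator, X_val, y_val) is an assumption based on the description above; check the felimination source for the exact signature.

from sklearn.inspection import permutation_importance

def validation_permutation_importance(estimator, X_val, y_val):
    # Permutation importance evaluated on the held-out validation fold;
    # returns one importance value per feature.
    result = permutation_importance(
        estimator, X_val, y_val, n_repeats=5, random_state=0
    )
    return result.importances_mean

# Hypothetical usage:
# selector = FeliminationRFECV(estimator, importance_getter=validation_permutation_importance)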
Attributes:
- classes_ (ndarray of shape (n_classes,)) – The class labels. Only available when estimator is a classifier.
- estimator_ (``Estimator`` instance) – The fitted estimator used to select features.
- cv_results_ (dict of ndarrays) – A dict with keys:
  - n_features : ndarray of shape (n_subsets_of_features,) – The number of features used at that step.
  - split(k)_test_score : ndarray of shape (n_subsets_of_features,) – The cross-validation test scores for the (k)th fold.
  - mean_test_score : ndarray of shape (n_subsets_of_features,) – Mean of test scores over the folds.
  - std_test_score : ndarray of shape (n_subsets_of_features,) – Standard deviation of test scores over the folds.
  - split(k)_train_score : ndarray of shape (n_subsets_of_features,) – The cross-validation train scores for the (k)th fold.
  - mean_train_score : ndarray of shape (n_subsets_of_features,) – Mean of train scores over the folds.
  - std_train_score : ndarray of shape (n_subsets_of_features,) – Standard deviation of train scores over the folds.
- n_features_ (int) – The number of selected features.
- n_features_in_ (int) – Number of features seen during fit. Only defined if the underlying estimator exposes such an attribute when fit.
- feature_names_in_ (ndarray of shape (n_features_in_,)) – Names of features seen during fit. Defined only when X has feature names that are all strings.
- ranking_ (ndarray of shape (n_features,)) – The feature ranking, such that ranking_[i] corresponds to the ranking position of the i-th feature. Selected (i.e., estimated best) features are assigned rank 1.
- support_ (ndarray of shape (n_features,)) – The mask of selected features.
- callbacks (list of callable, default: None) – List of callables to be called at the end of each step of the feature selection. Each callable should accept two parameters: the selector and the importances computed at that step.
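As an illustration of the callbacks hook, here is a minimal logging callback, assuming only the two documented arguments (the selector and the importances computed at that step):

def log_step(selector, importances):
    # `importances` is assumed to hold one value per feature still in the
    # running at this step, per the description above.
    print(f"step done: {len(importances)} features evaluated, "
          f"weakest importance = {min(importances):.4f}")

# Hypothetical usage:
# selector = FeliminationRFECV(estimator, callbacks=[log_step])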
Examples:
The following example shows how to retrieve the 5 most informative features in the Friedman #1 dataset.
>>> from felimination.rfe import FeliminationRFECV
>>> from felimination.importance import PermutationImportance
>>> from sklearn.datasets import make_friedman1
>>> from sklearn.svm import SVR
>>> X, y = make_friedman1(n_samples=50, n_features=10, random_state=0)
>>> estimator = SVR(kernel="linear")
>>> selector = FeliminationRFECV(
...     estimator,
...     step=1,
...     cv=5,
...     n_features_to_select=5,
...     importance_getter=PermutationImportance(),
... )
>>> selector = selector.fit(X, y)
>>> selector.support_
array([ True, True, True, True, True, False, False, False, False,
False])
>>> selector.ranking_
array([1, 1, 1, 1, 1, 6, 3, 4, 2, 5])
fit(X, y, groups=None, **fit_params)
Fit the RFE model and then the underlying estimator on the selected features.
Parameters:
- X ({array-like, sparse matrix} of shape (n_samples, n_features)) – The training input samples.
- y (array-like of shape (n_samples,)) – The target values.
- **fit_params (dict) – Additional parameters passed to the fit method of the underlying estimator.
Returns:
- self (object) – Fitted estimator.
plot(**kwargs)
Plot the cross-validation scores as a function of the number of selected features.
Parameters:
- **kwargs (dict) – Additional parameters passed to seaborn.lineplot. For a list of possible options, please refer to the seaborn.lineplot documentation.
Returns:
- Axes – The axis on which the plot has been drawn.
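A typical usage after fitting might look as follows; the marker keyword is just an example of an option forwarded to seaborn.lineplot:

import matplotlib.pyplot as plt

ax = selector.plot(marker="o")  # kwargs are forwarded to seaborn.lineplot
ax.set_xlabel("Number of features")
ax.set_ylabel("CV score")
plt.show()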
set_n_features_to_select(n_features_to_select)
Changes the number of features to select after fitting.
The underlying estimator will not be retrained, so this method will not alter the behavior of predict/predict_proba, but it will change the behavior of transform and get_feature_names_out.
Parameters:
- n_features_to_select (int) – The number of features to select. Must be a value among cv_results_["n_features"].
Returns:
- self (object) – Fitted estimator.
Raises:
- ValueError – When the number of features to select has not been tried during the feature selection procedure.
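For example, after inspecting the plot one might settle on a different feature count. A usage sketch, reusing the fitted selector from the example above:

print(selector.cv_results_["n_features"])  # feature counts visited during elimination
selector.set_n_features_to_select(7)       # 7 was visited (10 features, step=1, stop at 5)
X_reduced = selector.transform(X)          # transform now keeps 7 features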
PermutationImportanceRFECV(estimator, *, step=1, n_features_to_select=1, cv=None, scoring=None, verbose=0, n_jobs=None, n_repeats=5, random_state=None, sample_weight=None, max_samples=1.0, callbacks=None)
Bases: FeliminationRFECV
Preset of FeliminationRFECV using permutation importance as importance getter.
It has the following differences from RFECV in scikit-learn:
- It supports an importance_getter function that also uses a validation set to compute the feature importances. This allows the use of importance measures like permutation importance or SHAP.
- Instead of using cross-validation to select the number of features, it uses cross-validation to get a more accurate estimate of the feature importances. This means that the number of features to select has to be set during initialization, similarly to RFE.
- When step is a float value, it removes a percentage of the number of remaining features, rather than of the total number as in RFE/RFECV. This allows dropping big chunks of features at the beginning of the RFE process and slowing down towards the end of the process.
- It has a plotting function.
- It adds information about the number of features selected at each step in the attribute cv_results_.
- It allows changing the number of features to be selected after fitting.
Other than that, it is a copy-paste of RFE, so credit goes to scikit-learn.
The feature selection algorithm goes as follows:
while n_features > n_features_to_select:
- The estimator is trained on the selected features and the score is computed using cross-validation.
- Feature importance is computed for each validation fold on the validation set and then averaged.
- The least important features are pruned.
- The pruned features are removed from the dataset.
Parameters:
- estimator (``Estimator`` instance) – A supervised learning estimator with a fit method.
- step (int or float, default: 1) – If greater than or equal to 1, then step corresponds to the (integer) number of features to remove at each iteration. If within (0.0, 1.0), then step corresponds to the percentage (rounded down) of remaining features to remove at each iteration. Note that the last iteration may remove fewer than step features in order to reach min_features_to_select.
- n_features_to_select (int or float, default: None) – The number of features to select. If None, half of the features are selected. If integer, the parameter is the absolute number of features to select. If float between 0 and 1, it is the fraction of the features to select.
- cv (int, cross-validation generator or an iterable, default: None) – Determines the cross-validation splitting strategy. Possible inputs for cv are:
  - None, to use the default 5-fold cross-validation,
  - integer, to specify the number of folds,
  - a CV splitter,
  - an iterable yielding (train, test) splits as arrays of indices.
  For integer/None inputs, if the estimator is a classifier and y is binary or multiclass, sklearn.model_selection.StratifiedKFold is used. In all other cases, sklearn.model_selection.KFold is used. Refer to the scikit-learn User Guide on cross-validation for the various strategies that can be used here.
- scoring (str, callable or None, default: None) – A string (see the scikit-learn model evaluation documentation) or a scorer callable object/function with signature scorer(estimator, X, y).
- verbose (int, default: 0) – Controls verbosity of output.
- n_jobs (int or None, default: None) – Number of cores to run in parallel while fitting across folds. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors.
- n_repeats (int, default: 5) – Number of times to permute a feature.
- random_state (int, RandomState instance, default: None) – Pseudo-random number generator to control the permutations of each feature. Pass an int to get reproducible results across function calls.
- sample_weight (array-like of shape (n_samples,), default: None) – Sample weights used in scoring.
- max_samples (int or float, default: 1.0) – The number of samples to draw from X to compute feature importance in each repeat (without replacement). If int, then draw max_samples samples. If float, then draw max_samples * X.shape[0] samples. If max_samples is equal to 1.0 or X.shape[0], all samples will be used. While using this option may provide less accurate importance estimates, it keeps the method tractable when evaluating feature importance on large datasets. In combination with n_repeats, this allows controlling the computational speed vs. statistical accuracy trade-off of this method.
- callbacks (list of callable, default: None) – List of callables to be called at the end of each step of the feature selection. Each callable should accept two parameters: the selector and the importances computed at that step.
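To illustrate the speed vs. accuracy trade-off mentioned above for n_repeats and max_samples, here is a hedged configuration sketch; the estimator and parameter values are illustrative only:

from felimination.rfe import PermutationImportanceRFECV
from sklearn.linear_model import LogisticRegression

fast_selector = PermutationImportanceRFECV(
    LogisticRegression(),
    step=0.3,                # drop 30% of the *remaining* features per step
    n_features_to_select=10,
    n_repeats=3,             # fewer permutations per feature: faster, noisier
    max_samples=0.5,         # use half the rows per importance estimate
    random_state=42,
)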
Attributes:
- classes_ (ndarray of shape (n_classes,)) – The class labels. Only available when estimator is a classifier.
- estimator_ (``Estimator`` instance) – The fitted estimator used to select features.
- cv_results_ (dict of ndarrays) – A dict with keys:
  - n_features : ndarray of shape (n_subsets_of_features,) – The number of features used at that step.
  - split(k)_test_score : ndarray of shape (n_subsets_of_features,) – The cross-validation test scores for the (k)th fold.
  - mean_test_score : ndarray of shape (n_subsets_of_features,) – Mean of test scores over the folds.
  - std_test_score : ndarray of shape (n_subsets_of_features,) – Standard deviation of test scores over the folds.
  - split(k)_train_score : ndarray of shape (n_subsets_of_features,) – The cross-validation train scores for the (k)th fold.
  - mean_train_score : ndarray of shape (n_subsets_of_features,) – Mean of train scores over the folds.
  - std_train_score : ndarray of shape (n_subsets_of_features,) – Standard deviation of train scores over the folds.
- n_features_ (int) – The number of selected features.
- n_features_in_ (int) – Number of features seen during fit. Only defined if the underlying estimator exposes such an attribute when fit.
- feature_names_in_ (ndarray of shape (n_features_in_,)) – Names of features seen during fit. Defined only when X has feature names that are all strings.
- ranking_ (ndarray of shape (n_features,)) – The feature ranking, such that ranking_[i] corresponds to the ranking position of the i-th feature. Selected (i.e., estimated best) features are assigned rank 1.
- support_ (ndarray of shape (n_features,)) – The mask of selected features.
Examples:
The following example shows how to retrieve the 5 most informative features in the Friedman #1 dataset.
>>> from felimination.rfe import PermutationImportanceRFECV
>>> from sklearn.datasets import make_friedman1
>>> from sklearn.svm import SVR
>>> X, y = make_friedman1(n_samples=50, n_features=10, random_state=0)
>>> estimator = SVR(kernel="linear")
>>> selector = PermutationImportanceRFECV(
...     estimator,
...     step=1,
...     cv=5,
...     n_features_to_select=5,
... )
>>> selector = selector.fit(X, y)
>>> selector.support_
array([ True, True, True, True, True, False, False, False, False,
False])
>>> selector.ranking_
array([1, 1, 1, 1, 1, 6, 3, 4, 2, 5])
fit(X, y, groups=None, **fit_params)
Fit the RFE model and then the underlying estimator on the selected features.
Parameters:
- X ({array-like, sparse matrix} of shape (n_samples, n_features)) – The training input samples.
- y (array-like of shape (n_samples,)) – The target values.
- **fit_params (dict) – Additional parameters passed to the fit method of the underlying estimator.
Returns:
- self (object) – Fitted estimator.
plot(**kwargs)
Plot the cross-validation scores as a function of the number of selected features.
Parameters:
- **kwargs (dict) – Additional parameters passed to seaborn.lineplot. For a list of possible options, please refer to the seaborn.lineplot documentation.
Returns:
- Axes – The axis on which the plot has been drawn.
set_n_features_to_select(n_features_to_select)
Changes the number of features to select after fitting.
The underlying estimator will not be retrained, so this method will not alter the behavior of predict/predict_proba, but it will change the behavior of transform and get_feature_names_out.
Parameters:
- n_features_to_select (int) – The number of features to select. Must be a value among cv_results_["n_features"].
Returns:
- self (object) – Fitted estimator.
Raises:
- ValueError – When the number of features to select has not been tried during the feature selection procedure.