Title: Stability Measures for Feature Selection
Description: An implementation of many measures for the assessment of the stability of feature selection. Both simple measures and measures which take into account the similarities between features are available, see Bommert (2020) <doi:10.17877/DE290R-21906>.
Authors: Andrea Bommert [aut, cre], Michel Lang [aut]
Maintainer: Andrea Bommert <[email protected]>
License: LGPL-3
Version: 1.2.2
Built: 2024-10-26 03:25:26 UTC
Source: https://github.com/bommert/stabm
An implementation of many measures for the assessment of the stability of feature selection. Both simple measures and measures which take into account the similarities between features are available, see Bommert (2020) doi:10.17877/DE290R-21906.
Maintainer: Andrea Bommert <[email protected]>
Authors: Michel Lang <[email protected]>
Useful links: https://github.com/bommert/stabm
Report bugs at https://github.com/bommert/stabm/issues
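As a quick start, here is a minimal sketch that lists the available measures and evaluates one simple and one similarity-adjusted measure on toy feature sets; the toy sets and the similarity matrix mirror the examples used throughout this reference.
library(stabm)
# Overview of all implemented stability measures
listStabilityMeasures()
# Three toy feature sets chosen from p = 10 features
feats = list(1:3, 1:4, 1:5)
# Similarity matrix of the 10 features
mat = 0.92 ^ abs(outer(1:10, 1:10, "-"))
# A simple measure and a measure that takes feature similarities into account
stabilityJaccard(features = feats)
stabilityIntersectionCount(features = feats, sim.mat = mat, N = 1000)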
Lists all stability measures of package stabm and provides information about them.
listStabilityMeasures()
data.frame
For each stability measure, its name, whether it is corrected for chance by definition, whether it is adjusted for similar features, and its minimal and maximal value are displayed.
The given minimal values might only be reachable in some scenarios, e.g. if the feature sets have a certain size. The measures which are not corrected for chance by definition can be corrected for chance with correction.for.chance. This, however, changes the minimal value. For the adjusted stability measures, the minimal value depends on the similarity structure.
listStabilityMeasures()
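The returned data.frame can also be inspected programmatically. The sketch below prints its structure and then corrects a measure that is not corrected for chance by definition; str() is used instead of assuming particular column names.
library(stabm)
measures = listStabilityMeasures()
str(measures)  # the columns described above: name, correction, adjustment, minimum, maximum
# A measure that is not corrected for chance by definition can be corrected
# via its correction.for.chance argument; note that this changes the minimal value.
feats = list(1:3, 1:4, 1:5)
stabilityJaccard(features = feats, p = 10, correction.for.chance = "estimate", N = 1000)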
Creates a heatmap of the features which are selected in at least one feature set. The sets are ordered according to average linkage hierarchical clustering based on the Manhattan distance. If sim.mat is given, the features are ordered according to average linkage hierarchical clustering based on 1 - sim.mat. Otherwise, the features are ordered in the same way as the feature sets.
Note that this function needs the packages ggplot2, cowplot and ggdendro installed.
plotFeatures(features, sim.mat = NULL)
Arguments: features, sim.mat
Object of class ggplot.
feats = list(1:3, 1:4, 1:5)
mat = 0.92 ^ abs(outer(1:10, 1:10, "-"))
plotFeatures(features = feats)
plotFeatures(features = feats, sim.mat = mat)
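Since ggplot2, cowplot and ggdendro are only needed for plotting, a small sketch that guards the call on their availability:
library(stabm)
needed = c("ggplot2", "cowplot", "ggdendro")
if (all(vapply(needed, requireNamespace, logical(1), quietly = TRUE))) {
  feats = list(1:3, 1:4, 1:5)
  mat = 0.92 ^ abs(outer(1:10, 1:10, "-"))
  print(plotFeatures(features = feats, sim.mat = mat))
}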
The stability of feature selection is defined as the robustness of the sets of selected features with respect to small variations in the data on which the feature selection is conducted. To quantify stability, several datasets from the same data generating process can be used. Alternatively, a single dataset can be split into parts by resampling. Either way, all datasets used for feature selection must contain exactly the same features. The feature selection method of interest is applied on all of the datasets and the sets of chosen features are recorded. The stability of the feature selection is assessed based on the sets of chosen features using stability measures.
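The following sketch illustrates this workflow, using a simple correlation-based filter on simulated data as a stand-in for the feature selection method of interest; both the data and the filter are hypothetical and only serve as an illustration.
library(stabm)
set.seed(1)
n = 100
p = 10
X = matrix(rnorm(n * p), ncol = p)
y = X[, 1] + X[, 2] + rnorm(n)
# Apply the feature selection method on several subsamples of the same dataset
feats = lapply(1:10, function(i) {
  idx = sample(n, size = 0.8 * n)
  scores = abs(cor(X[idx, ], y[idx]))
  order(scores, decreasing = TRUE)[1:3]  # indices of the 3 top-ranked features
})
# Assess the stability of the feature selection based on the recorded sets
stabilityJaccard(features = feats)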
stabilityDavis( features, p, correction.for.chance = "none", N = 10000, impute.na = NULL, penalty = 0 )
Arguments: features, p, correction.for.chance, N, impute.na, penalty
The stability measure is defined as (see Notation)
numeric(1)
Stability value.
For the definition of all stability measures in this package, the following notation is used: Let $V_1, \ldots, V_m$ denote the sets of chosen features for the $m$ datasets, i.e. features has length $m$ and $V_i$ is a set which contains the $i$-th entry of features. Furthermore, let $h_j$ denote the number of sets that contain feature $X_j$, so that $h_j$ is the absolute frequency with which feature $X_j$ is chosen. Analogously, let $h_{ij}$ denote the number of sets that include both $X_i$ and $X_j$. Also, let $q = \sum_{j=1}^{p} h_j = \sum_{i=1}^{m} |V_i|$ and $V = \bigcup_{i=1}^{m} V_i$.
Davis CA, Gerick F, Hintermair V, Friedel CC, Fundel K, Kuffner R, Zimmer R (2006). “Reliable gene signatures for microarray classification: assessment of stability and performance.” Bioinformatics, 22(19), 2356–2363. doi:10.1093/bioinformatics/btl400.
Bommert A, Rahnenführer J, Lang M (2017). “A Multicriteria Approach to Find Predictive and Sparse Models with Stable Feature Selection for High-Dimensional Data.” Computational and Mathematical Methods in Medicine, 2017, 1–18. doi:10.1155/2017/7907163.
Bommert A (2020). Integration of Feature Selection Stability in Model Fitting. Ph.D. thesis, TU Dortmund University, Germany. doi:10.17877/DE290R-21906.
feats = list(1:3, 1:4, 1:5)
stabilityDavis(features = feats, p = 10)
The stability of feature selection is defined as the robustness of the sets of selected features with respect to small variations in the data on which the feature selection is conducted. To quantify stability, several datasets from the same data generating process can be used. Alternatively, a single dataset can be split into parts by resampling. Either way, all datasets used for feature selection must contain exactly the same features. The feature selection method of interest is applied on all of the datasets and the sets of chosen features are recorded. The stability of the feature selection is assessed based on the sets of chosen features using stability measures.
stabilityDice( features, p = NULL, correction.for.chance = "none", N = 10000, impute.na = NULL )
Arguments: features, p, correction.for.chance, N, impute.na
The stability measure is defined as (see Notation)
numeric(1)
Stability value.
For the definition of all stability measures in this package, the following notation is used: Let $V_1, \ldots, V_m$ denote the sets of chosen features for the $m$ datasets, i.e. features has length $m$ and $V_i$ is a set which contains the $i$-th entry of features. Furthermore, let $h_j$ denote the number of sets that contain feature $X_j$, so that $h_j$ is the absolute frequency with which feature $X_j$ is chosen. Analogously, let $h_{ij}$ denote the number of sets that include both $X_i$ and $X_j$. Also, let $q = \sum_{j=1}^{p} h_j = \sum_{i=1}^{m} |V_i|$ and $V = \bigcup_{i=1}^{m} V_i$.
Dice LR (1945). “Measures of the Amount of Ecologic Association Between Species.” Ecology, 26(3), 297–302. doi:10.2307/1932409.
Bommert A, Rahnenführer J, Lang M (2017). “A Multicriteria Approach to Find Predictive and Sparse Models with Stable Feature Selection for High-Dimensional Data.” Computational and Mathematical Methods in Medicine, 2017, 1–18. doi:10.1155/2017/7907163.
Bommert A (2020). Integration of Feature Selection Stability in Model Fitting. Ph.D. thesis, TU Dortmund University, Germany. doi:10.17877/DE290R-21906.
feats = list(1:3, 1:4, 1:5)
stabilityDice(features = feats)
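As a cross-check, a minimal sketch that recomputes the value by hand, assuming the measure is the average pairwise Dice coefficient 2 * |Vi intersect Vj| / (|Vi| + |Vj|) of the feature sets:
library(stabm)
feats = list(1:3, 1:4, 1:5)
pairs = combn(length(feats), 2)
dice = apply(pairs, 2, function(ij) {
  a = feats[[ij[1]]]
  b = feats[[ij[2]]]
  2 * length(intersect(a, b)) / (length(a) + length(b))
})
mean(dice)                        # by-hand value under the assumed definition
stabilityDice(features = feats)   # should agree with the value above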
The stability of feature selection is defined as the robustness of the sets of selected features with respect to small variations in the data on which the feature selection is conducted. To quantify stability, several datasets from the same data generating process can be used. Alternatively, a single dataset can be split into parts by resampling. Either way, all datasets used for feature selection must contain exactly the same features. The feature selection method of interest is applied on all of the datasets and the sets of chosen features are recorded. The stability of the feature selection is assessed based on the sets of chosen features using stability measures.
stabilityHamming( features, p, correction.for.chance = "none", N = 10000, impute.na = NULL )
Arguments: features, p, correction.for.chance, N, impute.na
The stability measure is defined as (see Notation)
numeric(1)
Stability value.
For the definition of all stability measures in this package, the following notation is used: Let $V_1, \ldots, V_m$ denote the sets of chosen features for the $m$ datasets, i.e. features has length $m$ and $V_i$ is a set which contains the $i$-th entry of features. Furthermore, let $h_j$ denote the number of sets that contain feature $X_j$, so that $h_j$ is the absolute frequency with which feature $X_j$ is chosen. Analogously, let $h_{ij}$ denote the number of sets that include both $X_i$ and $X_j$. Also, let $q = \sum_{j=1}^{p} h_j = \sum_{i=1}^{m} |V_i|$ and $V = \bigcup_{i=1}^{m} V_i$.
Dunne K, Cunningham P, Azuaje F (2002). “Solutions to instability problems with sequential wrapper-based approaches to feature selection.” Machine Learning Group, Department of Computer Science, Trinity College Dublin.
Bommert A (2020). Integration of Feature Selection Stability in Model Fitting. Ph.D. thesis, TU Dortmund University, Germany. doi:10.17877/DE290R-21906.
feats = list(1:3, 1:4, 1:5)
stabilityHamming(features = feats, p = 10)
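A minimal by-hand sketch, assuming the measure is the average pairwise proportion of the p features on which two sets agree, i.e. features that are selected in both sets or in neither set:
library(stabm)
feats = list(1:3, 1:4, 1:5)
p = 10
pairs = combn(length(feats), 2)
agreement = apply(pairs, 2, function(ij) {
  a = 1:p %in% feats[[ij[1]]]   # logical selection indicator of the first set
  b = 1:p %in% feats[[ij[2]]]   # logical selection indicator of the second set
  mean(a == b)                  # proportion of features on which both sets agree
})
mean(agreement)                              # by-hand value under this assumption
stabilityHamming(features = feats, p = 10)   # should agree with the value above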
The stability of feature selection is defined as the robustness of the sets of selected features with respect to small variations in the data on which the feature selection is conducted. To quantify stability, several datasets from the same data generating process can be used. Alternatively, a single dataset can be split into parts by resampling. Either way, all datasets used for feature selection must contain exactly the same features. The feature selection method of interest is applied on all of the datasets and the sets of chosen features are recorded. The stability of the feature selection is assessed based on the sets of chosen features using stability measures.
stabilityIntersectionCount( features, sim.mat, threshold = 0.9, correction.for.chance = "estimate", N = 10000, impute.na = NULL )
Arguments: features, sim.mat, threshold, correction.for.chance, N, impute.na
The stability measure is defined as (see Notation) an intersection-based measure with an adjustment term based on counts of similar features; the exact definition is given in Bommert and Rahnenführer (2020).
numeric(1)
Stability value.
For the definition of all stability measures in this package, the following notation is used: Let $V_1, \ldots, V_m$ denote the sets of chosen features for the $m$ datasets, i.e. features has length $m$ and $V_i$ is a set which contains the $i$-th entry of features. Furthermore, let $h_j$ denote the number of sets that contain feature $X_j$, so that $h_j$ is the absolute frequency with which feature $X_j$ is chosen. Analogously, let $h_{ij}$ denote the number of sets that include both $X_i$ and $X_j$. Also, let $q = \sum_{j=1}^{p} h_j = \sum_{i=1}^{m} |V_i|$ and $V = \bigcup_{i=1}^{m} V_i$.
Bommert A, Rahnenführer J (2020). “Adjusted Measures for Feature Selection Stability for Data Sets with Similar Features.” In Machine Learning, Optimization, and Data Science, 203–214. doi:10.1007/978-3-030-64583-0_19.
Bommert A (2020). Integration of Feature Selection Stability in Model Fitting. Ph.D. thesis, TU Dortmund University, Germany. doi:10.17877/DE290R-21906.
feats = list(1:3, 1:4, 1:5)
mat = 0.92 ^ abs(outer(1:10, 1:10, "-"))
stabilityIntersectionCount(features = feats, sim.mat = mat, N = 1000)
The stability of feature selection is defined as the robustness of the sets of selected features with respect to small variations in the data on which the feature selection is conducted. To quantify stability, several datasets from the same data generating process can be used. Alternatively, a single dataset can be split into parts by resampling. Either way, all datasets used for feature selection must contain exactly the same features. The feature selection method of interest is applied on all of the datasets and the sets of chosen features are recorded. The stability of the feature selection is assessed based on the sets of chosen features using stability measures.
stabilityIntersectionGreedy( features, sim.mat, threshold = 0.9, correction.for.chance = "estimate", N = 10000, impute.na = NULL )
Arguments: features, sim.mat, threshold, correction.for.chance, N, impute.na
The stability measure is defined like stabilityIntersectionMBM (see Notation), but the maximum bipartite matching is replaced by a greedy approximation; see stabilityIntersectionMBM.
numeric(1)
Stability value.
For the definition of all stability measures in this package, the following notation is used: Let $V_1, \ldots, V_m$ denote the sets of chosen features for the $m$ datasets, i.e. features has length $m$ and $V_i$ is a set which contains the $i$-th entry of features. Furthermore, let $h_j$ denote the number of sets that contain feature $X_j$, so that $h_j$ is the absolute frequency with which feature $X_j$ is chosen. Analogously, let $h_{ij}$ denote the number of sets that include both $X_i$ and $X_j$. Also, let $q = \sum_{j=1}^{p} h_j = \sum_{i=1}^{m} |V_i|$ and $V = \bigcup_{i=1}^{m} V_i$.
Bommert A, Rahnenführer J (2020). “Adjusted Measures for Feature Selection Stability for Data Sets with Similar Features.” In Machine Learning, Optimization, and Data Science, 203–214. doi:10.1007/978-3-030-64583-0_19.
Bommert A (2020). Integration of Feature Selection Stability in Model Fitting. Ph.D. thesis, TU Dortmund University, Germany. doi:10.17877/DE290R-21906.
feats = list(1:3, 1:4, 1:5)
mat = 0.92 ^ abs(outer(1:10, 1:10, "-"))
stabilityIntersectionGreedy(features = feats, sim.mat = mat, N = 1000)
The stability of feature selection is defined as the robustness of the sets of selected features with respect to small variations in the data on which the feature selection is conducted. To quantify stability, several datasets from the same data generating process can be used. Alternatively, a single dataset can be split into parts by resampling. Either way, all datasets used for feature selection must contain exactly the same features. The feature selection method of interest is applied on all of the datasets and the sets of chosen features are recorded. The stability of the feature selection is assessed based on the sets of chosen features using stability measures.
stabilityIntersectionMBM( features, sim.mat, threshold = 0.9, correction.for.chance = "estimate", N = 10000, impute.na = NULL )
Arguments: features, sim.mat, threshold, correction.for.chance, N, impute.na
The stability measure is defined as (see Notation) an intersection-based measure whose adjustment term is the size of the maximum bipartite matching of the graph whose vertices are the features of $V_i$ on the one side and the features of $V_j$ on the other side. Vertices x and y are connected if and only if their similarity is at least threshold. Requires the package igraph.
numeric(1)
Stability value.
For the definition of all stability measures in this package, the following notation is used: Let $V_1, \ldots, V_m$ denote the sets of chosen features for the $m$ datasets, i.e. features has length $m$ and $V_i$ is a set which contains the $i$-th entry of features. Furthermore, let $h_j$ denote the number of sets that contain feature $X_j$, so that $h_j$ is the absolute frequency with which feature $X_j$ is chosen. Analogously, let $h_{ij}$ denote the number of sets that include both $X_i$ and $X_j$. Also, let $q = \sum_{j=1}^{p} h_j = \sum_{i=1}^{m} |V_i|$ and $V = \bigcup_{i=1}^{m} V_i$.
Bommert A, Rahnenführer J (2020). “Adjusted Measures for Feature Selection Stability for Data Sets with Similar Features.” In Machine Learning, Optimization, and Data Science, 203–214. doi:10.1007/978-3-030-64583-0_19.
Bommert A (2020). Integration of Feature Selection Stability in Model Fitting. Ph.D. thesis, TU Dortmund University, Germany. doi:10.17877/DE290R-21906.
feats = list(1:3, 1:4, 1:5)
mat = 0.92 ^ abs(outer(1:10, 1:10, "-"))
stabilityIntersectionMBM(features = feats, sim.mat = mat, N = 1000)
The stability of feature selection is defined as the robustness of the sets of selected features with respect to small variations in the data on which the feature selection is conducted. To quantify stability, several datasets from the same data generating process can be used. Alternatively, a single dataset can be split into parts by resampling. Either way, all datasets used for feature selection must contain exactly the same features. The feature selection method of interest is applied on all of the datasets and the sets of chosen features are recorded. The stability of the feature selection is assessed based on the sets of chosen features using stability measures.
stabilityIntersectionMean( features, sim.mat, threshold = 0.9, correction.for.chance = "estimate", N = 10000, impute.na = NULL )
Arguments: features, sim.mat, threshold, correction.for.chance, N, impute.na
The stability measure is defined as (see Notation) an intersection-based measure with an adjustment term based on mean similarity values; the exact definition is given in Bommert and Rahnenführer (2020).
numeric(1)
Stability value.
For the definition of all stability measures in this package, the following notation is used: Let $V_1, \ldots, V_m$ denote the sets of chosen features for the $m$ datasets, i.e. features has length $m$ and $V_i$ is a set which contains the $i$-th entry of features. Furthermore, let $h_j$ denote the number of sets that contain feature $X_j$, so that $h_j$ is the absolute frequency with which feature $X_j$ is chosen. Analogously, let $h_{ij}$ denote the number of sets that include both $X_i$ and $X_j$. Also, let $q = \sum_{j=1}^{p} h_j = \sum_{i=1}^{m} |V_i|$ and $V = \bigcup_{i=1}^{m} V_i$.
Bommert A, Rahnenführer J (2020). “Adjusted Measures for Feature Selection Stability for Data Sets with Similar Features.” In Machine Learning, Optimization, and Data Science, 203–214. doi:10.1007/978-3-030-64583-0_19.
Bommert A (2020). Integration of Feature Selection Stability in Model Fitting. Ph.D. thesis, TU Dortmund University, Germany. doi:10.17877/DE290R-21906.
feats = list(1:3, 1:4, 1:5)
mat = 0.92 ^ abs(outer(1:10, 1:10, "-"))
stabilityIntersectionMean(features = feats, sim.mat = mat, N = 1000)
The stability of feature selection is defined as the robustness of the sets of selected features with respect to small variations in the data on which the feature selection is conducted. To quantify stability, several datasets from the same data generating process can be used. Alternatively, a single dataset can be split into parts by resampling. Either way, all datasets used for feature selection must contain exactly the same features. The feature selection method of interest is applied on all of the datasets and the sets of chosen features are recorded. The stability of the feature selection is assessed based on the sets of chosen features using stability measures.
stabilityJaccard( features, p = NULL, correction.for.chance = "none", N = 10000, impute.na = NULL )
Arguments: features, p, correction.for.chance, N, impute.na
The stability measure is defined as (see Notation)
numeric(1)
Stability value.
For the definition of all stability measures in this package, the following notation is used: Let $V_1, \ldots, V_m$ denote the sets of chosen features for the $m$ datasets, i.e. features has length $m$ and $V_i$ is a set which contains the $i$-th entry of features. Furthermore, let $h_j$ denote the number of sets that contain feature $X_j$, so that $h_j$ is the absolute frequency with which feature $X_j$ is chosen. Analogously, let $h_{ij}$ denote the number of sets that include both $X_i$ and $X_j$. Also, let $q = \sum_{j=1}^{p} h_j = \sum_{i=1}^{m} |V_i|$ and $V = \bigcup_{i=1}^{m} V_i$.
Jaccard P (1901). “Étude comparative de la distribution florale dans une portion des Alpes et du Jura.” Bulletin de la Société Vaudoise des Sciences Naturelles, 37, 547–579. doi:10.5169/SEALS-266450.
Bommert A, Rahnenführer J, Lang M (2017). “A Multicriteria Approach to Find Predictive and Sparse Models with Stable Feature Selection for High-Dimensional Data.” Computational and Mathematical Methods in Medicine, 2017, 1–18. doi:10.1155/2017/7907163.
Bommert A (2020). Integration of Feature Selection Stability in Model Fitting. Ph.D. thesis, TU Dortmund University, Germany. doi:10.17877/DE290R-21906.
feats = list(1:3, 1:4, 1:5)
stabilityJaccard(features = feats)
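As a cross-check, a minimal sketch that recomputes the value by hand, assuming the measure is the average pairwise Jaccard index |Vi intersect Vj| / |Vi union Vj| of the feature sets:
library(stabm)
feats = list(1:3, 1:4, 1:5)
pairs = combn(length(feats), 2)
jaccard = apply(pairs, 2, function(ij) {
  a = feats[[ij[1]]]
  b = feats[[ij[2]]]
  length(intersect(a, b)) / length(union(a, b))
})
mean(jaccard)                        # by-hand value under the assumed definition
stabilityJaccard(features = feats)   # should agree with the value above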
The stability of feature selection is defined as the robustness of the sets of selected features with respect to small variations in the data on which the feature selection is conducted. To quantify stability, several datasets from the same data generating process can be used. Alternatively, a single dataset can be split into parts by resampling. Either way, all datasets used for feature selection must contain exactly the same features. The feature selection method of interest is applied on all of the datasets and the sets of chosen features are recorded. The stability of the feature selection is assessed based on the sets of chosen features using stability measures.
stabilityKappa(features, p, impute.na = NULL)
Arguments: features, p, impute.na
The stability measure is defined as the average kappa coefficient between all pairs of feature sets. It can be rewritten as (see Notation)
numeric(1)
Stability value.
For the definition of all stability measures in this package, the following notation is used: Let $V_1, \ldots, V_m$ denote the sets of chosen features for the $m$ datasets, i.e. features has length $m$ and $V_i$ is a set which contains the $i$-th entry of features. Furthermore, let $h_j$ denote the number of sets that contain feature $X_j$, so that $h_j$ is the absolute frequency with which feature $X_j$ is chosen. Analogously, let $h_{ij}$ denote the number of sets that include both $X_i$ and $X_j$. Also, let $q = \sum_{j=1}^{p} h_j = \sum_{i=1}^{m} |V_i|$ and $V = \bigcup_{i=1}^{m} V_i$.
Carletta J (1996). “Assessing Agreement on Classification Tasks: The Kappa Statistic.” Computational Linguistics, 22(2), 249–254.
Bommert A (2020). Integration of Feature Selection Stability in Model Fitting. Ph.D. thesis, TU Dortmund University, Germany. doi:10.17877/DE290R-21906.
feats = list(1:3, 1:4, 1:5)
stabilityKappa(features = feats, p = 10)
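Based on the description above, the value can be reproduced by computing Cohen's kappa for each pair of 0/1 selection indicator vectors and averaging; the by-hand kappa formula below is the standard one and only a sketch:
library(stabm)
feats = list(1:3, 1:4, 1:5)
p = 10
pairs = combn(length(feats), 2)
kappas = apply(pairs, 2, function(ij) {
  a = 1:p %in% feats[[ij[1]]]
  b = 1:p %in% feats[[ij[2]]]
  po = mean(a == b)                             # observed agreement
  pe = mean(a) * mean(b) + mean(!a) * mean(!b)  # agreement expected by chance
  (po - pe) / (1 - pe)                          # Cohen's kappa
})
mean(kappas)                              # by-hand value
stabilityKappa(features = feats, p = 10)  # should agree with the value above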
The stability of feature selection is defined as the robustness of the sets of selected features with respect to small variations in the data on which the feature selection is conducted. To quantify stability, several datasets from the same data generating process can be used. Alternatively, a single dataset can be split into parts by resampling. Either way, all datasets used for feature selection must contain exactly the same features. The feature selection method of interest is applied on all of the datasets and the sets of chosen features are recorded. The stability of the feature selection is assessed based on the sets of chosen features using stability measures.
stabilityLustgarten(features, p, impute.na = NULL)
Arguments: features, p, impute.na
The stability measure is defined as (see Notation)
numeric(1)
Stability value.
For the definition of all stability measures in this package, the following notation is used: Let $V_1, \ldots, V_m$ denote the sets of chosen features for the $m$ datasets, i.e. features has length $m$ and $V_i$ is a set which contains the $i$-th entry of features. Furthermore, let $h_j$ denote the number of sets that contain feature $X_j$, so that $h_j$ is the absolute frequency with which feature $X_j$ is chosen. Analogously, let $h_{ij}$ denote the number of sets that include both $X_i$ and $X_j$. Also, let $q = \sum_{j=1}^{p} h_j = \sum_{i=1}^{m} |V_i|$ and $V = \bigcup_{i=1}^{m} V_i$.
Lustgarten JL, Gopalakrishnan V, Visweswaran S (2009). “Measuring stability of feature selection in biomedical datasets.” In AMIA Annual Symposium Proceedings, volume 2009, 406. American Medical Informatics Association.
Bommert A, Rahnenführer J, Lang M (2017). “A Multicriteria Approach to Find Predictive and Sparse Models with Stable Feature Selection for High-Dimensional Data.” Computational and Mathematical Methods in Medicine, 2017, 1–18. doi:10.1155/2017/7907163.
Bommert A (2020). Integration of Feature Selection Stability in Model Fitting. Ph.D. thesis, TU Dortmund University, Germany. doi:10.17877/DE290R-21906.
feats = list(1:3, 1:4, 1:5)
stabilityLustgarten(features = feats, p = 10)
The stability of feature selection is defined as the robustness of the sets of selected features with respect to small variations in the data on which the feature selection is conducted. To quantify stability, several datasets from the same data generating process can be used. Alternatively, a single dataset can be split into parts by resampling. Either way, all datasets used for feature selection must contain exactly the same features. The feature selection method of interest is applied on all of the datasets and the sets of chosen features are recorded. The stability of the feature selection is assessed based on the sets of chosen features using stability measures.
stabilityNogueira(features, p, impute.na = NULL)
Arguments: features, p, impute.na
The stability measure is defined as (see Notation)
numeric(1)
Stability value.
For the definition of all stability measures in this package, the following notation is used: Let $V_1, \ldots, V_m$ denote the sets of chosen features for the $m$ datasets, i.e. features has length $m$ and $V_i$ is a set which contains the $i$-th entry of features. Furthermore, let $h_j$ denote the number of sets that contain feature $X_j$, so that $h_j$ is the absolute frequency with which feature $X_j$ is chosen. Analogously, let $h_{ij}$ denote the number of sets that include both $X_i$ and $X_j$. Also, let $q = \sum_{j=1}^{p} h_j = \sum_{i=1}^{m} |V_i|$ and $V = \bigcup_{i=1}^{m} V_i$.
Nogueira S, Sechidis K, Brown G (2018). “On the Stability of Feature Selection Algorithms.” Journal of Machine Learning Research, 18(174), 1–54. https://jmlr.org/papers/v18/17-514.html.
Bommert A (2020). Integration of Feature Selection Stability in Model Fitting. Ph.D. thesis, TU Dortmund University, Germany. doi:10.17877/DE290R-21906.
feats = list(1:3, 1:4, 1:5)
stabilityNogueira(features = feats, p = 10)
The stability of feature selection is defined as the robustness of the sets of selected features with respect to small variations in the data on which the feature selection is conducted. To quantify stability, several datasets from the same data generating process can be used. Alternatively, a single dataset can be split into parts by resampling. Either way, all datasets used for feature selection must contain exactly the same features. The feature selection method of interest is applied on all of the datasets and the sets of chosen features are recorded. The stability of the feature selection is assessed based on the sets of chosen features using stability measures.
stabilityNovovicova( features, p = NULL, correction.for.chance = "none", N = 10000, impute.na = NULL )
Arguments: features, p, correction.for.chance, N, impute.na
The stability measure is defined as (see Notation)
numeric(1)
Stability value.
For the definition of all stability measures in this package, the following notation is used: Let $V_1, \ldots, V_m$ denote the sets of chosen features for the $m$ datasets, i.e. features has length $m$ and $V_i$ is a set which contains the $i$-th entry of features. Furthermore, let $h_j$ denote the number of sets that contain feature $X_j$, so that $h_j$ is the absolute frequency with which feature $X_j$ is chosen. Analogously, let $h_{ij}$ denote the number of sets that include both $X_i$ and $X_j$. Also, let $q = \sum_{j=1}^{p} h_j = \sum_{i=1}^{m} |V_i|$ and $V = \bigcup_{i=1}^{m} V_i$.
Novovičová J, Somol P, Pudil P (2009). “A New Measure of Feature Selection Algorithms' Stability.” In 2009 IEEE International Conference on Data Mining Workshops. doi:10.1109/icdmw.2009.32.
Bommert A, Rahnenführer J, Lang M (2017). “A Multicriteria Approach to Find Predictive and Sparse Models with Stable Feature Selection for High-Dimensional Data.” Computational and Mathematical Methods in Medicine, 2017, 1–18. doi:10.1155/2017/7907163.
Bommert A (2020). Integration of Feature Selection Stability in Model Fitting. Ph.D. thesis, TU Dortmund University, Germany. doi:10.17877/DE290R-21906.
feats = list(1:3, 1:4, 1:5)
stabilityNovovicova(features = feats)
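A by-hand sketch under the assumption that the measure is based on the selection frequencies $h_j$ of the chosen features, namely the sum of $h_j \log_2(h_j)$ over all chosen features, normalized by $q \log_2(m)$:
library(stabm)
feats = list(1:3, 1:4, 1:5)
m = length(feats)
h = table(unlist(feats))   # selection frequency h_j of every feature chosen at least once
q = sum(h)                 # total number of selections across all sets
sum(h * log2(h)) / (q * log2(m))        # by-hand value under this assumption
stabilityNovovicova(features = feats)   # should agree with the value above if the assumption holds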
The stability of feature selection is defined as the robustness of the sets of selected features with respect to small variations in the data on which the feature selection is conducted. To quantify stability, several datasets from the same data generating process can be used. Alternatively, a single dataset can be split into parts by resampling. Either way, all datasets used for feature selection must contain exactly the same features. The feature selection method of interest is applied on all of the datasets and the sets of chosen features are recorded. The stability of the feature selection is assessed based on the sets of chosen features using stability measures.
stabilityOchiai( features, p = NULL, correction.for.chance = "none", N = 10000, impute.na = NULL )
Arguments: features, p, correction.for.chance, N, impute.na
The stability measure is defined as (see Notation)
numeric(1)
Stability value.
For the definition of all stability measures in this package, the following notation is used: Let $V_1, \ldots, V_m$ denote the sets of chosen features for the $m$ datasets, i.e. features has length $m$ and $V_i$ is a set which contains the $i$-th entry of features. Furthermore, let $h_j$ denote the number of sets that contain feature $X_j$, so that $h_j$ is the absolute frequency with which feature $X_j$ is chosen. Analogously, let $h_{ij}$ denote the number of sets that include both $X_i$ and $X_j$. Also, let $q = \sum_{j=1}^{p} h_j = \sum_{i=1}^{m} |V_i|$ and $V = \bigcup_{i=1}^{m} V_i$.
Ochiai A (1957). “Zoogeographical Studies on the Soleoid Fishes Found in Japan and its Neighbouring Regions-III.” Nippon Suisan Gakkaishi, 22(9), 531-535. doi:10.2331/suisan.22.531.
Bommert A, Rahnenführer J, Lang M (2017). “A Multicriteria Approach to Find Predictive and Sparse Models with Stable Feature Selection for High-Dimensional Data.” Computational and Mathematical Methods in Medicine, 2017, 1–18. doi:10.1155/2017/7907163.
Bommert A (2020). Integration of Feature Selection Stability in Model Fitting. Ph.D. thesis, TU Dortmund University, Germany. doi:10.17877/DE290R-21906.
feats = list(1:3, 1:4, 1:5)
stabilityOchiai(features = feats)
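As a cross-check, a minimal sketch that recomputes the value by hand, assuming the measure is the average pairwise Ochiai index |Vi intersect Vj| / sqrt(|Vi| * |Vj|) of the feature sets:
library(stabm)
feats = list(1:3, 1:4, 1:5)
pairs = combn(length(feats), 2)
ochiai = apply(pairs, 2, function(ij) {
  a = feats[[ij[1]]]
  b = feats[[ij[2]]]
  length(intersect(a, b)) / sqrt(length(a) * length(b))
})
mean(ochiai)                        # by-hand value under the assumed definition
stabilityOchiai(features = feats)   # should agree with the value above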
The stability of feature selection is defined as the robustness of the sets of selected features with respect to small variations in the data on which the feature selection is conducted. To quantify stability, several datasets from the same data generating process can be used. Alternatively, a single dataset can be split into parts by resampling. Either way, all datasets used for feature selection must contain exactly the same features. The feature selection method of interest is applied on all of the datasets and the sets of chosen features are recorded. The stability of the feature selection is assessed based on the sets of chosen features using stability measures.
stabilityPhi(features, p, impute.na = NULL)
Arguments: features, p, impute.na
The stability measure is defined as the average phi coefficient between all pairs of feature sets. It can be rewritten as (see Notation)
numeric(1)
Stability value.
For the definition of all stability measures in this package, the following notation is used: Let $V_1, \ldots, V_m$ denote the sets of chosen features for the $m$ datasets, i.e. features has length $m$ and $V_i$ is a set which contains the $i$-th entry of features. Furthermore, let $h_j$ denote the number of sets that contain feature $X_j$, so that $h_j$ is the absolute frequency with which feature $X_j$ is chosen. Analogously, let $h_{ij}$ denote the number of sets that include both $X_i$ and $X_j$. Also, let $q = \sum_{j=1}^{p} h_j = \sum_{i=1}^{m} |V_i|$ and $V = \bigcup_{i=1}^{m} V_i$.
Nogueira S, Brown G (2016). “Measuring the Stability of Feature Selection.” In Machine Learning and Knowledge Discovery in Databases, 442–457. Springer International Publishing. doi:10.1007/978-3-319-46227-1_28.
Bommert A, Rahnenführer J, Lang M (2017). “A Multicriteria Approach to Find Predictive and Sparse Models with Stable Feature Selection for High-Dimensional Data.” Computational and Mathematical Methods in Medicine, 2017, 1–18. doi:10.1155/2017/7907163.
Bommert A (2020). Integration of Feature Selection Stability in Model Fitting. Ph.D. thesis, TU Dortmund University, Germany. doi:10.17877/DE290R-21906.
feats = list(1:3, 1:4, 1:5)
stabilityPhi(features = feats, p = 10)
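Based on the description above, the value can be reproduced with cor(), since the phi coefficient of two 0/1 selection indicator vectors equals their Pearson correlation:
library(stabm)
feats = list(1:3, 1:4, 1:5)
p = 10
pairs = combn(length(feats), 2)
phis = apply(pairs, 2, function(ij) {
  a = as.integer(1:p %in% feats[[ij[1]]])
  b = as.integer(1:p %in% feats[[ij[2]]])
  cor(a, b)   # Pearson correlation of the binary indicators = phi coefficient
})
mean(phis)                               # by-hand value
stabilityPhi(features = feats, p = 10)   # should agree with the value above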
The stability of feature selection is defined as the robustness of the sets of selected features with respect to small variations in the data on which the feature selection is conducted. To quantify stability, several datasets from the same data generating process can be used. Alternatively, a single dataset can be split into parts by resampling. Either way, all datasets used for feature selection must contain exactly the same features. The feature selection method of interest is applied on all of the datasets and the sets of chosen features are recorded. The stability of the feature selection is assessed based on the sets of chosen features using stability measures.
stabilitySechidis(features, sim.mat, threshold = 0.9, impute.na = NULL)
Arguments: features, sim.mat, threshold, impute.na
The stability measure is defined based on two (p x p)-matrices; see Sechidis et al. (2020) for the exact definition. One of these matrices is created from sim.mat by setting all values of sim.mat that are smaller than threshold to 0. If you want this matrix to be equal to sim.mat, use threshold = 0.
numeric(1)
Stability value.
For the definition of all stability measures in this package, the following notation is used: Let $V_1, \ldots, V_m$ denote the sets of chosen features for the $m$ datasets, i.e. features has length $m$ and $V_i$ is a set which contains the $i$-th entry of features. Furthermore, let $h_j$ denote the number of sets that contain feature $X_j$, so that $h_j$ is the absolute frequency with which feature $X_j$ is chosen. Analogously, let $h_{ij}$ denote the number of sets that include both $X_i$ and $X_j$. Also, let $q = \sum_{j=1}^{p} h_j = \sum_{i=1}^{m} |V_i|$ and $V = \bigcup_{i=1}^{m} V_i$.
This stability measure is not corrected for chance. Unlike the other stability measures in this R package that are not corrected for chance, stabilitySechidis does not offer a correction.for.chance argument. This is because no finite upper bound is known for stabilitySechidis at the moment, see listStabilityMeasures.
Sechidis K, Papangelou K, Nogueira S, Weatherall J, Brown G (2020). “On the Stability of Feature Selection in the Presence of Feature Correlations.” In Machine Learning and Knowledge Discovery in Databases, 327–342. Springer International Publishing. doi:10.1007/978-3-030-46150-8_20.
Bommert A (2020). Integration of Feature Selection Stability in Model Fitting. Ph.D. thesis, TU Dortmund University, Germany. doi:10.17877/DE290R-21906.
feats = list(1:3, 1:4, 1:5)
mat = 0.92 ^ abs(outer(1:10, 1:10, "-"))
stabilitySechidis(features = feats, sim.mat = mat)
The stability of feature selection is defined as the robustness of the sets of selected features with respect to small variations in the data on which the feature selection is conducted. To quantify stability, several datasets from the same data generating process can be used. Alternatively, a single dataset can be split into parts by resampling. Either way, all datasets used for feature selection must contain exactly the same features. The feature selection method of interest is applied on all of the datasets and the sets of chosen features are recorded. The stability of the feature selection is assessed based on the sets of chosen features using stability measures.
stabilitySomol(features, p, impute.na = NULL)
Arguments: features, p, impute.na
The stability measure is defined as given in Somol and Novovičová (2010) (see Notation).
numeric(1)
Stability value.
For the definition of all stability measures in this package, the following notation is used: Let $V_1, \ldots, V_m$ denote the sets of chosen features for the $m$ datasets, i.e. features has length $m$ and $V_i$ is a set which contains the $i$-th entry of features. Furthermore, let $h_j$ denote the number of sets that contain feature $X_j$, so that $h_j$ is the absolute frequency with which feature $X_j$ is chosen. Analogously, let $h_{ij}$ denote the number of sets that include both $X_i$ and $X_j$. Also, let $q = \sum_{j=1}^{p} h_j = \sum_{i=1}^{m} |V_i|$ and $V = \bigcup_{i=1}^{m} V_i$.
Somol P, Novovičová J (2010). “Evaluating Stability and Comparing Output of Feature Selectors that Optimize Feature Subset Cardinality.” IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(11), 1921–1939. doi:10.1109/tpami.2010.34.
Bommert A, Rahnenführer J, Lang M (2017). “A Multicriteria Approach to Find Predictive and Sparse Models with Stable Feature Selection for High-Dimensional Data.” Computational and Mathematical Methods in Medicine, 2017, 1–18. doi:10.1155/2017/7907163.
Bommert A (2020). Integration of Feature Selection Stability in Model Fitting. Ph.D. thesis, TU Dortmund University, Germany. doi:10.17877/DE290R-21906.
feats = list(1:3, 1:4, 1:5)
stabilitySomol(features = feats, p = 10)
The stability of feature selection is defined as the robustness of the sets of selected features with respect to small variations in the data on which the feature selection is conducted. To quantify stability, several datasets from the same data generating process can be used. Alternatively, a single dataset can be split into parts by resampling. Either way, all datasets used for feature selection must contain exactly the same features. The feature selection method of interest is applied on all of the datasets and the sets of chosen features are recorded. The stability of the feature selection is assessed based on the sets of chosen features using stability measures.
stabilityUnadjusted(features, p, impute.na = NULL)
Arguments: features, p, impute.na
The stability measure is defined as (see Notation)
This is what stabilityIntersectionMBM, stabilityIntersectionGreedy, stabilityIntersectionCount and stabilityIntersectionMean become, when there are no similar features.
numeric(1)
Stability value.
For the definition of all stability measures in this package, the following notation is used: Let $V_1, \ldots, V_m$ denote the sets of chosen features for the $m$ datasets, i.e. features has length $m$ and $V_i$ is a set which contains the $i$-th entry of features. Furthermore, let $h_j$ denote the number of sets that contain feature $X_j$, so that $h_j$ is the absolute frequency with which feature $X_j$ is chosen. Analogously, let $h_{ij}$ denote the number of sets that include both $X_i$ and $X_j$. Also, let $q = \sum_{j=1}^{p} h_j = \sum_{i=1}^{m} |V_i|$ and $V = \bigcup_{i=1}^{m} V_i$.
Bommert A, Rahnenführer J (2020). “Adjusted Measures for Feature Selection Stability for Data Sets with Similar Features.” In Machine Learning, Optimization, and Data Science, 203–214. doi:10.1007/978-3-030-64583-0_19.
Bommert A (2020). Integration of Feature Selection Stability in Model Fitting. Ph.D. thesis, TU Dortmund University, Germany. doi:10.17877/DE290R-21906.
feats = list(1:3, 1:4, 1:5)
stabilityUnadjusted(features = feats, p = 10)
The stability of feature selection is defined as the robustness of the sets of selected features with respect to small variations in the data on which the feature selection is conducted. To quantify stability, several datasets from the same data generating process can be used. Alternatively, a single dataset can be split into parts by resampling. Either way, all datasets used for feature selection must contain exactly the same features. The feature selection method of interest is applied on all of the datasets and the sets of chosen features are recorded. The stability of the feature selection is assessed based on the sets of chosen features using stability measures.
stabilityWald(features, p, impute.na = NULL)
Arguments: features, p, impute.na
The stability measure is defined as (see Notation)
numeric(1)
Stability value.
For the definition of all stability measures in this package, the following notation is used: Let $V_1, \ldots, V_m$ denote the sets of chosen features for the $m$ datasets, i.e. features has length $m$ and $V_i$ is a set which contains the $i$-th entry of features. Furthermore, let $h_j$ denote the number of sets that contain feature $X_j$, so that $h_j$ is the absolute frequency with which feature $X_j$ is chosen. Analogously, let $h_{ij}$ denote the number of sets that include both $X_i$ and $X_j$. Also, let $q = \sum_{j=1}^{p} h_j = \sum_{i=1}^{m} |V_i|$ and $V = \bigcup_{i=1}^{m} V_i$.
Wald R, Khoshgoftaar TM, Napolitano A (2013). “Stability of Filter- and Wrapper-Based Feature Subset Selection.” In 2013 IEEE 25th International Conference on Tools with Artificial Intelligence. doi:10.1109/ictai.2013.63.
Bommert A (2020). Integration of Feature Selection Stability in Model Fitting. Ph.D. thesis, TU Dortmund University, Germany. doi:10.17877/DE290R-21906.
feats = list(1:3, 1:4, 1:5)
stabilityWald(features = feats, p = 10)
The stability of feature selection is defined as the robustness of the sets of selected features with respect to small variations in the data on which the feature selection is conducted. To quantify stability, several datasets from the same data generating process can be used. Alternatively, a single dataset can be split into parts by resampling. Either way, all datasets used for feature selection must contain exactly the same features. The feature selection method of interest is applied on all of the datasets and the sets of chosen features are recorded. The stability of the feature selection is assessed based on the sets of chosen features using stability measures.
stabilityYu( features, sim.mat, threshold = 0.9, correction.for.chance = "estimate", N = 10000, impute.na = NULL )
Arguments: features, sim.mat, threshold, correction.for.chance, N, impute.na
For each pair of feature sets, consider the number of features in one set that are not shared with the other set but that have a highly similar feature in the other set. The stability measure is defined based on these counts (see Notation). Note that this definition slightly differs from the original one in order to make it suitable for arbitrary datasets and similarity measures and applicable in situations where the feature sets differ in size.
numeric(1)
Stability value.
For the definition of all stability measures in this package, the following notation is used: Let $V_1, \ldots, V_m$ denote the sets of chosen features for the $m$ datasets, i.e. features has length $m$ and $V_i$ is a set which contains the $i$-th entry of features. Furthermore, let $h_j$ denote the number of sets that contain feature $X_j$, so that $h_j$ is the absolute frequency with which feature $X_j$ is chosen. Analogously, let $h_{ij}$ denote the number of sets that include both $X_i$ and $X_j$. Also, let $q = \sum_{j=1}^{p} h_j = \sum_{i=1}^{m} |V_i|$ and $V = \bigcup_{i=1}^{m} V_i$.
Yu L, Han Y, Berens ME (2012). “Stable Gene Selection from Microarray Data via Sample Weighting.” IEEE/ACM Transactions on Computational Biology and Bioinformatics, 9(1), 262–272. doi:10.1109/tcbb.2011.47.
Zhang M, Zhang L, Zou J, Yao C, Xiao H, Liu Q, Wang J, Wang D, Wang C, Guo Z (2009). “Evaluating reproducibility of differential expression discoveries in microarray studies by considering correlated molecular changes.” Bioinformatics, 25(13), 1662–1668. doi:10.1093/bioinformatics/btp295.
Bommert A (2020). Integration of Feature Selection Stability in Model Fitting. Ph.D. thesis, TU Dortmund University, Germany. doi:10.17877/DE290R-21906.
feats = list(1:3, 1:4, 1:5)
mat = 0.92 ^ abs(outer(1:10, 1:10, "-"))
stabilityYu(features = feats, sim.mat = mat, N = 1000)
The stability of feature selection is defined as the robustness of the sets of selected features with respect to small variations in the data on which the feature selection is conducted. To quantify stability, several datasets from the same data generating process can be used. Alternatively, a single dataset can be split into parts by resampling. Either way, all datasets used for feature selection must contain exactly the same features. The feature selection method of interest is applied on all of the datasets and the sets of chosen features are recorded. The stability of the feature selection is assessed based on the sets of chosen features using stability measures.
stabilityZucknick( features, sim.mat, threshold = 0.9, correction.for.chance = "none", N = 10000, impute.na = NULL )
Arguments: features, sim.mat, threshold, correction.for.chance, N, impute.na
The stability measure is defined as a similarity-adjusted extension of the Jaccard index; the exact definition is given in Zucknick, Richardson and Stronach (2008). Note that this definition slightly differs from the original one in order to make it suitable for arbitrary similarity measures.
numeric(1)
Stability value.
For the definition of all stability measures in this package, the following notation is used: Let $V_1, \ldots, V_m$ denote the sets of chosen features for the $m$ datasets, i.e. features has length $m$ and $V_i$ is a set which contains the $i$-th entry of features. Furthermore, let $h_j$ denote the number of sets that contain feature $X_j$, so that $h_j$ is the absolute frequency with which feature $X_j$ is chosen. Analogously, let $h_{ij}$ denote the number of sets that include both $X_i$ and $X_j$. Also, let $q = \sum_{j=1}^{p} h_j = \sum_{i=1}^{m} |V_i|$ and $V = \bigcup_{i=1}^{m} V_i$.
Zucknick M, Richardson S, Stronach EA (2008). “Comparing the Characteristics of Gene Expression Profiles Derived by Univariate and Multivariate Classification Methods.” Statistical Applications in Genetics and Molecular Biology, 7(1). doi:10.2202/1544-6115.1307.
Bommert A, Rahnenführer J, Lang M (2017). “A Multicriteria Approach to Find Predictive and Sparse Models with Stable Feature Selection for High-Dimensional Data.” Computational and Mathematical Methods in Medicine, 2017, 1–18. doi:10.1155/2017/7907163.
Bommert A (2020). Integration of Feature Selection Stability in Model Fitting. Ph.D. thesis, TU Dortmund University, Germany. doi:10.17877/DE290R-21906.
feats = list(1:3, 1:4, 1:5)
mat = 0.92 ^ abs(outer(1:10, 1:10, "-"))
stabilityZucknick(features = feats, sim.mat = mat)