Admissible Machine Learning
We have developed new tools to aid in the design of admissible learning algorithms: models that are, to the best possible extent, efficient (enjoy good predictive accuracy), fair (minimize discrimination against minority groups), and interpretable (provide mechanistic understanding).
Admissible ML introduces two methodological tools:
Infogram, an “information diagram”, is a new graphical feature-exploration method that facilitates the development of admissible machine learning methods.
L-features offer a way to systematically discover hidden problematic proxy features in a dataset. L-features are inadmissible features; identifying and removing them helps mitigate unfairness.
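The intuition behind inadmissible proxy features can be illustrated with a small self-contained calculation. The quantities below follow the infogram's framing of relevance, I(X;Y), versus safety, I(X;Y|P), where P is a protected attribute; the simulated data and the helper functions are illustrative assumptions, not part of the H2O API. A proxy for P scores high on relevance but near zero on safety, which is what flags it as an L-feature:

```python
import numpy as np

def mutual_info(x, y):
    """I(X;Y) in nats for discrete arrays, from empirical frequencies."""
    mi = 0.0
    for a in np.unique(x):
        for b in np.unique(y):
            pxy = np.mean((x == a) & (y == b))
            if pxy > 0:
                mi += pxy * np.log(pxy / (np.mean(x == a) * np.mean(y == b)))
    return mi

def cond_mutual_info(x, y, z):
    """I(X;Y|Z) = sum over z of p(z) * I(X;Y | Z=z)."""
    return sum(np.mean(z == c) * mutual_info(x[z == c], y[z == c])
               for c in np.unique(z))

rng = np.random.default_rng(0)
n = 20000
p = rng.integers(0, 2, n)                          # protected attribute
proxy = np.where(rng.random(n) < 0.9, p, 1 - p)    # near-copy of p (an L-feature)
legit = rng.integers(0, 2, n)                      # signal independent of p
y = ((p + legit) >= 1).astype(int)                 # outcome depends on p and legit

for name, feat in [("proxy", proxy), ("legit", legit)]:
    relevance = mutual_info(feat, y)               # how predictive of y
    safety = cond_mutual_info(feat, y, p)          # predictive power beyond p
    print(f"{name}: relevance={relevance:.3f}, safety={safety:.3f}")
```

Both features carry information about the outcome, but the proxy's information is almost entirely explained by the protected attribute (safety near zero), whereas the legitimate feature remains informative even after conditioning on it.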
The Infogram and Admissible Machine Learning bring a new research direction to machine learning interpretability. You can find the theoretical foundations and several real-life examples of its utility in the Admissible ML paper. Below we introduce the concepts at a high level and provide an example using the H2O Infogram implementation.