Track Awesome Xai Updates Daily

Awesome Explainable AI (XAI) and Interpretable ML Papers and Resources

Awesome Xai Updates on May 04, 2021
2 awesome projects updated on May 04, 2021

Awesome Xai Updates on Apr 05, 2021
Follow / Critiques
Rich Caruana - The man behind Explainable Boosting Machines.
1 awesome project updated on Apr 05, 2021

Awesome Xai Updates on Mar 23, 2021
Papers / Interpretable Models
Decision List - An ordered list of if-then rules; like a decision tree with no branches (a minimal sketch follows this list).
Naive Bayes - A classifier built on conditional probabilities; good classification, but poor probability estimation.
RuleFit - A sparse linear model learned over decision rules, including feature interactions.
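To make the decision-list idea concrete, here is a minimal sketch in Python. The rules, feature names, and labels are invented for illustration and are not drawn from any of the papers above; the point is only that prediction is an ordered scan where the first matching rule wins, which is what makes the model easy to trace.

```python
# A minimal decision list: rules are evaluated top to bottom and the first
# matching rule wins. Rules, feature names, and labels are hypothetical.
from typing import Callable

Rule = tuple[Callable[[dict], bool], str]

rules: list[Rule] = [
    (lambda x: x["age"] < 25 and x["priors"] == 0, "low risk"),
    (lambda x: x["priors"] > 3, "high risk"),
]
DEFAULT = "medium risk"  # fallback when no rule fires

def predict(x: dict) -> str:
    for condition, label in rules:
        if condition(x):
            return label
    return DEFAULT

print(predict({"age": 22, "priors": 0}))  # -> low risk
print(predict({"age": 40, "priors": 5}))  # -> high risk
print(predict({"age": 40, "priors": 1}))  # -> medium risk
```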
Videos / Critiques
Debate: Interpretability is necessary for ML - A debate on whether interpretability is necessary for ML, with Rich Caruana and Patrice Simard arguing for and Kilian Weinberger and Yann LeCun arguing against.
9 awesome projects updated on Mar 23, 2021

Awesome Xai Updates on Mar 17, 2021
Papers / Landmarks
Explanation in Artificial Intelligence: Insights from the Social Sciences - This paper provides an introduction to social science research on explanations. The author presents four major findings: (1) explanations are contrastive, (2) explanations are selected, (3) probabilities probably don't matter, (4) explanations are social. These fit into the general theme that explanations are *contextual*.
Sanity Checks for Saliency Maps - An important read for anyone using saliency maps. This paper proposes two experiments to determine whether saliency maps are useful: (1) the model parameter randomization test compares maps from trained and untrained models, (2) the data randomization test compares maps from models trained on the original dataset and models trained on the same dataset with randomized labels. They find that "some widely deployed saliency methods are independent of both the data the model was trained on, and the model parameters". (A code sketch of the parameter randomization test follows this list.)
Attention is not Explanation - The authors perform a series of NLP experiments arguing that attention does not provide meaningful explanations. They also demonstrate that different attention distributions can generate similar model outputs.
Attention is not *not* Explanation - A rebuttal to the above paper. The authors argue that multiple explanations can be valid and that attention can produce *a* valid explanation, if not *the* valid explanation.
Do Not Trust Additive Explanations - The authors argue that additive explanations (e.g. LIME, SHAP, Break Down) fail to take feature interactions into account and are thus unreliable.
Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead - The authors present a number of issues with explainable ML and challenges for interpretable ML: (1) constructing optimal logical models, (2) constructing optimal sparse scoring systems, (3) defining interpretability and creating methods for specific domains. They also offer an argument for why interpretable models might exist in many different domains.
The (Un)reliability of Saliency Methods - The authors demonstrate how saliency methods vary their attributions when a constant shift is added to the input data. They argue that methods should satisfy input invariance: a saliency method should mirror the sensitivity of the model with respect to transformations of the input. (The sketch after this list also includes an input-invariance check.)
Tim Miller - One of the preeminent researchers in XAI.
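The two saliency critiques above describe concrete, reproducible tests. Below is a minimal, hedged sketch of both: the model parameter randomization test from Sanity Checks for Saliency Maps and the constant-shift input-invariance check from The (Un)reliability of Saliency Methods. It assumes PyTorch, uses plain gradient saliency, and replaces the papers' trained image models with a tiny untrained MLP for brevity; the architecture, data, and saliency method here are stand-ins, not the papers' actual setups.

```python
# Sketch of two saliency sanity checks, with plain gradient saliency and a
# tiny MLP standing in for the papers' trained models (assumes PyTorch).
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def gradient_saliency(model, x):
    """Vanilla gradient saliency: |d(max logit)/d(input)|."""
    x = x.clone().requires_grad_(True)
    model(x).max(dim=1).values.sum().backward()
    return x.grad.abs()

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))
x = torch.randn(8, 10)
base_map = gradient_saliency(model, x)

# (1) Model parameter randomization test: compare maps from the reference
# model and a re-initialized copy. A method that actually depends on the
# learned parameters should produce clearly different maps. (In the paper
# the reference model is trained; here it is untrained, for brevity.)
rand_model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))
rand_map = gradient_saliency(rand_model, x)
sim = F.cosine_similarity(base_map.flatten(), rand_map.flatten(), dim=0)
print(f"similarity after weight randomization: {sim.item():.3f}")

# (2) Input invariance check: build a second model whose first-layer bias
# absorbs a constant input shift, so both models compute the same function
# of the original data. An input-invariant method then attributes identically.
shift = torch.full((10,), 2.0)
shifted_model = copy.deepcopy(model)
with torch.no_grad():
    shifted_model[0].bias -= shifted_model[0].weight @ shift
shifted_map = gradient_saliency(shifted_model, x + shift)
print("max attribution difference under shift:",
      (base_map - shifted_map).abs().max().item())  # ~0 for plain gradients
```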
87 awesome projects updated on Mar 17, 2021

Awesome Xai Updates on Mar 02, 2021
Repositories / Critiques
EthicalML/xai (⭐859) - A toolkit for XAI focused exclusively on tabular data. It implements a variety of data and model evaluation techniques.
PAIR-code/what-if-tool (⭐753) - A tool for TensorBoard or notebooks that allows investigating model performance and fairness.