<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Track Awesome XAI Updates Daily</title>
  <id>https://www.trackawesomelist.com/altamiracorp/awesome-xai/feed.xml</id>
  <updated>2021-05-04T13:47:03.000Z</updated>
  <link rel="self" type="application/atom+xml" href="https://www.trackawesomelist.com/altamiracorp/awesome-xai/feed.xml"/>
  <link rel="alternate" type="application/json" href="https://www.trackawesomelist.com/altamiracorp/awesome-xai/feed.json"/>
  <link rel="alternate" type="text/html" href="https://www.trackawesomelist.com/altamiracorp/awesome-xai/"/>
  <generator uri="https://github.com/bcomnes/jsonfeed-to-atom#readme" version="1.2.2">jsonfeed-to-atom</generator>
  <icon>https://www.trackawesomelist.com/favicon.ico</icon>
  <logo>https://www.trackawesomelist.com/icon.png</logo>
  <subtitle>Awesome Explainable AI (XAI) and Interpretable ML Papers and Resources</subtitle>
  <entry>
    <id>https://www.trackawesomelist.com/2021/05/04/</id>
    <title>Awesome XAI Updates on May 04, 2021</title>
    <updated>2021-05-04T13:47:03.000Z</updated>
    <published>2021-05-04T11:37:06.000Z</published>
    <content type="html"><![CDATA[<h3>Repositories / Critiques</h3>
<ul>
<li><a href="https://github.com/MAIF/shapash" rel="noopener noreferrer">MAIF/shapash (⭐2k)</a> - SHAP and LIME-based front-end explainer.</li>
</ul>

<ul>
<li><a href="https://github.com/slundberg/shap" rel="noopener noreferrer">slundberg/shap (⭐18k)</a> - A Python module for using Shapley Additive Explanations.</li>
</ul>
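Both repositories above are built on Shapley values from cooperative game theory: a feature's attribution is its average marginal contribution over all coalitions. As a minimal stdlib-only sketch (not either library's API), exact Shapley values for a hypothetical two-feature payoff function can be computed directly from the definition:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, players):
    # phi_i = sum over coalitions S not containing i of
    # |S|! * (n - |S| - 1)! / n! * (f(S ∪ {i}) - f(S))
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += weight * (f(set(S) | {i}) - f(set(S)))
        phi[i] = total
    return phi

# Hypothetical payoffs for each coalition of two features "a" and "b".
payoff = {frozenset(): 0, frozenset({"a"}): 10,
          frozenset({"b"}): 20, frozenset({"a", "b"}): 50}
f = lambda S: payoff[frozenset(S)]
phi = shapley_values(f, ["a", "b"])
# Efficiency property: phi["a"] + phi["b"] == f({a,b}) - f({}) == 50
```

The libraries approximate this sum efficiently (e.g. TreeSHAP exploits tree structure), since the exact computation is exponential in the number of features.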
]]></content>
    <link rel="alternate" href="https://www.trackawesomelist.com/2021/05/04/"/>
    <summary>2 awesome projects updated on May 04, 2021</summary>
  </entry>
  <entry>
    <id>https://www.trackawesomelist.com/2021/04/05/</id>
    <title>Awesome XAI Updates on Apr 05, 2021</title>
    <updated>2021-04-05T14:17:46.000Z</updated>
    <published>2021-04-05T14:17:46.000Z</published>
    <content type="html"><![CDATA[<h3>Follow / Critiques</h3>
<ul>
<li><a href="https://www.microsoft.com/en-us/research/people/rcaruana/" rel="noopener noreferrer">Rich Caruana</a> - The man behind Explainable Boosting Machines.</li>
</ul>
]]></content>
    <link rel="alternate" href="https://www.trackawesomelist.com/2021/04/05/"/>
    <summary>1 awesome project updated on Apr 05, 2021</summary>
  </entry>
  <entry>
    <id>https://www.trackawesomelist.com/2021/03/23/</id>
    <title>Awesome XAI Updates on Mar 23, 2021</title>
    <updated>2021-03-23T15:33:08.000Z</updated>
    <published>2021-03-23T15:33:08.000Z</published>
    <content type="html"><![CDATA[<h3>Papers / Interpretable Models</h3>
<ul>
<li><a href="https://christophm.github.io/interpretable-ml-book/rules.html" rel="noopener noreferrer">Decision List</a> - Like a decision tree with no branches.</li>
</ul>
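A decision list is just an ordered sequence of if/elif rules where the first matching rule fires, so the explanation for any prediction is the single rule that matched. A minimal sketch with made-up rules:

```python
# Ordered rules: each is (condition, label); the first match wins.
rules = [
    (lambda x: x["temp"] > 30, "hot"),
    (lambda x: x["temp"] > 15, "mild"),
]
default = "cold"

def predict(x):
    for cond, label in rules:
        if cond(x):
            return label
    return default

predict({"temp": 35})  # "hot" — the temp > 30 rule fires first
```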

<ul>
<li><a href="https://en.wikipedia.org/wiki/Decision_tree" rel="noopener noreferrer">Decision Trees</a> - The tree provides an interpretation.</li>
</ul>

<ul>
<li><a href="https://www.youtube.com/watch?v=MREiHgHgl0k" rel="noopener noreferrer">Explainable Boosting Machine</a> - A gradient-boosted generalized additive model; each feature's learned shape function is directly plottable.</li>
</ul>

<ul>
<li><a href="https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm" rel="noopener noreferrer">k-Nearest Neighbors</a> - The prototypical example-based method; the retrieved neighbors themselves serve as the explanation.</li>
</ul>
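The k-NN prediction rule fits in a few lines, which is much of its interpretability appeal. A from-scratch sketch on toy data (stdlib only):

```python
from collections import Counter

def knn_predict(X, y, query, k=3):
    # Sort training points by squared Euclidean distance to the query,
    # then majority-vote over the labels of the k nearest.
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(xi, query)), yi)
        for xi, yi in zip(X, y)
    )
    top = [label for _, label in dists[:k]]
    return Counter(top).most_common(1)[0][0]

X = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
y = ["A", "A", "A", "B", "B", "B"]
knn_predict(X, y, (0.5, 0.5))  # "A" — its three nearest neighbors are all A
```

To explain a prediction, one can simply show the user `dists[:k]`: the concrete training examples that produced the vote.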

<ul>
<li><a href="https://en.wikipedia.org/wiki/Linear_regression" rel="noopener noreferrer">Linear Regression</a> - Easily plottable and understandable regression.</li>
</ul>

<ul>
<li><a href="https://en.wikipedia.org/wiki/Logistic_regression" rel="noopener noreferrer">Logistic Regression</a> - Easily plottable and understandable classification.</li>
</ul>

<ul>
<li><a href="https://en.wikipedia.org/wiki/Naive_Bayes_classifier" rel="noopener noreferrer">Naive Bayes</a> - Good classification, poor estimation using conditional probabilities.</li>
</ul>

<ul>
<li><a href="https://christophm.github.io/interpretable-ml-book/rulefit.html" rel="noopener noreferrer">RuleFit</a> - Sparse linear model as decision rules including feature interactions.</li>
</ul>
<h3>Videos / Critiques</h3>
<ul>
<li><a href="https://www.youtube.com/watch?v=93Xv8vJ2acI" rel="noopener noreferrer">Debate: Interpretability is necessary for ML</a> - A debate on whether interpretability is necessary for machine learning, with Rich Caruana and Patrice Simard arguing for and Kilian Weinberger and Yann LeCun arguing against.</li>
</ul>
]]></content>
    <link rel="alternate" href="https://www.trackawesomelist.com/2021/03/23/"/>
    <summary>9 awesome projects updated on Mar 23, 2021</summary>
  </entry>
  <entry>
    <id>https://www.trackawesomelist.com/2021/03/17/</id>
    <title>Awesome XAI Updates on Mar 17, 2021</title>
    <updated>2021-03-17T17:37:11.000Z</updated>
    <published>2021-03-17T14:38:10.000Z</published>
    <content type="html"><![CDATA[<h3>Papers / Landmarks</h3>
<ul>
<li><a href="https://arxiv.org/abs/1706.07269" rel="noopener noreferrer">Explanation in Artificial Intelligence: Insights from the Social Sciences</a> - This paper provides an introduction to the social science research into explanations. The author reports four major findings: (1) explanations are contrastive, (2) explanations are selected, (3) probabilities probably don't matter, (4) explanations are social. These fit into the general theme that explanations are <em>contextual</em>.</li>
</ul>

<ul>
<li><a href="https://arxiv.org/abs/1810.03292" rel="noopener noreferrer">Sanity Checks for Saliency Maps</a> - An important read for anyone using saliency maps. This paper proposes two experiments to determine whether saliency maps are useful: (1) model parameter randomization test compares maps from trained and untrained models, (2) data randomization test compares maps from models trained on the original dataset and models trained on the same dataset with randomized labels. They find that "some widely deployed saliency methods are independent of both the data the model was trained on, and the model parameters".</li>
</ul>
<h3>Papers / Surveys</h3>
<ul>
<li><a href="https://arxiv.org/abs/2004.14545" rel="noopener noreferrer">Explainable Deep Learning: A Field Guide for the Uninitiated</a> - An in-depth description of XAI focused on techniques for deep learning.</li>
</ul>
<h3>Papers / Evaluations</h3>
<ul>
<li><a href="https://arxiv.org/abs/2009.02899" rel="noopener noreferrer">Quantifying Explainability of Saliency Methods in Deep Neural Networks</a> - An analysis of how different heatmap-based saliency methods perform based on experimentation with a generated dataset.</li>
</ul>
<h3>Papers / XAI Methods</h3>
<ul>
<li><a href="https://arxiv.org/abs/2102.07799" rel="noopener noreferrer">Ada-SISE</a> - Adaptive semantic input sampling for explanation.</li>
</ul>

<ul>
<li><a href="https://rss.onlinelibrary.wiley.com/doi/abs/10.1111/rssb.12377" rel="noopener noreferrer">ALE</a> - Accumulated local effects plot.</li>
</ul>

<ul>
<li><a href="https://link.springer.com/chapter/10.1007/978-3-030-33607-3_49" rel="noopener noreferrer">ALIME</a> - Autoencoder Based Approach for Local Interpretability.</li>
</ul>

<ul>
<li><a href="https://ojs.aaai.org/index.php/AAAI/article/view/11491" rel="noopener noreferrer">Anchors</a> - High-Precision Model-Agnostic Explanations.</li>
</ul>

<ul>
<li><a href="https://link.springer.com/article/10.1007/s10115-017-1116-3" rel="noopener noreferrer">Auditing</a> - Auditing black-box models.</li>
</ul>

<ul>
<li><a href="https://arxiv.org/abs/2012.03058" rel="noopener noreferrer">BayLIME</a> - Bayesian local interpretable model-agnostic explanations.</li>
</ul>

<ul>
<li><a href="http://ema.drwhy.ai/breakDown.html#BDMethod" rel="noopener noreferrer">Break Down</a> - Break down plots for additive attributions.</li>
</ul>

<ul>
<li><a href="https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Zhou_Learning_Deep_Features_CVPR_2016_paper.pdf" rel="noopener noreferrer">CAM</a> - Class activation mapping.</li>
</ul>

<ul>
<li><a href="https://ieeexplore.ieee.org/abstract/document/4167900" rel="noopener noreferrer">CDT</a> - Confident interpretation of Bayesian decision tree ensembles.</li>
</ul>

<ul>
<li><a href="https://christophm.github.io/interpretable-ml-book/ice.html" rel="noopener noreferrer">CICE</a> - Centered ICE plot.</li>
</ul>

<ul>
<li><a href="https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.40.2710&amp;rep=rep1&amp;type=pdf" rel="noopener noreferrer">CMM</a> - Combined multiple models metalearner.</li>
</ul>

<ul>
<li><a href="https://www.sciencedirect.com/science/article/pii/B9781558603356500131" rel="noopener noreferrer">Conj Rules</a> - Using sampling and queries to extract rules from trained neural networks.</li>
</ul>

<ul>
<li><a href="https://ieeexplore.ieee.org/abstract/document/6597214" rel="noopener noreferrer">CP</a> - Contribution propagation.</li>
</ul>

<ul>
<li><a href="https://dl.acm.org/doi/abs/10.1145/775047.775113" rel="noopener noreferrer">DecText</a> - Extracting decision trees from trained neural networks.</li>
</ul>

<ul>
<li><a href="https://arxiv.org/abs/1704.02685" rel="noopener noreferrer">DeepLIFT</a> - Learning important features through propagating activation differences.</li>
</ul>

<ul>
<li><a href="https://www.sciencedirect.com/science/article/pii/S0031320316303582" rel="noopener noreferrer">DTD</a> - Deep Taylor decomposition.</li>
</ul>

<ul>
<li><a href="https://www.aaai.org/Papers/IAAI/2006/IAAI06-018.pdf" rel="noopener noreferrer">ExplainD</a> - Explanations of evidence in additive classifiers.</li>
</ul>

<ul>
<li><a href="https://link.springer.com/chapter/10.1007/978-3-642-04174-7_45" rel="noopener noreferrer">FIRM</a> - Feature importance ranking measure.</li>
</ul>

<ul>
<li><a href="https://openaccess.thecvf.com/content_iccv_2017/html/Fong_Interpretable_Explanations_of_ICCV_2017_paper.html" rel="noopener noreferrer">Fong et al.</a> - Meaningful perturbations model.</li>
</ul>

<ul>
<li><a href="https://www.academia.edu/download/51462700/s0362-546x_2896_2900267-220170122-9600-1njrpyx.pdf" rel="noopener noreferrer">G-REX</a> - Rule extraction using genetic algorithms.</li>
</ul>

<ul>
<li><a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3977175/" rel="noopener noreferrer">Gibbons et al.</a> - Explaining a random forest using a decision tree.</li>
</ul>

<ul>
<li><a href="https://link.springer.com/article/10.1007/s10618-014-0368-8" rel="noopener noreferrer">GoldenEye</a> - Exploring classifiers by randomization.</li>
</ul>

<ul>
<li><a href="https://arxiv.org/abs/0912.1128" rel="noopener noreferrer">GPD</a> - Gaussian process decisions.</li>
</ul>

<ul>
<li><a href="https://ieeexplore.ieee.org/abstract/document/4938655" rel="noopener noreferrer">GPDT</a> - Genetic program to evolve decision trees.</li>
</ul>

<ul>
<li><a href="https://openaccess.thecvf.com/content_iccv_2017/html/Selvaraju_Grad-CAM_Visual_Explanations_ICCV_2017_paper.html" rel="noopener noreferrer">GradCAM</a> - Gradient-weighted Class Activation Mapping.</li>
</ul>

<ul>
<li><a href="https://ieeexplore.ieee.org/abstract/document/8354201/" rel="noopener noreferrer">GradCAM++</a> - Generalized gradient-based visual explanations.</li>
</ul>

<ul>
<li><a href="https://arxiv.org/abs/1606.05390" rel="noopener noreferrer">Hara et al.</a> - Making tree ensembles interpretable.</li>
</ul>

<ul>
<li><a href="https://www.tandfonline.com/doi/abs/10.1080/10618600.2014.907095" rel="noopener noreferrer">ICE</a> - Individual conditional expectation plots.</li>
</ul>

<ul>
<li><a href="http://proceedings.mlr.press/v70/sundararajan17a/sundararajan17a.pdf" rel="noopener noreferrer">IG</a> - Integrated gradients.</li>
</ul>
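Integrated gradients attribute a prediction by accumulating gradients along a straight path from a baseline to the input. A stdlib-only sketch for a hypothetical model with a known analytic gradient (the real method uses a network's autograd in place of `grad_f`):

```python
def integrated_gradients(f, grad_f, x, baseline, steps=100):
    # Riemann approximation of
    # IG_i = (x_i - b_i) * ∫_0^1 ∂f/∂x_i(b + α(x - b)) dα
    n = len(x)
    accum = [0.0] * n
    for k in range(1, steps + 1):
        alpha = k / steps
        point = [b + alpha * (xi - b) for xi, b in zip(x, baseline)]
        g = grad_f(point)
        for i in range(n):
            accum[i] += g[i] / steps
    return [(xi - b) * s for xi, b, s in zip(x, baseline, accum)]

f = lambda p: p[0] ** 2 + 3 * p[1]   # hypothetical model
grad = lambda p: [2 * p[0], 3.0]     # its analytic gradient
ig = integrated_gradients(f, grad, x=[2.0, 1.0], baseline=[0.0, 0.0])
# Completeness: attributions sum to ≈ f(x) - f(baseline) = 7
```

The completeness property (attributions summing to the prediction delta) is the method's main axiom, and the comment above holds up to Riemann-sum error.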

<ul>
<li><a href="https://link.springer.com/article/10.1007/s41060-018-0144-8" rel="noopener noreferrer">inTrees</a> - Interpreting tree ensembles with inTrees.</li>
</ul>

<ul>
<li><a href="https://arxiv.org/abs/1611.04967" rel="noopener noreferrer">IOFP</a> - Iterative orthogonal feature projection.</li>
</ul>

<ul>
<li><a href="https://arxiv.org/abs/1703.00810" rel="noopener noreferrer">IP</a> - Information plane visualization.</li>
</ul>

<ul>
<li><a href="https://arxiv.org/abs/1810.02678" rel="noopener noreferrer">KL-LIME</a> - Kullback-Leibler Projections based LIME.</li>
</ul>

<ul>
<li><a href="https://www.sciencedirect.com/science/article/abs/pii/S0031320398001812" rel="noopener noreferrer">Krishnan et al.</a> - Extracting decision trees from trained neural networks.</li>
</ul>

<ul>
<li><a href="https://arxiv.org/abs/1606.04155" rel="noopener noreferrer">Lei et al.</a> - Rationalizing neural predictions with generator and encoder.</li>
</ul>

<ul>
<li><a href="https://dl.acm.org/doi/abs/10.1145/2939672.2939778" rel="noopener noreferrer">LIME</a> - Local Interpretable Model-Agnostic Explanations.</li>
</ul>
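The idea behind LIME can be sketched in a few lines: sample perturbations around the instance, weight them by proximity, and fit a weighted linear surrogate whose coefficients explain the model locally. A single-feature, stdlib-only illustration (the real library handles tabular/text/image data and regularized multi-feature fits):

```python
import math, random

def lime_1d(f, x0, width=0.5, n=200, seed=0):
    """LIME-style sketch for a single-feature black-box model f."""
    rng = random.Random(seed)
    xs = [x0 + rng.gauss(0, width) for _ in range(n)]        # perturbations
    ys = [f(x) for x in xs]                                  # black-box queries
    ws = [math.exp(-((x - x0) ** 2) / (2 * width ** 2)) for x in xs]  # proximity
    # Closed-form weighted least squares for y ≈ a + b*x.
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    b = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys)) / \
        sum(w * (x - mx) ** 2 for w, x in zip(ws, xs))
    return b  # local slope: the feature's local effect

slope = lime_1d(lambda x: x * x, x0=2.0)  # ≈ 4, the derivative of x² at 2
```

The surrogate is only trusted near `x0`; the kernel `width` controls how "local" the explanation is, which is one of LIME's main tuning knobs.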

<ul>
<li><a href="https://amstat.tandfonline.com/doi/abs/10.1080/01621459.2017.1307116#.YEkdZ7CSmUk" rel="noopener noreferrer">LOCO</a> - Leave-one covariate out.</li>
</ul>

<ul>
<li><a href="https://arxiv.org/abs/1805.10820" rel="noopener noreferrer">LORE</a> - Local rule-based explanations.</li>
</ul>

<ul>
<li><a href="https://dl.acm.org/doi/abs/10.1145/2487575.2487579" rel="noopener noreferrer">Lou et al.</a> - Accurate intelligible models with pairwise interactions.</li>
</ul>

<ul>
<li><a href="https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0130140" rel="noopener noreferrer">LRP</a> - Layer-wise relevance propagation.</li>
</ul>

<ul>
<li><a href="https://www.jmlr.org/papers/volume20/18-760/18-760.pdf" rel="noopener noreferrer">MCR</a> - Model class reliance.</li>
</ul>

<ul>
<li><a href="https://ieeexplore.ieee.org/abstract/document/7738872" rel="noopener noreferrer">MES</a> - Model explanation system.</li>
</ul>

<ul>
<li><a href="https://arxiv.org/abs/1611.07567" rel="noopener noreferrer">MFI</a> - Feature importance measure for non-linear algorithms.</li>
</ul>

<ul>
<li><a href="https://www.sciencedirect.com/science/article/abs/pii/S0304380002000649" rel="noopener noreferrer">NID</a> - Neural interpretation diagram.</li>
</ul>

<ul>
<li><a href="https://arxiv.org/abs/2006.05714" rel="noopener noreferrer">OptiLIME</a> - Optimized LIME.</li>
</ul>

<ul>
<li><a href="https://dl.acm.org/doi/abs/10.1145/3077257.3077271" rel="noopener noreferrer">PALM</a> - Partition aware local model.</li>
</ul>

<ul>
<li><a href="https://arxiv.org/abs/1702.04595" rel="noopener noreferrer">PDA</a> - Prediction Difference Analysis: Visualize deep neural network decisions.</li>
</ul>

<ul>
<li><a href="https://projecteuclid.org/download/pdf_1/euclid.aos/1013203451" rel="noopener noreferrer">PDP</a> - Partial dependence plots.</li>
</ul>
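A partial dependence plot marginalizes the model over the data while sweeping one feature across a grid. A stdlib-only sketch with a hypothetical linear model, where the slope of the resulting curve recovers that feature's coefficient:

```python
def partial_dependence(f, X, feature, grid):
    # For each grid value v, set the chosen feature to v in every row
    # and average the model's predictions over the dataset.
    pd = []
    for v in grid:
        preds = [f({**row, feature: v}) for row in X]
        pd.append(sum(preds) / len(preds))
    return pd

# Hypothetical model and data: prediction = 2*x1 + x2.
model = lambda r: 2 * r["x1"] + r["x2"]
X = [{"x1": 0, "x2": 1}, {"x1": 3, "x2": 5}, {"x1": 7, "x2": 2}]
pd = partial_dependence(model, X, "x1", [0, 1, 2])
# The curve rises by 2 per grid step: x1's coefficient in this linear model.
```

Note the method's known blind spot (raised in the critiques below): replacing a feature value in every row ignores feature dependence, so PDPs can average over unrealistic data points.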

<ul>
<li><a href="https://academic.oup.com/bioinformatics/article/24/13/i6/233341" rel="noopener noreferrer">POIMs</a> - Positional oligomer importance matrices for understanding SVM signal detectors.</li>
</ul>

<ul>
<li><a href="https://arxiv.org/abs/1807.07506" rel="noopener noreferrer">ProfWeight</a> - Transfer information from deep network to simpler model.</li>
</ul>

<ul>
<li><a href="https://dl.acm.org/doi/abs/10.1145/2858036.2858529" rel="noopener noreferrer">Prospector</a> - Interactive partial dependence diagnostics.</li>
</ul>

<ul>
<li><a href="https://ieeexplore.ieee.org/abstract/document/7546525" rel="noopener noreferrer">QII</a> - Quantitative input influence.</li>
</ul>

<ul>
<li><a href="https://content.iospress.com/articles/ai-communications/aic272" rel="noopener noreferrer">REFNE</a> - Extracting symbolic rules from trained neural network ensembles.</li>
</ul>

<ul>
<li><a href="https://arxiv.org/abs/1608.05745" rel="noopener noreferrer">RETAIN</a> - Reverse time attention model.</li>
</ul>

<ul>
<li><a href="https://arxiv.org/abs/1806.07421" rel="noopener noreferrer">RISE</a> - Randomized input sampling for explanation.</li>
</ul>

<ul>
<li><a href="https://link.springer.com/article/10.1007%2Fs11063-011-9207-8" rel="noopener noreferrer">RxREN</a> - Reverse engineering neural networks for rule extraction.</li>
</ul>

<ul>
<li><a href="https://arxiv.org/abs/1705.07874" rel="noopener noreferrer">SHAP</a> - A unified approach to interpreting model predictions.</li>
</ul>

<ul>
<li><a href="https://arxiv.org/abs/2101.10710" rel="noopener noreferrer">SIDU</a> - Similarity, difference, and uniqueness input perturbation.</li>
</ul>

<ul>
<li><a href="https://arxiv.org/abs/1312.6034" rel="noopener noreferrer">Simonyan et al.</a> - Visualizing CNN classes.</li>
</ul>

<ul>
<li><a href="https://arxiv.org/abs/1611.07579" rel="noopener noreferrer">Singh et al.</a> - Programs as black-box explanations.</li>
</ul>

<ul>
<li><a href="https://arxiv.org/abs/1610.09036" rel="noopener noreferrer">STA</a> - Interpreting models via Single Tree Approximation.</li>
</ul>

<ul>
<li><a href="https://www.jmlr.org/papers/volume11/strumbelj10a/strumbelj10a.pdf" rel="noopener noreferrer">Strumbelj et al.</a> - Explanation of individual classifications using game theory.</li>
</ul>

<ul>
<li><a href="https://www.academia.edu/download/2471122/3uecwtv9xcwxg6r.pdf" rel="noopener noreferrer">SVM+P</a> - Rule extraction from support vector machines.</li>
</ul>

<ul>
<li><a href="https://openreview.net/forum?id=S1viikbCW" rel="noopener noreferrer">TCAV</a> - Testing with concept activation vectors.</li>
</ul>

<ul>
<li><a href="https://dl.acm.org/doi/abs/10.1145/3097983.3098039" rel="noopener noreferrer">Tolomei et al.</a> - Interpretable predictions of tree-ensembles via actionable feature tweaking.</li>
</ul>

<ul>
<li><a href="https://www.researchgate.net/profile/Edward-George-2/publication/2610587_Making_Sense_of_a_Forest_of_Trees/links/55b1085d08aec0e5f430eb40/Making-Sense-of-a-Forest-of-Trees.pdf" rel="noopener noreferrer">Tree Metrics</a> - Making sense of a forest of trees.</li>
</ul>

<ul>
<li><a href="https://arxiv.org/abs/1706.06060" rel="noopener noreferrer">TreeSHAP</a> - Consistent feature attribution for tree ensembles.</li>
</ul>

<ul>
<li><a href="https://arxiv.org/abs/1611.07429" rel="noopener noreferrer">TreeView</a> - Feature-space partitioning.</li>
</ul>

<ul>
<li><a href="http://www.inf.ufrgs.br/~engel/data/media/file/cmp121/TREPAN_craven.nips96.pdf" rel="noopener noreferrer">TREPAN</a> - Extracting tree-structured representations of trained networks.</li>
</ul>

<ul>
<li><a href="https://dl.acm.org/doi/abs/10.1145/3412815.3416893" rel="noopener noreferrer">TSP</a> - Tree space prototypes.</li>
</ul>

<ul>
<li><a href="http://www.columbia.edu/~aec2163/NonFlash/Papers/VisualBackProp.pdf" rel="noopener noreferrer">VBP</a> - Visual back-propagation.</li>
</ul>

<ul>
<li><a href="https://ieeexplore.ieee.org/abstract/document/5949423" rel="noopener noreferrer">VEC</a> - Variable effect characteristic curve.</li>
</ul>

<ul>
<li><a href="https://dl.acm.org/doi/abs/10.1145/1014052.1014122" rel="noopener noreferrer">VIN</a> - Variable interaction network.</li>
</ul>

<ul>
<li><a href="https://arxiv.org/abs/1508.07551" rel="noopener noreferrer">X-TREPAN</a> - Adapted extraction of comprehensible decision trees in ANNs.</li>
</ul>

<ul>
<li><a href="http://proceedings.mlr.press/v37/xuc15" rel="noopener noreferrer">Xu et al.</a> - Show, attend and tell attention model.</li>
</ul>
<h3>Papers / Critiques</h3>
<ul>
<li><a href="https://arxiv.org/abs/1902.10186" rel="noopener noreferrer">Attention is not Explanation</a> - Authors perform a series of NLP experiments which argue that attention does not provide meaningful explanations. They also demonstrate that different attention distributions can generate similar model outputs.</li>
</ul>

<ul>
<li><a href="https://arxiv.org/abs/1908.04626" rel="noopener noreferrer">Attention is not <em>not</em> Explanation</a> - A rebuttal to the above paper. The authors argue that multiple explanations can be valid and that attention can produce <em>a</em> valid explanation, if not <em>the</em> valid explanation.</li>
</ul>

<ul>
<li><a href="https://arxiv.org/abs/1903.11420" rel="noopener noreferrer">Do Not Trust Additive Explanations</a> - Authors argue that additive explanations (e.g. LIME, SHAP, Break Down) fail to take feature interactions into account and are thus unreliable.</li>
</ul>

<ul>
<li><a href="https://arxiv.org/abs/1905.03151" rel="noopener noreferrer">Please Stop Permuting Features: An Explanation and Alternatives</a> - Authors demonstrate why permuting features is misleading, especially where there is strong feature dependence. They offer several previously described alternatives.</li>
</ul>

<ul>
<li><a href="https://www.nature.com/articles/s42256-019-0048-x" rel="noopener noreferrer">Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead</a> - The author presents a number of issues with explainable ML and challenges for interpretable ML: (1) constructing optimal logical models, (2) constructing optimal sparse scoring systems, (3) defining interpretability and creating methods for specific domains. The paper also argues that accurate interpretable models may well exist in many domains.</li>
</ul>

<ul>
<li><a href="https://link.springer.com/chapter/10.1007/978-3-030-28954-6_14" rel="noopener noreferrer">The (Un)reliability of Saliency Methods</a> - Authors demonstrate how saliency methods vary their attributions when a constant shift is added to the input data. They argue that methods should satisfy <em>input invariance</em>: a saliency method should mirror the sensitivity of the model with respect to transformations of the input.</li>
</ul>
<h3>Follow / Critiques</h3>
<ul>
<li><a href="https://ethical.institute/index.html" rel="noopener noreferrer">The Institute for Ethical AI &amp; Machine Learning</a> - A UK-based research center that performs research into ethical AI/ML, which frequently involves XAI.</li>
</ul>

<ul>
<li><a href="https://twitter.com/tmiller_unimelb" rel="noopener noreferrer">Tim Miller</a> - One of the preeminent researchers in XAI.</li>
</ul>
]]></content>
    <link rel="alternate" href="https://www.trackawesomelist.com/2021/03/17/"/>
    <summary>87 awesome projects updated on Mar 17, 2021</summary>
  </entry>
  <entry>
    <id>https://www.trackawesomelist.com/2021/03/02/</id>
    <title>Awesome XAI Updates on Mar 02, 2021</title>
    <updated>2021-03-02T16:01:13.000Z</updated>
    <published>2021-03-02T15:56:11.000Z</published>
    <content type="html"><![CDATA[<h3>Repositories / Critiques</h3>
<ul>
<li><a href="https://github.com/EthicalML/xai" rel="noopener noreferrer">EthicalML/xai (⭐859)</a> - A toolkit for XAI which is focused exclusively on tabular data. It implements a variety of data and model evaluation techniques.</li>
</ul>

<ul>
<li><a href="https://github.com/PAIR-code/what-if-tool" rel="noopener noreferrer">PAIR-code/what-if-tool (⭐753)</a> - A tool for Tensorboard or Notebooks which allows investigating model performance and fairness.</li>
</ul>
]]></content>
    <link rel="alternate" href="https://www.trackawesomelist.com/2021/03/02/"/>
    <summary>2 awesome projects updated on Mar 02, 2021</summary>
  </entry>
</feed>