erwanlemerrer/awesome-audit-algorithms

A curated list of algorithms and papers for auditing black-box algorithms.

Papers

2022

  • Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis - (NeurIPS) Sobol indices provide an efficient way to capture higher-order interactions between image regions and their contributions to a (black-box) neural network's prediction through the lens of variance; see the sketch below this year's list.
  • Your Echos are Heard: Tracking, Profiling, and Ad Targeting in the Amazon Smart Speaker Ecosystem - (arxiv) Infers a link between the Amazon Echo system and the ad targeting algorithm.
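
A minimal sketch of the variance-based idea in the Sobol entry above: estimate first-order Sobol indices of input regions against a black-box score, using a Saltelli-style pick-freeze design. The toy black_box function and the per-region mask parametrization are illustrative assumptions, not the paper's exact estimator.

```python
"""First-order Sobol index estimation against a black-box score
(sketch under an assumed toy model and mask parametrization)."""
import numpy as np

rng = np.random.default_rng(0)
d, N = 8, 10_000   # number of input regions (hypothetical), Monte Carlo budget

def black_box(masks: np.ndarray) -> np.ndarray:
    """Stand-in for a remote model: one score per masked input."""
    return masks[:, 0] + 2.0 * masks[:, 1] * masks[:, 2] + 0.1 * masks.sum(axis=1)

# Pick-freeze design: two independent sample matrices (Saltelli et al.).
A, B = rng.random((N, d)), rng.random((N, d))
fA, fB = black_box(A), black_box(B)
var_y = np.var(np.concatenate([fA, fB]))

for i in range(d):
    AB_i = A.copy()
    AB_i[:, i] = B[:, i]   # vary only region i between the two designs
    # S_i: share of output variance explained by region i alone.
    S_i = np.mean(fB * (black_box(AB_i) - fA)) / var_y
    print(f"region {i}: S1 ~= {S_i:.3f}")
```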

2021

  • Auditing Black-Box Prediction Models for Data Minimization Compliance - (NeurIPS) Measures the level of data minimization satisfied by the prediction model using a limited number of queries.
  • Bayesian Algorithm Execution: Estimating Computable Properties of Black-box Functions Using Mutual Information - (ICML) A budget-constrained Bayesian optimization procedure to extract properties of a black-box algorithm.
  • Setting the Record Straighter on Shadow Banning - (INFOCOM) (Code) Considers the possibility of shadow banning on Twitter (i.e., by its black-box moderation algorithm), and measures the probability of several hypotheses.
  • FairLens: Auditing black-box clinical decision support systems - (Information Processing & Management) Presents a pipeline to detect and explain potential fairness issues in clinical decision support systems, by comparing different multi-label classification disparity measures.
  • Auditing Algorithmic Bias on Twitter - (WebSci).

2020

  • Auditing radicalization pathways on YouTube - (FAT*) Studies the reachability of radical channels from one another, using random walks on static channel recommendations; see the sketch below this year's list.
  • Adversarial Model Extraction on Graph Neural Networks - (AAAI Workshop on Deep Learning on Graphs: Methodologies and Applications) Introduces GNN model extraction and presents a preliminary approach for this.
  • Remote Explainability faces the bouncer problem - (Nature Machine Intelligence, vol. 2, pp. 529–539) (Code) Shows the impossibility (with one request), or the difficulty, of spotting lies in the explanations of a remote AI decision.
  • GeoDA: a geometric framework for black-box adversarial attacks - (CVPR) (Code) Crafts adversarial examples to fool models in a pure black-box setup (no gradients, inferred class only).
  • The Imitation Game: Algorithm Selection by Exploiting Black-Box Recommender - (Netys) (Code) Parametrizes a local recommendation algorithm by imitating the decisions of a remote, better-trained one.
  • Auditing News Curation Systems: A Case Study Examining Algorithmic and Editorial Logic in Apple News - (ICWSM) Audit study of Apple News as a sociotechnical news curation system (trending stories section).
  • Auditing Algorithms: On Lessons Learned and the Risks of Data Minimization - (AIES) A practical audit of a well-being recommendation app developed by Telefónica (mostly on bias).
  • Extracting Training Data from Large Language Models - (arxiv) Performs a training data extraction attack to recover individual training examples by querying the language model.
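
The radicalization-pathways entry that opens this year's list measures reachability between channel communities via random walks on a static recommendation graph. A toy sketch, where the graph, the communities, and the walk length are illustrative assumptions:

```python
"""Random-walk reachability audit over a static recommendation graph
(toy sketch; graph, communities, and walk parameters are assumed)."""
import random

# channel -> recommended channels (hypothetical static recommendations)
recs = {
    "a1": ["a2", "b1"], "a2": ["a1", "b2"],
    "b1": ["b2", "c1"], "b2": ["b1", "c1"],
    "c1": ["c2"],       "c2": ["c1"],
}
target_community = {"c1", "c2"}   # community whose reachability we audit

def walk_hits_target(start: str, max_steps: int = 10) -> bool:
    """Simulate one walk over the recommendations from `start`."""
    node = start
    for _ in range(max_steps):
        node = random.choice(recs[node])
        if node in target_community:
            return True
    return False

random.seed(0)
n_walks = 5_000
for start in ("a1", "b1"):
    hits = sum(walk_hits_target(start) for _ in range(n_walks))
    print(f"P(reach target in <=10 steps | start={start}) ~= {hits / n_walks:.2f}")
```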

2019

  • Adversarial Frontier Stitching for Remote Neural Network Watermarking - (Neural Computing and Applications) (Alternative implementation) Checks whether a remote machine learning model is a "leaked" one: through standard API requests to a remote model, extracts (or not) a zero-bit watermark that was inserted to mark valuable models (e.g., large deep neural networks); see the sketch below this year's list.
  • Knockoff Nets: Stealing Functionality of Black-Box Models - (CVPR) Asks to what extent an adversary can steal the functionality of such "victim" models based solely on black-box interactions: image in, predictions out.
  • Opening Up the Black Box: Auditing Google's Top Stories Algorithm - (FLAIRS-32) Audit of Google's Top Stories panel that provides insights into its algorithmic choices for selecting and ranking news publishers.
  • Making targeted black-box evasion attacks effective and efficient - (arXiv) Investigates how an adversary can optimally use its query budget for targeted evasion attacks against deep neural networks.
  • Online Learning for Measuring Incentive Compatibility in Ad Auctions - (WWW) Measures the incentive compatibility (IC) of black-box auction platforms via regret.
  • TamperNN: Efficient Tampering Detection of Deployed Neural Nets - (ISSRE) Algorithms to craft inputs that can detect tampering with a remotely executed classifier model.
  • Neural Network Model Extraction Attacks in Edge Devices by Hearing Architectural Hints - (arXiv) Demonstrates that, through bus snooping of memory access events, layer sequence identification by an LSTM-CTC model, layer topology connection according to the memory access pattern, and layer dimension estimation under data volume constraints, one can accurately recover a similar network architecture as the attack starting point.
  • Stealing Knowledge from Protected Deep Neural Networks Using Composite Unlabeled Data - (ICNN) Composite method which can be used to attack and extract the knowledge of a black-box model even if it completely conceals its softmax output.
  • Neural Network Inversion in Adversarial Setting via Background Knowledge Alignment - (CCS) Model inversion approach in the adversarial setting, based on training an inversion model that acts as an inverse of the original model. Without full knowledge of the original training data, an accurate inversion is still possible by training the inversion model on auxiliary samples drawn from a more generic data distribution.
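
As flagged in the frontier-stitching entry above, the remote watermark check replays a secret key set of inputs through the suspect model's API and thresholds the number of label mismatches. A minimal sketch; the key set, suspect model, and tolerance are assumptions:

```python
"""Zero-bit watermark check on a remote model (sketch; key set, API,
and mismatch threshold are illustrative assumptions)."""
from typing import Callable, List, Tuple

def watermark_present(
    remote_predict: Callable[[list], int],    # suspect model's query API
    key_set: List[Tuple[list, int]],          # secret (input, expected label) pairs
    max_mismatches: int = 2,                  # tolerance threshold
) -> bool:
    """Few mismatches on the key set => likely a leaked/watermarked model."""
    mismatches = sum(1 for x, y in key_set if remote_predict(x) != y)
    return mismatches <= max_mismatches

# Toy usage: a "remote" threshold model and a tiny key set near its frontier.
suspect = lambda x: int(sum(x) > 1.0)
keys = [([0.55, 0.50], 1), ([0.50, 0.45], 0), ([0.52, 0.49], 1)]
print("watermark detected:", watermark_present(suspect, keys))
```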

2018

  • Towards Reverse-Engineering Black-Box Neural Networks - (ICLR) (Code) Infers internal hyperparameters (e.g., number of layers, non-linear activation type) of a remote neural network model by analysing its response patterns to certain inputs.
  • Data driven exploratory attacks on black box classifiers in adversarial domains - (Neurocomputing) Reverse engineers remote classifier models (e.g., for evading a CAPTCHA test).
  • xGEMs: Generating Examplars to Explain Black-Box Models - (arXiv) Searches for bias in the black-box model by training an unsupervised implicit generative model, then summarizes the black-box model's behavior quantitatively by perturbing data samples along the data manifold.
  • Learning Networks from Random Walk-Based Node Similarities - (NIPS) Reversing graphs by observing some random walk commute times.
  • Identifying the Machine Learning Family from Black-Box Models - (CAEPIA) Determines which kind of machine learning model is behind the returned predictions.
  • Stealing Neural Networks via Timing Side Channels - (arXiv) Steals/approximates a model through timing attacks using queries; see the sketch below this year's list.
  • Copycat CNN: Stealing Knowledge by Persuading Confession with Random Non-Labeled Data - (IJCNN) (Code) Steals black-box models' (CNNs) knowledge by querying them with random natural images (from ImageNet and Microsoft COCO).
  • Auditing the Personalization and Composition of Politically-Related Search Engine Results Pages - (WWW) A Chrome extension to survey participants and collect the Search Engine Results Pages (SERPs) and autocomplete suggestions, for studying personalization and composition.
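
The timing side-channel entry above relies on per-query latency growing with model size. A self-contained sketch with local stand-in models (the actual attack times a remote API over the network):

```python
"""Timing side-channel sketch: deeper models take longer per query, so
repeated latency measurements hint at architecture depth. The stand-in
models and timing loop are illustrative assumptions."""
import time
import statistics

def make_model(depth: int):
    """Stand-in for a remote model whose inference cost grows with depth."""
    def predict(x: float) -> float:
        for _ in range(depth):
            x = (x * 1.000001) % 1.0   # dummy per-layer work
        return x
    return predict

def median_latency(predict, n_queries: int = 100) -> float:
    """Median over many queries to damp scheduling noise."""
    samples = []
    for _ in range(n_queries):
        t0 = time.perf_counter()
        predict(0.5)
        samples.append(time.perf_counter() - t0)
    return statistics.median(samples)

shallow, deep = make_model(5_000), make_model(50_000)
print(f"shallow median latency: {median_latency(shallow):.2e}s")
print(f"deep    median latency: {median_latency(deep):.2e}s")
```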

2017

  • Uncovering Influence Cookbooks: Reverse Engineering the Topological Impact in Peer Ranking Services - (CSCW) Aims at identifying which centrality metrics are in use in a peer ranking service.
  • The topological face of recommendation: models and application to bias detection - (Complex Networks) Proposes a bias detection framework for items recommended to users.
  • Membership Inference Attacks Against Machine Learning Models - (Symposium on Security and Privacy) Given a machine learning model and a record, determines whether this record was used as part of the model's training dataset or not; see the sketch below this year's list.
  • Practical Black-Box Attacks against Machine Learning - (Asia CCS) Examines how vulnerable a remote service is to adversarial classification attacks.
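
The membership-inference entry above trains shadow models to calibrate the attack; the sketch below uses the simpler confidence-threshold variant, so the overfit toy target and the threshold are assumptions rather than the paper's construction:

```python
"""Confidence-threshold membership inference (simplified sketch; the
paper itself uses shadow models)."""
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

# Overfit a "black-box" target on the member half only.
target = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_in, y_in)

def infer_member(x: np.ndarray, threshold: float = 0.9) -> bool:
    """Guess 'member' when the model is unusually confident on x."""
    return target.predict_proba(x.reshape(1, -1)).max() >= threshold

members = np.mean([infer_member(x) for x in X_in[:200]])
nonmembers = np.mean([infer_member(x) for x in X_out[:200]])
print(f"flagged as members: train={members:.2f}, held-out={nonmembers:.2f}")
```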

2016

  • Bias in Online Freelance Marketplaces: Evidence from TaskRabbit - (dat workshop) Measures rankings produced by TaskRabbit's search algorithm.
  • Stealing Machine Learning Models via Prediction APIs - (USENIX Security) (Code) Aims at extracting machine learning models in use by remote services; see the sketch below this year's list.
  • "Why Should I Trust You?" Explaining the Predictions of Any Classifier - (arXiv) (Code) Explains a black-box classifier model by sampling around data instances.
  • Back in Black: Towards Formal, Black Box Analysis of Sanitizers and Filters - (Security and Privacy) Black-box analysis of sanitizers and filters.
  • Algorithmic Transparency via Quantitative Input Influence: Theory and Experiments with Learning Systems - (Security and Privacy) Introduces measures that capture the degree of influence of inputs on outputs of the observed system.
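
A minimal sketch of the extraction setting from the "Stealing Machine Learning Models via Prediction APIs" entry above: label synthetic queries through the victim's prediction API, fit a local surrogate, and measure how often the two agree. The victim model, query distribution, and budget are illustrative:

```python
"""Model extraction via a prediction API (sketch; victim, query
distribution, and budget are illustrative assumptions)."""
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# The "remote" victim: a model we may only query, never inspect.
X_secret = rng.normal(size=(500, 5))
y_secret = (X_secret[:, 0] + X_secret[:, 1] ** 2 > 0.5).astype(int)
victim = DecisionTreeClassifier(random_state=0).fit(X_secret, y_secret)

# Attack: label a random query set via the API, fit a local surrogate.
X_query = rng.normal(size=(2_000, 5))
y_query = victim.predict(X_query)                  # black-box queries
surrogate = LogisticRegression(max_iter=1_000).fit(X_query, y_query)

# Fidelity: how often the surrogate agrees with the victim on fresh inputs.
X_test = rng.normal(size=(1_000, 5))
fidelity = np.mean(surrogate.predict(X_test) == victim.predict(X_test))
print(f"surrogate/victim agreement: {fidelity:.2f}")
```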

2015

  • Peeking Beneath the Hood of Uber - (IMC) Infers implementation details of Uber's surge pricing algorithm.
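
A sketch of the measurement approach behind the entry above: poll prices across a grid of vantage points over time and inspect how often the multiplier changes. The surge_api stand-in, including its hypothetical 5-minute update cycle, replaces the real endpoint the study measured:

```python
"""Surge-pricing measurement sketch (the simulated surge_api and its
update cadence are illustrative assumptions)."""
import math

def surge_api(lat_cell: int, lon_cell: int, t: int) -> float:
    """Stand-in for the black-box surge endpoint (deterministic toy)."""
    base = 1.0 + 0.5 * math.sin((lat_cell + lon_cell) / 3.0)
    step = 0.25 * ((t // 300) % 3)        # hypothetical 5-minute updates
    return round(base + step, 2)

# Sample a small grid every minute for 15 minutes, then count distinct
# values per cell to estimate how often the algorithm re-prices.
for cell in [(0, 0), (2, 1)]:
    series = [surge_api(*cell, t=60 * m) for m in range(15)]
    print(cell, series, "-> distinct values:", len(set(series)))
```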

2014

  • XRay: Enhancing the Web's Transparency with Differential Correlation - (USENIX Security) Audits which user profile data were used for targeting a particular ad, recommendation, or price.
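
A sketch of XRay's differential-correlation idea from the entry above: populate shadow accounts with overlapping subsets of profile inputs, record which outputs each account receives, and score each input by how strongly its presence predicts an output. The accounts and observations are illustrative:

```python
"""Differential correlation between profile inputs and targeted outputs
(sketch; accounts and observations are illustrative assumptions)."""

inputs = ["email_loans", "email_travel", "email_cooking"]
# account -> (subset of inputs, outputs observed) -- hypothetical data
accounts = {
    "acct1": ({"email_loans", "email_travel"}, {"ad_credit"}),
    "acct2": ({"email_loans", "email_cooking"}, {"ad_credit"}),
    "acct3": ({"email_travel", "email_cooking"}, set()),
}

def association_score(inp: str, out: str) -> float:
    """P(output seen | input present) - P(output seen | input absent)."""
    with_inp = [out in outs for ins, outs in accounts.values() if inp in ins]
    without = [out in outs for ins, outs in accounts.values() if inp not in ins]
    p_with = sum(with_inp) / max(len(with_inp), 1)
    p_without = sum(without) / max(len(without), 1)
    return p_with - p_without

for inp in inputs:
    print(f"{inp} -> ad_credit: {association_score(inp, 'ad_credit'):+.2f}")
```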

2013

  • Measuring Personalization of Web Search - (WWW) Develops a methodology for measuring personalization in Web search results.
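
A sketch of the comparison step in the methodology above: issue the same query at the same time from a profiled account and a fresh control, then quantify overlap and reordering between the two result lists. The SERPs below are illustrative stand-ins for scraped results:

```python
"""Comparing result lists between a profiled account and a control
(sketch; the result lists are illustrative stand-ins)."""

def jaccard(a: list, b: list) -> float:
    """Set overlap between the two result lists."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb)

def kendall_tau_distance(a: list, b: list) -> int:
    """Number of discordant pairs among URLs the two rankings share."""
    common = [u for u in a if u in b]
    pairs = [(x, y) for i, x in enumerate(common) for y in common[i + 1:]]
    return sum(1 for x, y in pairs if b.index(x) > b.index(y))

profiled = ["url1", "url3", "url2", "url5"]   # SERP for the treatment profile
control = ["url1", "url2", "url3", "url4"]    # SERP for the control account

print("overlap (Jaccard):", jaccard(profiled, control))
print("reordering (discordant pairs):", kendall_tau_distance(profiled, control))
```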

2012

  • Query Strategies for Evading Convex-Inducing Classifiers - (JMLR) Evasion methods for convex classifiers. Considers evasion complexity.

2008

  • Privacy Oracle: a System for Finding Application Leaks with Black Box Differential Testing - (CCS) A system that uncovers applications' leaks of personal information in transmissions to remote servers.

2005

  • Adversarial Learning - (KDD) Reverse engineering of remote linear classifiers, using membership queries.
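
A sketch of the query-based reverse engineering in the entry above: armed only with membership (label) queries against a linear classifier, bisect between a positive and a negative point to locate a point on the decision boundary. The hidden weights and the oracle are illustrative:

```python
"""Locating a linear decision boundary via membership queries (sketch;
the hidden weights and oracle are illustrative assumptions)."""
import numpy as np

w_secret, b_secret = np.array([2.0, -1.0]), 0.5   # hidden from the auditor

def oracle(x: np.ndarray) -> int:
    """Black-box label queries are all the auditor gets."""
    return int(w_secret @ x + b_secret > 0)

def boundary_point(x_pos, x_neg, tol=1e-6) -> np.ndarray:
    """Bisection along the segment [x_neg, x_pos] to a boundary point."""
    lo, hi = 0.0, 1.0                 # fractions toward x_pos
    while hi - lo > tol:
        mid = (lo + hi) / 2
        x_mid = x_neg + mid * (x_pos - x_neg)
        lo, hi = (mid, hi) if oracle(x_mid) == 0 else (lo, mid)
    return x_neg + lo * (x_pos - x_neg)

p = boundary_point(np.array([3.0, 0.0]), np.array([-3.0, 0.0]))
print("boundary point found:", p, "-> w.x + b =", w_secret @ p + b_secret)
```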