OpenXAI : Towards a Transparent Evaluation of Model Explanations

Overview

Website | arXiv Paper

OpenXAI is the first general-purpose, lightweight library that provides a comprehensive suite of functions to systematically evaluate the quality of explanations generated by attribution-based explanation methods. OpenXAI supports the development of new datasets (both synthetic and real-world) and explanation methods, with a strong bent towards promoting systematic, reproducible, and transparent evaluation of explanation methods.

OpenXAI is an open-source initiative that comprises a collection of curated high-stakes datasets, models, and evaluation metrics, and provides a simple, easy-to-use API that enables researchers and practitioners to benchmark explanation methods in just a few lines of code.

Updates

  • 0.0.0: OpenXAI is live! You can now submit results for benchmarking a post-hoc explanation method on an evaluation metric. Check it out here!
  • The OpenXAI white paper is on arXiv!

Unique Features of OpenXAI

  • Diverse areas of XAI research: OpenXAI includes ready-to-use API interfaces for seven state-of-the-art feature attribution methods and 22 metrics to quantify their performance. Further, it provides a flexible synthetic data generator that synthesizes datasets of varying size, complexity, and dimensionality, facilitating the construction of ground-truth explanations, as well as a comprehensive collection of real-world datasets.
  • Data functions: OpenXAI provides extensive data functions, including data evaluators, meaningful data splits, explanation methods, and evaluation metrics.
  • Leaderboards: OpenXAI provides the first ever public XAI leaderboards to promote transparency, and to allow users to easily compare the performance of multiple explanation methods.
  • Open-source initiative: OpenXAI is an open-source initiative and easily extensible.

Installation

Using pip

To install the core environment dependencies of OpenXAI, clone the OpenXAI repo into your local environment and run pip from the repo root:

pip install -e . 

Design of OpenXAI

OpenXAI is an open-source ecosystem comprising XAI-ready datasets, implementations of state-of-the-art explanation methods, evaluation metrics, leaderboards and documentation to promote transparency and collaboration around evaluations of post hoc explanations. OpenXAI can readily be used to benchmark new explanation methods as well as incorporate them into our framework and leaderboards. By enabling systematic and efficient evaluation and benchmarking of existing and new explanation methods, OpenXAI can inform and accelerate new research in the emerging field of XAI.

OpenXAI DataLoaders

OpenXAI provides a Dataloader class that can be used to load the aforementioned collection of synthetic and real-world datasets as well as any other custom datasets, and ensures that they are XAI-ready. More specifically, this class takes as input the name of an existing OpenXAI dataset or a new dataset (name of the .csv file), and outputs a train set which can then be used to train a predictive model, a test set which can be used to generate local explanations of the trained model, as well as any ground-truth explanations (if and when available). If the dataset already comes with pre-determined train and test splits, this class loads train and test sets from those pre-determined splits. Otherwise, it divides the entire dataset randomly into train (70%) and test (30%) sets. Users can also customize the percentages of train-test splits.

For a concrete example, the code snippet below shows how to import the Dataloader class and load an existing OpenXAI dataset:

from openxai import Dataloader
loader_train, loader_test = Dataloader.return_loaders(data_name='german', download=True)
# get an input batch from the test dataset
inputs, labels = next(iter(loader_test))
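
Since the returned loaders follow the standard PyTorch DataLoader protocol (an assumption of this sketch rather than a documented guarantee), the full test set can also be traversed batch by batch, e.g., to collect every test instance before generating explanations:

import torch

# a minimal sketch, assuming loader_test yields (inputs, labels) batches
all_inputs, all_labels = [], []
for batch_inputs, batch_labels in loader_test:
    all_inputs.append(batch_inputs)
    all_labels.append(batch_labels)
inputs, labels = torch.cat(all_inputs), torch.cat(all_labels)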

OpenXAI Pre-trained Models

We also pre-trained two classes of predictive models (deep neural networks of varying complexity and logistic regression models) and incorporated them into the OpenXAI framework so that they can readily be used for benchmarking explanation methods. The code snippet below shows how to load OpenXAI's pre-trained models using our LoadModel class.

from openxai import LoadModel
model = LoadModel(data_name='german', ml_model='ann')

Adding additional pre-trained models to the OpenXAI framework is as simple as uploading a file with details about the model architecture and parameters in a specific template. Users can also submit requests to incorporate custom pre-trained models into the OpenXAI framework by filling out a simple form and providing details about the model architecture and parameters.

OpenXAI Explainers

All the explanation methods included in OpenXAI are readily accessible through the Explainer class: users just have to specify the method name in order to invoke the appropriate method and generate explanations, as shown in the code snippet below. Users can easily incorporate their own custom explanation methods into the OpenXAI framework by extending the Explainer class and including the code for their methods in its get_explanations function.

from openxai import Explainer
exp_method = Explainer(method='LIME')
explanations = exp_method.get_explanations(model, X=inputs, y=labels)
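
For illustration, a custom method could look like the sketch below; the subclassing interface is an assumption here, and RandomBaselineExplainer is a hypothetical toy method rather than part of the library:

import torch
from openxai import Explainer

# illustrative only: a toy method that attributes random scores to each feature;
# the exact base-class hooks are assumed, not documented
class RandomBaselineExplainer(Explainer):
    def get_explanations(self, model, X, y):
        # return one attribution score per input feature, same shape as X
        return torch.rand_like(X)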

Users can then submit a request to incorporate their custom methods into the OpenXAI library by filling out a form and providing the GitHub link to their code as well as a summary of their explanation method.

OpenXAI Evaluation

Benchmarking an explanation method using evaluation metrics is quite simple; the code snippet below shows how to invoke the RIS metric. Users can easily incorporate their own custom evaluation metrics into OpenXAI by filling out a form and providing the GitHub link to their code as well as a summary of their metric. Note that the code should take the form of a function which takes as input data instances, the corresponding model predictions and their explanations, as well as OpenXAI's model object, and returns a numerical score.

from openxai import Evaluator
metric_evaluator = Evaluator(inputs, labels, model, explanations)
score = metric_evaluator.eval(metric='RIS')
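
As a concrete illustration of that function signature, the hypothetical metric below measures how concentrated each explanation's absolute attribution mass is in its top-k features; all names are ours, and the sketch assumes tensor-valued explanations:

import torch

# hypothetical custom metric; inputs, predictions, and model are accepted
# only to match the signature described above
def topk_concentration(inputs, predictions, explanations, model, k=5):
    abs_exp = explanations.abs()
    topk_mass = abs_exp.topk(k, dim=-1).values.sum(dim=-1)
    total_mass = abs_exp.sum(dim=-1).clamp(min=1e-12)
    # average over instances; higher means sparser explanations
    return (topk_mass / total_mass).mean().item()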

OpenXAI Leaderboards

Every explanation method in OpenXAI can be benchmarked: we provide dataloaders and pre-trained models together with explanation methods and performance evaluation metrics. To participate in the leaderboard for a specific benchmark, follow these steps (a combined sketch appears after the list):

  • Use the OpenXAI benchmark dataloader to retrieve a given dataset.

  • Use the OpenXAI LoadModel to load a pre-trained model.

  • Use the OpenXAI Explainer to load a post hoc explanation method.

  • Submit the performance of the explanation method for a given metric.
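
Putting the four steps together, a complete leaderboard run reduces to a few lines; this sketch simply composes the snippets shown above:

from openxai import Dataloader, LoadModel, Explainer, Evaluator

# 1. retrieve an XAI-ready dataset
loader_train, loader_test = Dataloader.return_loaders(data_name='german', download=True)
inputs, labels = next(iter(loader_test))

# 2. load a pre-trained model
model = LoadModel(data_name='german', ml_model='ann')

# 3. generate post hoc explanations
exp_method = Explainer(method='LIME')
explanations = exp_method.get_explanations(model, X=inputs, y=labels)

# 4. evaluate the explanations on a metric and submit the score
metric_evaluator = Evaluator(inputs, labels, model, explanations)
score = metric_evaluator.eval(metric='RIS')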

Cite Us

If you find the OpenXAI benchmark useful, please cite our paper:

@article{agarwal2022openxai,
  title={OpenXAI: Towards a Transparent Evaluation of Model Explanations},
  author={Agarwal, Chirag and Saxena, Eshika and Krishna, Satyapriya and Pawelczyk, Martin and Johnson, Nari and Puri, Isha and Zitnik, Marinka and Lakkaraju, Himabindu},
  journal={arXiv},
  year={2022}
}

Contact

Reach us at [email protected] or open a GitHub issue.

License

The OpenXAI codebase is under the MIT license. For individual dataset usage, please refer to the dataset licenses listed on the website.

Comments
  • Issue with pillow library compilation during install with PIP

    Hello Team,

    I am Anand from the University of Stuttgart, pursuing my doctoral research. While installing OpenXAI using the pip command, a compilation error is reported with respect to Pillow:

    writing src/Pillow.egg-info/PKG-INFO
    writing dependency_links to src/Pillow.egg-info/dependency_links.txt
    writing top-level names to src/Pillow.egg-info/top_level.txt
    reading manifest file 'src/Pillow.egg-info/SOURCES.txt'
    reading manifest template 'MANIFEST.in'
    warning: no files found matching '*.c'
    warning: no files found matching '*.h'
    warning: no files found matching '*.sh'
    warning: no previously-included files found matching '.appveyor.yml'
    warning: no previously-included files found matching '.clang-format'
    warning: no previously-included files found matching '.coveragerc'
    warning: no previously-included files found matching '.editorconfig'
    warning: no previously-included files found matching '.readthedocs.yml'
    warning: no previously-included files found matching 'codecov.yml'
    warning: no previously-included files matching '.git*' found anywhere in distribution
    warning: no previously-included files matching '*.pyc' found anywhere in distribution
    warning: no previously-included files matching '*.so' found anywhere in distribution
    no previously-included directories found matching '.ci'
    writing manifest file 'src/Pillow.egg-info/SOURCES.txt'
    running build_ext
        
    The headers or library files could not be found for jpeg,
    a required dependency when compiling Pillow from source.
    

    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/tmp/pip-build-8_fu5dva/pillow/setup.py", line 1037, in <module>
        raise RequiredDependencyException(msg)
    __main__.RequiredDependencyException:

    Command "/usr/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-8_fu5dva/pillow/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-p4qugs1n-record/install-record.txt --single-version-externally-managed --compile --user --prefix=" failed with error code 1 in /tmp/pip-build-8_fu5dva/pillow/

    Am I missing something else in the setup (the error message says some header files and other files are missing)?

    Your feedback would be of great help!

    Thanks in advance

    Best regards, Anand

    opened by anand7v 2
  • Explicit versions of dependencies necessary?

    Hi everyone, after your fix in #7 for installation, explicit versions are assigned to each of the dependent packages. Is this really necessary? At the moment, some dependent package versions conflict with my existing environments. Please ignore this issue if the explicit versions are really necessary; otherwise, I would recommend using lower bounds for the package versions. Thank you!

    opened by fabiankueppers 2
  • [FEATURE REQUEST] Include the installation of dependency packages in setup.py

    The current setup.py does not include dependency packages such as captum. Ideally, we should include the installation of dependency packages in setup.py so that 1) the end user does not need to install them manually and 2) OpenXAI is more self-contained when someone wants to include it as a dependency in their package.
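
    For instance, the change could look like the following sketch; the package names listed are illustrative, not the repo's actual dependency list:

    from setuptools import setup, find_packages

    # illustrative sketch: declaring runtime dependencies so pip installs them automatically
    setup(
        name="openxai",
        packages=find_packages(),
        install_requires=[
            "torch",
            "captum",
            "pandas",
        ],
    )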

    opened by jiaqima 1
  • Add api.py for external API calls to OpenXAI

    Description

    Implemented an OpenXAI class in api.py to serve for external API calls.

    Example Usage

    from openxai.api import OpenXAI
    
    oxai = OpenXAI(data_name="german", model_name="ann", explainer_name="lime")
    
    # query full data
    df_full = oxai.query()
    
    # query a batch of data with feature tensor `X` and label tensor `y`
    df_batch = oxai.query(X, y)
    

    The returned df_full or df_batch are pandas dataframes with each row corresponding to a data sample. The number of columns is 2d + 3, where d is the feature dimension.

    The columns from left to right are: features (d), feature attribution scores (d), label (1), predicted label (1), and is_test flag (1).
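
    Given that layout, the individual blocks can be recovered purely by column position; a minimal sketch assuming only the 2d + 3 layout described above:

    # split the returned dataframe back into its described blocks
    d = (df_full.shape[1] - 3) // 2
    features = df_full.iloc[:, :d]
    attributions = df_full.iloc[:, d:2 * d]
    labels = df_full.iloc[:, 2 * d]
    predictions = df_full.iloc[:, 2 * d + 1]
    is_test = df_full.iloc[:, 2 * d + 2]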

    Test

    The code has been successfully tested on 3 datasets ("compas", "adult", "german"), 2 models ("ann", "lr"), and 6 explainers ("grad", "sg", "itg", "ig", "shap", "lime").

    To reproduce the test, run python openxai/api.py under the root of this repo.

    Note: Currently there seems to be path issues for this API to be used externally. The issue is possibly due to the path usage by LoadModel, which should be fixed in a separate PR.

    opened by jiaqima 1
  • fix mkdir

    Description

    Changed os.mkdir(...) to os.makedirs(..., exist_ok=True) to fix the bug that occurs when the pretrained folder already exists.

    Together with commit da70155402c3a3ef60a6c0293f8246e8772e40b3 by @chirag126, this should fix #4.
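
    A minimal sketch of the change (the folder name here is illustrative):

    import os

    # before: os.mkdir raises FileExistsError if the folder already exists
    # os.mkdir("pretrained")

    # after: tolerates an existing folder and creates parent directories as needed
    os.makedirs("pretrained", exist_ok=True)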

    Test

    Tested locally by running python openxai/api.py.

    Also tested locally by running the following piece of code outside the OpenXAI folder:

    from openxai.api import OpenXAI
    oxai = OpenXAI(data_name="german", model_name="ann", explainer_name="lime")
    df_full = oxai.query()
    print(df_full.head())
    

    and obtained the following outputs:

       duration    amount  installment-rate  ...  label  prediction  is_test
    0  0.205882  0.228094          0.666667  ...    0.0         1.0      0.0
    1  0.294118  0.072564          1.000000  ...    1.0         1.0      0.0
    2  0.294118  0.064395          1.000000  ...    1.0         1.0      0.0
    3  0.470588  0.355607          1.000000  ...    1.0         1.0      0.0
    4  0.470588  0.421916          1.000000  ...    1.0         1.0      0.0
    
    [5 rows x 123 columns]
    
    opened by jiaqima 0
  • add environment.yml

    One can use conda env create -f environment.yml to create a conda environment named "OpenXAI".

    This should be able to install all the dependencies. Tested locally.

    opened by jiaqima 0
  • [FEATURE REQUEST] Clean up hard-coded paths

    There are some hard-coded paths such as ./openxai/ML_Models/Saved_Models/ANN/gaussian_lr_0.002_acc_0.91.pt. This is not very friendly to an external API call as one may not be running code at the root of OpenXAI folder.

    We should refactor the code to clean up such hard-coded paths.

    A good solution might be hosting these files on a Google Drive, which will also reduce the repo size.

    opened by jiaqima 0
  • [FEATURE REQUEST] Refactor `Explainer` API

    The current Explainer class requires method, model, and dataset_tensor as required arguments. It takes quite a few extra lines of code for the user to construct dataset_tensor. We should try to eliminate this required argument.
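
    For context, a sketch of the current vs. desired call; the dataset_tensor construction shown is illustrative, not the library's documented usage:

    import torch

    # current: the user must hand-build dataset_tensor before explaining anything
    dataset_tensor = torch.FloatTensor(loader_train.dataset.data)  # illustrative
    exp_method = Explainer(method='LIME', model=model, dataset_tensor=dataset_tensor)

    # desired: only the method name should be required
    exp_method = Explainer(method='LIME')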

    opened by jiaqima 0
  • Not install using the pip command

    Hi,

    I am trying to use the OpenXAI tool in the Google Colab environment, but when I use the pip command, the following error is shown:

    "ERROR: File "setup.py" or "setup.cfg" not found. Directory cannot be installed in editable mode: /content"

    Could you please provide a solution to this problem?

    opened by ripankundu 5