Software artifact, October 2022

Reproduction Package for the article: A General Construction for Abstract Interpretation of Higher-Order Automatic Differentiation

Description

This artifact contains the implementation and experiments for the paper “A General Construction for Abstract Interpretation of Higher-Order Automatic Differentiation” (OOPSLA 2022) by Jacob Laurel, Rem Yang, Shubham Ugare, Robert Nagel, Gagandeep Singh, and Sasa Misailovic. For both the interval and zonotope domains, we implement abstract first- and second-order automatic differentiation. We use our technique to study (1) robustly explaining neural networks via their first and second derivatives and (2) computing the Lipschitz constant of neural networks.
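To give a flavor of the construction, the sketch below shows first-order forward-mode AD over the interval domain: replacing every real number in a dual number with an interval yields sound enclosures of both a function's value and its derivative. This is illustrative only; the class names Interval and IntervalDual are hypothetical and do not reflect the artifact's actual API.

    # Minimal sketch: forward-mode AD with interval coefficients.
    # NOT the artifact's implementation; names are hypothetical.

    class Interval:
        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi

        def __add__(self, other):
            return Interval(self.lo + other.lo, self.hi + other.hi)

        def __mul__(self, other):
            # Sound interval product: take min/max over all endpoint products.
            p = [self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi]
            return Interval(min(p), max(p))

        def __repr__(self):
            return f"[{self.lo}, {self.hi}]"

    class IntervalDual:
        """Pairs an interval value with an interval first derivative."""
        def __init__(self, val, dot):
            self.val, self.dot = val, dot

        def __add__(self, other):
            return IntervalDual(self.val + other.val, self.dot + other.dot)

        def __mul__(self, other):
            # Product rule, evaluated in interval arithmetic.
            return IntervalDual(self.val * other.val,
                                self.val * other.dot + self.dot * other.val)

    # Enclose f(x) = x*x + x and f'(x) over the input interval x in [1, 2];
    # the derivative is seeded with [1, 1].
    x = IntervalDual(Interval(1.0, 2.0), Interval(1.0, 1.0))
    y = x * x + x
    print(y.val, y.dot)   # [2.0, 6.0] encloses f([1,2]); [3.0, 5.0] encloses f'([1,2])

The same recipe extends to second-order AD (carry an interval second-derivative component and apply the chain/product rules abstractly) and to the zonotope domain (replace Interval with a zonotope abstraction).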


Assets

Read Me (oopslab22main-p350-p-Artifact-accepted-readme.txt)
Artifact (oopslab22main-p350-p-Artifact-accepted.zip)

Instructions

General Installation

Software Dependencies:

The tool itself requires only PyTorch and NumPy. Plotting the results additionally requires Jupyter Notebook, Matplotlib, and Seaborn. We ran our experiments with the following software versions: Python 3.8.8, torch 1.11.0 (CPU), and numpy 1.22.4.
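One possible installation via pip is sketched below; note that the CPU build of torch may require a platform-specific package index, so treat the exact specifiers as assumptions rather than tested instructions:

    pip install torch==1.11.0 numpy==1.22.4 notebook matplotlib seaborn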

Experimental Installation

Workflows:

The directory structure is given as follows:

src/: Contains the core source code.
Section_7_2/: Contains the code for reproducing the results in Section 7.2 of the paper.
Section_7_3/: Contains the code for reproducing the results in Section 7.3 of the paper.

Evaluation:

To reproduce the results of Section 7.2, enter the Section_7_2/ directory and run "./run5.sh".

To reproduce the results of Section 7.3, enter the Section_7_3/ directory and run "./lipschitz.sh".
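For example, starting from the artifact's root directory (assuming the scripts are executable; otherwise run chmod +x on them first):

    cd Section_7_2 && ./run5.sh          # results for Section 7.2
    cd ../Section_7_3 && ./lipschitz.sh  # results for Section 7.3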


Provenance

For the most recent version of this software, please visit: https://github.com/uiuc-arc/AbstractAD

