
Interpretable Explanations of Black Boxes by Meaningful Perturbation with PyTorch


⭐ Star us on GitHub — it helps!!


A PyTorch implementation of Interpretable Explanations of Black Boxes by Meaningful Perturbation (Fong & Vedaldi, ICCV 2017)

Install

You will need a machine with a GPU and CUDA installed.
Then, prepare the runtime environment:

pip install -r requirements.txt

Use

This code is based on the ImageNet dataset:

python main.py --model_path=vgg19 --img_path=examples/catdog.png

Arguments:

  • model_path - A pretrained model name from torchvision.models, or a path to a saved model (.pt)
    • Examples of available names: ['alexnet', 'vgg19', 'resnet50', 'densenet169', 'mobilenet_v2', 'wide_resnet50_2', ...]
  • img_path - Path to the input image
  • perturb - Perturbation method (blur or noise)
  • tv_coeff - Coefficient of the total variation (TV) regularizer on the mask
  • tv_beta - Exponent β used inside the TV term
  • l1_coeff - Coefficient of the L1 regularizer on the mask
  • factor - Upsampling factor from the low-resolution mask to the image size
  • lr - Learning rate
  • iter - Number of optimization iterations

A minimal sketch of how these arguments drive the mask optimization follows this list.
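The sketch below is not the repository's actual code: the names explain and tv_norm are illustrative, and the default values are placeholders; see main.py for the real implementation. It only shows the shape of the optimization the arguments control.

import torch
import torch.nn.functional as F

def tv_norm(mask, beta):
    # Total variation term: penalizes abrupt changes between
    # neighboring mask values, encouraging a smooth mask.
    row = (mask[:, :, 1:, :] - mask[:, :, :-1, :]).abs().pow(beta).sum()
    col = (mask[:, :, :, 1:] - mask[:, :, :, :-1]).abs().pow(beta).sum()
    return row + col

def explain(model, img, perturbed, target,
            l1_coeff=0.01, tv_coeff=0.2, tv_beta=3,
            factor=8, lr=0.1, iters=500):
    # Optimize a low-resolution mask that is upsampled by `factor`
    # to the image size (the coarse grid regularizes the explanation).
    h, w = img.shape[2] // factor, img.shape[3] // factor
    mask = torch.ones(1, 1, h, w, requires_grad=True)
    optimizer = torch.optim.Adam([mask], lr=lr)
    for _ in range(iters):
        up = F.interpolate(mask, size=img.shape[2:],
                           mode='bilinear', align_corners=False)
        # Blend the original image with its perturbed (blurred or
        # noisy) version: mask = 1 keeps a pixel, 0 deletes it.
        x = img * up + perturbed * (1 - up)
        prob = F.softmax(model(x), dim=1)[0, target]
        # Deleting the evidence for the target class should make its
        # score drop, while L1 and TV keep the mask small and smooth.
        loss = (l1_coeff * (1 - mask).abs().mean()
                + tv_coeff * tv_norm(mask, tv_beta)
                + prob)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        mask.data.clamp_(0, 1)
    return mask.detach()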

How to use your customized model

If you want to use a customized model saved as an OrderedDict (i.e. a state_dict of weights rather than a full model object), you should add code that builds the model object before the weights are loaded.

Find the 'load model' function in utils.py and add code such as:

from yourNetwork import yourNetwork
model = yourNetwork()
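For reference, here is a hedged sketch of what a complete loading routine could look like; the load_model name and the branching are illustrative assumptions, not the actual contents of utils.py:

from collections import OrderedDict

import torch
import torchvision.models as models

def load_model(model_path):
    # Hypothetical loader: handles torchvision names, full saved
    # models, and bare state_dicts (OrderedDict) alike.
    if model_path.endswith('.pt'):
        checkpoint = torch.load(model_path)
        if isinstance(checkpoint, OrderedDict):
            # Only weights were saved: rebuild the architecture first.
            from yourNetwork import yourNetwork  # your own definition
            model = yourNetwork()
            model.load_state_dict(checkpoint)
        else:
            # A full model object was saved.
            model = checkpoint
    else:
        # A pretrained torchvision model name such as 'vgg19'.
        model = getattr(models, model_path)(pretrained=True)
    model.eval()
    return model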

Understanding this Paper!

✅ Check out my blog!! Here
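In short, as I read the paper, the explanation is a learned deletion mask m found by minimizing (in LaTeX; λ1 = l1_coeff, λ2 = tv_coeff, β = tv_beta):

\min_{m \in [0,1]^{\Lambda}} \; \lambda_1 \lVert \mathbf{1} - m \rVert_1 + \lambda_2 \sum_{u} \lVert \nabla m(u) \rVert_\beta^\beta + f_c(\Phi(x_0; m))

where Φ(x0; m) blends the input x0 with its blurred or noisy version according to m, and f_c is the classifier's score for the target class c.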
