A study on the interpretability of the concepts learned by Prototypical Part Networks (ProtoPNets).
This work exploits the part-location annotations available for two different datasets to provide an objective evaluation of the learned prototypes. An additional diversity regularization is also introduced to produce more diverse concepts.
More details on the implementation can be found in the report.
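The diversity regularization mentioned above can be illustrated with a minimal sketch. The exact loss used in this work is described in the report; the function below (a hypothetical `diversity_loss`, not the repository's implementation) shows one common formulation, penalizing pairwise cosine similarity between prototype vectors so that minimizing it pushes prototypes apart:

```python
import torch

def diversity_loss(prototypes: torch.Tensor) -> torch.Tensor:
    """Illustrative diversity penalty (not the exact loss from the report).

    prototypes: (P, D) tensor of P prototype vectors of dimension D.
    Returns the mean off-diagonal cosine similarity, clamped at zero,
    so minimizing it encourages mutually dissimilar prototypes.
    """
    normed = torch.nn.functional.normalize(prototypes, dim=1)
    sim = normed @ normed.t()                                # (P, P) cosine similarities
    off_diag = sim - torch.eye(sim.size(0), device=sim.device)  # drop self-similarity
    return off_diag.clamp(min=0).sum() / (sim.size(0) * (sim.size(0) - 1))
```

Two identical prototypes yield the maximum penalty of 1, while orthogonal prototypes yield 0.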
- Clone the repository and install the required dependencies:

```sh
git clone https://github.com/materight/explainable-ProtoPNet.git
cd explainable-ProtoPNet
pip install -r requirements.txt
```
- Download and prepare the data, either for the Caltech-UCSD Birds-200 or the CelebAMask-HQ dataset:

```sh
python prepare_data.py cub200
python prepare_data.py celeb_a
```
To train a new model on a dataset, run:

```sh
python train.py --dataset [data_path] --exp_name [experiment_name]
```
Additional options can be specified (run the script with `--help` to see the available ones).
After training, the learned prototypes can be further pruned:

```sh
python prune_prototypes.py --dataset [data_path] --model [model_path]
```
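As a rough intuition for what pruning does, here is a simplified sketch (the names `prune_mask`, `tau`, and the input format are assumptions for illustration, not this repository's API): a prototype is kept only if a sufficient fraction of its k nearest training patches belong to the prototype's own class, in the spirit of the original ProtoPNet pruning criterion.

```python
import numpy as np

def prune_mask(nearest_patch_classes: np.ndarray,
               proto_classes: np.ndarray,
               tau: float = 0.75) -> np.ndarray:
    """Simplified pruning criterion (illustrative only).

    nearest_patch_classes: (P, k) class labels of each prototype's
        k nearest training patches.
    proto_classes: (P,) class label assigned to each prototype.
    Returns a boolean mask of the prototypes to keep: those whose
    nearest patches agree with their class at least a fraction `tau`
    of the time.
    """
    agree = nearest_patch_classes == proto_classes[:, None]
    return agree.mean(axis=1) >= tau
```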
To evaluate a trained model and the learned prototypes, run:

```sh
python evaluate.py --model [model_path] {global|local|alignment} --dataset [data_path]
```
- `global`: retrieve for each prototype the most activated patches in the whole dataset.
- `local`: evaluate the model on a subset of samples and generate visualizations of the activated prototypes for each class.
- `alignment`: generate plots of the alignment matrix of each class.
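To give a sense of what an alignment matrix captures, here is a minimal sketch (the function `alignment_matrix` and its input format are hypothetical, not the script's actual interface): given, for each prototype, the annotated part closest to each of its top activations, it builds a prototype-by-part frequency matrix, where a row concentrated on a single column indicates a prototype consistently aligned with one part.

```python
import numpy as np

def alignment_matrix(top_activation_parts: np.ndarray, n_parts: int) -> np.ndarray:
    """Illustrative prototype-part alignment matrix.

    top_activation_parts: (P, k) array giving, for each of the k most
        activated patches of each prototype, the id of the nearest
        annotated part (hypothetical input format).
    Returns a (P, n_parts) matrix whose entry (p, j) is the fraction of
    prototype p's top activations that land closest to part j.
    """
    P, k = top_activation_parts.shape
    mat = np.zeros((P, n_parts))
    for p in range(P):
        for part in top_activation_parts[p]:
            mat[p, part] += 1
    return mat / k
```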
This implementation is based on the original ProtoPNet repository.