AML: updating to TF 2 compatibility & the latest Keras release (note that the new Keras release means you need to use the forked isolearn repo, updated for the latest Keras release, at lafleur1/isolearn)
Code for training Scrambler networks, an interpretation method for sequence-predictive models based on deep generative masking. The Scrambler learns to predict maximal-entropy PSSMs for a given input sequence such that downstream predictions are reconstructed (the "inclusion" objective). Alternatively, the Scrambler can be trained to output minimal-entropy PSSMs such that downstream predictions are distorted (the "occlusion" objective).
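The interpolation-toward-background idea behind these objectives can be sketched in a few lines of NumPy. This is a conceptual illustration only, not the package API: the real Scrambler parameterizes the importance mask with a learned network and samples sequences from the resulting PSSM, but the relationship between mask values and PSSM entropy is the same.

```python
import numpy as np

def scramble_pssm(x_onehot, importance, background=None):
    # Blend a one-hot sequence toward a background distribution.
    # Positions with low importance collapse to the (high-entropy)
    # background; positions with high importance keep their letter.
    n, k = x_onehot.shape
    if background is None:
        background = np.full(k, 1.0 / k)  # uniform background
    imp = importance[:, None]  # shape (n, 1), values in [0, 1]
    return imp * x_onehot + (1.0 - imp) * background[None, :]

def mean_entropy(pssm):
    # Mean per-position entropy of the PSSM, in bits.
    p = np.clip(pssm, 1e-12, 1.0)
    return float(np.mean(-np.sum(p * np.log2(p), axis=1)))

# Toy 4-nt one-hot sequence; keep position 0, scramble 1 and 3.
x = np.eye(4)[[0, 2, 1, 3]]
imp = np.array([1.0, 0.0, 0.5, 0.0])
pssm = scramble_pssm(x, imp)
```

Under the inclusion objective the mask is pushed toward maximal entropy (all-background) while the downstream prediction on the scrambled sequence is held close to the original; under occlusion the entropy is minimized while the prediction is distorted.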
Scramblers were presented in an MLCB 2020* conference paper, "Efficient inference of nonlinear feature attributions with Scrambling Neural Networks".
*2nd Conference on Machine Learning in Computational Biology (MLCB 2020), Online.
Contact jlinder2 (at) cs.washington.edu for any questions about the code.
- Efficient interpretation of sequence-predictive neural networks.
- High-capacity interpreter based on ResNets.
- Find multiple salient feature sets with mask dropout.
- Separate maximally enhancing and repressive features.
- Fine-tune interpretations with per-example optimization.
- Supports multiple-input predictor architectures.
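One simple form of the mask-dropout idea listed above can be sketched as follows. This is purely illustrative (the function name and dropout scheme here are hypothetical, not the package's implementation): by randomly suppressing parts of the predicted importance mask during training, the interpreter cannot always rely on the single strongest feature set, which encourages alternative salient feature sets to emerge.

```python
import numpy as np

def mask_dropout(importance, drop_rate, rng):
    # Zero out random entries of a predicted importance mask.
    # Conceptual sketch of mask dropout: suppressed positions force
    # the interpreter to find other features that still reconstruct
    # (or distort) the downstream prediction.
    keep = rng.random(importance.shape) >= drop_rate
    return importance * keep

rng = np.random.default_rng(0)
imp = np.linspace(0.0, 1.0, 8)   # toy importance mask
dropped = mask_dropout(imp, 0.5, rng)
```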
Install by cloning or forking the GitHub repository:
git clone https://github.com/johli/scrambler.git
cd scrambler
python setup.py install
- Tensorflow == 1.13.1
- Keras == 2.2.4
- Scipy >= 1.2.1
- Numpy >= 1.16.2
The sub-folder analysis/ contains all the code used to produce the results of the paper.
The sub-folder examples/ contains a number of lightweight examples showing the basic usage of the Scrambler package. The examples are listed below.
Interpreting predictors for images.
Notebook 1: Interpreting MNIST Images
Interpreting predictors for RNA-regulatory biology.
Notebook 2a: Interpreting APA Sequences
Notebook 2b: Interpreting APA Sequences (Custom Loss)
Notebook 3a: Interpreting 5' UTR Sequences
Notebook 3b: Optimizing individual 5' UTR Interpretations
Notebook 3c: Fine-tuning pre-trained 5' UTR Interpretations
Interpreting predictors for proteins.
Notebook 4a: Interpreting Protein-protein Interactions (inclusion)
Notebook 4b: Interpreting Protein-protein Interactions (occlusion)
Notebook 5a: Interpreting Hallucinated Protein Structures (no MSA)
Notebook 5b: Interpreting Natural Protein Structures (with MSA)
The following GIFs illustrate how the Scrambler network's interpretations converge for a few selected input examples during training.
WARNING: The following GIFs contain flickering pixels/colors. Do not look at them if you are sensitive to such images.
The following GIF depicts a Scrambler trained to reconstruct APA isoform predictions.
The following GIF depicts a Scrambler trained to reconstruct 5' UTR translation efficiency predictions.
The following GIF depicts a Scrambler trained to distort protein interaction predictions (siamese occlusion). Red letters correspond to designed hydrogen bond network positions. The following GIF displays the same interpretation but projected onto the 3D structure of the complex.