This repository provides the official implementation (UML) of our MICCAI 2023 paper "Uncertainty-Informed Mutual Learning for Joint Medical Image Classification and Segmentation". The structure of this repository is as follows:
UML/
├── images                          # All images used in this repository.
│   └── UML_Framework.jpg           # The framework figure.
├── datasets
│   ├── datasets_preprocess
│   │   ├── ispy_preprocess.py      # The preprocessing code for the I-SPY1 dataset.
│   │   └── refuge_preprocess.py    # The preprocessing code for the REFUGE Glaucoma dataset.
│   ├── ispy_dataset.py             # The torch Dataset of the I-SPY1 dataset.
│   └── refuge_dataset.py           # The torch Dataset of the REFUGE dataset.
├── models
│   ├── uml_net.py                  # The Uncertainty Mutual Learning Neural Network (UML_Net).
│   ├── modules.py                  # The modules for UML_Net.
│   ├── model_lib
│   │   ├── pretrained_model_zoo    # We suggest downloading the pretrained model to this path.
│   │   └── res2net.py              # The pre-trained Res2Net module.
│   └── loss
│       ├── cls_loss.py             # The loss function for classification.
│       └── seg_loss.py             # The loss function for segmentation.
├── exp_refuge
│   ├── train_uml_refuge.py         # The UML_Net training code for the REFUGE dataset.
│   └── config_refuge.py            # The config file for the REFUGE dataset.
└── utils.py                        # The util functions.
Classification and segmentation are crucial in medical image analysis as they enable accurate diagnosis and disease monitoring. However, current methods often focus on mutual learning of features and shared model parameters, while neglecting the reliability of features and predictions. In this paper, we propose a novel Uncertainty-informed Mutual Learning (UML) framework for reliable and interpretable medical image analysis. Our UML introduces reliability to joint classification and segmentation tasks, leveraging mutual learning with uncertainty to improve performance. To achieve this, we first use evidential deep learning to provide image-level and pixel-wise confidences. Then, an uncertainty navigator is constructed to better exploit mutual features and generate segmentation results. Besides, an uncertainty instructor is proposed to screen reliable masks for classification. Overall, UML produces confidence estimates for the features and predictions of each task (classification and segmentation). Experiments on public datasets demonstrate that our UML outperforms existing methods in terms of both accuracy and robustness. Our UML has the potential to support the development of more reliable and explainable medical image analysis models.
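For readers unfamiliar with evidential deep learning, below is a minimal sketch of how Dirichlet-based confidence can be derived from network outputs. The function name and the choice of activation are illustrative only; the actual implementation lives in models/uml_net.py and may differ.

```python
# Minimal sketch of Dirichlet-based confidence in evidential deep learning.
# Names, shapes, and the softplus activation are assumptions; see models/uml_net.py.
import torch
import torch.nn.functional as F

def evidential_confidence(logits: torch.Tensor, num_classes: int):
    """Convert raw outputs into class probabilities and an uncertainty mass.

    logits: (B, C) for image-level outputs or (B, C, H, W) for pixel-wise outputs.
    """
    evidence = F.softplus(logits)              # non-negative evidence e_k
    alpha = evidence + 1.0                     # Dirichlet parameters alpha_k = e_k + 1
    strength = alpha.sum(dim=1, keepdim=True)  # Dirichlet strength S = sum_k alpha_k
    prob = alpha / strength                    # expected class probability
    uncertainty = num_classes / strength       # u = K / S, large when evidence is weak
    return prob, uncertainty
```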
We use two datasets to test our UML network. You can DOWNLOAD the raw data from the following links.
- I-SPY1 Trial Dataset. It can be downloaded from HERE!
- REFUGE Glaucoma. It can be downloaded from HERE!
After downloading the datasets as described in Dataset Acquisition, data preprocessing is needed to reformat the directory structure of the datasets. We have released the Pre-Process code for both datasets; please read it carefully and follow the guidelines in the comments! We have also released the torch Dataset code for both datasets (a usage sketch follows the list):
- I-SPY1 Trial Dataset.
- REFUGE Glaucoma.
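As a quick orientation, here is a hypothetical usage sketch of the released torch Dataset code. The class name RefugeDataset, the constructor arguments, and the items yielded per sample are assumptions; please check datasets/refuge_dataset.py and datasets/ispy_dataset.py for the actual interfaces.

```python
# Hypothetical usage of the released torch Dataset code.
# Class name, arguments, and per-sample items are assumptions;
# see datasets/refuge_dataset.py for the real interface.
from torch.utils.data import DataLoader
from datasets.refuge_dataset import RefugeDataset  # class name assumed

train_set = RefugeDataset(data_root="./data/refuge", split="train")  # arguments assumed
train_loader = DataLoader(train_set, batch_size=8, shuffle=True, num_workers=4)

for image, mask, label in train_loader:  # segmentation mask + classification label assumed
    print(image.shape, mask.shape, label.shape)
    break
```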
We use a pretrained Res2Net (Res2Net-50-26w-4s). You can download the .pth file from HERE! The path of this .pth file is an important parameter in the __init__() function of UML_Net.
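For example, a hypothetical call could look like the sketch below. The parameter name pretrained_path and the checkpoint filename are assumptions; check the __init__() of UML_Net in models/uml_net.py for the actual argument.

```python
# Hypothetical wiring of the pretrained Res2Net-50-26w-4s checkpoint into UML_Net.
# The parameter name and file name are assumptions; see models/uml_net.py.
from models.uml_net import UML_Net

res2net_ckpt = "models/model_lib/pretrained_model_zoo/res2net50_26w_4s.pth"  # filename assumed
model = UML_Net(pretrained_path=res2net_ckpt)  # parameter name assumed
```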
The datasets differ considerably, so we built separate training and prediction code for each one:
- I-SPY1 Trial Dataset.
- REFUGE Glaucoma.
  - The training code is in train_uml_refuge.py, HERE! You can RUN it using python3 train_uml_refuge.py. A minimal sketch of the joint training step follows this list.
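For orientation, here is a minimal, hypothetical sketch of a joint training step that combines the classification and segmentation losses. The real loop in exp_refuge/train_uml_refuge.py uses the loss functions in models/loss/ and the settings in exp_refuge/config_refuge.py, and its interfaces may differ.

```python
# Hypothetical joint training step; model outputs and loss interfaces are assumptions.
# See exp_refuge/train_uml_refuge.py for the actual training loop.
def train_step(model, optimizer, image, mask, label, cls_criterion, seg_criterion):
    optimizer.zero_grad()
    cls_out, seg_out = model(image)  # image-level and pixel-wise predictions (assumed outputs)
    loss = cls_criterion(cls_out, label) + seg_criterion(seg_out, mask)
    loss.backward()
    optimizer.step()
    return loss.item()
```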
If you find our work HELPFUL for your research, please consider CITING:
@inproceedings{ren2023UML,
title={Uncertainty-informed mutual learning for joint medical image classification and segmentation},
author={Ren, Kai and Zou, Ke and Liu, Xianjie and Chen, Yidi and Yuan, Xuedong and Shen, Xiaojing and Wang, Meng and Fu, Huazhu},
booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention},
pages={35--45},
year={2023},
organization={Springer}
}
If you have any questions about our work, please feel free to contact us.