Memory-Augmented Deep Unfolding Network for Compressive Sensing (ACM MM 2021)

This repository is for MADUN introduced in the following paper:

Jiechong Song, Bin Chen and Jian Zhang, "Memory-Augmented Deep Unfolding Network for Compressive Sensing", in the 29th ACM International Conference on Multimedia (ACM MM), 2021. PDF

🚩 News(2023-11-24)

✅ 2023-11-24

We release the MAPUN code. The test command is `python TEST_CS_MAPUN.py --cs_ratio 10/25/30/40/50 --test_name Set11/CBSD68/Urban100`.

✅ 2023-3-2

Our extended version has been accepted by IJCV (International Journal of Computer Vision)!

Jiechong Song, Bin Chen and Jian Zhang, "Deep Memory-Augmented Proximal Unrolling Network for Compressive Sensing", in the International Journal of Computer Vision (IJCV), 2023. PDF

🔧 Requirements

  • Python == 3.8
  • PyTorch == 1.8.0

🎨 Abstract

Mapping a truncated optimization method into a deep neural network, the deep unfolding network (DUN) has attracted growing attention in compressive sensing (CS) due to its good interpretability and high performance. Each stage in a DUN corresponds to one iteration in optimization. By examining DUNs from the perspective of the human brain's memory processing, we find two issues in existing DUNs. One is that the information between every two adjacent stages, which can be regarded as short-term memory, is usually severely lost. The other is that there is no explicit mechanism to ensure that the previous stages affect the current stage, which means memory is easily forgotten. To solve these issues, we propose a novel DUN with persistent memory for CS, dubbed the Memory-Augmented Deep Unfolding Network (MADUN). We design a memory-augmented proximal mapping module (MAPMM) by combining two types of memory augmentation mechanisms, namely High-throughput Short-term Memory (HSM) and Cross-stage Long-term Memory (CLM). HSM allows DUNs to transmit multi-channel short-term memory, which greatly reduces information loss between adjacent stages. CLM develops the dependency of deep information across cascading stages, which greatly enhances network representation capability. Extensive CS experiments on natural and MR images show that, with its strong ability to maintain and balance information, our MADUN outperforms existing state-of-the-art methods by a large margin.

Figure: PMM vs. MAPMM architecture (PMM_MAPMM).
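To make the data flow described in the abstract concrete, here is a minimal NumPy sketch of the unfolding loop: each stage runs a gradient step on the CS data-fidelity term followed by a proximal mapping that also consumes and updates short-term (HSM) and long-term (CLM) memory. This is an illustrative sketch only; the real MAPMM is a learned CNN, and the function names and memory updates below are our own simplifications, not the repository's actual modules.

```python
import numpy as np

def gradient_step(x, y, Phi, rho=0.5):
    """Gradient descent step on the data-fidelity term ||Phi x - y||^2."""
    return x - rho * Phi.T @ (Phi @ x - y)

def proximal_with_memory(z, hsm, clm):
    """Stand-in for the memory-augmented proximal mapping (MAPMM).

    hsm: multi-channel short-term memory passed from the previous stage (HSM).
    clm: long-term state accumulated across all earlier stages (CLM).
    In MADUN this mapping is a learned network; here we just blend the
    inputs so the memory flow between stages stays visible.
    """
    x_new = (z + hsm.mean(axis=0) + clm) / 3.0
    hsm_new = np.stack([x_new, z])        # multi-channel features sent forward
    clm_new = 0.9 * clm + 0.1 * x_new     # slowly updated cross-stage memory
    return x_new, hsm_new, clm_new

def unfold(y, Phi, n_stages=5):
    """Run n_stages unfolding iterations from the initial back-projection."""
    x = Phi.T @ y                         # initial reconstruction
    hsm = np.stack([x, x])                # short-term memory channels
    clm = np.zeros_like(x)                # cross-stage long-term memory
    for _ in range(n_stages):
        z = gradient_step(x, y, Phi)
        x, hsm, clm = proximal_with_memory(z, hsm, clm)
    return x
```

The point of the sketch is structural: without `hsm`, each stage would hand only a single-channel image to the next one (the information loss the paper identifies), and without `clm`, early stages would have no direct path to influence later ones.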

👀 Datasets

💻 Command

Train

`python Train_CS_MADUN.py --cs_ratio 10/25/30/40/50`

Test

`python TEST_CS_MADUN.py --cs_ratio 10/25/30/40/50 --test_name Set11/CBSD68/Urban100`

📑 Citation

If you find our work helpful in your research or work, please cite the following papers.

@inproceedings{song2021memory,
  title={Memory-Augmented Deep Unfolding Network for Compressive Sensing},
  author={Song, Jiechong and Chen, Bin and Zhang, Jian},
  booktitle={Proceedings of the ACM International Conference on Multimedia (ACM MM)},
  year={2021}
}
@article{song2023deep,
  title={Deep Memory-Augmented Proximal Unrolling Network for Compressive Sensing},
  author={Song, Jiechong and Chen, Bin and Zhang, Jian},
  journal={International Journal of Computer Vision},
  pages={1--20},
  year={2023},
  publisher={Springer}
}

📧 Contact

If you have any questions, please email [email protected].

🤗 Acknowledgements

This code is built on ISTA-Net-PyTorch. We thank the authors for sharing their code.