Code and data for the paper "CLoG: Benchmarking Continual Learning of Image Generation Models".
- [Jun. 7, 2024]: We launched the first version of the code for label-conditioned CLoG. The codebase is still under development; please stay tuned for the comprehensive version.
- [Aug. 22, 2024]: Optimized version of CLoG. The main improvements refine the original code's workflow and remove redundant parts, including the integration of training and testing, dataset construction, and the testing scripts. Testing is no longer deferred until all tasks are complete; instead, each task is evaluated immediately after it finishes, on both the current task and all previous tasks, with all relevant metrics saved. This version still has four major drawbacks: (1) the training scripts for several methods and the GAN components are missing; (2) testing follows task-based generation rather than class-guided generation, so the obtained metrics are not strictly accurate; (3) the sampling process in the C-LoRA method accesses the task ID directly, whereas it should infer a pseudo-task ID; (4) the C-LoRA training process follows Few-shot Class-Incremental Learning, pretraining on the first task and then LoRA fine-tuning on subsequent tasks. This suits few-shot scenarios but not LoRA-based methods, which should ideally start from a model pretrained on a large dataset; we therefore refer to this setting as Few-shot C-LoRA.
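The evaluation flow described above (test each task immediately after it finishes, on the current and all previous tasks, and save the metrics) can be sketched as follows. This is an illustrative sketch only; the function names `train_task` and `compute_fid` are assumptions, not the repo's actual API.

```python
# Sketch of the per-task evaluation flow: after finishing task t, the model
# is evaluated on task t and on every earlier task, and the metrics saved.
# train_task and compute_fid are hypothetical stand-ins for the real code.

def run_continual_training(model, tasks, train_task, compute_fid):
    """Train on each task in sequence; evaluate after every task."""
    metrics = {}  # (trained_up_to, evaluated_on) -> FID
    for t, task in enumerate(tasks):
        train_task(model, task)
        # Evaluate the current task and all previously seen tasks.
        for s in range(t + 1):
            metrics[(t, s)] = compute_fid(model, tasks[s])
    return metrics
```

After `T` tasks this yields `T * (T + 1) / 2` metric entries, one per (checkpoint, task) pair, which is exactly what incremental metrics such as Mean Incremental FID are computed from.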
- [Aug. 23, 2024]: We have resolved the second limitation: the testing phase now follows a class-guided generation approach.
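The difference between the old task-based testing and the new class-guided testing can be illustrated as follows. Names here are assumptions for illustration, not the repo's actual API: in task-based generation the sampler is conditioned on a single task ID, while in class-guided generation it is conditioned on each individual class label of the task being evaluated.

```python
# Illustrative contrast between the two testing modes (hypothetical API).

def sample_task_based(sample_fn, task_id, n):
    # One conditioning signal for the whole task.
    return [sample_fn(cond=task_id) for _ in range(n)]

def sample_class_guided(sample_fn, task_classes, n_per_class):
    # Condition on every class label in the task, as done since this update.
    return [sample_fn(cond=c) for c in task_classes for _ in range(n_per_class)]
```

Class-guided sampling matches how class-conditional FID is normally computed, which is why the metrics obtained this way are more accurate than the earlier task-based ones.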
- [Aug. 25, 2024]: Experiments were conducted on the MNIST dataset using the Non-CL, Ensemble, Naive, ER, and A-GEM methods.
- [Sept. 08, 2024]: We have implemented all methods on the MNIST dataset except Few-shot C-LoRA, and conducted detailed hyperparameter experiments. The optimal Mean Incremental FIDs are shown below:
| Method | C-LoRA | Non-CL | Ensemble | NCL | GR | ER_512 | ER_5120 | A-GEM_512 | A-GEM_5120 | KD | EWC | L2 | SI | MAS |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Mean Incremental FID | - | 4.22 | 6.63 | 60.64 | 7.53 | 65.88 | 42.54 | 64.54 | 49.13 | 61.48 | 66.76 | 78.26 | 86.49 | 297.22 |

It is evident that certain continual-classification methods do not yield significant positive effects in generative scenarios.
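For reference, a Mean Incremental FID like the one reported above can be computed from the per-task metrics saved during training. The definition assumed here (after each task, average the FIDs over all tasks seen so far, then average those values across tasks) follows common continual-learning practice, but is an assumption rather than the repo's exact formula.

```python
# Sketch of a Mean Incremental FID computation (assumed definition, see above).

def mean_incremental_fid(metrics, num_tasks):
    """metrics[(t, s)] = FID on task s after training through task t."""
    incremental = []
    for t in range(num_tasks):
        # Average FID over all tasks seen after finishing task t.
        seen = [metrics[(t, s)] for s in range(t + 1)]
        incremental.append(sum(seen) / len(seen))
    # Average the incremental values across all task checkpoints.
    return sum(incremental) / len(incremental)
```

For example, with two tasks where the FIDs are 10.0 after task 1, and 20.0/10.0 on tasks 1/2 after task 2, the incremental means are 10.0 and 15.0, giving a Mean Incremental FID of 12.5.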
To run CLoG from source, follow these steps:
- Clone this repository locally.
- `cd` into the repository.
- Run `conda env create -f environment.yml` to create a conda environment named `CLoG`.
- Activate the environment with `conda activate CLoG`.
If you find our work helpful, please use the following citation.
@article{zhang2024clog,
  title={CLoG: Benchmarking Continual Learning of Image Generation Models},
  author={Haotian Zhang and Junting Zhou and Haowei Lin and Hang Ye and Jianhua Zhu and Zihao Wang and Liangcai Gao and Yizhou Wang and Yitao Liang},
  journal={arXiv preprint},
  year={2024}
}
MIT. Check `LICENSE.md`.