Reproducing the result of cifar100 #8
Comments
Thank you for reaching out. I'd like to clarify that the hyperparameters for the CIFAR-100 dataset differ slightly from CIFAR-10. We've observed that a duration of only 10 epochs is insufficient for effective unlearning on CIFAR-100 by FT. Consequently, we've adjusted the training to span 20 epochs. Additionally, we set the learning rate to 0.01 for both class-wise and random forgetting on CIFAR-100. If you encounter any further issues or have more questions, please don't hesitate to get in touch.
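For readers following along, here is a minimal sketch of what FT (fine-tuning) unlearning with the settings above might look like in PyTorch. The model, retain-set loader, and the momentum/weight-decay values are assumptions for illustration, not the repository's exact script.

```python
import torch
import torch.nn as nn

def ft_unlearn(model, retain_loader, epochs=20, lr=0.01, device="cuda"):
    """FT unlearning: continue training the original model on the retain
    set only, using the CIFAR-100 settings discussed above (20 epochs,
    lr 0.01). Momentum and weight decay are assumed values."""
    model = model.to(device).train()
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr,
                                momentum=0.9, weight_decay=5e-4)
    for _ in range(epochs):
        for images, labels in retain_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model
```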
Thank you for the new info.
I've tested using the suggested hyperparameters above. Still, the result differs by a margin from Table A3. I've used the below script to create the base model for unlearning. Are there any hyperparameters specific to CIFAR-100 when creating these models?
Thank you for your diligence in reviewing our table. Upon re-examination, we've discovered a typo in the CIFAR-100 random forgetting results; the correct MIA is 11.13, not 1.13. We've also verified that for class-wise forgetting on CIFAR-100, 10 epochs with a learning rate of 0.01 suffices, and we used this setting in our latest version. These corrections have been made in the latest version of our paper on arXiv. The differences between your results and ours for class-wise forgetting may be due to its large variance. Regarding SVHN, we consistently use 10 epochs for both class-wise and random forgetting, with learning rates set to 0.01 for class-wise and 0.04 for random forgetting. If you encounter any issues with the updated results, please do not hesitate to reach out.
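For convenience, the FT unlearning settings stated in this thread, collected in one place. This is only a summary of the discussion; any field not mentioned here (batch size, momentum, etc.) would fall back to the repository's defaults.

```python
# FT unlearning hyperparameters as stated in this thread.
FT_UNLEARN_SETTINGS = {
    ("cifar100", "classwise"): {"epochs": 10, "lr": 0.01},
    ("cifar100", "random"):    {"epochs": 20, "lr": 0.01},
    ("svhn",     "classwise"): {"epochs": 10, "lr": 0.01},
    ("svhn",     "random"):    {"epochs": 10, "lr": 0.04},
}
```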
Thank you for your kind reply!
Hello, first and foremost, I would like to express my gratitude for the work you have done.
I was trying to reproduce the CIFAR-100 results in the appendix using retrain and FT, but for some reason I couldn't reproduce the FT unlearning results.
I tested CIFAR-100 over 6 independent trials. As you can see, FT with class-wise forgetting has quite a low forget efficacy on both the sparse (pruned) and dense (unpruned) models, and sparse random data forgetting is also low.
The hyperparameters of these experiments are the same as CIFAR-10.
#4
I'm aware that you've used different learning rates on CIFAR-10 for class-wise forgetting (0.01) and random data forgetting (0.04). Are the learning rates for CIFAR-100 different?
Thank you.