Late in training, all losses converge to 0.1 and an, ap drop to 0 #46
Comments
Hi, triplet loss is an embedding method for metric optimization, so it needs a well-trained classification model first.
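For context, triplet loss pulls an anchor embedding toward a same-class positive and pushes it away from a different-class negative by at least a margin. A minimal NumPy sketch of the standard formulation (the margin of 0.1 matches the value reported in this thread; the function name is illustrative, not from the repository's code):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.1):
    """Standard triplet loss with squared L2 distances."""
    ap = np.sum((anchor - positive) ** 2)  # anchor-positive distance
    an = np.sum((anchor - negative) ** 2)  # anchor-negative distance
    # Hinge: loss is zero once the negative is farther than the
    # positive by at least the margin
    return max(ap - an + margin, 0.0)

# A well-separated triplet yields zero loss:
a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])   # close to the anchor
n = np.array([1.0, 0.0])   # far from the anchor
print(triplet_loss(a, p, n))  # 0.0
```

This is why a pretrained classifier helps: it initializes the embedding so that many triplets already have meaningful ap/an gaps, giving the hinge useful gradients.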
OK, thank you for your answer. I'll try pretraining first and see how it goes.
@Johere Hello, would it be convenient for you to share your email? I am also using triplet loss for fine-grained classification and I'm new to it. Hoping to communicate with you! Thanks!
Hello, may I ask how the final results turned out after adding the pretrained model? I'm running into the same problem and don't know how to solve it. @luhaofang @Johere
@zhangxiaopang88 |
I think this happens when the batch is too small; the original paper requires batches of more than 1600 samples. Given Caffe's GPU-memory efficiency, a batch_size like that seems infeasible to me.
@luhaofang Hello, I'm using your code with CASIA-WebFace as the training set: 10,572 classes and 400,000+ images. Because the dataset is large, I skipped softmax pretraining and trained with triplet loss directly from scratch. Late in training every loss is 0.1 with an = 0 and ap = 0, and when I test with an intermediate caffemodel, the 128-d feature vectors of all images come out identical. Why does this happen? Thanks!
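The reported symptom is consistent with embedding collapse, a known failure mode when triplet loss is trained from random initialization without pretraining or hard-negative mining: the network maps every image to the same vector, so both distances are zero and the hinge loss sits at exactly the margin. A small sketch of the arithmetic (the margin 0.1 and 128-d embedding size are taken from this thread; `triplet_loss` is an illustrative helper, not the repo's code):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.1):
    # Squared L2 distances, as in the standard triplet formulation
    ap = np.sum((anchor - positive) ** 2)
    an = np.sum((anchor - negative) ** 2)
    return max(ap - an + margin, 0.0)

# Collapsed network: every image maps to the identical 128-d vector,
# which matches "all 128-d feature vectors are the same" above.
v = np.full(128, 0.5)
print(triplet_loss(v, v, v))  # 0.1 -> ap = 0, an = 0, loss = margin
```

Note that the gradient of the loss at this point is the same for every triplet, so plain SGD cannot escape the collapsed solution; this is exactly why the maintainer suggests starting from a pretrained classification model.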