Dear author, thank you for your code and paper. I noticed that in your paper 'Multi-Head Self-Attention via Vision Transformer for Zero-Shot Learning', the attribute dimensions of CUB (102 → 312) and SUN (312 → 102) in Table 1 are wrong.
Dear vivi, do you have any idea how to visualize the attention maps as shown in the paper?
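Not from this repository — a minimal NumPy sketch of one common way to turn ViT attention weights into a heatmap: take the [CLS]-token's attention to the patch tokens from a single layer, average over heads, and reshape to the patch grid. The paper may instead use attention rollout or a different layer choice; the function name and the ViT-B/16 shapes (12 heads, 197 tokens) here are illustrative assumptions.

```python
import numpy as np

def cls_attention_map(attn, grid=14):
    """attn: (heads, tokens, tokens) attention weights from one
    transformer layer, with token 0 = [CLS]. Returns a (grid, grid)
    heatmap of [CLS]->patch attention, averaged over heads and
    min-max normalized to [0, 1]."""
    cls_to_patches = attn[:, 0, 1:]           # (heads, grid*grid)
    m = cls_to_patches.mean(axis=0)           # average over heads
    m = m.reshape(grid, grid)
    m = (m - m.min()) / (m.max() - m.min() + 1e-8)
    return m

# Toy example: random softmax-normalized attention for 12 heads and
# 197 tokens (1 [CLS] + 14x14 patches, as in ViT-B/16).
rng = np.random.default_rng(0)
logits = rng.standard_normal((12, 197, 197))
attn = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)
heatmap = cls_attention_map(attn)
print(heatmap.shape)  # (14, 14)
```

In practice the (grid, grid) map is upsampled to the input resolution (e.g. with bilinear interpolation) and overlaid on the image.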