In Table 17 of the paper, SwiGLU is listed as the FFN layer for ViT-L/14 when training from scratch. In the model card, however, only the ViT-G model trained from scratch with SwiGLU is available (the ViT-L uses an MLP FFN and appears to be the distilled version). Would it be possible to add the checkpoint for the ViT-L/14 trained from scratch?
That is, a checkpoint matching this config: https://github.com/facebookresearch/dinov2/blob/main/dinov2/configs/train/vitl14.yaml
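For reference, a minimal sketch of the architecture being requested, built with the repo's own model factory. The `ffn_layer="swiglufused"` value is an assumption based on the training config linked above (the fused SwiGLU variant); adjust it if the yaml specifies a different FFN type:

```python
# Sketch only: instantiate the from-scratch ViT-L/14 architecture with a
# SwiGLU FFN, as described in Table 17. Not an official recipe; the exact
# ffn_layer string should be checked against vitl14.yaml.
from dinov2.models.vision_transformer import vit_large

model = vit_large(
    patch_size=14,            # ViT-L/14
    ffn_layer="swiglufused",  # SwiGLU FFN (assumption: fused variant)
    block_chunks=0,           # no block chunking, inference-style layout
)
```

A checkpoint compatible with this construction (rather than the distilled MLP-FFN ViT-L) is what is being asked for.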