MVBench evaluation results #9
Comments
Hi, it could come down to two things.
Hello, thank you for your answer. May I ask which training sets, specifically, were used during QA training?
The datasets for QA training are all listed in config/instructblipbase_stllm_qa.yaml. Have you confirmed that every dataset loads correctly? If you still can't resolve it, you can email me your training log and I'll take a look.
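A quick way to confirm that every dataset loads is to try parsing each annotation file up front. Below is a minimal sketch, assuming the annotations are JSON files; the function name `check_annotations` and the name-to-path mapping (which you would collect from config/instructblipbase_stllm_qa.yaml) are illustrative, not the repo's actual API:

```python
import json

def check_annotations(ann_paths):
    """Given a mapping of dataset name -> annotation file path
    (e.g. collected from config/instructblipbase_stllm_qa.yaml),
    try to parse each JSON annotation file and report the result."""
    results = {}
    for name, path in ann_paths.items():
        try:
            with open(path) as f:
                json.load(f)
            results[name] = "ok"
        except Exception as e:
            # Record the error type so missing files and malformed
            # JSON can be distinguished at a glance.
            results[name] = f"failed: {type(e).__name__}"
    return results
```

Running this before training makes a silently missing or corrupted annotation file show up immediately, instead of surfacing later as an odd loss curve or a degraded benchmark score.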
Thank you for the email reply.
Yes, conversation_videochatgpt and caption_videochatgpt are the same data. A final loss in the 0.3–0.5 range is normal. Also, how many epochs did you run at batch size 128?
Thanks for the reply!
Hi author, I reproduced the model's training locally. I used the same training set as VideoChat2 and modified the annotations of the two datasets you mentioned (videochat1 and videochatgpt). With 4 epochs, performance on MVBench is about 51.2%, versus 54.85% when I evaluate the released model locally. Given the sizable gap, is there anything I should pay attention to during training?