After downloading the LLM model, running the LLMchat node produced no results. Checking dependencies, model integrity, and environment variables did not solve the problem. It turned out the model had been downloaded to `ComfyUI\models\hubcache` instead of `C:\Users\admin\.cache\huggingface\hub`. After deleting the model and starting ComfyUI with `run_nvidia_gpu.bat`, the model was downloaded again and everything returned to normal; the re-downloaded model is located in the `hub` folder. The issue was probably caused by using a third-party launcher.
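For anyone debugging a similar cache mismatch: `huggingface_hub` picks its download directory from environment variables before falling back to the default, so a launcher that exports `HF_HOME` (or `HF_HUB_CACHE`) silently redirects all model downloads. The function below is a minimal sketch of that documented resolution order, not the library's actual code:

```python
import os
from pathlib import Path

def resolve_hf_hub_cache(env=None):
    """Sketch of how huggingface_hub chooses its download cache directory.

    Resolution order (mirroring the library's documented behaviour):
    1. HF_HUB_CACHE              - explicit hub cache directory
    2. HF_HOME/hub               - 'hub' subfolder under a custom HF home
    3. ~/.cache/huggingface/hub  - the default location
    """
    env = os.environ if env is None else env
    if env.get("HF_HUB_CACHE"):
        return Path(env["HF_HUB_CACHE"])
    if env.get("HF_HOME"):
        return Path(env["HF_HOME"]) / "hub"
    return Path.home() / ".cache" / "huggingface" / "hub"

# With no overrides, the default user cache is used:
print(resolve_hf_hub_cache(env={}))
# A launcher exporting HF_HOME redirects downloads under its own tree
# (the path here is illustrative, not taken from any specific launcher):
print(resolve_hf_hub_cache(env={"HF_HOME": r"ComfyUI\models"}))
```

Printing these values inside the same process that runs ComfyUI shows which cache directory the nodes will actually use, which makes launcher-injected overrides easy to spot.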