
Running LLMchat yields no results #57

Open
Eikwang opened this issue Aug 21, 2024 · 0 comments

Comments


Eikwang commented Aug 21, 2024

After downloading the LLM model, running the LLMchat node produced no results. I checked the dependencies, model integrity, and environment variables, but none of that resolved it. I then noticed that the model had been downloaded to ComfyUI\models\hubcache instead of C:\Users\admin\.cache\huggingface\hub. After deleting the model and starting ComfyUI with run_nvidia_gpu.bat, the model was downloaded again and everything returned to normal; this time the model ended up in the hub folder. The problem was probably caused by using a third-party launcher.
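If the launcher was indeed redirecting the Hugging Face cache, a quick way to confirm is to print the cache-related environment variables from the same Python environment before starting ComfyUI. This is only a diagnostic sketch; the variable names below are the standard huggingface_hub settings, not anything specific to the LLMchat node:

```python
import os

# Environment variables that launchers sometimes set, which redirect
# Hugging Face downloads away from the default user cache.
for var in ("HF_HOME", "HF_HUB_CACHE", "HUGGINGFACE_HUB_CACHE", "TRANSFORMERS_CACHE"):
    print(f"{var} = {os.environ.get(var)}")

# Default location used when none of the variables above are set.
print("Default cache:", os.path.join(os.path.expanduser("~"), ".cache", "huggingface", "hub"))
```

If any of these point into the ComfyUI\models directory when using the third-party launcher but not when using run_nvidia_gpu.bat, that would explain why the model landed in a different folder.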
