
using a custom model #174

Open
jeyhunbay opened this issue Dec 10, 2024 · 2 comments

Comments

@jeyhunbay

I have a fine-tuned model that I converted to the faster-whisper format using the instructions in the faster-whisper repo. I can load the model with faster-whisper directly and it works. How do I add that model to the server?
I tried changing the WHISPER__MODEL environment variable to point to my model's path, but the model is not picked up.
Any suggestions?
Thanks!

@jeyhunbay
Author

For now I just patched list_whisper_models in faster_whisper_server/hf_utils.py to inject my custom model into the list of available models. Probably not the most elegant solution, but at least it works. I'm still looking for a better way of doing this.
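For anyone wanting to try the same workaround, here is a minimal, self-contained sketch of the idea: wrap the server's model-listing function so it also returns a custom model ID. The function body, return type, and names below (the stub `list_whisper_models`, `CUSTOM_MODEL_ID`) are assumptions standing in for the real code in faster_whisper_server/hf_utils.py, not the actual upstream implementation.

```python
# Sketch of the monkey-patch idea: inject a custom model into the list
# of available models. All names here are illustrative assumptions.

CUSTOM_MODEL_ID = "my-org/my-finetuned-whisper"  # hypothetical model ID


def list_whisper_models():
    """Stub standing in for the upstream function that lists
    faster-whisper models discovered on the Hugging Face Hub."""
    return ["Systran/faster-whisper-small", "Systran/faster-whisper-large-v3"]


_original_list_whisper_models = list_whisper_models


def patched_list_whisper_models():
    """Return the upstream models plus the injected custom model."""
    return list(_original_list_whisper_models()) + [CUSTOM_MODEL_ID]


# In a real patch you would rebind the attribute on the module, e.g.:
#   import faster_whisper_server.hf_utils as hf_utils
#   hf_utils.list_whisper_models = patched_list_whisper_models
```

The wrapper approach keeps the upstream behavior intact and only appends the extra entry, so it is less fragile than editing the function body in place.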

@fedirz
Owner

fedirz commented Dec 17, 2024

If you set the HF_TOKEN environment variable, your private model hosted on HuggingFace should appear in the model list. However, loading a local model (one not hosted on HuggingFace) is currently unsupported. I plan to add support for this, but I'm unsure when that will happen.

Thanks for using the project and creating an issue!
