I have a fine-tuned model that I converted to the faster-whisper format using the instructions in the faster-whisper repo. I can load the model with faster-whisper and it works. How can I add it to the server?
I tried changing the WHISPER__MODEL environment variable to point to my model's path, but the model is not being picked up.
Any suggestions?
Thanks!
For now I just patched list_whisper_models in faster_whisper_server/hf_utils.py to inject my custom model into the list of available models. Probably not the most elegant solution, but at least it works. I'm still looking for a better way of doing this.
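A minimal sketch of that workaround, without modifying the file itself: wrap the model-listing function so it also yields a local model path. In the real server you would patch faster_whisper_server.hf_utils.list_whisper_models; here a stand-in function simulates it (its return type, an iterable of model identifiers, is an assumption), and the model names and path are hypothetical.

```python
def list_whisper_models():
    # Stand-in for faster_whisper_server.hf_utils.list_whisper_models,
    # which enumerates faster-whisper models available on HuggingFace.
    yield "Systran/faster-whisper-small"
    yield "Systran/faster-whisper-large-v3"

# Hypothetical path to a locally converted faster-whisper model.
LOCAL_MODEL = "/models/my-finetuned-whisper"

def list_whisper_models_with_local(original=list_whisper_models):
    # Yield everything the original function finds, then inject the
    # local model so the server advertises it too.
    yield from original()
    yield LOCAL_MODEL

models = list(list_whisper_models_with_local())
print(models)  # the local model appears alongside the HuggingFace ones
```

Against the actual package, the same idea would be applied by reassigning the attribute, e.g. `faster_whisper_server.hf_utils.list_whisper_models = list_whisper_models_with_local`, before the server builds its model list.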
If you set the HF_TOKEN environment variable, your private HuggingFace model should appear in the model list. However, loading a local model (one not on HuggingFace) is currently unsupported. I plan to support this, but I am unsure when that will happen.
Thanks for using the project and creating an issue!