DragonianVoice - 1.0.1 - Native API
Most code that depends on platform-specific system APIs has been separated out.
The Tensor library and ONNX library have been decoupled, allowing you to choose which one to compile.
Smart pointers are now used instead of raw pointers.
The Logger system has been rewritten. The Logger now lets you specify a LoggerID and a LoggerLevel, and the log text format has been updated.
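A logger of this shape can be sketched as follows. This is a minimal illustration only; the class, enum, and function names here are assumptions, not the library's actual Logger API.

```cpp
#include <cstdio>
#include <string>

// Hypothetical sketch: a logger identified by an ID, filtering by level.
enum class LoggerLevel { Info = 0, Warn = 1, Error = 2, None = 3 };

inline std::string FormatLog(const std::string& id, LoggerLevel level,
                             const std::string& message) {
    static const char* Names[] = { "Info", "Warn", "Error", "None" };
    return "[" + std::string(Names[static_cast<int>(level)]) + "][" + id + "] " + message;
}

class Logger {
public:
    Logger(std::string id, LoggerLevel level) : Id(std::move(id)), Level(level) {}
    void Log(LoggerLevel level, const std::string& message) const {
        if (level < Level) return;                     // below the threshold: filtered out
        std::puts(FormatLog(Id, level, message).c_str());
    }
private:
    std::string Id;
    LoggerLevel Level;
};
```

Tagging every line with the LoggerID makes interleaved output from several model instances attributable to its source.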
The exception structure has also been rewritten. Exceptions can now, to a certain extent, track the call stack and the location where they were thrown.
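One common way to record the throw site, sketched below with a throw macro; the library's actual exception type and macro names are likely different.

```cpp
#include <stdexcept>
#include <string>

// Hypothetical sketch: an exception that embeds the file and line it was
// thrown from into its message.
class TracedException : public std::runtime_error {
public:
    TracedException(const std::string& message, const char* file, int line)
        : std::runtime_error(message + " [thrown at " + file + ":" +
                             std::to_string(line) + "]") {}
};

// The macro captures the throw site automatically at each call site.
#define THROW_TRACED(msg) throw TracedException((msg), __FILE__, __LINE__)
```

A full call-stack trace additionally requires platform support (e.g. stack-walking APIs), which is why the release notes hedge with "to a certain extent".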
Encoder (Hubert), Vocoder (Hifigan), and PE (F0Extractor - RMVPE, FCPE) models are now held as global references. This lets different Svc models share the same model, avoiding loading it multiple times or reloading it when you only change the Svc model. The API for loading these models returns a smart pointer, and the API for unloading them only decrements the global reference count of the specified model by one. It is therefore safe to call the unloading API while an Svc model is still using the model: the model is actually released only after you have called the unloading API and every Svc model that uses it has been unloaded.
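These unload semantics fall out naturally from `std::shared_ptr` reference counting. The sketch below is an assumption about the mechanism, with illustrative names; it is not the library's real cache.

```cpp
#include <map>
#include <memory>
#include <string>

// Stand-in for a loaded Hubert/Hifigan/F0Extractor model.
struct Model { std::string Name; /* weights ... */ };

// Hypothetical global cache: Load() hands out a shared_ptr, so each Svc model
// holding one keeps the model alive. Unload() only drops the cache's own
// reference; the model is destroyed when the last holder releases it.
class ModelCache {
public:
    std::shared_ptr<Model> Load(const std::string& path) {
        auto it = Cache.find(path);
        if (it != Cache.end()) return it->second;      // reuse the already-loaded model
        auto model = std::make_shared<Model>(Model{ path });
        Cache.emplace(path, model);
        return model;
    }
    void Unload(const std::string& path) {
        Cache.erase(path);                             // decrements the count by one only
    }
private:
    std::map<std::string, std::shared_ptr<Model>> Cache;
};
```

With this design, "unload" is a request rather than a command: memory is reclaimed exactly when no Svc model still depends on the shared model.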
The default audio format has been changed to PCM-float32-le, but you can still use the API with the I16 suffix to use audio in PCM-int16-le format.
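Converting between the two supported sample formats is a fixed-point scaling; the helper names below are illustrative, not part of the library's API.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// PCM-int16-le sample -> PCM-float32-le sample in [-1.0, 1.0).
inline std::vector<float> Int16ToFloat32(const std::vector<int16_t>& in) {
    std::vector<float> out(in.size());
    for (size_t i = 0; i < in.size(); ++i)
        out[i] = static_cast<float>(in[i]) / 32768.0f;
    return out;
}

// PCM-float32-le sample -> PCM-int16-le sample, clamped to avoid overflow.
inline std::vector<int16_t> Float32ToInt16(const std::vector<float>& in) {
    std::vector<int16_t> out(in.size());
    for (size_t i = 0; i < in.size(); ++i) {
        const float v = std::clamp(in[i], -1.0f, 1.0f);
        out[i] = static_cast<int16_t>(v * 32767.0f);
    }
    return out;
}
```

The clamp matters: float pipelines can transiently exceed [-1, 1], and casting such values straight to `int16_t` would wrap rather than saturate.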
Function parameter types have been replaced with distinct, named empty structures, so that the C/C++ type system and IDEs can detect mismatched arguments.
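The usual form of this pattern is forward-declared empty tag structs used as opaque handle types. The sketch below shows the idea with made-up names; the library's actual type names differ.

```cpp
#include <string>

// Forward-declared, never-defined tag structs give each handle its own
// pointer type, so passing the wrong handle is a compile-time error
// instead of a silent bug with a uniform void*.
struct SvcModelTag;       // intentionally never defined
struct VocoderModelTag;

using SvcHandle = SvcModelTag*;
using VocoderHandle = VocoderModelTag*;

// Overload resolution (and IDE tooltips) now distinguish the handles:
inline std::string DescribeHandle(SvcHandle)     { return "svc"; }
inline std::string DescribeHandle(VocoderHandle) { return "vocoder"; }
```

Calling a function that expects an `SvcHandle` with a `VocoderHandle` now fails to compile, which is exactly the "type detection" benefit the entry describes.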
Shallow diffusion now requires you to call the corresponding function explicitly, instead of setting the corresponding flag in the inference parameters and passing a pointer to the diffusion model.
Vocoder enhancement likewise now requires an explicit call to the corresponding function, instead of a flag in the inference parameters and a pointer to the vocoder model.
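The change in control flow for both stages can be sketched as follows. Every name here is hypothetical, and the stand-in transforms only make each stage observable; they do not model the real DSP.

```cpp
#include <vector>

using Audio = std::vector<float>;

inline Audio RunSvc(const Audio& input) { return input; }  // base conversion (stub)

inline Audio ShallowDiffusion(const Audio& audio) {        // stub: +1 per sample
    Audio out = audio;
    for (float& v : out) v += 1.0f;
    return out;
}

inline Audio EnhanceWithVocoder(const Audio& audio) {      // stub: x2 per sample
    Audio out = audio;
    for (float& v : out) v *= 2.0f;
    return out;
}

// Old style (removed): params.UseShallowDiffusion = true; params.Enhance = true;
// New style: the caller opts in to each stage by composing the calls itself.
inline Audio InferWithAllStages(const Audio& input) {
    Audio out = RunSvc(input);
    out = ShallowDiffusion(out);     // explicit opt-in
    out = EnhanceWithVocoder(out);   // explicit opt-in
    return out;
}
```

Making each stage an explicit call keeps the parameter struct free of model pointers and makes the processing order visible at the call site.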