1.3.6 - Bug Fixes and Enhancements
It seems that we have reached 7.4k downloads and almost 300 stars as of the time I am releasing this 🚀
Thank you very much, everyone, for downloading, starring, forking, opening discussions, and submitting bug reports or feature requests.
What's Changed
- Fixed translate with Whisper not showing results in record mode
- Fixed continuing after the model download prompt
- Fixed random crashes when using faster-whisper
- Fixed getting the frame window in record mode
- Fixed log format when clearing or changing mode
- Changed some default options
- FFmpeg is now bundled with the app using static-ffmpeg #56, thanks to @zackees for the suggestion
- Added ways to filter results to counter hallucination (see the first sketch after this list)
- Added optional Silero VAD that can be used alongside WebRTC VAD when possible in record mode (see the second sketch after this list)
- Added a setting for minimum record input length
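The result filtering is configurable in the app; purely as an illustration (not necessarily how the app filters internally), the sketch below shows one generic way to drop likely-hallucinated segments from an openai-whisper result using its per-segment `no_speech_prob` and `avg_logprob` fields. The thresholds and file name are made up:

```python
import whisper

model = whisper.load_model("small")
result = model.transcribe("audio.wav")  # "audio.wav" is a placeholder path

# Hypothetical thresholds; tune them for your own audio
MAX_NO_SPEECH_PROB = 0.6   # above this, the segment is probably silence/noise
MIN_AVG_LOGPROB = -1.0     # below this, the segment is probably garbage

kept = [
    seg["text"].strip()
    for seg in result["segments"]
    if seg["no_speech_prob"] < MAX_NO_SPEECH_PROB
    and seg["avg_logprob"] > MIN_AVG_LOGPROB
]
print(" ".join(kept))
```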
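Likewise, the Silero VAD option in record mode can be thought of as a second gate on top of WebRTC VAD. The sketch below is not the app's actual code; it assumes 16 kHz, 16-bit mono PCM chunks and uses `webrtcvad` together with the Silero model from torch.hub:

```python
import numpy as np
import torch
import webrtcvad

SAMPLE_RATE = 16000  # both VADs support 16 kHz mono 16-bit PCM

webrtc = webrtcvad.Vad(2)  # cheap frame-level gate, aggressiveness 0-3
# Silero VAD from torch.hub (downloads the model on first use)
silero, utils = torch.hub.load("snakers4/silero-vad", "silero_vad")
get_speech_timestamps = utils[0]

def chunk_has_speech(pcm: bytes, frame_ms: int = 30) -> bool:
    """Return True if a chunk of 16-bit mono PCM likely contains speech."""
    frame_len = SAMPLE_RATE * frame_ms // 1000 * 2  # bytes per frame
    frames = [pcm[i:i + frame_len] for i in range(0, len(pcm) - frame_len + 1, frame_len)]
    # Gate 1: WebRTC VAD must flag at least one frame as speech
    if not any(webrtc.is_speech(f, SAMPLE_RATE) for f in frames):
        return False
    # Gate 2: confirm with Silero VAD over the whole chunk
    audio = torch.from_numpy(np.frombuffer(pcm, dtype=np.int16).astype(np.float32) / 32768.0)
    return len(get_speech_timestamps(audio, silero, sampling_rate=SAMPLE_RATE)) > 0
```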
Full Changelog: 1.3.5...1.3.6
Notes
- Before downloading/installing, please take a look at the wiki and read the Getting Started section.
- If you previously installed Speech Translate as a module, you can update by running `pip install -U git+https://github.com/Dadangdut33/Speech-Translate.git --upgrade --force-reinstall`
- If you installed using the installer, you can download and launch the installer below to update
- If you have any suggestions or find any bugs, please feel free to open a discussion or an issue
Requirements
- Compatible OS:

| OS | Prebuilt binary | As a module |
|---|---|---|
| Windows | ✔️ | ✔️ |
| MacOS | ❌ | ✔️ |
| Linux | ❌ | ✔️ |
* Python 3.8 or later (3.11 is recommended) is required for installation as a module.
- Speaker input only works on Windows 8 and above.
- Internet connection (for translation with API)
- A capable GPU with CUDA support is recommended (the prebuilt version uses CUDA 11.8) to run each model. Each Whisper model has different requirements; for more information, check the Whisper repository directly.
| Size | Parameters | English-only model | Multilingual model | Required VRAM | Relative speed |
|---|---|---|---|---|---|
| tiny | 39 M | tiny.en | tiny | ~1 GB | ~32x |
| base | 74 M | base.en | base | ~1 GB | ~16x |
| small | 244 M | small.en | small | ~2 GB | ~6x |
| medium | 769 M | medium.en | medium | ~5 GB | ~2x |
| large | 1550 M | N/A | large | ~10 GB | 1x |
* This information is also available in the app (hover over the model selection and a tooltip with the model info will appear). Also note that when using faster-whisper, transcription is significantly faster and the required VRAM is reduced depending on usage; for more information, please visit the faster-whisper repository.
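As a rough illustration of that last point (not tied to how the app calls it internally), faster-whisper can be loaded with a quantized compute type to reduce VRAM; the model name, device, and file path below are placeholders:

```python
from faster_whisper import WhisperModel

# int8_float16 quantization cuts VRAM use on GPU; device="cpu" with
# compute_type="int8" also works if no CUDA GPU is available
model = WhisperModel("medium", device="cuda", compute_type="int8_float16")

segments, info = model.transcribe("audio.wav", vad_filter=True)
print(f"Detected language: {info.language} (p={info.language_probability:.2f})")
for seg in segments:
    print(f"[{seg.start:.2f}s -> {seg.end:.2f}s] {seg.text}")
```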