multiple speaker compatibility #191

Hi, so first of all, great work! The diarization works great for me on audio files with fewer than 3 speakers. Given an audio file with close to or more than 8 speakers, I get a very good transcription, but still only 3 speaker labels are assigned to the 8+ people (it just labels them all as speaker 0 to speaker 2). Changing the max_num_speakers variable in the config YAML doesn't seem to change that, and neither does adding a num_speakers variable to the manifest file. Is there something I'm missing? How would adjusting the speaker count work?
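For context, a minimal sketch of the kind of manifest entry being described, assuming NeMo's diarization manifest schema (one JSON object per line; the file name and values are illustrative):

```python
import json

# One entry per audio file. num_speakers is the hint mentioned above, but
# NeMo's clustering only honors it when oracle_num_speakers is enabled,
# which this pipeline does not use.
entry = {
    "audio_filepath": "meeting.wav",  # placeholder path
    "offset": 0,
    "duration": None,  # None = use the whole file
    "label": "infer",
    "text": "-",
    "num_speakers": 9,
    "rttm_filepath": None,
    "uem_filepath": None,
}

with open("input_manifest.json", "w") as f:
    f.write(json.dumps(entry) + "\n")
```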
Comments
The speaker count in the config is for evaluation purposes only; to tune the diarization performance, you need to play with the other diarizer parameters instead.
Thanks, I'll give that a try :)
Hi, thanks in advance.
Hi @famda, I haven't tinkered with these before, but it's totally trial and error.
No worries, I can run some tests with it. I just need to understand where to start. Can you guide me a little so I can play around with it? 😀
From just playing around with the shift_length and lowering it to about 0.25, I was able to detect 7 speakers in an audio file where it only detected 3 before (there are 9 actual people speaking in it). Going lower than that didn't make much of a difference but increased inference time dramatically (running locally on CPU), so I would start there. Changing the sigmoid_threshold didn't do much, but you can try that as well.
You should also try playing with the scale windows and weights.
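For reference, a minimal sketch of what overriding these knobs can look like, assuming the parameter names and layout of NeMo's diar_infer_telephonic.yaml (the config path and the values shown are illustrative, not recommendations):

```python
from omegaconf import OmegaConf

# Path assumed; adjust to wherever the telephonic config lives in your checkout.
cfg = OmegaConf.load("configs/diar_infer_telephonic.yaml")

emb = cfg.diarizer.speaker_embeddings.parameters
# Multiscale segmentation: one window / shift / weight per scale.
emb.window_length_in_sec = [1.5, 1.25, 1.0, 0.75, 0.5]
# Smaller shifts give finer temporal resolution but slower inference.
emb.shift_length_in_sec = [0.75, 0.625, 0.5, 0.375, 0.25]
emb.multiscale_weights = [1, 1, 1, 1, 1]

# The MSDD decision threshold discussed above.
cfg.diarizer.msdd_model.parameters.sigmoid_threshold = [0.7, 1.0]

OmegaConf.save(cfg, "configs/diar_infer_telephonic.yaml")
```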
Hi, how do you configure it to separate the speakers?
Hello, what exactly do you mean by separating the speakers?
Sorry for my bad English! Both sentences are categorized as "speaker0", but the first sentence is spoken by a woman and the second by a man.
Hi, I want to know whether I can use task='translate' in the Whisper_Transcription_+_NeMo_Diarization.ipynb file. I want to pass non-English (Hindi) audio to the model, get an English transcription (using task='translate'), and then perform speaker diarization. But I think because my translated transcript and my audio are in different languages (English and Hindi respectively), I am not able to achieve this. Can somebody help me? How can I perform both speaker diarization and translation of the transcription?
@francescocassini usually the default settings work as expected, but you can check my second comment above for what to change if they don't. @01Ashish I haven't tested the translate task yet, but as a starting point, you should enable word timestamps in Whisper, remove the alignment model, and see how it goes.
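As an illustration of that suggestion, a minimal sketch using faster-whisper (which this repo builds on), relying on Whisper's own word timestamps instead of a separate alignment model; the model size and file name are placeholders:

```python
from faster_whisper import WhisperModel

model = WhisperModel("medium", device="cpu", compute_type="int8")

# task="translate" makes Whisper emit English text for the Hindi audio;
# word_timestamps=True yields per-word times without an alignment model.
segments, info = model.transcribe(
    "hindi_audio.wav",
    task="translate",
    word_timestamps=True,
)

for segment in segments:
    for word in segment.words:
        print(f"[{word.start:.2f} -> {word.end:.2f}] {word.word}")
```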
I want to use a max_speaker parameter as a CLI argument, like whisperx. Do you have any plan or solution? In our case it is clear how many people speak in each audio file, so I expect the model to perform well if I assign the correct max_speaker number for each execution. However, I don't know how to do that.
You can modify this parameter in the telephonic YAML config found in the configs folder. Can you try it on an audio file where the wrong number of speakers is predicted and see if it makes a difference?
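A minimal sketch of that experiment, assuming the parameter lives at clustering.parameters.max_num_speakers as in NeMo's telephonic config (path and value are illustrative):

```python
from omegaconf import OmegaConf

# Path assumed; adjust to your checkout.
cfg = OmegaConf.load("configs/diar_infer_telephonic.yaml")

# The clustering step caps how many speakers it will ever produce.
cfg.diarizer.clustering.parameters.max_num_speakers = 10
# num_speakers hints from the manifest are only honored in oracle mode,
# which this pipeline does not use.
cfg.diarizer.clustering.parameters.oracle_num_speakers = False

OmegaConf.save(cfg, "configs/diar_infer_telephonic.yaml")
```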
@MahmoudAshraf97 I use this repository to create transcriptions with Docker on cloud services, so changing the YAML is a bit difficult in my case. Will that kind of option be supported in this repository in the future? If there is no plan, I will use a larger speaker count.
It's easy to add, not a big deal, but I want to make sure first that it actually affects inference, because it was reported earlier that this parameter affects evaluation only, which is not used here.
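If it does turn out to matter, a hypothetical CLI flag could be wired up roughly like this; the --max-speakers name and the config path are assumptions, not part of the repo today:

```python
import argparse

from omegaconf import OmegaConf

parser = argparse.ArgumentParser()
parser.add_argument("-a", "--audio", required=True, help="audio file to diarize")
parser.add_argument(
    "--max-speakers",
    type=int,
    default=8,
    help="upper bound for the clustering step (hypothetical flag)",
)
args = parser.parse_args()

cfg = OmegaConf.load("configs/diar_infer_telephonic.yaml")  # path assumed
cfg.diarizer.clustering.parameters.max_num_speakers = args.max_speakers
# ...then hand cfg to the NeMo diarizer as the pipeline already does.
```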