Fix the decoding issues #1768
base: master
Conversation
revert change
I think the best approach to completely avoid hallucination is similar to using DTW to compute token timestamps: by comparing these with the cross-attention weights, we can reliably identify anomalies whenever a hallucination occurs.
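The DTW idea above can be sketched in a few lines. This is only an illustration, not whisper.cpp code: the toy cross-attention matrix, the `suspicious_tokens` helper, and the 0.5 span threshold are all assumptions made for the example.

```python
# Sketch: run monotonic DTW over a (tokens x audio frames) cross-attention
# matrix and flag tokens whose alignment stalls over a large stretch of
# audio, which can indicate text with little acoustic support.
import numpy as np

def dtw_path(cost):
    """Monotonic DTW over a (T, F) cost matrix; returns a list of (t, f) pairs."""
    T, F = cost.shape
    acc = np.full((T + 1, F + 1), np.inf)
    acc[0, 0] = 0.0
    for t in range(1, T + 1):
        for f in range(1, F + 1):
            acc[t, f] = cost[t - 1, f - 1] + min(acc[t - 1, f - 1], acc[t - 1, f], acc[t, f - 1])
    # Backtrack from the end of the accumulated-cost matrix.
    path, t, f = [], T, F
    while t > 0 and f > 0:
        path.append((t - 1, f - 1))
        step = np.argmin([acc[t - 1, f - 1], acc[t - 1, f], acc[t, f - 1]])
        if step == 0:
            t, f = t - 1, f - 1
        elif step == 1:
            t -= 1
        else:
            f -= 1
    return path[::-1]

def suspicious_tokens(attn, max_span=0.5):
    """Flag tokens whose DTW span covers more than max_span of all frames."""
    path = dtw_path(1.0 - attn)  # high attention -> low cost
    spans = {}
    for t, f in path:
        lo, hi = spans.get(t, (f, f))
        spans[t] = (min(lo, f), max(hi, f))
    F = attn.shape[1]
    return [t for t, (lo, hi) in spans.items() if (hi - lo + 1) / F > max_span]
```

A token whose alignment path spreads over most of the audio has weak acoustic grounding, which is one plausible anomaly signal for hallucinated text.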
looks good to me, but I think @ggerganov needs to approve it
Did you run some tests?
I've done some initial testing, and the results are promising. However, I need a bit more time to conduct a comprehensive analysis. You can already notice the difference by testing a few audio files. Currently, I'm downloading the Common Voice Corpus 15.0, which is over 100 GB, so completing the testing will take a little while. Someone sent me a test file via Discord: running large-v2 on master generates a lot of duplicate content, but with this PR the output is much better. The file is copyrighted, so I cannot make it public, but you can ask him for it privately.
@bobqianic I'm very appreciative of this work and very excited to see this branch implemented, but I'm getting some bad results with weird non-speech tokens at the beginning of many files; this problem does not happen in the master branch.

Example 1:

Command:

Output of master branch @ 434b8f3 (current):
[00:00:00.000 --> 00:00:09.000] [music]

Output of this PR @ c0277e3:
[00:00:00.000 --> 00:00:07.000] Transcriber's Name Reviewer's Name

Example 2, translating fr to en. wav file: https://www.dropbox.com/scl/fi/1go0yxkr10vwhfyxs76vz/french.wav?rlkey=312gc5qmw3r31ovh003410hyb&dl=0

Command:

Output of master branch @ 434b8f3 (current):
[00:00:00.000 --> 00:00:04.000] (Music)

Output of this PR @ c0277e3:
[00:00:00.000 --> 00:00:17.000] Translation & subtitling by Quentin Dewaghe Traduction & sous-titrage par Quentin Dewaghe q.dewaghe.com

Any idea why non-speech tokens like "Transcriber's Name Reviewer's Name" are being output as speech at the beginning? Thanks again.
Thank you for letting me know. It seems the primary issue stems from my having suppressed non-speech tokens, which has resulted in symbols like
@jettoblack I've added a heuristic for detecting repetitive hallucinations, which you can disable via parameters if you prefer. Additionally, I've removed the tokens.

Output of this PR @ 476dff4:
[00:00:00.000 --> 00:00:17.000] [Music]
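As a rough illustration of what such a repetition heuristic can look like (this is a sketch, not the heuristic actually added in this PR; the n-gram lengths and repeat threshold are arbitrary assumptions):

```python
# Sketch: flag looping decoder output ("[Music] [Music] [Music] ...") by
# checking whether any n-gram of length 1..max_n repeats back-to-back at
# least `threshold` times at the end of the token sequence.
def is_repetitive(tokens, max_n=4, threshold=3):
    for n in range(1, max_n + 1):
        tail = tokens[-n:]
        if len(tail) < n:
            break  # sequence shorter than the n-gram window
        repeats = 1
        i = len(tokens) - 2 * n
        while i >= 0 and tokens[i:i + n] == tail:
            repeats += 1
            i -= n
        if repeats >= threshold:
            return True
    return False
```

In practice the decoder loop would call a check like this after each segment and trigger a fallback (e.g. higher temperature) when it fires, which is why being able to disable it via parameters matters.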
@bobqianic The repetition heuristic seems to be working well so far; I'm seeing fewer hallucinations on silent intervals. I looked at the code, and this is unrelated to the non-speech token changes, right?

I'm not so sure about the non-speech token changes. With your latest commit I see fewer cases of the problem I mentioned above, but it's still happening a lot. One example I got just now in the sg1.wav file I sent you previously on Discord:

[00:57:11.700 --> 00:57:14.700] (c) 2014 University of Georgia College of Agricultural and Environmental Sciences UGA Extension Office of Communications and Creative Services

A hallucination like ♪♪ or repeated text is far less objectionable than someone else's copyright notice or translator's notes, which is what I'm getting a lot of. This change also removes many useful tokens from the output, like quotation marks and music notes. Using the -nsnst option restores these tokens, but that makes this issue much worse; I've caught many more cases of it in many files, including in the middle of files, not just at the beginning. If these were the only two options I'd leave suppression enabled, but the master branch includes these useful tokens without this hallucination problem.

It might be helpful to compare the output of a branch with the other fixes of this PR excluding the non-speech token changes, or at least to have a way to turn those completely off and restore master branch behavior.
Yes. In situations where the model exhibits hallucinations with high confidence (avg_log_probs), this

Which branch are you using? I can't find the hallucinations you mentioned.
I was using this PR @ 476dff4, unless I did something wrong, but this was on a Mac using the Metal GPU backend, so that could make a difference. I'll retest on CPU and CUDA shortly and let you know.
Hi! @bobqianic the new version is very robust! On my test files, the main branch emits 10 hallucinations on 26 WAV files (model

But

It also gives an error on a specific file. What can we do to fix it, what do you think?

Run server command:
/usr/src/whisper.cpp-bobqianic/server -m ../../models/ggml-large-v2.bin -l ru --print-progress --print-realtime -nt -nf
whisper_init_from_file_with_params_no_state: loading model from '../../models/ggml-large-v2.bin'
whisper_model_load: loading model
whisper_model_load: n_vocab = 51865
whisper_model_load: n_audio_ctx = 1500
whisper_model_load: n_audio_state = 1280
whisper_model_load: n_audio_head = 20
whisper_model_load: n_audio_layer = 32
whisper_model_load: n_text_ctx = 448
whisper_model_load: n_text_state = 1280
whisper_model_load: n_text_head = 20
whisper_model_load: n_text_layer = 32
whisper_model_load: n_mels = 80
whisper_model_load: ftype = 1
whisper_model_load: qntvr = 0
whisper_model_load: type = 5 (large)
whisper_model_load: adding 1608 extra tokens
whisper_model_load: n_langs = 99
ggml_init_cublas: GGML_CUDA_FORCE_MMQ: no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 4070, compute capability 8.9, VMM: yes
whisper_backend_init: using CUDA backend
whisper_model_load: CUDA0 total size = 3094.49 MB (3 buffers)
whisper_model_load: model size = 3093.99 MB
whisper_backend_init: using CUDA backend
whisper_init_state: kv self size = 220.20 MB
whisper_init_state: kv cross size = 245.76 MB
whisper_init_state: compute buffer (conv) = 33.91 MB
whisper_init_state: compute buffer (encode) = 233.50 MB
whisper_init_state: compute buffer (cross) = 10.15 MB
whisper_init_state: compute buffer (decode) = 108.99 MB
whisper server listening at http://127.0.0.1:8080
Received request: 0f3657ce-6352-4cbb-a88f-b39dc6a37a34-1-14.wav
Successfully loaded 0f3657ce-6352-4cbb-a88f-b39dc6a37a34-1-14.wav
system_info: n_threads = 4 / 12 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | METAL = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | CUDA = 1 | COREML = 0 | OPENVINO = 0 |
operator(): processing '0f3657ce-6352-4cbb-a88f-b39dc6a37a34-1-14.wav' (168960 samples, 10.6 sec), 4 threads, 1 processors, lang = ru, task = transcribe, timestamps = 0 ...
Running whisper.cpp inference on 0f3657ce-6352-4cbb-a88f-b39dc6a37a34-1-14.wav
Received request: 0f3657ce-6352-4cbb-a88f-b39dc6a37a34-1-15.wav
Successfully loaded 0f3657ce-6352-4cbb-a88f-b39dc6a37a34-1-15.wav
system_info: n_threads = 4 / 12 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | METAL = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | CUDA = 1 | COREML = 0 | OPENVINO = 0 |
operator(): processing '0f3657ce-6352-4cbb-a88f-b39dc6a37a34-1-15.wav' (235200 samples, 14.7 sec), 4 threads, 1 processors, lang = ru, task = transcribe, timestamps = 0 ...
Running whisper.cpp inference on 0f3657ce-6352-4cbb-a88f-b39dc6a37a34-1-15.wav
whisper_print_progress_callback: progress = 204%
Received request: 0f3657ce-6352-4cbb-a88f-b39dc6a37a34-1-16.wav
Successfully loaded 0f3657ce-6352-4cbb-a88f-b39dc6a37a34-1-16.wav
system_info: n_threads = 4 / 12 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | METAL = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | CUDA = 1 | COREML = 0 | OPENVINO = 0 |
operator(): processing '0f3657ce-6352-4cbb-a88f-b39dc6a37a34-1-16.wav' (512000 samples, 32.0 sec), 4 threads, 1 processors, lang = ru, task = transcribe, timestamps = 0 ...
Running whisper.cpp inference on 0f3657ce-6352-4cbb-a88f-b39dc6a37a34-1-16.wav
whisper_print_progress_callback: progress = 93%
whisper_print_progress_callback: progress = 187%
Received request: 0f3657ce-6352-4cbb-a88f-b39dc6a37a34-1-18.wav
Successfully loaded 0f3657ce-6352-4cbb-a88f-b39dc6a37a34-1-18.wav
system_info: n_threads = 4 / 12 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | METAL = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | CUDA = 1 | COREML = 0 | OPENVINO = 0 |
operator(): processing '0f3657ce-6352-4cbb-a88f-b39dc6a37a34-1-18.wav' (115520 samples, 7.2 sec), 4 threads, 1 processors, lang = ru, task = transcribe, timestamps = 0 ...
Running whisper.cpp inference on 0f3657ce-6352-4cbb-a88f-b39dc6a37a34-1-18.wav
whisper_print_progress_callback: progress = 416%
...

Send file command:
curl localhost:8080/inference -H "Content-Type: multipart/form-data" -F file="@$(unknown)"
Thank you!
Hello @ukolovda, I took a look at this yesterday evening. What's missing in server.cpp is what you mentioned:

I got an output in the terminal by circumventing the print_realtime flag (instead of using a callback segment). So the model does in fact generate the output string, but for some unknown reason
Hello @felrock! Thank you!

Appending an issue with a zero-filled WAV.
The file from #1881 (a zero-filled WAV) produces a hallucination in this version too.

$ ../whisper.cpp-bobqianic/main -m ./models/ggml-large-v3.bin -l ru --threads 8 -mc 0 samples/zeroes.wav
whisper_init_from_file_with_params_no_state: loading model from './models/ggml-large-v3.bin'
whisper_model_load: loading model
whisper_model_load: n_vocab = 51866
whisper_model_load: n_audio_ctx = 1500
whisper_model_load: n_audio_state = 1280
whisper_model_load: n_audio_head = 20
whisper_model_load: n_audio_layer = 32
whisper_model_load: n_text_ctx = 448
whisper_model_load: n_text_state = 1280
whisper_model_load: n_text_head = 20
whisper_model_load: n_text_layer = 32
whisper_model_load: n_mels = 128
whisper_model_load: ftype = 1
whisper_model_load: qntvr = 0
whisper_model_load: type = 5 (large v3)
whisper_model_load: adding 1609 extra tokens
whisper_model_load: n_langs = 100
ggml_init_cublas: GGML_CUDA_FORCE_MMQ: no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 4070, compute capability 8.9, VMM: yes
whisper_backend_init: using CUDA backend
whisper_model_load: CUDA0 total size = 3094,86 MB (3 buffers)
whisper_model_load: model size = 3094,36 MB
whisper_backend_init: using CUDA backend
whisper_init_state: kv self size = 220,20 MB
whisper_init_state: kv cross size = 245,76 MB
whisper_init_state: compute buffer (conv) = 35,50 MB
whisper_init_state: compute buffer (encode) = 233,50 MB
whisper_init_state: compute buffer (cross) = 10,15 MB
whisper_init_state: compute buffer (decode) = 108,99 MB
system_info: n_threads = 8 / 12 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | METAL = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | CUDA = 1 | COREML = 0 | OPENVINO = 0 |
run: processing 'samples/zeroes.wav' (19200 samples, 1,2 sec), 8 threads, 1 processors, 5 beams + best of 5, lang = ru, task = transcribe, timestamps = 1 ...
[00:00:00.000 --> 00:00:29.980] Продолжение следует...
whisper_print_timings: load time = 781,61 ms
whisper_print_timings: fallbacks = 0 p / 0 h
whisper_print_timings: mel time = 4,81 ms
whisper_print_timings: sample time = 28,10 ms / 79 runs ( 0,36 ms per run)
whisper_print_timings: encode time = 162,31 ms / 1 runs ( 162,31 ms per run)
whisper_print_timings: decode time = 0,00 ms / 1 runs ( 0,00 ms per run)
whisper_print_timings: batchd time = 482,89 ms / 77 runs ( 6,27 ms per run)
whisper_print_timings: prompt time = 0,00 ms / 1 runs ( 0,00 ms per run)
whisper_print_timings: total time = 1502,74 ms
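The zero-filled WAV case suggests screening buffers before decoding ever starts. A minimal sketch, assuming float samples in [-1, 1]; the -35 dBFS floor is an arbitrary choice for illustration, not a whisper.cpp default, and a real pipeline would use a proper VAD:

```python
# Sketch: a pre-decode guard that skips transcription for buffers like the
# zero-filled WAV above, based on a simple RMS energy threshold.
import math

def is_effectively_silent(samples, floor_dbfs=-35.0):
    """True if the RMS level of float samples in [-1, 1] is below floor_dbfs."""
    if not samples:
        return True
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    if rms == 0.0:
        return True  # all zeros: nothing to transcribe
    return 20.0 * math.log10(rms) < floor_dbfs
```

Returning an empty transcript for such buffers avoids feeding the decoder a segment where any output at all is, by construction, a hallucination.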
What's the status of this PR? Is it safe to use?
I'm thinking about including this pull request in the R wrapper at audio.whisper. There, the current approach to handling some of the hallucinations is to use the R packages audio.vadwebrtc or audio.vadsilero to detect silences or general non-voiced signals and either

I haven't looked into the extreme details of this pull request (I only skimmed the logic that was changed in main.cpp and whisper.cpp), but would it make sense to incorporate this pull request in audio.whisper already, or are a lot of changes still to be expected here, or is this pull request going to be split into a BPE change (#1854) and a change regarding how to handle non-speech?
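Two common ways of applying a VAD result ahead of transcription can be sketched as follows. This is a toy illustration: the per-sample boolean mask is a hypothetical stand-in for audio.vadwebrtc / audio.vadsilero output, and neither helper below is part of audio.whisper.

```python
# Sketch: two strategies for using a VAD mask before handing audio to Whisper.
def zero_out_silence(samples, voiced_mask):
    """Strategy 1: keep the original timeline, but silence non-voiced samples."""
    return [s if v else 0.0 for s, v in zip(samples, voiced_mask)]

def voiced_chunks(samples, voiced_mask):
    """Strategy 2: transcribe only contiguous voiced runs, as separate chunks."""
    chunks, current = [], []
    for s, v in zip(samples, voiced_mask):
        if v:
            current.append(s)
        elif current:
            chunks.append(current)
            current = []
    if current:
        chunks.append(current)
    return chunks
```

The first strategy preserves timestamps at the cost of feeding the model long silent stretches (which can still hallucinate); the second avoids silence entirely but requires mapping chunk timestamps back to the original timeline.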
@bobqianic are you pursuing this at the moment? |
No, at least not in May. I'm really tied up with a lot of things this month. |
The best way to include Silero Voice Activity Detection in whisper.cpp is to add a third-party dependency on the onnxruntime 1.12.1 DLL and then call the Silero ONNX model. My branch has added it. Even with VAD, hallucinations on silent intervals still happen.
I recommend considering a previous Silero VAD version, namely v3.1. The current version, v4 (at the time of writing), often hallucinates speech on lengthy chunks of silent or near-silent audio. But you have to add a heavyweight dependency like onnxruntime just to run a 750 KB model. The smallest I could reduce onnxruntime.dll to was about 2.2 MB, which is still 3x the size of the Silero weights, and that requires a lengthy custom build of onnxruntime from source with reduced operator-set configs and other size-reduction options; prebuilt redistributables are easily 5-9 MB or more. I have a working Silero v3.1 implementation in pure C, but as much as I would like to suggest it as an option, the code is quite bad; I wrote it as a personal project for learning low-level neural nets.
- whisper_wrap_segment — Remove: this is too tricky
- print_realtime
- token_nosp
- Use compression ratio instead of entropy — will be addressed in separate PRs
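The compression-ratio idea mirrors OpenAI's reference implementation, which flags decoded text whose zlib compression ratio exceeds a threshold (2.4 by default in openai/whisper). A minimal sketch; the threshold is borrowed from that default and is not an existing whisper.cpp setting:

```python
# Sketch: repetitive hallucinations compress extremely well, so a high
# compression ratio on the decoded text is a cheap repetition signal.
import zlib

def compression_ratio(text: str) -> float:
    data = text.encode("utf-8")
    return len(data) / len(zlib.compress(data))

def looks_repetitive(text: str, threshold: float = 2.4) -> bool:
    # 2.4 mirrors openai/whisper's compression_ratio_threshold default.
    return compression_ratio(text) > threshold
```

Compared to an entropy measure over token probabilities, this operates on the output text itself and needs no access to decoder internals, which may be why it is the simpler heuristic to adopt.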