On version 0.4.1, CPU-only transcription consumes less CPU than GPU+CPU.

CPU only:
docker run -it -p 9090:9090 ghcr.io/collabora/whisperlive-cpu:latest

With GPU:
docker run -it --gpus all -p 9090:9090 ghcr.io/collabora/whisperlive-gpu:latest

The requested model was French / small. So I don't understand why, and with such results, why still use a GPU? ;-)
Seriously, it seems the threads are not sleeping correctly during GPU processing.
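If the suspicion is that worker threads busy-wait instead of sleeping while the GPU does the actual work, that is easy to spot: a spinning thread burns a full core even though it accomplishes nothing. A minimal, self-contained sketch of how to quantify this (plain Python stdlib, illustrative only, not WhisperLive code):

```python
import time

def cpu_utilization(work):
    """Return the ratio of process CPU time to wall-clock time while `work` runs.

    A ratio near 1.0 means one core was fully busy; a ratio near 0.0
    means the process was mostly sleeping (e.g. blocked on the GPU).
    """
    cpu_start = time.process_time()
    wall_start = time.monotonic()
    work()
    cpu = time.process_time() - cpu_start
    wall = time.monotonic() - wall_start
    return cpu / wall

def busy_wait(seconds=0.2):
    # Simulates a worker that spins in a loop instead of sleeping
    end = time.monotonic() + seconds
    while time.monotonic() < end:
        pass

busy_ratio = cpu_utilization(busy_wait)                 # close to 1.0
idle_ratio = cpu_utilization(lambda: time.sleep(0.2))   # close to 0.0
```

Running the same kind of measurement against the whisperlive-gpu container (e.g. via `docker stats`) during a transcription would show whether the CPU load comes from real work or from polling loops.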
I have the same issue