Add support for Faster Whisper #693

Open
joshuaboniface wants to merge 5 commits into master

Conversation

@joshuaboniface commented Jun 5, 2023

The Faster Whisper library purports to be significantly faster than the original OpenAI Whisper library, so add support for it as well.

It is, unfortunately, not completely identical, so we need this new module to support it. That said, 90% of it is still the same as the original Whisper, so most of it was simply copied from the existing function in __init__.py.

Note that in testing on a Pi 4 with a decent SD card, importing the "torch" library took upwards of 4 seconds, wasting valuable time just to detect CUDA support. So instead of trying to do that detection here, I added several configurable parameters from upstream so the user can enable CUDA, etc. if they want it. Unless set, these default to "auto" mode, deferring the decision to faster_whisper itself.
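
For reference, this is roughly how such options map onto faster_whisper's WhisperModel; the parameter names below come from the faster_whisper API, while the exact set exposed by this PR may differ:

```python
from faster_whisper import WhisperModel

# device="auto" lets faster_whisper/CTranslate2 pick CUDA when available and
# fall back to CPU otherwise, without this code importing torch just to probe.
model = WhisperModel("base", device="auto", compute_type="int8")

# transcribe() returns a lazy generator of segments plus audio metadata
segments, info = model.transcribe("speech.wav", beam_size=5)
text = "".join(segment.text for segment in segments)
```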

The print statement is there for debugging, to confirm the options being passed, but I don't see any sort of logger or printing elsewhere in this library. It can be removed if desired.

While testing this change, I found a major memory leak somewhere in faster_whisper (or at least in how it is called here): each run of this function would balloon memory usage in the parent process by anywhere from 10 to 100 MB and quickly result in an OOM. Despite several attempts I wasn't able to get the Python garbage collector to free this memory, so I decided instead to move the actual work into a multiprocessing (sub)Process so that the memory is truly freed after each run; the result is passed back via a multiprocessing Queue. This seems to completely solve the memory leak and does not appear to harm performance or functionality.
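
A minimal sketch of that pattern, with illustrative names (the real signature and option handling are defined in the new module, not here):

```python
import multiprocessing

def _transcribe_in_subprocess(queue, audio_path, options):
    # Heavy imports happen only in the short-lived child process, so whatever
    # memory faster_whisper leaks dies with the child instead of accumulating.
    from faster_whisper import WhisperModel

    model = WhisperModel(options["model"], device=options["device"],
                         compute_type=options["compute_type"])
    segments, _info = model.transcribe(audio_path)
    queue.put("".join(segment.text for segment in segments))

def recognize_faster_whisper(audio_path, options):
    # The parent process only coordinates the subprocess and collects the result.
    queue = multiprocessing.Queue()
    worker = multiprocessing.Process(
        target=_transcribe_in_subprocess, args=(queue, audio_path, options)
    )
    worker.start()
    result = queue.get()  # read before join() so a large result cannot deadlock
    worker.join()
    return result
```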

@joshuaboniface marked this pull request as ready for review June 5, 2023 16:57
I found during testing that faster_whisper, or at least the combination of faster_whisper and how it is being called here, had a major memory leak, causing the parent process to consume anywhere from 10-100 MB of memory for each run until it OOM'd. Despite multiple attempts I also could not get the garbage collector to clean this up, even by deleting every possible object.

So to avoid this problem, I split the work into two functions. The main recognize_faster_whisper is a stub function which simply sets up the subprocess and a queue for exchanging data, starts and joins the subprocess, and then returns the data passed back via the queue once it completes. The bulk of the work, including the import of faster_whisper, numpy, etc., is now performed in the subprocess function, allowing it to be fully terminated after each run and thus truly free the memory it was using.

In testing this works precisely as expected, with each run resulting in zero increased memory utilization in the parent process.
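
For context, calling the new recognizer from user code would look much like the other recognize_* methods; a sketch, assuming this targets the speech_recognition Recognizer API and that the keyword arguments roughly mirror the options described above (the exact signature is whatever the PR defines):

```python
import speech_recognition as sr

r = sr.Recognizer()
with sr.AudioFile("speech.wav") as source:
    audio = r.record(source)

# Hypothetical call: argument names here are illustrative, not the PR's exact API.
text = r.recognize_faster_whisper(audio, model="base", device="auto")
print(text)
```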