Failed Whisper-TensorRT engine build in docker #152

Open
erik-sv opened this issue Feb 24, 2024 · 4 comments

erik-sv commented Feb 24, 2024

Following the Whisper-TensorRT docs, I run into the error below when building the engine inside docker. The build says it cannot detect a CUDA-capable device, yet when I run nvidia-smi in the container terminal my graphics card appears as expected. I have CUDA working with faster-whisper on Windows, but this build fails. I am on Windows 11 with WSL2, and WSL2 has everything installed and can also see the GPU.
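For reference, here is a quick sanity check (a diagnostic sketch I put together, not part of the WhisperLive build scripts) that calls the CUDA driver directly through `ctypes`. It distinguishes "driver library not mounted into the container" from "driver loads but sees no device", which is the `CUDA_ERROR_NO_DEVICE` (error 100) case shown in the log below:

```python
import ctypes

def probe_cuda():
    """Return (ok, detail): ok is True if a CUDA device is visible.

    Loads libcuda directly, so it can tell apart:
      - driver library missing inside the container, vs.
      - driver present but cuInit() fails (e.g. error 100 = CUDA_ERROR_NO_DEVICE).
    """
    try:
        cuda = ctypes.CDLL("libcuda.so.1")
    except OSError:
        return False, "libcuda.so.1 not found (driver not mounted into container)"
    rc = cuda.cuInit(0)
    if rc != 0:
        return False, f"cuInit failed with error {rc}"
    count = ctypes.c_int(0)
    cuda.cuDeviceGetCount(ctypes.byref(count))
    return count.value > 0, f"{count.value} CUDA device(s) visible"

if __name__ == "__main__":
    ok, detail = probe_cuda()
    print("OK" if ok else "FAIL", "-", detail)
```

If this fails inside the container while nvidia-smi works, the driver is usually reaching the container via the NVIDIA tooling but the CUDA driver library itself is not visible to user processes.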

```
root@87f65ec0c271:/home/WhisperLive# bash scripts/build_whisper_tensorrt.sh /root/TensorRT-LLM-examples small
Requirement already satisfied: tiktoken in /usr/local/lib/python3.10/dist-packages (from -r requirements.txt (line 1)) (0.6.0)
Requirement already satisfied: datasets in /usr/local/lib/python3.10/dist-packages (from -r requirements.txt (line 2)) (2.16.1)
Requirement already satisfied: kaldialign in /usr/local/lib/python3.10/dist-packages (from -r requirements.txt (line 3)) (0.7.2)
Requirement already satisfied: openai-whisper in /usr/local/lib/python3.10/dist-packages (from -r requirements.txt (line 4)) (20231117)
Requirement already satisfied: soundfile in /usr/local/lib/python3.10/dist-packages (from -r requirements.txt (line 5)) (0.12.1)
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
Downloading small...
--2024-02-24 06:27:20--  https://openaipublic.azureedge.net/main/whisper/models/9ecf779972d90ba49c06d968637d720dd632c55bbf19d441fb42bf17a411e794/small.pt
Resolving openaipublic.azureedge.net (openaipublic.azureedge.net)... 13.107.213.70, 13.107.246.70, 13.107.246.70, ...
Connecting to openaipublic.azureedge.net (openaipublic.azureedge.net)|13.107.213.70|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 483617219 (461M) [application/octet-stream]
Saving to: 'assets/small.pt'

small.pt                       100%[=================================================>] 461.21M  42.0MB/s    in 7.8s

2024-02-24 06:27:28 (58.8 MB/s) - 'assets/small.pt' saved [483617219/483617219]

Download completed: small.pt
whisper_small
Running build script for small with output directory whisper_small
[02/24/2024-06:27:30] [TRT-LLM] [I] plugin_arg is None, setting it as float16 automatically.
[02/24/2024-06:27:30] [TRT-LLM] [I] plugin_arg is None, setting it as float16 automatically.
[02/24/2024-06:27:30] [TRT] [W] Unable to determine GPU memory usage: no CUDA-capable device is detected
[02/24/2024-06:27:30] [TRT] [W] Unable to determine GPU memory usage: no CUDA-capable device is detected
[02/24/2024-06:27:30] [TRT] [I] [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 558, GPU 0 (MiB)
[02/24/2024-06:27:30] [TRT] [E] 6: CUDA initialization failure with error: 100. Please check your CUDA installation: http://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html
Traceback (most recent call last):
  File "/root/TensorRT-LLM-examples/whisper/build.py", line 384, in <module>
    run_build(args)
  File "/root/TensorRT-LLM-examples/whisper/build.py", line 378, in run_build
    build_encoder(model, args)
  File "/root/TensorRT-LLM-examples/whisper/build.py", line 188, in build_encoder
    builder = Builder()
  File "/usr/local/lib/python3.10/dist-packages/tensorrt_llm/builder.py", line 82, in __init__
    self._trt_builder = trt.Builder(logger.trt_logger)
TypeError: pybind11::init(): factory function returned nullptr
Whisper small TensorRT engine built.
=========================================
Model is located at: /root/TensorRT-LLM-examples/whisper/whisper_small
root@87f65ec0c271:/home/WhisperLive# nvidia-smi
Sat Feb 24 06:27:46 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 545.37.02              Driver Version: 546.65       CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 4090        On  | 00000000:01:00.0  On |                  Off |
| 42%   34C    P0              60W / 315W |   2743MiB / 24564MiB |     20%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A        20      G   /Xwayland                                 N/A      |
|    0   N/A  N/A        20      G   /Xwayland                                 N/A      |
|    0   N/A  N/A        23      G   /Xwayland                                 N/A      |
+---------------------------------------------------------------------------------------+
```

ssifood commented Feb 26, 2024

Me too. Could you let me know how to build this Whisper-TensorRT setup with docker compose?
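While waiting on the maintainers: compose files do not enable GPU access by default, so a compose build/run that works for CPU can still hit error 100. The usual pattern for requesting the GPU in a compose service looks like the sketch below (the service and image names are placeholders, not taken from the WhisperLive repo):

```yaml
services:
  whisperlive:               # placeholder service name
    image: whisperlive-trt   # placeholder image name
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```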

@makaveli10
Collaborator

Hello @ssifood @erik-sv, we haven't tested the TensorRT backend on Windows yet; we will do that and get back to you. Thanks.

@makaveli10
Collaborator

@ssifood @erik-sv it would be really helpful if you could test the docker compose setup from #177 and see whether it works.

@makaveli10
Collaborator

@ssifood @erik-sv did you get a chance to test whether the PR solves the issue on your end?

Labels: none yet · 3 participants