Tensor backend core dumped #208
Fixed with
but now I am getting `TensorRT-LLM not supported: no kernel image is available for execution on the device (/root/TensorRT-LLM/cpp/tensorrt_llm/kernels/samplingTopPKernels.cu:322)`
If I try to build the TensorRT-LLM container manually, I eventually get:

GPU: Tesla T4. Note: it throws an exception.
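For what it's worth, that error usually means the binary was built without kernels for the GPU's compute capability (a Tesla T4 is sm_75). A minimal diagnostic sketch, assuming only that PyTorch is installed as in the smoke test further down the thread:

```python
# Diagnostic sketch: print the GPU's compute capability so the
# TensorRT-LLM build can be checked against it. A Tesla T4 reports
# (7, 5), i.e. sm_75; "no kernel image is available" typically means
# the kernels were not compiled for this architecture.
import torch

major, minor = torch.cuda.get_device_capability(0)
print(f"GPU 0 compute capability: sm_{major}{minor}")
```

If that architecture was left out of the build, rebuilding the container with it included is the usual fix.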
Thanks for reporting and tracking the issue; we are looking into this at our end as well.
I also ran into those issues. When you stick to TensorRT-LLM 0.7.1, you get neither the model config error (I applied the same fix as you) nor the negative dimension error (I didn't have time to look deeper into that). I have a working build in #221; feel free to give it a try.
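A quick way to confirm the pin took effect before launching the server, as a minimal sketch; it assumes `tensorrt_llm` exposes `__version__`, with `getattr` guarding the case where it does not:

```python
# Sanity-check sketch: verify the installed TensorRT-LLM matches the
# pinned 0.7.1 release before starting the server.
import tensorrt_llm

expected = "0.7.1"
actual = getattr(tensorrt_llm, "__version__", "unknown")
if actual != expected:
    raise RuntimeError(f"Expected TensorRT-LLM {expected}, found {actual}")
print(f"TensorRT-LLM {actual} matches the pin")
```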
Here is my nvidia-smi result:

Running `python -c "import torch; import tensorrt; import tensorrt_llm"` works well. When a client connects, the server gets core dumped, related to the libcudnn_cnn_infer library. Here is the related part of the log:

What could be the reason?

My Ubuntu version is 22.04, while your Docker image's Ubuntu version is 20.04. Can it be related to Ubuntu 22.04?
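One way to narrow this down is to check whether the cuDNN sub-library named in the crash can be loaded at all from the environment the server runs in. If the load below fails, the core dump is more likely a library-path or version mismatch between the 20.04 image and the host setup than a model problem. A minimal sketch, assuming the cuDNN 8.x soname from the log:

```python
# Diagnostic sketch: try to dlopen the cuDNN sub-library implicated in
# the crash. The soname libcudnn_cnn_infer.so.8 is assumed from the log
# (cuDNN 8.x); adjust for a different cuDNN major version.
import ctypes

try:
    ctypes.CDLL("libcudnn_cnn_infer.so.8")
    print("libcudnn_cnn_infer.so.8 loaded successfully")
except OSError as err:
    print(f"Failed to load cuDNN sub-library: {err}")
```

Run it inside the same container (and with the same `LD_LIBRARY_PATH`) the server uses, since the import smoke test passing does not guarantee lazily loaded sub-libraries resolve at serving time.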