I've set up my notebook on Paperspace as per the instructions in TheLastBen/PPS, aiming to run StableDiffusion XL on a P4000 GPU. However, when attempting to generate an image, I encounter a CUDA out of memory error:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 10.00 MiB (GPU 0; 7.92 GiB total capacity; 6.79 GiB already allocated; 5.69 MiB free; 7.04 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
I've followed all setup instructions to the letter and haven't deviated from the recommended settings. Despite the detailed error message, I'm unsure how to proceed to resolve this.
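The error message itself suggests one thing to try: setting `max_split_size_mb` via `PYTORCH_CUDA_ALLOC_CONF` to reduce fragmentation. A minimal sketch of what that could look like, set in the notebook's shell before launching the WebUI (the variable name and option come from the error text; the value 512 is only an illustrative split size, not a tested recommendation):

```shell
# Cap the size of cached allocation blocks to reduce fragmentation.
# 512 MiB is an example value, not a known-good setting for the P4000.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512
echo "$PYTORCH_CUDA_ALLOC_CONF"
```

Note this only helps when reserved memory is much larger than allocated memory; with 6.79 GiB already allocated out of 7.92 GiB total, the 8 GB card may simply be too small for SDXL at default settings.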
Has anyone encountered a similar issue, or have suggestions for what I should do?
I previously tried to:
- delete PyTorch and install a newer version that supports the CUDA version running on the GPU (12.0)
- use a higher PyTorch version
- use SD 1.5 instead of SDXL, which does seem to work
Thank you very much!
Thanks for the quick response and for trying to resolve the issue, but unfortunately it still doesn't work correctly.
I no longer get an error message, but when I open the WebUI and try to generate an image, it shows "In queue 1/1" for a moment and then loads forever without generating any image.
If I try to restart the WebUI, it goes into a "Reloading..." state and then nothing happens.