RAM Not Freed on CPU After Moving Model with Multiple Transformers to CUDA #126388

Open

qqlzfmn opened this issue May 16, 2024 · 0 comments

Labels
module: memory usage (PyTorch is using more memory than it should, or it is leaking memory)
module: nn (Related to torch.nn)
needs reproduction (Someone else needs to try reproducing the issue given the instructions. No action needed from user)
oncall: transformer/mha (Issues related to Transformers and MultiheadAttention)
triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

qqlzfmn commented May 16, 2024

Describe the bug

I encountered a memory issue when moving a model to CUDA with model.to('cuda'). Specifically, when a torch.nn.ModuleList or torch.nn.Sequential contains multiple instances of torch.nn.Transformer, the model appears to move to CUDA, but the corresponding CPU memory is not freed. This results in a significant RAM leak.

To Reproduce

Here is a minimal example to reproduce the issue:

import torch

# Hyperparameters matching the nn.Transformer defaults
kwargs = {
    'd_model': 512,
    'nhead': 8,
    'num_encoder_layers': 6,
    'num_decoder_layers': 6,
    'dim_feedforward': 2048,
}

# Model with multiple Transformer instances
model = torch.nn.ModuleList(
    [torch.nn.Transformer(**kwargs) for _ in range(8)]
)

# Move model to CUDA; the CPU copies of the parameters should be freed here
model = model.to('cuda')
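
For reference, here is a minimal sketch of one way to measure the leak, assuming psutil is available (psutil is not needed to reproduce the bug; it is only used to read the process's resident set size):

import gc
import os

import psutil
import torch

def rss_mb():
    # Resident set size of the current process, in MB
    return psutil.Process(os.getpid()).memory_info().rss / 1024 ** 2

kwargs = {
    'd_model': 512, 'nhead': 8, 'num_encoder_layers': 6,
    'num_decoder_layers': 6, 'dim_feedforward': 2048,
}

print(f'before build: {rss_mb():.0f} MB')
model = torch.nn.ModuleList(
    [torch.nn.Transformer(**kwargs) for _ in range(8)]
)
print(f'after build: {rss_mb():.0f} MB')

model = model.to('cuda')
gc.collect()
# Expected: RSS drops back close to the pre-build value once the CPU
# parameter storages are released. Observed: it stays high.
print(f"after .to('cuda'): {rss_mb():.0f} MB")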

Expected behavior

When calling model.to('cuda'), all parameters and buffers should be moved to CUDA and the corresponding CPU memory should be freed.

Additional Findings

  1. The issue occurs with both torch.nn.ModuleList and torch.nn.Sequential.
  2. With only one Transformer module in the ModuleList, the issue is greatly reduced: only around 200 MB of residual memory is left in RAM.
  3. When there are more than two Transformer modules in the ModuleList, the RAM leak is significant.
  4. The issue was first discovered while using the Whisper model. It was initially thought to be a bug in Hugging Face's transformers library, but it was later found to occur with any complex PyTorch model.
  5. The issue does not occur with sequences containing more than two Linear or LSTM layers.
  6. Calling gc.collect() does not resolve the issue.
  7. sys.getrefcount(model) is greater than 2 after the model is created (getrefcount's own argument accounts for one reference, and the local name for another), indicating that additional references to the model are being held; see the sketch after this list.
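
To probe finding 7, one can ask the garbage collector what is still referring to the model after the move. A minimal sketch (the exact referrers reported will vary between interpreter sessions):

import gc
import sys

import torch

model = torch.nn.ModuleList(
    [torch.nn.Transformer(d_model=512, nhead=8) for _ in range(8)]
).to('cuda')

# Baseline would be 2: the local name 'model' plus getrefcount's own argument.
print('refcount:', sys.getrefcount(model))

# List the objects the collector knows are holding references to the model.
for ref in gc.get_referrers(model):
    print(type(ref))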

Additional context

It seems the issue may be related to how torch.nn.ModuleList and torch.nn.Sequential manage the memory of multiple torch.nn.Transformer instances when the model is moved to CUDA.
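
One diagnostic that may be worth running (this is an assumption on my part, not something confirmed for this case): on glibc-based Linux systems, freed heap memory is sometimes retained by the allocator instead of being returned to the OS, which would keep the process RSS high even after PyTorch has released the CPU storages. Forcing the allocator to release its free pages would distinguish allocator caching from a true leak:

import ctypes
import gc

# glibc-only: ask malloc to return free heap pages to the OS.
# If RSS drops after this call, the memory was cached by the allocator
# rather than leaked by PyTorch. (Assumes Linux with glibc.)
libc = ctypes.CDLL('libc.so.6')
gc.collect()
libc.malloc_trim(0)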

Versions

PyTorch version: 2.2.1
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: Could not collect
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.35

Python version: 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.5.0-25-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2080 Ti
Nvidia driver version: 525.147.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 3900X 12-Core Processor
CPU family: 23
Model: 113
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU max MHz: 4672.0698
CPU min MHz: 2200.0000
BogoMIPS: 7585.91
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 384 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 6 MiB (12 instances)
L3 cache: 64 MiB (4 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected

Versions of relevant libraries:
[pip3] numpy==1.26.3
[pip3] optree==0.10.0
[pip3] torch==2.2.1
[pip3] torchaudio==2.2.1
[pip3] torchelastic==0.2.2
[pip3] torchvision==0.17.1
[pip3] triton==2.2.0
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py310h5eee18b_1
[conda] mkl_fft 1.3.8 py310h5eee18b_0
[conda] mkl_random 1.2.4 py310hdb19cb5_0
[conda] numpy 1.26.3 py310h5f9d8c6_0
[conda] numpy-base 1.26.3 py310hb5e798b_0
[conda] optree 0.10.0 pypi_0 pypi
[conda] pytorch 2.2.1 py3.10_cuda11.8_cudnn8.7.0_0 pytorch
[conda] pytorch-cuda 11.8 h7e8668a_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.2.1 py310_cu118 pytorch
[conda] torchelastic 0.2.2 pypi_0 pypi
[conda] torchtriton 2.2.0 py310 pytorch
[conda] torchvision 0.17.1 py310_cu118 pytorch

cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @bhosmer @cpuhrsch @erichan1 @drisspg

@mikaylagawarecki added the module: nn, module: memory usage, triaged, and oncall: transformer/mha labels May 20, 2024
@albanD added the needs reproduction label May 20, 2024