Issues: pytorch/pytorch
Add str type to device parameter of torch.cuda.get_device_name()
#126400
opened May 16, 2024 by
hyperkai
RAM Not Freed on CPU After Moving Model with Multiple Transformers to CUDA
#126388
opened May 16, 2024 by
qqlzfmn
Segmentation faults loading models with PyTorch v2.3.0 on Apple M2
#126385
opened May 16, 2024 by
abiesylvera
All processes running torch.distributed.destroy_process_group() create CUDA context on device 0
oncall: distributed
Add this issue/PR to distributed oncall triage queue
#126381
opened May 16, 2024 by
szmigacz
DISABLED test_ring_attention_native_transformer_is_causal_True (__main__.RingAttentionTest)
module: rocm
AMD GPU support for PyTorch
skipped
Denotes a (flaky) test currently skipped in CI.
#126380
opened May 16, 2024 by
ramcherukuri
torch.distributed sys.excepthook crashes if distributed backend was deinitialized
#126379
opened May 16, 2024 by
szmigacz
When testing the scalar version, test_open_device_registration will fail
oncall: pt2
#126372
opened May 16, 2024 by
CaoE
Broken Link and unfinished sentence in Frequently Asked Questions
#126367
opened May 16, 2024 by
angelica-moreira
Unexpected MYPY linter errors on CI
module: ci
Related to continuous integration
module: devx
Related to PyTorch contribution experience (HUD, pytorchbot)
triaged
This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
#126361
opened May 16, 2024 by
huydhn
Inlining nn modules for test dynamo/test_misc.py
module: nn
Related to torch.nn
oncall: pt2
#126355
opened May 15, 2024 by
laithsakka
Break the dependency between torch.nn and torch.distributed
oncall: distributed
Add this issue/PR to distributed oncall triage queue
triaged
This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
#126347
opened May 15, 2024 by
fegin
PyTorch: RuntimeError: device >= 0 && device < num_gpus INTERNAL ASSERT FAILED
module: cuda
Related to torch.cuda, and CUDA support in general
triaged
This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
#126344
opened May 15, 2024 by
tjasmin111
memory leak when compiling collective + view + wait()
oncall: pt2
#126338
opened May 15, 2024 by
bdhirsh
Improve discoverability of meta function registration in documentation
module: dynamic shapes
oncall: pt2
#126337
opened May 15, 2024 by
david20571015