Issues: pytorch/xla
Saving checkpoint silently hangs when including nn.Module in params
#7123, opened May 28, 2024 by dead-water
Why does my 3-layer linear graph need to output two Transposes?
#7103, opened May 23, 2024 by mars1248
upsample_bilinear2d HLO returns unexpected data-type. [xla:gpu]
#7095, opened May 22, 2024 by ysiraichi
[torchbench] timm_nfnet training failing on non-dynamo. [xla:gpu]
#7084, opened May 20, 2024 by ysiraichi
Mismatch between XLA Tensor and PyTorch Native Tensor Results for torch.matmul in FP16 Precision on NVIDIA GPU
#7077, opened May 17, 2024 by lausannel
Export nn.Module.forward with kwargs to StableHLO [stablehlo: StableHLO related work]
#7056, opened May 13, 2024 by johnmatter
The behavior of torch.einsum significantly differs between TPU and other devices.
#7050, opened May 13, 2024 by jqhoogland
[torchbench] The official benchmark for performance and accuracy check
#7040, opened May 9, 2024 by shenh10
Migrate PyTorch/XLA's gradient checkpointing to upstream one [nostale: Do not consider for staleness]
#7024, opened May 3, 2024 by JackCaoG
Encountering out-of-memory errors despite using modest model and batch sizes.
#6948, opened Apr 20, 2024 by seanswyi