The nn.RNNBase.flatten_parameters function should be a no-op during export.
Otherwise, export()/dynamo_export fails inside it with this error:
```
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/rnn.py", line 196, in <setcomp>
    unique_data_ptrs = {p.data_ptr() for p in self._flat_weights}
RuntimeError: Cannot access data pointer of Tensor (e.g. FakeTensor, FunctionalTensor). If you're using torch.compile/export/fx, it is likely that we are erroneously tracing into a custom kernel. To fix this, please wrap the custom kernel into an opaque custom op. Please see the following for details: https://docs.google.com/document/d/1W--T6wz8IY8fOI0Vm8BF44PdBgs283QvpelJZWieQWQ
```
Here is a short repro. Monkey-patching nn.RNNBase.flatten_parameters to a no-op fixes the issue; the real fix should be a conditional inside flatten_parameters that skips the flattening fast path while tracing.
```python
import torch
import torch.nn as nn

def noop(self):
    pass

# This fixes the issue:
# nn.RNNBase.flatten_parameters = noop

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        # Same failure for LSTM and GRU
        self.rnn = nn.RNN(16, 16, batch_first=True)

    def forward(self, x):
        return self.rnn(x)

device = torch.device('cuda')
model = Model().to(device)
x = torch.rand(1024, 20, 16).to(device)

# This does not help - fails the same way inside export()
# model = torch.export.export(model, (x,)).run_decompositions()

onnx_program = torch.onnx.dynamo_export(model, x)
onnx_program.save('model.onnx')
```
Versions
PyTorch nightly 05/14
cc @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4