
LogP Example: "TypeError: embedding(): argument 'indices' (position 2) must be Tensor, not list" #38

Open
gmseabra opened this issue Oct 19, 2020 · 4 comments

@gmseabra

Hi,

I'm re-running the LogP example with the current version of PyTorch, and execution stops in the reinforcement loop with a TypeError, as shown below. Are you aware of any change in PyTorch that could be responsible, and is there a known fix?

Thanks!

for i in range(n_iterations):
    for j in trange(n_policy, desc='Policy gradient...'):
        cur_reward, cur_loss = RL_logp.policy_gradient(gen_data)
        rewards.append(simple_moving_average(rewards, cur_reward)) 
        rl_losses.append(simple_moving_average(rl_losses, cur_loss))
    
    plt.plot(rewards)
    plt.xlabel('Training iteration')
    plt.ylabel('Average reward')
    plt.show()
    plt.plot(rl_losses)
    plt.xlabel('Training iteration')
    plt.ylabel('Loss')
    plt.show()
        
    smiles_cur, prediction_cur = estimate_and_update(RL_logp.generator, 
                                                     my_predictor, 
                                                     n_to_generate)
    print('Sample trajectories:')
    for sm in smiles_cur[:5]:
        print(sm)
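(For context: the `simple_moving_average` helper used above comes from the ReLeaSE demo utilities. A minimal sketch of such a smoother, assuming a windowed running mean; the window size and exact form are illustrative, not necessarily the repository's verbatim code:)

```python
def simple_moving_average(previous_values, new_value, ma_window_size=10):
    """Average new_value together with up to the last (ma_window_size - 1)
    previously recorded values, giving a smoothed training curve."""
    window = previous_values[-(ma_window_size - 1):] if ma_window_size > 1 else []
    return (sum(window) + new_value) / (len(window) + 1)
```

With an empty history it simply returns the new value, so it can seed the `rewards` and `rl_losses` lists on the first iteration.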

with the error below:

Policy gradient...:   0%|          | 0/15 [00:00<?, ?it/s]
./release/data.py:98: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  return torch.tensor(tensor).cuda()
Policy gradient...:   0%|          | 0/15 [00:00<?, ?it/s]

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-37-7a3a9698cf0c> in <module>
      1 for i in range(n_iterations):
      2     for j in trange(n_policy, desc='Policy gradient...'):
----> 3         cur_reward, cur_loss = RL_logp.policy_gradient(gen_data)
      4         rewards.append(simple_moving_average(rewards, cur_reward))
      5         rl_losses.append(simple_moving_average(rl_losses, cur_loss))

~/work/li/leadopt/generator/ReLeaSE/release/reinforcement.py in policy_gradient(self, data, n_batch, gamma, std_smiles, grad_clipping, **kwargs)
    117                     reward = self.get_reward(trajectory[1:-1],
    118                                              self.predictor,
--> 119                                              **kwargs)
    120 
    121             # Converting string of characters into tensor

<ipython-input-33-a8c049e9e937> in get_reward_logp(smiles, predictor, invalid_reward)
      1 def get_reward_logp(smiles, predictor, invalid_reward=0.0):
----> 2     mol, prop, nan_smiles = predictor.predict([smiles])
      3     if len(nan_smiles) == 1:
      4         return invalid_reward
      5     if (prop[0] >= 1.0) and (prop[0] <= 4.0):

~/work/li/leadopt/generator/ReLeaSE/release/rnn_predictor.py in predict(self, smiles, use_tqdm)
     62                 self.model[i]([torch.LongTensor(smiles_tensor).cuda(),
     63                                torch.LongTensor(length).cuda()],
---> 64                               eval=True).detach().cpu().numpy())
     65         prediction = np.array(prediction).reshape(len(self.model), -1)
     66         prediction = np.min(prediction, axis=0)

/opt/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    720             result = self._slow_forward(*input, **kwargs)
    721         else:
--> 722             result = self.forward(*input, **kwargs)
    723         for hook in itertools.chain(
    724                 _global_forward_hooks.values(),

~/work/source/repos/OpenChem/openchem/models/Smiles2Label.py in forward(self, inp, eval)
     41         else:
     42             self.train()
---> 43         embedded = self.Embedding(inp)
     44         output, _ = self.Encoder(embedded)
     45         output = self.MLP(output)

/opt/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    720             result = self._slow_forward(*input, **kwargs)
    721         else:
--> 722             result = self.forward(*input, **kwargs)
    723         for hook in itertools.chain(
    724                 _global_forward_hooks.values(),

~/work/source/repos/OpenChem/openchem/modules/embeddings/basic_embedding.py in forward(self, inp)
      7 
      8     def forward(self, inp):
----> 9         embedded = self.embedding(inp)
     10         return embedded

/opt/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    720             result = self._slow_forward(*input, **kwargs)
    721         else:
--> 722             result = self.forward(*input, **kwargs)
    723         for hook in itertools.chain(
    724                 _global_forward_hooks.values(),

/opt/anaconda3/lib/python3.7/site-packages/torch/nn/modules/sparse.py in forward(self, input)
    124         return F.embedding(
    125             input, self.weight, self.padding_idx, self.max_norm,
--> 126             self.norm_type, self.scale_grad_by_freq, self.sparse)
    127 
    128     def extra_repr(self) -> str:

/opt/anaconda3/lib/python3.7/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
   1812         # remove once script supports set_grad_enabled
   1813         _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1814     return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
   1815 
   1816 

TypeError: embedding(): argument 'indices' (position 2) must be Tensor, not list
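(The failure mode can be reproduced in isolation: recent PyTorch versions require the indices passed to `nn.Embedding` to be a Tensor, while the forward pass here ends up handing it a Python list. A hedged sketch of the reproduction and the usual remedy, converting indices with `torch.as_tensor` before the embedding call; variable names are illustrative, not OpenChem's actual code:)

```python
import torch
import torch.nn as nn

emb = nn.Embedding(num_embeddings=10, embedding_dim=4)

indices = [[1, 2, 3], [4, 5, 0]]  # plain Python list of token ids
# emb(indices) raises in current PyTorch:
# TypeError: embedding(): argument 'indices' (position 2) must be Tensor, not list

idx = torch.as_tensor(indices, dtype=torch.long)  # convert before the call
out = emb(idx)
print(out.shape)  # torch.Size([2, 3, 4])
```

The related UserWarning from `release/data.py:98` hints at the same cleanup: replacing `torch.tensor(tensor).cuda()` with `tensor.clone().detach().cuda()` follows PyTorch's recommended way of copy-constructing from an existing tensor.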


@isayev (Owner) commented Oct 19, 2020 via email

@gmseabra (Author) commented Oct 20, 2020 via email

@isayev (Owner) commented Oct 20, 2020 via email

@gmseabra (Author) commented Oct 20, 2020 via email
