missing stuff in torch.nn.utils #1243

Open
7 of 16 tasks
yueyinqiu opened this issue Feb 27, 2024 · 5 comments

Comments

@yueyinqiu (Contributor) commented Feb 27, 2024

  • clip_grad_norm_
  • clip_grad_value_
  • convert_conv2d_weight_memory_format
  • fuse_conv_bn_eval
  • fuse_conv_bn_weights
  • fuse_linear_bn_eval
  • fuse_linear_bn_weights
  • parameters_to_vector
  • parametrizations
  • remove_spectral_norm
  • remove_weight_norm
  • rnn
  • skip_init
  • spectral_norm
  • stateless
  • vector_to_parameters

  • rnn is checked because the class does exist in TorchSharp, but not all of the methods inside have been implemented.
@shaltielshmid (Contributor)

Hi @yueyinqiu!
Thanks for writing up the diff. I went through all the methods/modules that are missing in TorchSharp, and they are all pure PyTorch code that doesn't appear in LibTorch (the underlying C++ library). This means we would need to rewrite all of these methods ourselves in TorchSharp, which can take time to do properly.
Do you have any specific methods that are more important for you in the short term? If so, I can dedicate some time to porting those over first.

All contributions are more than welcome, so if you want to port some of the functions as well, that would be great!

[Side note: the clip_grad_norm function in PyTorch is deprecated and just calls clip_grad_norm_, which already exists in TorchSharp.]
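
For reference, here is a minimal TorchSharp sketch of clipping gradients with clip_grad_norm_ inside a training step. The model, data, and hyperparameters are made-up placeholders, and the argument order is assumed to mirror PyTorch's (parameters, max_norm):

```csharp
using TorchSharp;
using static TorchSharp.torch;

// Placeholder model and optimizer, just so there are gradients to clip.
var model = nn.Linear(10, 1);
var optimizer = optim.SGD(model.parameters(), 0.01);

var input = randn(32, 10);
var target = randn(32, 1);

optimizer.zero_grad();
var loss = nn.functional.mse_loss(model.forward(input), target);
loss.backward();

// Rescale all gradients so their combined L2 norm is at most 1.0,
// the same role torch.nn.utils.clip_grad_norm_ plays in PyTorch.
nn.utils.clip_grad_norm_(model.parameters(), 1.0);

optimizer.step();
```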

@yueyinqiu (Contributor, Author) commented Feb 28, 2024

Well, in fact I'm just a newbie to deep learning, and I was trying to reproduce someone else's work that used spectral_norm. But I later found that they skipped all the spectral_norms in their final configuration, so it's not that urgent and I'm just making a note here.

Actually, I'd like to contribute to the project. It's so great to be able to use C# instead of Python. However, I'm afraid that my unfamiliarity with deep learning and PyTorch will mess everything up :(

(Please feel free to edit my list, e.g. just remove clip_grad_norm here if you think that's appropriate.)

@yueyinqiu (Contributor, Author)

I've tried to add fuse_conv_bn_weights and fuse_linear_bn_weights in #1262. Could you please take a look and give me some advice?
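
For anyone following along, these two helpers implement the standard eval-mode BatchNorm folding. A rough TorchSharp sketch of the conv case (illustrative only, not the code actually merged in #1262; tensor names, shapes, and the default eps are my own assumptions):

```csharp
using static TorchSharp.torch;

// Fold an eval-mode BatchNorm into the preceding convolution's weight and bias.
// convWeight: [outC, inC, kH, kW]; convBias, bnMean, bnVar, bnWeight, bnBias: [outC].
static (Tensor weight, Tensor bias) FuseConvBn(
    Tensor convWeight, Tensor convBias,
    Tensor bnMean, Tensor bnVar, Tensor bnWeight, Tensor bnBias,
    double eps = 1e-5)
{
    // Per-output-channel scale: gamma / sqrt(running_var + eps)
    var scale = bnWeight / sqrt(bnVar + eps);

    // Scale each output channel of the kernel...
    var fusedWeight = convWeight * scale.reshape(-1, 1, 1, 1);

    // ...and fold the running mean and the BN shift into the bias.
    var fusedBias = (convBias - bnMean) * scale + bnBias;

    return (fusedWeight, fusedBias);
}
```

The linear case is analogous: the per-channel scale multiplies the rows of the weight matrix instead of the conv kernel's output channels.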

@NiklasGustafsson (Contributor)

@yueyinqiu -- I just merged your PR. Please edit the to-do list in the first comment to reflect your changes.

@yueyinqiu (Contributor, Author)

Yes, and thanks a lot!
