
About the size of the model on a different dataset #19

Open
Joscelin-666 opened this issue Jun 29, 2023 · 2 comments

Comments

@Joscelin-666

Hi there, I read the paper and was excited to see a method that is lightweight and efficient. However, when I tried it on my own dataset, the size of the model surged to 600 GB! Do you have any idea why? Is it because my dataset has 128 channels and 2500 sampling points? I am not that familiar with how the code works, so I wonder if you can help me :)

@eeyhsong
Owner

eeyhsong commented Jul 6, 2023

Hello @Joscelin-666,
Is that 600 GB on the GPU?
You may use a larger kernel in the convolution module to reduce the scale of the data and also capture better features for the Transformer module.
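For context, a rough back-of-the-envelope sketch of why this helps (editorial illustration; the kernel and stride values below are hypothetical and not taken from this repository): the convolution module downsamples the 2500 time points into a shorter token sequence, and self-attention builds a tokens × tokens score matrix, so its memory cost grows with the square of that sequence length.

```python
def conv_out_len(n_samples, kernel, stride):
    """Output length of a 1-D convolution/pooling layer (no padding)."""
    return (n_samples - kernel) // stride + 1

n = 2500  # sampling points per trial, from the question above

# Hypothetical settings, purely to show the scaling effect:
small = conv_out_len(n, kernel=25, stride=5)    # small kernel -> 496 tokens
large = conv_out_len(n, kernel=125, stride=25)  # larger kernel -> 96 tokens

# Attention memory scales roughly with tokens**2, so the larger kernel
# cuts the attention-map footprint by about (496 / 96)**2 ≈ 27x.
print(small, large, (small / large) ** 2)
```

The same reasoning applies to pooling layers: any increase in effective stride before the Transformer shrinks the sequence it attends over quadratically in memory.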

@Joscelin-666
Author

Yes, that's 600 GB on my GPU. Thank you for the suggestion!
