
Memory not Freed on line 131 of mo_gaal.py #514

Open
CodeSmileBot opened this issue Jul 6, 2023 · 0 comments

Hello!

I found an AI-Specific Code smell in your project.
The smell is called: Memory not Freed

You can find more information about it in this paper: https://dl.acm.org/doi/abs/10.1145/3522664.3528620.

According to the paper, the smell is described as follows:

Problem: If the machine runs out of memory while training the model, the training will fail.
Solution: Deep learning libraries provide APIs to alleviate out-of-memory issues. TensorFlow's documentation suggests calling clear_session() inside the loop when models are created in a loop. Meanwhile, the pytorch-styleguide GitHub repository recommends using .detach() to free a tensor from the graph whenever possible; .detach() prevents unnecessary operations from being recorded and therefore saves memory. Developers should check whether they use these kinds of APIs to free memory wherever possible in their code.
Impact: Memory Issue

Example:

```python
### TensorFlow
import tensorflow as tf
for _ in range(100):
+   tf.keras.backend.clear_session()
    model = tf.keras.Sequential([tf.keras.layers.Dense(10) for _ in range(10)])
```
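The .detach() side of the suggestion can be sketched in the same spirit. The loop below is a hypothetical PyTorch example (the model, data, and accumulator are illustrative, not code from pyod):

```python
import torch

# Illustrative model and training loop, not code from pyod.
model = torch.nn.Linear(10, 1)
criterion = torch.nn.MSELoss()
running_loss = torch.tensor(0.0)
for _ in range(100):
    x = torch.randn(32, 10)
    y = torch.randn(32, 1)
    loss = criterion(model(x), y)
    loss.backward()
    # Without .detach(), accumulating `loss` would keep each
    # iteration's autograd graph alive and memory would grow.
    running_loss += loss.detach()
```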

You can find the code related to this smell in this link:

pyod/pyod/models/mo_gaal.py

Lines 121 to 141 in 795c741

```python
epochs = self.stop_epochs * 3
stop = 0
latent_size = X.shape[1]
data_size = X.shape[0]
# Create discriminator
self.discriminator = create_discriminator(latent_size, data_size)
self.discriminator.compile(
    optimizer=SGD(lr=self.lr_d, momentum=self.momentum),
    loss='binary_crossentropy')
# Create k combine models
for i in range(self.k):
    names['sub_generator' + str(i)] = create_generator(latent_size)
    latent = Input(shape=(latent_size,))
    names['fake' + str(i)] = names['sub_generator' + str(i)](latent)
    self.discriminator.trainable = False
    names['fake' + str(i)] = self.discriminator(names['fake' + str(i)])
    names['combine_model' + str(i)] = Model(latent,
                                            names['fake' + str(i)])
    names['combine_model' + str(i)].compile(
        optimizer=SGD(lr=self.lr_g,
                      momentum=self.momentum),
```
.
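Applied to a loop like the one quoted above, the suggestion would look roughly as follows. This is only a sketch under assumed names (the loop count and Sequential models below are illustrative, not the pyod code):

```python
import tensorflow as tf

# Sketch only: when several Keras models are built in a loop,
# clearing the global session between iterations releases graph
# state left over from previously built models.
models = []
for i in range(3):
    tf.keras.backend.clear_session()
    models.append(tf.keras.Sequential([tf.keras.layers.Dense(4)]))
```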

I also found instances of this smell in other files, such as:

.

I hope this information is helpful!
