According to the paper, the smell is described as follows:
Problem
If the machine runs out of memory while training the model, the training will fail.
Solution
Deep learning libraries provide APIs that help alleviate out-of-memory issues. TensorFlow's documentation suggests calling clear_session() inside the loop when models are created in a loop. Likewise, the pytorch-styleguide GitHub repository recommends using .detach() to free a tensor from the computation graph whenever possible; .detach() prevents unnecessary operations from being recorded and therefore saves memory. Developers should check whether they use these APIs to free memory whenever possible in their code.
Impact
Memory Issue
Example:
### TensorFlow
```python
import tensorflow as tf

for _ in range(100):
    # Suggested fix: clear the global Keras state on every iteration so
    # that graphs from previously created models can be garbage-collected.
    tf.keras.backend.clear_session()
    model = tf.keras.Sequential([tf.keras.layers.Dense(10) for _ in range(10)])
```
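### PyTorch

The .detach() recommendation can be sketched similarly. This is a minimal, hypothetical fragment (the tensor names are illustrative, not from the reported code) showing .detach() used to accumulate a loss value without keeping the autograd graph alive:

```python
import torch

x = torch.ones(3, requires_grad=True)
running_loss = 0.0
for _ in range(5):
    loss = (x * 2).sum()  # recorded in the autograd graph
    # .detach() returns a tensor that shares storage with `loss` but is
    # cut from the graph, so accumulating it does not record extra
    # operations or retain intermediate buffers across iterations.
    running_loss += loss.detach().item()
# running_loss == 30.0
```

Without .detach() (or .item()), each iteration would keep a reference into that iteration's graph, preventing its memory from being freed.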
Hello!
I found an AI-Specific Code smell in your project.
The smell is called: Memory not Freed
You can find more information about it in this paper: https://dl.acm.org/doi/abs/10.1145/3522664.3528620.
You can find the code related to this smell in this link:
pyod/pyod/models/mo_gaal.py, lines 121 to 141 (commit 795c741)
I also found instances of this smell in other files, such as:
I hope this information is helpful!