Memory usage with Detector1 pipeline #8454

Open
stscijgbot-jp opened this issue Apr 30, 2024 · 0 comments

@stscijgbot-jp
Collaborator

Issue JP-3610 was created on JIRA by Maria Pena-Guerrero:

Several Help Desk tickets have either directly stated, or investigation has shown, that the Detector1 pipeline uses too much memory.

Both repos, jwst and stdatamodels, would probably have to be modified. In the stdatamodels repository, the culprit seems to be the open function: when given a datamodel to open, it creates a copy or "clone", but the original is still referenced, so the previous object cannot be garbage collected. This increases the number of copies and the memory used across the different steps of Detector1 in the jwst repo. For example, calling `model2 = RampModel(model)` (as is done at the beginning of Detector1, during charge_migration, etc.) creates a new model (`model2 is not model`), but it references the same data (`model2.data is model.data`). This links the primary memory load for both models and, probably more importantly, the underlying ASDF objects are the same (`model2._asdf is model._asdf`). Because the _asdf tree effectively references everything in the model, the lifetimes of the two models are linked, which may contribute to the growing memory load.
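
A minimal sketch, assuming a local ramp file named `ramp.fits` (hypothetical) and the current stdatamodels import layout, that reproduces the aliasing described above; the checks mirror the expressions quoted in this report:

```python
from stdatamodels.jwst.datamodels import RampModel

model = RampModel("ramp.fits")      # original model, holding the data arrays and ASDF tree
model2 = RampModel(model)           # "clone" created from an existing datamodel

print(model2 is model)              # False -- a distinct model instance is created
print(model2.data is model.data)    # True  -- but it shares the same underlying data array
print(model2._asdf is model._asdf)  # True  -- and the same ASDF tree (private attribute)

# Because model2 still references the original's data and ASDF tree, deleting the
# original model does not allow that memory to be garbage collected while model2 is alive.
del model
```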

Here are some of the tickets in question (more may be added as they show up):
