STORE_RECV_CHUNKED_DATASET #770
stefanoostwegel started this conversation in Query/Retrieve
-
Yeah, it's just as simple as setting the configuration option to `True`. You may want to run some sort of cleanup action on the temporary directory used by Python as well, just in case.
-
Hello,
We have a tool that transfers many DICOM series concurrently.
This could be up to 20 concurrent threads.
In the 2.0 release I see a new configuration item:
https://pydicom.github.io/pynetdicom/dev/reference/generated/pynetdicom._config.STORE_RECV_CHUNKED_DATASET.html
```
pynetdicom._config.STORE_RECV_CHUNKED_DATASET: bool = False

Chunk a dataset file when receiving it to minimise memory usage.

New in version 2.0.

If True, then when receiving C-STORE requests as an SCP, don't decode the
dataset and instead write the raw data to a temporary file in the DICOM File
Format. The path to the dataset is available to the evt.EVT_C_STORE handler
using the Event.dataset_path attribute. This should minimise the amount of
memory required when:

- Receiving large datasets
- Receiving many datasets concurrently

Default: False

Examples

from pynetdicom import _config
_config.STORE_RECV_CHUNKED_DATASET = True
```
This sounds like a very nice feature for us to use, since our SCP is under quite a lot of stress.
Is it just as simple as setting this flag to True? Or does it require more post-processing or anything else after that?
I'm not quite sure whether I can implement it this easily, or whether I need to do anything extra with the data after receiving it.