Add note about concurrency setting to throttle-upload setting #5137

Open
bjoernmartin wants to merge 1 commit into master
Conversation

bjoernmartin

I had trouble getting the upload throttle to work. The forum post https://forum.duplicati.com/t/upload-throttle-not-working/8955/7 helped me, so I thought that information could also go into the application's documentation.

If this is acceptable, I can contribute this change to the documentation as well.

@ts678 (Collaborator) commented Mar 28, 2024

> I had trouble getting the upload throttle to work.

Can you provide details of your testing? It looks like the intention was to scale the throttle down by the concurrency number:

// concurrency: 0 or less means unlimited
m_maxConcurrentUploads = m_options.AsynchronousConcurrentUploadLimit <= 0 ? int.MaxValue : m_options.AsynchronousConcurrentUploadLimit;
// per-stream throttle: the total limit divided by the number of concurrent uploads
m_initialUploadThrottleSpeed = m_options.AsynchronousConcurrentUploadLimit <= 0 ? int.MaxValue : m_options.MaxUploadPrSecond / m_maxConcurrentUploads;

// when the configured throttle changes, redistribute it across the streams
if (maxUploadPerSecond != m_lastUploadThrottleSpeed)
{
    m_lastUploadThrottleSpeed = maxUploadPerSecond;
    m_initialUploadThrottleSpeed = maxUploadPerSecond / m_maxConcurrentUploads;
    m_progressUpdater.UpdateThrottleSpeeds(maxUploadPerSecond);
}

although a developer should probably render an opinion on whether the code is correct, and whatever subtle things it's doing.
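
For example, if the division works as intended, throttle-upload=1MB with asynchronous-concurrent-upload-limit=10 should cap each stream at roughly 100 KB/s, so the ten streams together would still total about 1 MB/s.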

@ts678 (Collaborator) commented Mar 30, 2024

Confirming that something isn't working as the above code chunks seem to intend. I backed up a 300 MB source file to a local folder with throttle-upload=1MB/sec. At asynchronous-concurrent-upload-limit=1, the total time was 310 seconds (which seems right), but at asynchronous-concurrent-upload-limit=10, it was 73 seconds. If that's a bug (a developer would need to confirm), maybe a code fix would be better than a documentation note.
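
(For scale: 300 MB at 1 MB/sec should take about 300 seconds, so 310 seconds is on target, while 73 seconds works out to roughly 4 MB/sec, about four times the configured limit.)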

@ts678 (Collaborator) commented Mar 30, 2024

It still looks like a bug to me. Thanks for finding it. Would you like to file an issue describing how you tested? If not, I suppose I could.
Using asynchronous-concurrent-upload-limit=1 does seem to produce a reasonable result. Higher settings can hugely fail to throttle.
Testing at 100 Kbits/s (both up and down, because the two have been confused before) and with backup-test-samples=0, I got 2899 seconds for 1 concurrent upload (very reasonable for 286 MB of destination data), 1924 for 2, 92 for 3, and (oddly) 190 for 4.

I was watching file writes with Sysinternals Process Monitor. At the higher concurrencies it seems to just blast out a whole dblock. When throttling actually works, you can see it pausing between writes. This is all it can do -- it can't finely control the output rate.

These tests were done with a 10 MB remote volume size. One can also study a log with a tag filter, or a profiling log, to see lines like:

2024-03-30 17:26:31 -04 - [Profiling-Duplicati.Library.Main.Operation.Backup.BackendUploader-UploadSpeed]: Uploaded 9.92 MB in 00:00:01.9478856, 5.09 MB/s

Such lines track individual files, and they are certainly far off from 100 Kbits/second.

@kenkendk (Member) commented Apr 3, 2024

I think the intention is that the upload throttle should be observed across all threads. However, the implementation only looks at a single stream and limits it to a fraction of the total allowed bandwidth.

A better solution would be to implement a shared throttle mechanism that has some kind of fair distribution of the allowed bandwidth.

Looking at the measurements done by @ts678, it seems that the throttling does work for some settings. I think there may be some lower limit that we hit when dividing, which causes unexpected throttle values to take effect.
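
To illustrate the idea, here is a minimal sketch of a shared token-bucket throttle (the class and member names are made up for illustration, not existing Duplicati APIs):

using System;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical shared token-bucket throttle: all upload streams draw
// bytes from a single bucket, so the limit is global, not per-stream.
public sealed class SharedThrottle
{
    private readonly long m_bytesPerSecond;
    private readonly object m_lock = new object();
    private double m_available;               // byte quota currently available
    private DateTime m_lastRefill = DateTime.UtcNow;

    public SharedThrottle(long bytesPerSecond)
    {
        m_bytesPerSecond = bytesPerSecond;
        m_available = bytesPerSecond;         // allow up to one second of burst
    }

    // Waits until `count` bytes may be sent; called before each chunk write.
    public async Task WaitForQuotaAsync(int count, CancellationToken token)
    {
        TimeSpan wait = TimeSpan.Zero;
        lock (m_lock)
        {
            var now = DateTime.UtcNow;
            // refill the bucket for the elapsed time, capped at one second of quota
            m_available = Math.Min(m_bytesPerSecond,
                m_available + (now - m_lastRefill).TotalSeconds * m_bytesPerSecond);
            m_lastRefill = now;

            // spend the quota; a negative balance is a debt paid off by waiting
            m_available -= count;
            if (m_available < 0)
                wait = TimeSpan.FromSeconds(-m_available / m_bytesPerSecond);
        }
        if (wait > TimeSpan.Zero)
            await Task.Delay(wait, token).ConfigureAwait(false);
    }
}

Each concurrent upload would call await throttle.WaitForQuotaAsync(buffer.Length, token) before writing a chunk; because all streams draw from the same bucket, the aggregate rate stays near the configured limit no matter how many uploads run.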

@bjoernmartin (Author)

Thanks for your input @ts678 and @kenkendk! And sorry for not getting back to this earlier.

I assume it would make sense to create a new issue that reports this as a bug, right? Especially since this PR no longer makes sense in that case. I'll do that if you agree.

@ts678 (Collaborator) commented Apr 26, 2024

In my opinion it's a bug, although I hope I didn't just test it badly. You could see whether you can reproduce the odd results I got, possibly testing over some network destination. Possibly the workaround still helps, but the help text doesn't usually cover workarounds...

@bjoernmartin (Author)

I certainly got results similar to yours. Setting asynchronous-concurrent-upload-limit to 1 allowed me to limit upload bandwidth to the 150 kb/sec I was aiming for (due to the extremely low bandwidth at one location I occasionally work from). My test was always to trigger a backup after adding a ~2 GB file to the location being backed up, and once that worked I was satisfied. I did not perform thorough testing or even a comparison with other parameter sets, sorry.

If you think the addition to the documentation is still useful, we can keep this PR open. I'd still create a separate issue for the problem of asynchronous-concurrent-upload-limit > 1 not observing the bandwidth limit across all concurrent uploads.

@kenkendk (Member)

@bjoernmartin I agree. It appears the documentation change describes the intended behavior, but the implementation is not doing a great job of observing the limit.
