
Add max size limit to requests for bulk import #996

Open
wants to merge 1 commit into main

Conversation

jpr5 commented Jul 19, 2021

This commit adds a new parameter, `max_size`, in bytes, which is used to
enforce an upper limit on the overall HTTP POST size. This is useful
when trying to maximize bulk import speed by reducing the number of
roundtrips needed to retrieve and send data.

This is needed in scenarios where there is no control over
Elasticsearch's maximum HTTP request payload size. For example, AWS's
Elasticsearch offering has either a 10 MiB or 100 MiB HTTP request
payload size limit.

`batch_size` is good for bounding local runtime memory usage, but when
indexing large sets of big objects, it is entirely possible to hit a
service provider's underlying request size limit and kill the import
mid-run. This is even worse when `force` is true: the index is then left
in an incomplete state, with no obvious value to dial `batch_size` down
to in order to sneak under the limit.

`max_size` defaults to `10_000_000` bytes, to catch the worst-case
scenario on AWS.
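To sketch the intent (illustrative only, not the code in this PR; the `each_bulk_chunk` helper, the document shape, and the chunking details are hypothetical), a byte-size cap can be enforced alongside the document-count cap while assembling `_bulk` request bodies:

```ruby
require 'json'

# Minimal sketch: chunk documents for the _bulk API by both document count
# (batch_size) and serialized payload size in bytes (max_size). Not the
# actual implementation in this PR.
def each_bulk_chunk(docs, batch_size: 1_000, max_size: 10_000_000)
  chunk = []
  bytes = 0

  docs.each do |doc|
    # Each document contributes an action line plus a source line.
    lines = [
      { index: { _id: doc[:id] } }.to_json,
      doc.to_json
    ]
    line_bytes = lines.sum(&:bytesize) + lines.size # +1 byte per trailing newline

    # Flush the current chunk if adding this doc would exceed either limit.
    if !chunk.empty? && (chunk.size / 2 >= batch_size || bytes + line_bytes > max_size)
      yield chunk.join("\n") + "\n"
      chunk = []
      bytes = 0
    end

    chunk.concat(lines)
    bytes += line_bytes
  end

  yield chunk.join("\n") + "\n" unless chunk.empty?
end

# Hypothetical usage: send each chunk as one HTTP POST to the _bulk endpoint.
docs = (1..5).map { |i| { id: i, title: "doc #{i}", body: "x" * 100 } }
each_bulk_chunk(docs, batch_size: 2, max_size: 1_000) do |body|
  puts "POST /_bulk (#{body.bytesize} bytes)"
end
```

Each yielded chunk would then be sent as a single HTTP POST, so no request exceeds the provider's payload limit regardless of how many documents fit under `batch_size`.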

cla-checker-service bot commented Jul 19, 2021

💚 CLA has been signed

jpr5 (Author) commented Jul 19, 2021

Signed the agreement.

stale bot added the stale label on Jan 8, 2022
elastic deleted a comment from the stale bot on Jan 12, 2022
stale bot removed the stale label on Jan 12, 2022
jpr5 (Author) commented Apr 21, 2023

Well, I'm willing to look at and fix the failures, but I can't see the test failure details anymore...
