"internal error" stream state when using gzip compression on 3.0-dev7 #2530
Just a quick note: I've tried recompiling haproxy with USE_SLZ=1 instead of USE_ZLIB=1, but it didn't change anything. And for some reason, 2.9.7 is also affected by this.
I isolated the same config on another machine and couldn't reproduce the issue. Something about our production seems to have a special fairy dust for these kinds of things :-)
Thanks @felipewd. This kind of error is most probably reported by the H1 multiplexer because the message chunking appears to be invalid. It is hard to know whether the issue is on the HTTP compression filter side or the H1 multiplexer side. You may enable traces on H1 to print only errors:
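A minimal sketch of how such H1 error-only traces can be enabled on the runtime CLI (the socket path and sink name here are assumptions; check the management documentation for your version):

```
# send trace commands to the HAProxy admin socket, e.g. via socat
echo "trace h1 sink buf0; trace h1 level error; trace h1 start now" | socat stdio /var/run/haproxy.sock

# later, read the captured events from the ring buffer
echo "show events buf0" | socat stdio /var/run/haproxy.sock
```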
Don't forget to stop the traces at the end. You have observed the issue on 3.0-dev7 and 2.9.7. Do you know if 3.0-dev6 or 2.9.6 are also affected?
hey @capflam, thanks for the quick reply. We were able to collect these traces:
I'll test dev6 and 2.9.6 now and report back.
Just a quick follow-up: dev6 and 2.9.6 are also affected by this symptom. It seems easy enough to trigger in production, if you'd like me to test something else. I don't know how to collect anything more precise here; even for those traces I'm not 100% sure they are for this specific request, since we have a ton of other requests on this server.
Thanks @felipewd. So it is probably related to your issue, because these traces will result in a …
I have no reproducer, but by reading the code I found a way to produce this behavior. Your traces really helped me understand it. Can you try the attached patch, on top of 3.0-dev7? 0001-BUG-MEDIUM-stconn-Don-t-forward-channel-data-if-inpu.patch.txt
hey @capflam I can confirm this fixes the issue. Great and fast work, as usual :-) Thanks! |
A bit too quick, the fix was not pushed :)
…filtered Once data are received and placed in a channel buffer, if it is possible, outgoing data are immediately forwarded. But we must take care to not do so if there is also pending input data and a filter registered on the channel. It is especially important for HTX streams because the HTX may be altered, especially the extra field. And it is indeed an issue with the HTTP compression filter and the H1 multiplexer. The wrong chunk size may be announced leading to an internal error. This patch should fix the issue #2530. It must be backported to all stable versions.
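The rule stated in the commit message can be illustrated with a short sketch. This is not HAProxy's actual code: the struct and function names below are hypothetical, and the real implementation lives in the stream-connector layer.

```c
#include <stdbool.h>
#include <stddef.h>

/* Illustrative model of the channel state described in the commit
 * message; field names are assumptions, not HAProxy's real ones. */
struct channel_state {
    size_t pending_input;    /* received data not yet analyzed */
    bool   filter_attached;  /* e.g. the HTTP compression filter */
};

/* Outgoing data may be fast-forwarded only when no registered filter
 * could still alter the pending input (for HTX, e.g. the extra field
 * that drives the chunk size the H1 multiplexer announces). */
static bool may_fast_forward(const struct channel_state *chn)
{
    if (chn->pending_input > 0 && chn->filter_attached)
        return false;  /* defer: let the filter process the input first */
    return true;
}
```

In other words, the bug was forwarding data while a filter still had unprocessed input to rewrite, so the H1 multiplexer announced a chunk size that no longer matched the filtered payload.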
Detailed Description of the Problem
We have a backend generating a huge, almost-40MB m3u8 file for streaming, so we tried using the `compression` filter, which seems easy enough. The response without compression seems ideal, with a declared content-length:
When using the compression filter, we get a truncated response with:
We were able to pull an `httplog` entry with a weird stream termination state. I'm a complete novice regarding this filter, but this state, `ID--`, seems to suggest something isn't right. This backend uses stick-tables to prevent abusers from accessing this huge URL many times; other than that, it is pretty straightforward.
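The setup described above can be sketched as a minimal backend configuration. This is a hypothetical reconstruction, not the reporter's actual config: the names, address, MIME types, and stick-table parameters are assumptions.

```
backend streaming_be
    # compression of the large m3u8 response, as described above
    filter compression
    compression algo gzip
    compression type application/vnd.apple.mpegurl application/x-mpegURL

    # stick-table rate limiting on the huge URL, as mentioned
    stick-table type ip size 100k expire 10m store http_req_rate(10m)

    server s1 192.0.2.10:8080
```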
Expected Behavior
Compression to deliver a full valid response.
Steps to Reproduce the Behavior
Not sure... we're using a backend server that doesn't reply with a compressed response (similar to when `compression offload` is used) and applying the compression filter on that response.
Do you have any idea what may have caused this?
Nope
Do you have an idea how to solve the issue?
Not at all
What is your configuration?
Output of `haproxy -vv`
Last Outputs and Backtraces
No response
Additional Information
No response