This repository has been archived by the owner on Feb 24, 2024. It is now read-only.

timeout on request? max post data? #939

Open
u007 opened this issue Feb 25, 2018 · 10 comments
Labels
proposal A suggestion for a change, feature, enhancement, etc s: triage Some tests need to be run to confirm the issue
Milestone

Comments

@u007
Member

u007 commented Feb 25, 2018

hi

is it possible to set a timeout on every request? can this be done in middleware? an example would be highly appreciated :)

and also a post form size limit?

thank you

@stanislas-m stanislas-m added the question The issue's author needs more information label Feb 25, 2018
@markbates
Copy link
Member

markbates commented Feb 25, 2018 via email

@u007
Member Author

u007 commented Feb 26, 2018

i think there should be a default timeout, settable in the context.
currently, due to db pool limits (e.g. Google Cloud SQL caps you at 100 concurrent connections), the server hangs forever when there are not enough db connections. even after apache bench or the client stops, the server is still stuck there, with none of the db connections being released

@markbates
Member

markbates commented Feb 26, 2018 via email

@u007
Member Author

u007 commented Feb 26, 2018

i'm glad to help. i've created a middleware, and it works perfectly as long as it does not hit the db pool limit.
putting a sleep in the handler shows it times out correctly.

i can see the timeout activating in the middleware, but i could not release the db connection. do you have any idea how i could release it?
the code below does not seem to work.

	duration := 5 * time.Second
	ctx, cancel := context.WithTimeout(c.Request().Context(), duration)
	defer cancel()

	go func() {
		select {
		case <-time.After(duration):
			// Deadline reached before the handler finished.
			if c.Request().Close {
				c.Logger().Debugf("Connection already closed")
			} else if wr, ok := c.Response().(http.Hijacker); ok {
				conn, _, err := wr.Hijack()
				if err != nil {
					c.Logger().Errorf("unable to hijack connection: %v", err)
					return
				}
				c.Logger().Debugf("Closing Connection - timeout")
				conn.Close()
			}
			// Try to release the transaction either way.
			if tx, ok := c.Value("tx").(*pop.Connection); ok {
				tx.Close()
			}
			models.DB.Rollback(func(tx *pop.Connection) {})
		case <-ctx.Done():
			// Request finished (or was cancelled) before the deadline.
		}
	}()

@markbates
Member

I don't know why you're having connection limit problems. Looking at that code snippet, I'm guessing you're using websockets, or something based on the Hijack code. It's probably related to that. You might want to stop using the pop transaction middleware, assuming you are, and manage those things yourself.

@u007
Member Author

u007 commented May 16, 2018

@markbates you are right, the pop middleware causes the problem. whether or not an action uses a transaction, it seems to open a new connection per request and therefore hangs

i'm wondering why there is no timeout for transaction requests

@homanchou

Is it because statement timeouts need to be set inside Postgres?

@CWharton

The statement timeout will not fix this issue. I had to set idle_in_transaction_session_timeout in order to keep Buffalo from eating up all the connections. Also, I only see this issue in production.
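For reference, these are server-side Postgres settings, not Buffalo ones. A sketch of setting them per database (the setting names are standard Postgres GUCs; the database name and values are illustrative):

```sql
-- Kill statements that run too long (noted above as not enough on its own).
ALTER DATABASE myapp_production SET statement_timeout = '30s';

-- Reclaim connections that sit idle inside an open transaction;
-- this is the setting reported to stop the pool from filling up.
ALTER DATABASE myapp_production SET idle_in_transaction_session_timeout = '60s';
```

Both take effect for new sessions on that database; existing connections keep their old values until they reconnect.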

@github-actions

github-actions bot commented Aug 7, 2021

This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

@github-actions

This issue was closed because it has been stalled for 5 days with no activity.

@sio4 sio4 added proposal A suggestion for a change, feature, enhancement, etc s: triage Some tests need to be run to confirm the issue and removed question The issue's author needs more information labels Sep 26, 2022
@sio4 sio4 added this to the Backlog milestone Sep 26, 2022
@sio4 sio4 reopened this Sep 26, 2022
@sio4 sio4 modified the milestones: Backlog, Proposal Sep 26, 2022
6 participants