BCA method and alpha defaults #26

Open
mwort opened this issue Aug 3, 2020 · 2 comments


mwort commented Aug 3, 2020

Hi @cgevans, thanks for this great scikit. I have questions regarding the implementation of the BCa method and the alpha argument to the ci function; I apologize if they stem from an insufficient understanding of bootstrapping.

The BCa method seems to take N + n_samples function evaluations rather than just n_samples, I believe because it computes the jackknife replicates (and their mean) with N function calls. This gets annoying if you have many empirical observations (e.g. N ~ n_samples). Efron and Tibshirani (1994) [0] state that their implementation uses "little more effort than for the percentile intervals" (p. 178). A bit of a long shot, but isn't there a way of reusing the previous evaluations?
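
For reference, my understanding of where the extra N calls come from is roughly this (a sketch, not the package's actual code): the acceleration term is built from leave-one-out (jackknife) replicates of the statistic, so statfunction runs once per observation on top of the n_samples bootstrap calls.

```python
import numpy as np

def bca_acceleration(data, statfunction):
    """Jackknife estimate of the BCa acceleration (the standard formula from
    E&T ch. 14). Needs one statfunction call per observation, i.e. N extra
    evaluations on top of the bootstrap replicates."""
    data = np.asarray(data)
    n = len(data)
    jack_stats = np.array([statfunction(np.delete(data, i)) for i in range(n)])
    jack_mean = jack_stats.mean()
    num = ((jack_mean - jack_stats) ** 3).sum()
    den = 6.0 * ((jack_mean - jack_stats) ** 2).sum() ** 1.5
    return num / den

# e.g. bca_acceleration(np.random.randn(100), np.mean) is close to 0 for a
# roughly symmetric sample
```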

And how come the percentiles default to alpha/2, 1 - alpha/2 rather than just alpha, 1 - alpha (e.g. as in E&T eq. 13.5, p. 171)? Wouldn't a typical 5-95% CI have an alpha of 0.05 (rather than 0.1)?
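
To illustrate what I mean by the two conventions (plain arithmetic, not the package's API):

```python
alpha = 0.05
# convention used by ci here: a central 1 - alpha interval
print(100 * alpha / 2, 100 * (1 - alpha / 2))  # 2.5 97.5
# E&T eq. 13.5 convention: a 1 - 2*alpha interval
print(100 * alpha, 100 * (1 - alpha))          # 5.0 95.0
```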

[0] https://books.google.de/books?id=gLlpIUxRntoC&lpg=PR14&ots=A9DxX6J6G6&dq=efron%20tibshirani%20bootstrap&lr&pg=PA171#v=onepage&q&f=false

@aizvorski

@mwort Regarding "typical 5-95% CI": I believe a "95% CI" is 95% wide, i.e. between the 2.5th and 97.5th percentiles. Is there a reference that points to the other convention?
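
For example, the usual normal-theory 95% interval covers the central 95% of the distribution (a quick check, assuming scipy is available):

```python
from scipy.stats import norm

lo, hi = norm.interval(0.95)  # central 95% of a standard normal
print(lo, hi)                 # roughly -1.96 1.96, i.e. the 2.5th and 97.5th percentiles
```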

cgevans (Owner) commented May 26, 2022

@aizvorski It does appear that that part of Efron and Tibshirani uses $1-2\alpha$ for the percentile interval, not $1-\alpha$. The R boot package avoids confusion here by using conf instead of alpha as a parameter name.
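
A rough sketch of the relationship between the two parameterisations (illustration only, not either package's API; the function name here is made up):

```python
def central_interval_percentiles(conf):
    """Percentile endpoints of a central `conf` interval; with alpha = 1 - conf
    this is the (alpha/2, 1 - alpha/2) pair discussed above."""
    alpha = 1.0 - conf
    return 100 * alpha / 2, 100 * (1 - alpha / 2)

print(central_interval_percentiles(0.95))  # ~(2.5, 97.5)
```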

A bit of a long shot, but isn't there a way of reusing the previous evaluations?

I'm reasonably sure this isn't possible. E&T are likely saying that for the more usual case where $n_{samples} \gg N$.
