`bcapar` computes parametric bootstrap confidence intervals for a real-valued parameter \(\theta\) in a p-parameter exponential family. The algorithm is described in Section 4 of Efron and Narasimhan (2018), listed in the references below.
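As a brief recap of the notation (standard exponential-family conventions, given here for orientation rather than quoted from the reference): the sufficient vector \(b\) is modeled by a p-parameter exponential family

\[
f_\beta(b) = e^{\beta' b - \psi(\beta)} f_0(b),
\]

with \(\theta = t(\beta)\) the real-valued parameter of interest; the rows of `bb` below are the bootstrap realizations of \(b\).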
```r
bcapar(t0, tt, bb, alpha = c(0.025, 0.05, 0.1, 0.16), J = 10, K = 6,
       trun = 0.001, pct = 0.333, cd = 0, func)
```
| Argument | Description |
|---|---|
| `t0` | Observed estimate of theta, usually by maximum likelihood. |
| `tt` | A vector of parametric bootstrap replications of theta, of length B, the number of bootstrap samples (usually large, say 2000). |
| `bb` | A B by p matrix whose rows are the natural sufficient vectors of the bootstrap samples, p being the dimension of the exponential family. |
| `alpha` | Percentiles desired for the bca confidence limits. Only values below 0.5 need to be provided; the matching upper percentiles are computed automatically. |
| `J, K` | Parameters controlling the jackknife estimates of Monte Carlo error: the replications are split into J groups, and the jackknife standard errors are averaged over K random such splits. |
| `trun` | Truncation parameter used in the calculation of the acceleration a. |
| `pct` | Proportion of "nearby" b vectors used in the calculation of the gradient vector of theta. |
| `cd` | If cd is 1, the bca confidence density is also returned; see Section 11.6 of Efron and Hastie (2016) below. |
| `func` | Function \(\hat{\theta} = func(b)\). If this is not missing, the output also includes abc estimates; see DiCiccio and Efron (1992) below. |
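The example at the bottom of this page builds `tt` and `bb` for a regression model without supplying `func`. The following is a separate, hypothetical sketch of how `func` might be provided in a very simple one-parameter family; all object names are illustrative, and it assumes a one-column `bb` is acceptable.

```r
library(bcaboot)

## Hypothetical one-parameter Poisson setup: theta is the Poisson mean,
## the sufficient statistic is sum(x), and func maps b to theta-hat = b / n.
set.seed(42)
x  <- rpois(40, lambda = 3)                 # illustrative observed sample
n  <- length(x)
t0 <- mean(x)                               # MLE of theta
B  <- 2000
x.star <- matrix(rpois(B * n, lambda = t0), nrow = B)   # parametric bootstrap samples
tt <- rowMeans(x.star)                      # bootstrap replications of theta-hat
bb <- matrix(rowSums(x.star), ncol = 1)     # B x 1 matrix of sufficient vectors
theta.fn <- function(b) b / n               # theta-hat as a function of b
res <- bcapar(t0 = t0, tt = tt, bb = bb, func = theta.fn)
res$lims      # bca limits plus standard and abc limits
res$abcstats  # abc estimates of a and z0
```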
The returned value is a named list with the following components:

- `lims`: bca confidence limits (first column) and the standard limits (fourth column), plus the abc limits (fifth column) when `func` is provided. The second column, `jacksd`, gives the jackknife estimates of Monte Carlo error for the bca limits; the third column, `pct`, is the proportion of the replicates `tt` falling below each bca limit.
- `stats`: estimates and their jackknife Monte Carlo errors: `theta` = \(\hat{\theta}\); `sd`, the bootstrap standard deviation of \(\hat{\theta}\); `a`, the acceleration estimate; `az`, another acceleration estimate that depends less on extreme values of `tt`; `z0`, the bias-correction estimate; `A`, the big-A measure of raw acceleration; `sdd`, the delta-method estimate of the standard deviation of \(\hat{\theta}\); and `mean`, the average of `tt`.
- `abcstats`: the abc estimates of `a` and `z0`, returned only if `func` was provided.
- `ustats`: the bias-corrected estimator `ustat = 2 * t0 - mean(tt)`, an estimate `sdu` of its sampling error, jackknife estimates of Monte Carlo error for both `ustat` and `sdu`, and `B`, the number of bootstrap replications.
- `seed`: the random number state, for reproducibility.
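For orientation, `z0` and `a` from `stats` enter the reported limits through the usual bca construction of Efron (1987): writing \(\hat{G}\) for the cdf of the bootstrap replications `tt`, \(\Phi\) for the standard normal cdf, and \(z^{(\alpha)}\) for its \(\alpha\) quantile,

\[
\hat{\theta}_{\mathrm{bca}}[\alpha] = \hat{G}^{-1}\!\left( \Phi\!\left( z_0 + \frac{z_0 + z^{(\alpha)}}{1 - a\,(z_0 + z^{(\alpha)})} \right) \right).
\]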
DiCiccio T and Efron B (1996). Bootstrap confidence intervals. Statistical Science 11, 189-228.

DiCiccio T and Efron B (1992). More accurate confidence intervals in exponential families. Biometrika 79, 231-245.

Efron B (1987). Better bootstrap confidence intervals. Journal of the American Statistical Association 82, 171-200.

Efron B and Hastie T (2016). Computer Age Statistical Inference. Cambridge University Press.

Efron B and Narasimhan B (2018). Automatic construction of bootstrap confidence intervals.
```r
library(bcaboot)
data(diabetes, package = "bcaboot")
X <- diabetes$x
y <- scale(diabetes$y, center = TRUE, scale = FALSE)
lm.model <- lm(y ~ X - 1)                      # full linear model on the centered response
mu.hat <- lm.model$fitted.values
sigma.hat <- stats::sd(lm.model$residuals)
t0 <- summary(lm.model)$adj.r.squared          # observed estimate of theta
y.star <- sapply(mu.hat, rnorm, n = 1000, sd = sigma.hat)  # parametric bootstrap responses
tt <- apply(y.star, 1, function(y) summary(lm(y ~ X - 1))$adj.r.squared)  # bootstrap theta-hats
b.star <- y.star %*% X                         # bootstrap sufficient vectors
set.seed(1234)
bcapar(t0 = t0, tt = tt, bb = b.star)
#> $call
#> bcapar(t0 = t0, tt = tt, bb = b.star)
#>
#> $lims
#>             bca      jacksd        pct       std
#> 0.025 0.4495533 0.004360959 0.01027297 0.4478806
#> 0.05  0.4547124 0.003702610 0.02087321 0.4573189
#> 0.1   0.4664863 0.001751501 0.04392895 0.4682007
#> 0.16  0.4716968 0.001860690 0.07467737 0.4767998
#> 0.5   0.4998849 0.001698831 0.31227571 0.5065862
#> 0.84  0.5305638 0.003062928 0.69854528 0.5363726
#> 0.9   0.5395649 0.002293021 0.79476844 0.5449718
#> 0.95  0.5516410 0.002763821 0.88762824 0.5558536
#> 0.975 0.5604463 0.004183640 0.94064895 0.5652919
#>
#> $stats
#>         theta           sd          a         az          z0          A
#> est 0.5065862 0.0299524125 0.02954242 0.02339717 -0.24558952 0.03906097
#> jsd 0.0000000 0.0006807075 0.01156657 0.01858144  0.04102731 0.03315569
#>             sdd         mean
#> est 0.027584401 0.5147768144
#> jsd 0.002478029 0.0009644795
#>
#> $ustats
#>            ustat         sdu    B
#> est 0.4983956209 0.031430197 1000
#> jsd 0.0009644795 0.004770472    0
#>
#> attr(,"class")
#> [1] "bcaboot"
```
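As a small follow-up (reusing the objects from the example above, and assuming the returned components are matrices with the row and column names shown in the printed output), the limits and bias-corrected estimate can be pulled out directly:

```r
res <- bcapar(t0 = t0, tt = tt, bb = b.star)
res$lims[c("0.025", "0.975"), "bca"]  # two-sided 95% bca interval
res$stats["est", "z0"]                # bias-correction estimate z0
2 * t0 - mean(tt)                     # ustat, the bias-corrected estimator
res$ustats["est", "ustat"]            # the same value, as returned in ustats
```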