rma.uni.Rd
Function to fit meta-analytic equal-, fixed-, and random-effects models and (mixed-effects) meta-regression models using a linear (mixed-effects) model framework. See below and the introduction to the metafor-package for more details on these models.
rma.uni(yi, vi, sei, weights, ai, bi, ci, di, n1i, n2i, x1i, x2i, t1i, t2i,
m1i, m2i, sd1i, sd2i, xi, mi, ri, ti, fi, pi, sdi, r2i, ni, mods, scale,
measure="GEN", data, slab, subset,
add=1/2, to="only0", drop00=FALSE, intercept=TRUE,
method="REML", weighted=TRUE, test="z",
level=95, btt, att, tau2, verbose=FALSE, digits, control, ...)
rma(yi, vi, sei, weights, ai, bi, ci, di, n1i, n2i, x1i, x2i, t1i, t2i,
m1i, m2i, sd1i, sd2i, xi, mi, ri, ti, fi, pi, sdi, r2i, ni, mods, scale,
measure="GEN", data, slab, subset,
add=1/2, to="only0", drop00=FALSE, intercept=TRUE,
method="REML", weighted=TRUE, test="z",
level=95, btt, att, tau2, verbose=FALSE, digits, control, ...)
These arguments pertain to data input:
yi: vector of length \(k\) with the observed effect sizes or outcomes. See ‘Details’.
vi: vector of length \(k\) with the corresponding sampling variances. See ‘Details’.
sei: vector of length \(k\) with the corresponding standard errors (only relevant when not using vi). See ‘Details’.
weights: optional argument to specify a vector of length \(k\) with user-defined weights. See ‘Details’.
ai, bi, ci, di, n1i, n2i, x1i, x2i, t1i, t2i, m1i, m2i, sd1i, sd2i, xi, mi, ri, ti, fi, pi, sdi, r2i, ni: see below and the documentation of the escalc function for more details.
mods: optional argument to include one or more moderators in the model. A single moderator can be given as a vector of length \(k\) specifying the values of the moderator. Multiple moderators are specified by giving a matrix with \(k\) rows and as many columns as there are moderator variables. Alternatively, a model formula can be used to specify the model. See ‘Details’.
scale: optional argument to include one or more predictors for the scale part in a location-scale model. See ‘Details’.
measure: character string to specify the type of data supplied to the function. When measure="GEN" (default), the observed effect sizes or outcomes and corresponding sampling variances should be supplied to the function via the yi and vi arguments, respectively (instead of the sampling variances, one can supply the standard errors via the sei argument). Alternatively, one can set measure to one of the effect sizes or outcome measures described under the documentation for the escalc function, in which case one must specify the required data via the appropriate arguments (see escalc).
data: optional data frame containing the data supplied to the function.
slab: optional vector with labels for the \(k\) studies.
subset: optional (logical or numeric) vector to specify the subset of studies that should be used for the analysis.
These arguments pertain to handling of zero cells/counts/frequencies:
add: see the documentation of the escalc function.
to: see the documentation of the escalc function.
drop00: see the documentation of the escalc function.
These arguments pertain to the model / computations and output:
intercept: logical to specify whether an intercept should be added to the model (the default is TRUE). Ignored when mods is a formula.
method: character string to specify whether an equal- or a random-effects model should be fitted. An equal-effects model is fitted when using method="EE". A random-effects model is fitted by setting method equal to one of the following: "DL", "HE", "HS", "HSk", "SJ", "ML", "REML", "EB", "PM", "GENQ", "PMM", or "GENQM". The default is "REML". See ‘Details’.
weighted: logical to specify whether weighted (default) or unweighted estimation should be used to fit the model (the default is TRUE).
test: character string to specify how test statistics and confidence intervals for the fixed effects should be computed. By default (test="z"), Wald-type tests and CIs are obtained, which are based on a standard normal distribution. When test="t", a t-distribution is used instead. When test="knha", the method by Knapp and Hartung (2003) is used. See ‘Details’ and also here for some recommended practices.
level: numeric value between 0 and 100 to specify the confidence interval level (the default is 95; see here for details).
btt: optional vector of indices to specify which coefficients to include in the omnibus test of moderators. Can also be a string to grep for. See ‘Details’.
att: optional vector of indices to specify which scale coefficients to include in the omnibus test. Only relevant for location-scale models. See ‘Details’.
tau2: optional numeric value to specify the amount of (residual) heterogeneity in a random- or mixed-effects model (instead of estimating it). Useful for sensitivity analyses (e.g., for plotting results as a function of \(\tau^2\)). If unspecified, the value of \(\tau^2\) is estimated from the data.
verbose: logical to specify whether output should be generated on the progress of the model fitting (the default is FALSE). Can also be an integer. Values > 1 generate more verbose output. See ‘Note’.
digits: optional integer to specify the number of decimal places to which the printed results should be rounded. If unspecified, the default is 4. See also here for further details on how to control the number of digits in the output.
control: optional list of control values for the iterative estimation algorithms. If unspecified, default values are defined inside the function. See ‘Note’.
...: additional arguments.
The function can be used in combination with any of the usual effect sizes or outcome measures used in meta-analyses (e.g., log risk ratios, log odds ratios, risk differences, mean differences, standardized mean differences, log transformed ratios of means, raw correlation coefficients, correlation coefficients transformed with Fisher's r-to-z transformation), or, more generally, any set of estimates (with corresponding sampling variances) one would like to analyze. Simply specify the observed effect sizes or outcomes via the yi argument and the corresponding sampling variances via the vi argument. Instead of specifying vi, one can specify the standard errors (the square root of the sampling variances) via the sei argument. The escalc function can be used to compute a wide variety of effect sizes or outcome measures (and the corresponding sampling variances) based on summary statistics.

Alternatively, the function can automatically calculate the values of a chosen effect size or outcome measure (and the corresponding sampling variances) when supplied with the necessary data. The escalc function describes which effect sizes or outcome measures are currently implemented and what data/arguments should then be specified/used. The measure argument should then be set to the desired effect size or outcome measure.
The function can be used to fit equal-, fixed-, and random-effects models, as well as (mixed-effects) meta-regression models including one or multiple moderators (the difference between the various models is described in detail on the introductory metafor-package help page).
Assuming the observed effect sizes or outcomes and corresponding sampling variances are supplied via the yi and vi arguments, an equal-effects model can be fitted with rma(yi, vi, method="EE"). Setting method="FE" fits a fixed-effects model (see here for a discussion of this model and how the interpretation of these models differs despite yielding identical results). Weighted estimation (with inverse-variance weights) is used by default. User-defined weights can be supplied via the weights argument. Unweighted estimation can be used by setting weighted=FALSE (which is the same as setting the weights equal to a constant). A brief sketch of these options is given below.
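For example (a minimal sketch, assuming dat is a data frame containing the yi and vi values, for instance as computed with the escalc function):

### equal-effects model
rma(yi, vi, data=dat, method="EE")
### user-defined weights (here simply equal weights, purely for illustration)
rma(yi, vi, weights=rep(1,nrow(dat)), data=dat)
### unweighted estimation
rma(yi, vi, data=dat, weighted=FALSE)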
A random-effects model can be fitted with the same code but setting the method argument to one of the various estimators for the amount of heterogeneity:

method="DL" = DerSimonian-Laird estimator (DerSimonian & Laird, 1986; Raudenbush, 2009),
method="HE" = Hedges estimator (Hedges, 1983, 1992),
method="HS" = Hunter-Schmidt estimator (Hunter & Schmidt, 1990; Viechtbauer et al., 2015),
method="HSk" = Hunter-Schmidt estimator with a small sample-size correction (Brannick et al., 2019),
method="SJ" = Sidik-Jonkman estimator (Sidik & Jonkman, 2005b, 2007),
method="ML" = maximum likelihood estimator (Hardy & Thompson, 1996; Raudenbush, 2009),
method="REML" = restricted maximum likelihood estimator (Viechtbauer, 2005; Raudenbush, 2009),
method="EB" = empirical Bayes estimator (Morris, 1983; Berkey et al., 1995),
method="PM" = Paule-Mandel estimator (Paule & Mandel, 1982; Viechtbauer et al., 2015),
method="GENQ" = generalized Q-statistic estimator (DerSimonian & Kacker, 2007; Jackson et al., 2014),
method="PMM" = median-unbiased Paule-Mandel estimator (Viechtbauer, 2021),
method="GENQM" = median-unbiased generalized Q-statistic estimator (Viechtbauer, 2021).
For a description of the various estimators, see Brannick et al. (2019), DerSimonian and Kacker (2007), Raudenbush (2009), Veroniki et al. (2016), Viechtbauer (2005), and Viechtbauer et al. (2015). Note that the Hedges estimator is also called the ‘variance component estimator’ or ‘Cochran estimator’, the Sidik-Jonkman estimator is also called the ‘model error variance estimator’, the empirical Bayes estimator is actually identical to the Paule-Mandel estimator (Viechtbauer et al., 2015), and the generalized Q-statistic estimator is a general method-of-moments estimator (DerSimonian & Kacker, 2007) requiring the specification of weights (the HE and DL estimators are just special cases with equal and inverse sampling variance weights, respectively). Finally, the two median-unbiased estimators are versions of the Paule-Mandel and generalized Q-statistic estimators that equate the respective estimating equations not to their expected values, but to the medians of their theoretical distributions (Viechtbauer, 2021).
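To get a quick sense of how much the choice of estimator matters for a particular dataset, one can compare the \(\tau^2\) estimates across several methods (a sketch, again assuming dat contains the yi and vi values):

### compare tau^2 estimates across a few of the available estimators
sapply(c("DL","HE","PM","ML","REML"), function(m) rma(yi, vi, data=dat, method=m)$tau2)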
One or more moderators can be included in a model via the mods argument. A single moderator can be given as a (row or column) vector of length \(k\) specifying the values of the moderator. Multiple moderators are specified by giving an appropriate model matrix (i.e., \(X\)) with \(k\) rows and as many columns as there are moderator variables (e.g., mods = cbind(mod1, mod2, mod3), where mod1, mod2, and mod3 correspond to the names of the variables for three moderator variables). The intercept is added to the model matrix by default unless intercept=FALSE.
Alternatively, one can use standard formula syntax to specify the model. In this case, the mods argument should be set equal to a one-sided formula of the form mods = ~ model (e.g., mods = ~ mod1 + mod2 + mod3). Interactions, polynomial/spline terms, and factors can be easily added to the model in this manner. When specifying a model formula via the mods argument, the intercept argument is ignored. Instead, the inclusion/exclusion of the intercept is controlled by the specified formula (e.g., mods = ~ 0 + mod1 + mod2 + mod3 or mods = ~ mod1 + mod2 + mod3 - 1 would lead to the removal of the intercept).
When the observed effect sizes or outcomes and corresponding sampling variances are supplied via the yi and vi (or sei) arguments, one can also specify moderators via the yi argument (e.g., rma(yi ~ mod1 + mod2 + mod3, vi)). In that case, the mods argument is ignored and the inclusion/exclusion of the intercept is again controlled by the specified formula.
For models including moderators, an omnibus test of all model coefficients is conducted that excludes the intercept (the first coefficient) if it is included in the model. If no intercept is included in the model, then the omnibus test includes all coefficients in the model including the first. Alternatively, one can manually specify the indices of the coefficients to test via the btt (‘betas to test’) argument (i.e., to test \(\text{H}_0{:}\; \beta_{j \in \texttt{btt}} = 0\), where \(\beta_{j \in \texttt{btt}}\) is the set of coefficients to be tested). For example, with btt=c(3,4), only the third and fourth coefficients from the model are included in the test (if an intercept is included in the model, then it corresponds to the first coefficient in the model). Instead of specifying the coefficient numbers, one can specify a string for btt. In that case, grep will be used to search for all coefficient names that match the string. The omnibus test is called the \(Q_M\)-test and follows asymptotically a chi-square distribution with \(m\) degrees of freedom (with \(m\) denoting the number of coefficients tested) under the null hypothesis (that the true value of all coefficients tested is equal to 0).
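For example (a sketch; mod1 through mod3 are hypothetical moderator names as used above):

### test the coefficients for mod2 and mod3 by position (intercept = coefficient 1)
rma(yi, vi, mods = ~ mod1 + mod2 + mod3, data=dat, btt=c(3,4))
### same test, but specifying the coefficients via a string that is grep'd for
rma(yi, vi, mods = ~ mod1 + mod2 + mod3, data=dat, btt="mod[23]")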
Categorical moderator variables can be included in the model via the mods argument in the same way that appropriately (dummy) coded categorical variables can be included in linear models. One can either do the dummy coding manually or use a model formula together with the factor function to automate the coding (note that string/character variables in a model formula are automatically converted to factors). An example to illustrate these different approaches is provided below.
By default, tests of individual coefficients in the model (and the corresponding confidence intervals) are based on a standard normal distribution, while the omnibus test is based on a chi-square distribution (see above). As an alternative, one can set test="t", in which case tests of individual coefficients and confidence intervals are based on a t-distribution with \(k-p\) degrees of freedom, while the omnibus test then uses an F-distribution with \(m\) and \(k-p\) degrees of freedom (with \(k\) denoting the total number of estimates included in the analysis and \(p\) the total number of model coefficients including the intercept if it is present). Furthermore, when test="knha" (or equivalently, test="hksj"), the method by Hartung (1999), Sidik and Jonkman (2002), and Knapp and Hartung (2003) (the Knapp-Hartung method; also referred to as the Hartung-Knapp-Sidik-Jonkman method) is used, which applies an adjustment to the standard errors of the estimated coefficients (to account for the uncertainty in the estimate of the amount of (residual) heterogeneity) and uses t- and F-distributions as described above (see also here). Finally, one can set test="adhoc", in which case the Knapp-Hartung method is used, but with the restriction that the adjustment to the standard errors can never result in adjusted standard errors that are smaller than the unadjusted ones (see Jackson et al., 2017, section 4.3).
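For example (a sketch, using the hypothetical moderators from above):

### Wald-type tests based on the standard normal distribution (the default)
rma(yi, vi, mods = ~ mod1 + mod2, data=dat, test="z")
### t- and F-distributions with k-p degrees of freedom
rma(yi, vi, mods = ~ mod1 + mod2, data=dat, test="t")
### the Knapp-Hartung method
rma(yi, vi, mods = ~ mod1 + mod2, data=dat, test="knha")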
A test for (residual) heterogeneity is automatically carried out by the function. Without moderators in the model, this is simply Cochran's \(Q\)-test (Cochran, 1954), which tests whether the variability in the observed effect sizes or outcomes is larger than would be expected based on sampling variability alone. A significant test suggests that the true effects/outcomes are heterogeneous. When moderators are included in the model, this is the \(Q_E\)-test for residual heterogeneity, which tests whether the variability in the observed effect sizes or outcomes not accounted for by the moderators included in the model is larger than would be expected based on sampling variability alone.
The function can also be used to fit so-called ‘location-scale models’ (Viechtbauer & López-López, 2022). In such models, one can specify not only predictors for the size of the average true outcome (i.e., for their ‘location’), but also predictors for the amount of heterogeneity in the outcomes (i.e., for their ‘scale’). The model is given by \[y_i = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \ldots + \beta_{p'} x_{ip'} + u_i + \varepsilon_i,\] \[u_i \sim N(0, \tau_i^2), \; \varepsilon_i \sim N(0, v_i),\] \[\log(\tau_i^2) = \alpha_0 + \alpha_1 z_{i1} + \alpha_2 z_{i2} + \ldots + \alpha_{q'} z_{iq'},\] where \(x_{i1}, \ldots, x_{ip'}\) are the values of the \(p'\) predictor variables that may be related to the size of the average true outcome (letting \(p = p' + 1\) denote the total number of location coefficients in the model including the model intercept \(\beta_0\)) and \(z_{i1}, \ldots, z_{iq'}\) are the values of the \(q'\) scale variables that may be related to the amount of heterogeneity in the outcomes (letting \(q = q' + 1\) denote the total number of scale coefficients in the model including the model intercept \(\alpha_0\)). Location variables can be specified via the mods argument as described above (e.g., mods = ~ mod1 + mod2 + mod3). Scale variables can be specified via the scale argument (e.g., scale = ~ var1 + var2 + var3). A log link is used for specifying the relationship between the scale variables and the amount of heterogeneity so that \(\tau_i^2\) is guaranteed to be non-negative (one can also set the (undocumented) argument link="identity" to use an identity link, but this is more likely to lead to estimation problems). Estimates of the location and scale coefficients can be obtained either with maximum likelihood (method="ML") or restricted maximum likelihood (method="REML") estimation. An omnibus test of the scale coefficients is conducted as described above (where the att argument can be used to specify which scale coefficients to include in the test, as sketched below).
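For example (a sketch; mod1, var1, and var2 are hypothetical variable names as above):

### location-scale model with one location and two scale predictors;
### att=2:3 restricts the omnibus test to the two scale coefficients
rma(yi, vi, mods = ~ mod1, scale = ~ var1 + var2, data=dat, att=2:3)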
An object of class c("rma.uni","rma"). The object is a list containing the following components:
estimated coefficients of the model.
standard errors of the coefficients.
test statistics of the coefficients.
corresponding p-values.
lower bound of the confidence intervals for the coefficients.
upper bound of the confidence intervals for the coefficients.
variance-covariance matrix of the estimated coefficients.
estimated amount of (residual) heterogeneity. Always 0 when method="EE".
standard error of the estimated amount of (residual) heterogeneity.
number of studies included in the analysis.
number of coefficients in the model (including the intercept).
number of coefficients included in the omnibus test of moderators.
test statistic of the test for (residual) heterogeneity.
corresponding p-value.
test statistic of the omnibus test of moderators.
corresponding p-value.
value of \(I^2\). See print for more details.
value of \(H^2\). See print for more details.
value of \(R^2\). See print for more details.
logical that indicates whether the model is an intercept-only model.
the vector of outcomes, the corresponding sampling variances, and the model matrix.
a list with the log-likelihood, deviance, AIC, BIC, and AICc values under the unrestricted and restricted likelihood.
some additional elements/values.
For location-scale models, the object is of class c("rma.ls","rma.uni","rma") and includes the following components in addition to the ones listed above:
estimated scale coefficients of the model.
standard errors of the coefficients.
test statistics of the coefficients.
corresponding p-values.
lower bound of the confidence intervals for the coefficients.
upper bound of the confidence intervals for the coefficients.
variance-covariance matrix of the estimated coefficients.
as above, but now a vector of values.
number of scale coefficients in the model (including the intercept).
test statistic of the omnibus test of the scale coefficients.
corresponding p-value.
some additional elements/values.
The results of the fitted model are formatted and printed with the print function. If fit statistics should also be given, use summary (or use the fitstats function to extract them). Full versus reduced model comparisons in terms of fit statistics and likelihood ratio tests can be obtained with anova. Wald-type tests for sets of model coefficients or linear combinations thereof can be obtained with the same function. Permutation tests for the model coefficient(s) can be obtained with permutest. Tests and confidence intervals based on (cluster) robust methods can be obtained with robust.
Predicted/fitted values can be obtained with predict and fitted. For best linear unbiased predictions, see blup and ranef.
The residuals, rstandard, and rstudent functions extract raw and standardized residuals. Additional model diagnostics (e.g., to determine influential studies) can be obtained with the influence function. For models without moderators, leave-one-out diagnostics can also be obtained with leave1out. For models with moderators, variance inflation factors can be obtained with vif.
A confidence interval for the amount of (residual) heterogeneity in the random/mixed-effects model can be obtained with confint. For location-scale models, confint can provide confidence intervals for the scale coefficients.
Forest, funnel, radial, L'Abbé, and Baujat plots can be obtained with forest, funnel, radial, labbe, and baujat (radial and L'Abbé plots only for models without moderators). The qqnorm function provides normal QQ plots of the standardized residuals. One can also call plot on the fitted model object to obtain various plots at once. For random/mixed-effects models, the profile function can be used to obtain a plot of the (restricted) log-likelihood as a function of \(\tau^2\). For location-scale models, profile draws analogous plots based on the scale coefficients. For models with moderators, regplot draws scatter plots / bubble plots, showing the (marginal) relationship between the observed outcomes and a selected moderator from the model.
Tests for funnel plot asymmetry (which may be indicative of publication bias) can be obtained with ranktest and regtest. For models without moderators, the trimfill method can be used to carry out a trim and fill analysis and hc provides a random-effects model analysis that is more robust to publication bias (based on the method by Henmi & Copas, 2010). The test of ‘excess significance’ can be carried out with the tes function. The fail-safe N (based on a file drawer analysis) can be computed using fsn. Selection models can be fitted with the selmodel function.
For models without moderators, a cumulative meta-analysis (i.e., adding one observation at a time) can be obtained with cumul.
Other extractor functions include coef, vcov, logLik, deviance, AIC, BIC, hatvalues, and weights.
While the HS, HSk, HE, DL, SJ, and GENQ estimators of \(\tau^2\) are based on closed-form solutions, the ML, REML, and EB estimators must be obtained iteratively. For this, the function makes use of the Fisher scoring algorithm, which is robust to poor starting values and usually converges quickly (Harville, 1977; Jennrich & Sampson, 1976). By default, the starting value is set equal to the value of the Hedges (HE) estimator and the algorithm terminates when the change in the estimated value of \(\tau^2\) is smaller than \(10^{-5}\) from one iteration to the next. The maximum number of iterations is 100 by default (which should be sufficient in most cases). Information on the progress of the algorithm can be obtained by setting verbose=TRUE. One can also set verbose to an integer (verbose=2 yields even more information and verbose=3 also sets options(warn=1) temporarily).
A different starting value, threshold, and maximum number of iterations can be specified via the control argument by setting control=list(tau2.init=value, threshold=value, maxiter=value). The step length of the Fisher scoring algorithm can also be adjusted by a desired factor with control=list(stepadj=value) (values below 1 will reduce the step length). If using verbose=TRUE shows the estimate jumping around erratically (or cycling through a few values), decreasing the step length (and increasing the maximum number of iterations) can often help with convergence (e.g., control=list(stepadj=0.5, maxiter=1000)).
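For example (a sketch; the specific control values are purely illustrative):

### different starting value, stricter threshold, reduced step length,
### and a higher iteration limit for the Fisher scoring algorithm
rma(yi, vi, data=dat, verbose=TRUE,
    control=list(tau2.init=0.2, threshold=1e-8, stepadj=0.5, maxiter=1000))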
The PM, PMM, and GENQM estimators also involve iterative algorithms, which make use of the uniroot function. By default, the desired accuracy (tol) is set equal to .Machine$double.eps^0.25 and the maximum number of iterations (maxiter) to 100 (as above). The upper bound of the interval searched (tau2.max) is set to the larger of 100 and 10*mad(yi)^2 (i.e., 10 times the squared median absolute deviation of the observed effect sizes or outcomes computed with the mad function). These values can be adjusted with control=list(tol=value, maxiter=value, tau2.max=value).
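For example (a sketch; the values shown are purely illustrative):

### Paule-Mandel estimator with adjusted root-finding settings
rma(yi, vi, data=dat, method="PM", control=list(tol=1e-8, maxiter=200, tau2.max=500))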
All of the heterogeneity estimators except SJ can in principle yield negative estimates for the amount of (residual) heterogeneity. However, negative estimates of \(\tau^2\) are outside of the parameter space. For the HS, HSk, HE, DL, and GENQ estimators, negative estimates are therefore truncated to zero. For the ML, REML, and EB estimators, the Fisher scoring algorithm makes use of step halving (Jennrich & Sampson, 1976) to guarantee a non-negative estimate. Finally, for the PM, PMM, and GENQM estimators, the lower bound of the interval searched is set to zero by default. For those brave enough to step into risky territory, there is the option to set the lower bound for all these estimators to some other value besides zero (even a negative one) with control=list(tau2.min=value), but the lowest value permitted is -min(vi) (to ensure that the marginal variances are always non-negative).
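For example (a sketch; -0.001 is an arbitrary illustrative bound, which must not be below -min(vi) for the data at hand):

### allow slightly negative estimates of tau^2 (use with caution)
rma(yi, vi, data=dat, control=list(tau2.min=-0.001))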
The Hunter-Schmidt estimator for the amount of heterogeneity is defined in Hunter and Schmidt (1990) only in the context of the random-effects model when analyzing correlation coefficients. A general version of this estimator for random- and mixed-effects models not specific to any particular outcome measure is described in Viechtbauer (2005) and Viechtbauer et al. (2015) and is implemented here.
The Sidik-Jonkman estimator starts with a crude estimate of \(\tau^2\), which is then updated as described in Sidik and Jonkman (2005b, 2007). If, instead of the crude estimate, one wants to use a better a priori estimate, one can do so by passing this value via control=list(tau2.init=value).
One can also specify a vector of estimators via the method argument (e.g., rma(yi, vi, method=c("REML","DL"))). The various estimators are then applied in turn until one converges. This is mostly useful for simulation studies where an estimator (like the REML estimator) is not guaranteed to converge and one can then substitute one (like the DL estimator) that does not involve iterative methods and is guaranteed to provide an estimate.
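A sketch of this fallback strategy:

### try the REML estimator first; if it fails to converge, fall back to the
### non-iterative DL estimator, which is guaranteed to provide an estimate
rma(yi, vi, data=dat, method=c("REML","DL"))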
Outcomes with non-positive sampling variances are problematic. If a sampling variance is equal to zero, then its weight will be \(1/0\) for equal-effects models when using weighted estimation. Switching to unweighted estimation is a possible solution then. For random/mixed-effects models, some estimators of \(\tau^2\) are undefined when there is at least one sampling variance equal to zero. Other estimators may work, but it may still be necessary to switch to unweighted model fitting, especially when the estimate of \(\tau^2\) converges to zero.
When including moderators in the model, it is possible that the model matrix is not of full rank (i.e., there is a linear relationship between the moderator variables included in the model). The function automatically tries to reduce the model matrix to full rank by removing redundant predictors, but if this fails the model cannot be fitted and an error will be issued. Deleting (redundant) moderator variables from the model as needed should solve this problem.
Some general words of caution about the assumptions underlying the models:
The sampling variances (i.e., the vi values) are treated as if they are known constants, even though in practice they are usually estimates themselves. This implies that the distributions of the test statistics and corresponding confidence intervals are only exact and have nominal coverage when the within-study sample sizes are large (i.e., when the error in the sampling variance estimates is small). Certain outcome measures (e.g., the arcsine square root transformed risk difference and Fisher's r-to-z transformed correlation coefficient) are based on variance stabilizing transformations that also help to make the assumption of known sampling variances much more reasonable.
When fitting a mixed/random-effects model, \(\tau^2\) is estimated and then treated as a known constant thereafter. This ignores the uncertainty in the estimate of \(\tau^2\). As a consequence, the standard errors of the parameter estimates tend to be too small, yielding test statistics that are too large and confidence intervals that are not wide enough. The Knapp and Hartung (2003) adjustment (i.e., using test="knha") can be used to counter this problem, yielding test statistics and confidence intervals whose properties are closer to nominal.
Most effect sizes or outcome measures do not have exactly normal sampling distributions as assumed under the various models. However, the normal approximation usually becomes more accurate for most effect sizes or outcome measures as the within-study sample sizes increase. Therefore, sufficiently large within-study sample sizes are (usually) needed to be certain that the tests and confidence intervals have nominal levels/coverage. Again, certain outcome measures (e.g., Fisher's r-to-z transformed correlation coefficient) may be preferable from this perspective as well.
For location-scale models, model fitting is done via numerical optimization over the model parameters. By default, nlminb is used for the optimization. One can also choose a different optimizer from optim via the control argument (e.g., control=list(optimizer="BFGS") or control=list(optimizer="Nelder-Mead")). Besides nlminb and one of the methods from optim, one can also choose one of the optimizers from the minqa package (i.e., uobyqa, newuoa, or bobyqa), one of the (derivative-free) algorithms from the nloptr package, the Newton-type algorithm implemented in nlm, the various algorithms implemented in the dfoptim package (hjk for the Hooke-Jeeves, nmk for the Nelder-Mead, and mads for the Mesh Adaptive Direct Searches algorithm), the quasi-Newton type optimizers ucminf and lbfgsb3c and the subspace-searching simplex algorithm subplex from the packages of the same name, the Barzilai-Borwein gradient descent method implemented in BBoptim, the Rcgmin and Rvmmin optimizers, or the parallelized version of the L-BFGS-B algorithm implemented in optimParallel from the package of the same name. When using an identity link with link="identity", constrained optimization (to ensure non-negative \(\tau_i^2\) values) as implemented in constrOptim is used by default. Alternative optimizers in this case are the solnp solver from the Rsolnp package, nloptr, or the augmented Lagrangian adaptive barrier minimization algorithm constrOptim.nl from the alabama package.
The optimizer name must be given as a character string (i.e., in quotes). Additional control parameters can be specified via the control argument (e.g., control=list(iter.max=1000, rel.tol=1e-8)). For nloptr, the default is to use the BOBYQA implementation from that package with a relative convergence criterion of 1e-8 on the function value (i.e., log-likelihood), but this can be changed via the algorithm and ftol_rel arguments (e.g., control=list(optimizer="nloptr", algorithm="NLOPT_LN_SBPLX", ftol_rel=1e-6)) (note: when using optimizer="nloptr" in combination with an identity link, the "NLOPT_LN_COBYLA" algorithm is automatically used, since it allows for inequality constraints). For optimParallel, the control argument ncpus can be used to specify the number of cores to use for the parallelization (e.g., control=list(optimizer="optimParallel", ncpus=2)). With parallel::detectCores(), one can check on the number of available cores on the local machine.
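For example (a sketch; var1 is a hypothetical scale variable as above):

### location-scale model fitted with the subplex algorithm from nloptr
rma(yi, vi, scale = ~ var1, data=dat,
    control=list(optimizer="nloptr", algorithm="NLOPT_LN_SBPLX", ftol_rel=1e-6))
### parallelized L-BFGS-B using two cores
rma(yi, vi, scale = ~ var1, data=dat,
    control=list(optimizer="optimParallel", ncpus=2))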
Under certain circumstances (e.g., when the amount of heterogeneity is very small for certain combinations of values for the scale variables and scale coefficients), the values of the scale coefficients may try to drift towards minus or plus infinity, which can lead to problems with the optimization. One can impose constraints on the scale coefficients via control=list(alpha.min=minval, alpha.max=maxval), where minval and maxval are either scalars or vectors of the appropriate length.
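For example (a sketch; the bounds are purely illustrative):

### constrain all scale coefficients to lie within [-5, 5]
rma(yi, vi, scale = ~ var1 + var2, data=dat, control=list(alpha.min=-5, alpha.max=5))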
Finally, for location-scale models, the standard errors of the scale coefficients are obtained by inverting the Hessian, which is numerically approximated using the hessian function from the numDeriv package. This may fail (especially when using an identity link), leading to NA values for the standard errors and hence for the test statistics, p-values, and confidence interval bounds. One can set the control argument hessianCtrl to a list of named arguments to be passed on to the method.args argument of the hessian function (the default is control=list(hessianCtrl=list(r=8))). One can also set control=list(hesspack="pracma") or control=list(hesspack="calculus"), in which case the pracma::hessian or calculus::hessian functions from the respective packages are used instead for approximating the Hessian.
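For example (a sketch; r=16 is an illustrative value, the default being r=8):

### request a more accurate numerical approximation of the Hessian
rma(yi, vi, scale = ~ var1, data=dat, control=list(hessianCtrl=list(r=16)))
### or approximate the Hessian with pracma::hessian instead
rma(yi, vi, scale = ~ var1, data=dat, control=list(hesspack="pracma"))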
Even if the Hessian can be approximated and inverted, the standard errors may be unreasonably large when the likelihood surface is very flat around the estimated scale coefficients. This is more likely to happen when \(k\) is small and when the amount of heterogeneity is very small under some conditions as defined by the scale coefficients/variables. Setting constraints on the scale coefficients as described above can also help to mitigate this issue.
Berkey, C. S., Hoaglin, D. C., Mosteller, F., & Colditz, G. A. (1995). A random-effects regression model for meta-analysis. Statistics in Medicine, 14(4), 395–411. https://doi.org/10.1002/sim.4780140406
Brannick, M. T., Potter, S. M., Benitez, B., & Morris, S. B. (2019). Bias and precision of alternate estimators in meta-analysis: Benefits of blending Schmidt–Hunter and Hedges approaches. Organizational Research Methods, 22(2), 490–514. https://doi.org/10.1177/1094428117741966
Cochran, W. G. (1954). The combination of estimates from different experiments. Biometrics, 10(1), 101–129. https://doi.org/10.2307/3001666
DerSimonian, R., & Laird, N. (1986). Meta-analysis in clinical trials. Controlled Clinical Trials, 7(3), 177–188. https://doi.org/10.1016/0197-2456(86)90046-2
DerSimonian, R., & Kacker, R. (2007). Random-effects model for meta-analysis of clinical trials: An update. Contemporary Clinical Trials, 28(2), 105–114. https://doi.org/10.1016/j.cct.2006.04.004
Hardy, R. J., & Thompson, S. G. (1996). A likelihood approach to meta-analysis with random effects. Statistics in Medicine, 15(6), 619–629. https://doi.org/10.1002/(SICI)1097-0258(19960330)15:6<619::AID-SIM188>3.0.CO;2-A
Hartung, J. (1999). An alternative method for meta-analysis. Biometrical Journal, 41(8), 901–916. https://doi.org/10.1002/(SICI)1521-4036(199912)41:8<901::AID-BIMJ901>3.0.CO;2-W
Harville, D. A. (1977). Maximum likelihood approaches to variance component estimation and to related problems. Journal of the American Statistical Association, 72(358), 320–338. https://doi.org/10.2307/2286796
Hedges, L. V. (1983). A random effects model for effect sizes. Psychological Bulletin, 93(2), 388–395. https://doi.org/10.1037/0033-2909.93.2.388
Hedges, L. V. (1992). Meta-analysis. Journal of Educational Statistics, 17(4), 279–296. https://doi.org/10.3102/10769986017004279
Hedges, L. V., & Olkin, I. (1985). Statistical methods for meta-analysis. San Diego, CA: Academic Press.
Henmi, M., & Copas, J. B. (2010). Confidence intervals for random effects meta-analysis and robustness to publication bias. Statistics in Medicine, 29(29), 2969–2983. https://doi.org/10.1002/sim.4029
Hunter, J. E., & Schmidt, F. L. (1990). Methods of meta-analysis: Correcting error and bias in research findings. Thousand Oaks, CA: Sage.
Jackson, D., Turner, R., Rhodes, K., & Viechtbauer, W. (2014). Methods for calculating confidence and credible intervals for the residual between-study variance in random effects meta-regression models. BMC Medical Research Methodology, 14, 103. https://doi.org/10.1186/1471-2288-14-103
Jackson, D., Law, M., Rücker, G., & Schwarzer, G. (2017). The Hartung-Knapp modification for random-effects meta-analysis: A useful refinement but are there any residual concerns? Statistics in Medicine, 36(25), 3923–3934. https://doi.org/10.1002/sim.7411
Jennrich, R. I., & Sampson, P. F. (1976). Newton-Raphson and related algorithms for maximum likelihood variance component estimation. Technometrics, 18(1), 11–17. https://doi.org/10.2307/1267911
Knapp, G., & Hartung, J. (2003). Improved tests for a random effects meta-regression with a single covariate. Statistics in Medicine, 22(17), 2693–2710. https://doi.org/10.1002/sim.1482
Morris, C. N. (1983). Parametric empirical Bayes inference: Theory and applications. Journal of the American Statistical Association, 78(381), 47–55. https://doi.org/10.2307/2287098
Paule, R. C., & Mandel, J. (1982). Consensus values and weighting factors. Journal of Research of the National Bureau of Standards, 87(5), 377–385. https://doi.org/10.6028/jres.087.022
Raudenbush, S. W. (2009). Analyzing effect sizes: Random effects models. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 295–315). New York: Russell Sage Foundation.
Sidik, K., & Jonkman, J. N. (2002). A simple confidence interval for meta-analysis. Statistics in Medicine, 21(21), 3153–3159. https://doi.org/10.1002/sim.1262
Sidik, K., & Jonkman, J. N. (2005a). A note on variance estimation in random effects meta-regression. Journal of Biopharmaceutical Statistics, 15(5), 823–838. https://doi.org/10.1081/BIP-200067915
Sidik, K., & Jonkman, J. N. (2005b). Simple heterogeneity variance estimation for meta-analysis. Journal of the Royal Statistical Society, Series C, 54(2), 367–384. https://doi.org/10.1111/j.1467-9876.2005.00489.x
Sidik, K., & Jonkman, J. N. (2007). A comparison of heterogeneity variance estimators in combining results of studies. Statistics in Medicine, 26(9), 1964–1981. https://doi.org/10.1002/sim.2688
Veroniki, A. A., Jackson, D., Viechtbauer, W., Bender, R., Bowden, J., Knapp, G., Kuss, O., Higgins, J. P., Langan, D., & Salanti, G. (2016). Methods to estimate the between-study variance and its uncertainty in meta-analysis. Research Synthesis Methods, 7(1), 55–79. https://doi.org/10.1002/jrsm.1164
Viechtbauer, W. (2005). Bias and efficiency of meta-analytic variance estimators in the random-effects model. Journal of Educational and Behavioral Statistics, 30(3), 261–293. https://doi.org/10.3102/10769986030003261
Viechtbauer, W. (2010). Conducting meta-analyses in R with the metafor package. Journal of Statistical Software, 36(3), 1–48. https://doi.org/10.18637/jss.v036.i03
Viechtbauer, W. (2021). Median-unbiased estimators for the amount of heterogeneity in meta-analysis. European Congress of Methodology, Valencia, Spain. https://www.wvbauer.com/lib/exe/fetch.php/talks:2021_viechtbauer_eam_median_tau2.pdf
Viechtbauer, W., & López-López, J. A. (2022). Location-scale models for meta-analysis. Research Synthesis Methods, 13(6), 697–715. https://doi.org/10.1002/jrsm.1562
Viechtbauer, W., López-López, J. A., Sánchez-Meca, J., & Marín-Martínez, F. (2015). A comparison of procedures to test for moderators in mixed-effects meta-regression models. Psychological Methods, 20(3), 360–374. https://doi.org/10.1037/met0000023
### calculate log risk ratios and corresponding sampling variances
dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)
### fit a random-effects model using the log risk ratios and sampling variances as input
### note: method="REML" is the default, so one could leave this out
rma(yi, vi, data=dat, method="REML")
#>
#> Random-Effects Model (k = 13; tau^2 estimator: REML)
#>
#> tau^2 (estimated amount of total heterogeneity): 0.3132 (SE = 0.1664)
#> tau (square root of estimated tau^2 value): 0.5597
#> I^2 (total heterogeneity / total variability): 92.22%
#> H^2 (total variability / sampling variability): 12.86
#>
#> Test for Heterogeneity:
#> Q(df = 12) = 152.2330, p-val < .0001
#>
#> Model Results:
#>
#> estimate se zval pval ci.lb ci.ub
#> -0.7145 0.1798 -3.9744 <.0001 -1.0669 -0.3622 ***
#>
#> ---
#> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#>
### fit a random-effects model using the log risk ratios and standard errors as input
### note: the second argument of rma() is for the *sampling variances*, so we use the
### named argument 'sei' to supply the standard errors to the function
dat$sei <- sqrt(dat$vi)
rma(yi, sei=sei, data=dat)
#>
#> Random-Effects Model (k = 13; tau^2 estimator: REML)
#>
#> tau^2 (estimated amount of total heterogeneity): 0.3132 (SE = 0.1664)
#> tau (square root of estimated tau^2 value): 0.5597
#> I^2 (total heterogeneity / total variability): 92.22%
#> H^2 (total variability / sampling variability): 12.86
#>
#> Test for Heterogeneity:
#> Q(df = 12) = 152.2330, p-val < .0001
#>
#> Model Results:
#>
#> estimate se zval pval ci.lb ci.ub
#> -0.7145 0.1798 -3.9744 <.0001 -1.0669 -0.3622 ***
#>
#> ---
#> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#>
### fit a random-effects model supplying the 2x2 table cell frequencies to the function
rma(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat)
#>
#> Random-Effects Model (k = 13; tau^2 estimator: REML)
#>
#> tau^2 (estimated amount of total heterogeneity): 0.3132 (SE = 0.1664)
#> tau (square root of estimated tau^2 value): 0.5597
#> I^2 (total heterogeneity / total variability): 92.22%
#> H^2 (total variability / sampling variability): 12.86
#>
#> Test for Heterogeneity:
#> Q(df = 12) = 152.2330, p-val < .0001
#>
#> Model Results:
#>
#> estimate se zval pval ci.lb ci.ub
#> -0.7145 0.1798 -3.9744 <.0001 -1.0669 -0.3622 ***
#>
#> ---
#> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#>
### fit a mixed-effects model with two moderators (absolute latitude and publication year)
rma(yi, vi, mods=cbind(ablat, year), data=dat)
#>
#> Mixed-Effects Model (k = 13; tau^2 estimator: REML)
#>
#> tau^2 (estimated amount of residual heterogeneity): 0.1108 (SE = 0.0845)
#> tau (square root of estimated tau^2 value): 0.3328
#> I^2 (residual heterogeneity / unaccounted variability): 71.98%
#> H^2 (unaccounted variability / sampling variability): 3.57
#> R^2 (amount of heterogeneity accounted for): 64.63%
#>
#> Test for Residual Heterogeneity:
#> QE(df = 10) = 28.3251, p-val = 0.0016
#>
#> Test of Moderators (coefficients 2:3):
#> QM(df = 2) = 12.2043, p-val = 0.0022
#>
#> Model Results:
#>
#> estimate se zval pval ci.lb ci.ub
#> intrcpt -3.5455 29.0959 -0.1219 0.9030 -60.5724 53.4814
#> ablat -0.0280 0.0102 -2.7371 0.0062 -0.0481 -0.0080 **
#> year 0.0019 0.0147 0.1299 0.8966 -0.0269 0.0307
#>
#> ---
#> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#>
### using a model formula to specify the same model
rma(yi, vi, mods = ~ ablat + year, data=dat)
#>
#> Mixed-Effects Model (k = 13; tau^2 estimator: REML)
#>
#> tau^2 (estimated amount of residual heterogeneity): 0.1108 (SE = 0.0845)
#> tau (square root of estimated tau^2 value): 0.3328
#> I^2 (residual heterogeneity / unaccounted variability): 71.98%
#> H^2 (unaccounted variability / sampling variability): 3.57
#> R^2 (amount of heterogeneity accounted for): 64.63%
#>
#> Test for Residual Heterogeneity:
#> QE(df = 10) = 28.3251, p-val = 0.0016
#>
#> Test of Moderators (coefficients 2:3):
#> QM(df = 2) = 12.2043, p-val = 0.0022
#>
#> Model Results:
#>
#> estimate se zval pval ci.lb ci.ub
#> intrcpt -3.5455 29.0959 -0.1219 0.9030 -60.5724 53.4814
#> ablat -0.0280 0.0102 -2.7371 0.0062 -0.0481 -0.0080 **
#> year 0.0019 0.0147 0.1299 0.8966 -0.0269 0.0307
#>
#> ---
#> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#>
### using a model formula as part of the yi argument
rma(yi ~ ablat + year, vi, data=dat)
#>
#> Mixed-Effects Model (k = 13; tau^2 estimator: REML)
#>
#> tau^2 (estimated amount of residual heterogeneity): 0.1108 (SE = 0.0845)
#> tau (square root of estimated tau^2 value): 0.3328
#> I^2 (residual heterogeneity / unaccounted variability): 71.98%
#> H^2 (unaccounted variability / sampling variability): 3.57
#> R^2 (amount of heterogeneity accounted for): 64.63%
#>
#> Test for Residual Heterogeneity:
#> QE(df = 10) = 28.3251, p-val = 0.0016
#>
#> Test of Moderators (coefficients 2:3):
#> QM(df = 2) = 12.2043, p-val = 0.0022
#>
#> Model Results:
#>
#> estimate se zval pval ci.lb ci.ub
#> intrcpt -3.5455 29.0959 -0.1219 0.9030 -60.5724 53.4814
#> ablat -0.0280 0.0102 -2.7371 0.0062 -0.0481 -0.0080 **
#> year 0.0019 0.0147 0.1299 0.8966 -0.0269 0.0307
#>
#> ---
#> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#>
### manual dummy coding of the allocation factor
alloc.random <- ifelse(dat$alloc == "random", 1, 0)
alloc.alternate <- ifelse(dat$alloc == "alternate", 1, 0)
alloc.systematic <- ifelse(dat$alloc == "systematic", 1, 0)
### test the allocation factor (in the presence of the other moderators)
### note: 'alternate' is the reference level of the allocation factor,
### since this is the dummy/level we leave out of the model
### note: the intercept is the first coefficient, so with btt=2:3 we test
### coefficients 2 and 3, corresponding to the coefficients for the
### allocation factor
rma(yi, vi, mods = ~ alloc.random + alloc.systematic + year + ablat, data=dat, btt=2:3)
#>
#> Mixed-Effects Model (k = 13; tau^2 estimator: REML)
#>
#> tau^2 (estimated amount of residual heterogeneity): 0.1796 (SE = 0.1425)
#> tau (square root of estimated tau^2 value): 0.4238
#> I^2 (residual heterogeneity / unaccounted variability): 73.09%
#> H^2 (unaccounted variability / sampling variability): 3.72
#> R^2 (amount of heterogeneity accounted for): 42.67%
#>
#> Test for Residual Heterogeneity:
#> QE(df = 8) = 26.2030, p-val = 0.0010
#>
#> Test of Moderators (coefficients 2:3):
#> QM(df = 2) = 1.3663, p-val = 0.5050
#>
#> Model Results:
#>
#> estimate se zval pval ci.lb ci.ub
#> intrcpt -14.4984 38.3943 -0.3776 0.7057 -89.7498 60.7531
#> alloc.random -0.3421 0.4180 -0.8183 0.4132 -1.1613 0.4772
#> alloc.systematic 0.0101 0.4467 0.0226 0.9820 -0.8654 0.8856
#> year 0.0075 0.0194 0.3849 0.7003 -0.0306 0.0456
#> ablat -0.0236 0.0132 -1.7816 0.0748 -0.0495 0.0024 .
#>
#> ---
#> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#>
### using a model formula to specify the same model
rma(yi, vi, mods = ~ factor(alloc) + year + ablat, data=dat, btt=2:3)
#>
#> Mixed-Effects Model (k = 13; tau^2 estimator: REML)
#>
#> tau^2 (estimated amount of residual heterogeneity): 0.1796 (SE = 0.1425)
#> tau (square root of estimated tau^2 value): 0.4238
#> I^2 (residual heterogeneity / unaccounted variability): 73.09%
#> H^2 (unaccounted variability / sampling variability): 3.72
#> R^2 (amount of heterogeneity accounted for): 42.67%
#>
#> Test for Residual Heterogeneity:
#> QE(df = 8) = 26.2030, p-val = 0.0010
#>
#> Test of Moderators (coefficients 2:3):
#> QM(df = 2) = 1.3663, p-val = 0.5050
#>
#> Model Results:
#>
#> estimate se zval pval ci.lb ci.ub
#> intrcpt -14.4984 38.3943 -0.3776 0.7057 -89.7498 60.7531
#> factor(alloc)random -0.3421 0.4180 -0.8183 0.4132 -1.1613 0.4772
#> factor(alloc)systematic 0.0101 0.4467 0.0226 0.9820 -0.8654 0.8856
#> year 0.0075 0.0194 0.3849 0.7003 -0.0306 0.0456
#> ablat -0.0236 0.0132 -1.7816 0.0748 -0.0495 0.0024 .
#>
#> ---
#> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#>
### factor() is not needed as character variables are automatically converted to factors
res <- rma(yi, vi, mods = ~ alloc + year + ablat, data=dat, btt=2:3)
res
#>
#> Mixed-Effects Model (k = 13; tau^2 estimator: REML)
#>
#> tau^2 (estimated amount of residual heterogeneity): 0.1796 (SE = 0.1425)
#> tau (square root of estimated tau^2 value): 0.4238
#> I^2 (residual heterogeneity / unaccounted variability): 73.09%
#> H^2 (unaccounted variability / sampling variability): 3.72
#> R^2 (amount of heterogeneity accounted for): 42.67%
#>
#> Test for Residual Heterogeneity:
#> QE(df = 8) = 26.2030, p-val = 0.0010
#>
#> Test of Moderators (coefficients 2:3):
#> QM(df = 2) = 1.3663, p-val = 0.5050
#>
#> Model Results:
#>
#> estimate se zval pval ci.lb ci.ub
#> intrcpt -14.4984 38.3943 -0.3776 0.7057 -89.7498 60.7531
#> allocrandom -0.3421 0.4180 -0.8183 0.4132 -1.1613 0.4772
#> allocsystematic 0.0101 0.4467 0.0226 0.9820 -0.8654 0.8856
#> year 0.0075 0.0194 0.3849 0.7003 -0.0306 0.0456
#> ablat -0.0236 0.0132 -1.7816 0.0748 -0.0495 0.0024 .
#>
#> ---
#> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#>
### test all pairwise differences between the 'alloc' levels
anova(res, X=pairmat(btt="alloc"))
#>
#> Hypothesis:
#> 1: -allocrandom + allocsystematic = 0
#>
#> Results:
#> estimate se zval pval
#> 1: 0.3522 0.3357 1.0489 0.2942
#>
#> Test of Hypothesis:
#> QM(df = 1) = 1.1002, p-val = 0.2942
#>
### subgrouping versus using a single model with a factor (subgrouping provides
### an estimate of tau^2 within each subgroup, but the number of studies in each
### subgroup is quite small; the model with the allocation factor provides a
### single estimate of tau^2 based on a larger number of studies, but assumes
### that tau^2 is the same within each subgroup)
res.a <- rma(yi, vi, data=dat, subset=(alloc=="alternate"))
res.r <- rma(yi, vi, data=dat, subset=(alloc=="random"))
res.s <- rma(yi, vi, data=dat, subset=(alloc=="systematic"))
res.a
#>
#> Random-Effects Model (k = 2; tau^2 estimator: REML)
#>
#> tau^2 (estimated amount of total heterogeneity): 0.1326 (SE = 0.2286)
#> tau (square root of estimated tau^2 value): 0.3641
#> I^2 (total heterogeneity / total variability): 82.02%
#> H^2 (total variability / sampling variability): 5.56
#>
#> Test for Heterogeneity:
#> Q(df = 1) = 5.5625, p-val = 0.0183
#>
#> Model Results:
#>
#> estimate se zval pval ci.lb ci.ub
#> -0.5408 0.2816 -1.9204 0.0548 -1.0927 0.0111 .
#>
#> ---
#> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#>
res.r
#>
#> Random-Effects Model (k = 7; tau^2 estimator: REML)
#>
#> tau^2 (estimated amount of total heterogeneity): 0.3925 (SE = 0.3029)
#> tau (square root of estimated tau^2 value): 0.6265
#> I^2 (total heterogeneity / total variability): 89.93%
#> H^2 (total variability / sampling variability): 9.93
#>
#> Test for Heterogeneity:
#> Q(df = 6) = 110.2133, p-val < .0001
#>
#> Model Results:
#>
#> estimate se zval pval ci.lb ci.ub
#> -0.9710 0.2760 -3.5186 0.0004 -1.5118 -0.4301 ***
#>
#> ---
#> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#>
res.s
#>
#> Random-Effects Model (k = 4; tau^2 estimator: REML)
#>
#> tau^2 (estimated amount of total heterogeneity): 0.4003 (SE = 0.4199)
#> tau (square root of estimated tau^2 value): 0.6327
#> I^2 (total heterogeneity / total variability): 86.42%
#> H^2 (total variability / sampling variability): 7.36
#>
#> Test for Heterogeneity:
#> Q(df = 3) = 16.5919, p-val = 0.0009
#>
#> Model Results:
#>
#> estimate se zval pval ci.lb ci.ub
#> -0.4242 0.3597 -1.1792 0.2383 -1.1293 0.2809
#>
#> ---
#> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#>
res <- rma(yi, vi, mods = ~ 0 + factor(alloc), data=dat)
res
#>
#> Mixed-Effects Model (k = 13; tau^2 estimator: REML)
#>
#> tau^2 (estimated amount of residual heterogeneity): 0.3615 (SE = 0.2111)
#> tau (square root of estimated tau^2 value): 0.6013
#> I^2 (residual heterogeneity / unaccounted variability): 88.77%
#> H^2 (unaccounted variability / sampling variability): 8.91
#>
#> Test for Residual Heterogeneity:
#> QE(df = 10) = 132.3676, p-val < .0001
#>
#> Test of Moderators (coefficients 1:3):
#> QM(df = 3) = 15.9842, p-val = 0.0011
#>
#> Model Results:
#>
#> estimate se zval pval ci.lb ci.ub
#> factor(alloc)alternate -0.5180 0.4412 -1.1740 0.2404 -1.3827 0.3468
#> factor(alloc)random -0.9658 0.2672 -3.6138 0.0003 -1.4896 -0.4420 ***
#> factor(alloc)systematic -0.4289 0.3449 -1.2434 0.2137 -1.1050 0.2472
#>
#> ---
#> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#>
############################################################################
### demonstrating that Q_E + Q_M = Q_Total for fixed-effects models
### note: this does not work for random/mixed-effects models, since Q_E and
### Q_Total are calculated under the assumption that tau^2 = 0, while the
### calculation of Q_M incorporates the estimate of tau^2
res <- rma(yi, vi, data=dat, method="FE")
res ### this gives Q_Total
#>
#> Fixed-Effects Model (k = 13)
#>
#> I^2 (total heterogeneity / total variability): 92.12%
#> H^2 (total variability / sampling variability): 12.69
#>
#> Test for Heterogeneity:
#> Q(df = 12) = 152.2330, p-val < .0001
#>
#> Model Results:
#>
#> estimate se zval pval ci.lb ci.ub
#> -0.4303 0.0405 -10.6247 <.0001 -0.5097 -0.3509 ***
#>
#> ---
#> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#>
res <- rma(yi, vi, mods = ~ ablat + year, data=dat, method="FE")
res ### this gives Q_E and Q_M
#>
#> Fixed-Effects with Moderators Model (k = 13)
#>
#> I^2 (residual heterogeneity / unaccounted variability): 64.70%
#> H^2 (unaccounted variability / sampling variability): 2.83
#> R^2 (amount of heterogeneity accounted for): 77.67%
#>
#> Test for Residual Heterogeneity:
#> QE(df = 10) = 28.3251, p-val = 0.0016
#>
#> Test of Moderators (coefficients 2:3):
#> QM(df = 2) = 123.9079, p-val < .0001
#>
#> Model Results:
#>
#> estimate se zval pval ci.lb ci.ub
#> intrcpt 17.1518 10.8321 1.5834 0.1133 -4.0786 38.3822
#> ablat -0.0339 0.0040 -8.4766 <.0001 -0.0417 -0.0260 ***
#> year -0.0085 0.0055 -1.5518 0.1207 -0.0192 0.0022
#>
#> ---
#> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#>
res$QE + res$QM
#> [1] 152.233
### decomposition of Q_E into subgroup Q-values
res <- rma(yi, vi, mods = ~ factor(alloc), data=dat)
res
#>
#> Mixed-Effects Model (k = 13; tau^2 estimator: REML)
#>
#> tau^2 (estimated amount of residual heterogeneity): 0.3615 (SE = 0.2111)
#> tau (square root of estimated tau^2 value): 0.6013
#> I^2 (residual heterogeneity / unaccounted variability): 88.77%
#> H^2 (unaccounted variability / sampling variability): 8.91
#> R^2 (amount of heterogeneity accounted for): 0.00%
#>
#> Test for Residual Heterogeneity:
#> QE(df = 10) = 132.3676, p-val < .0001
#>
#> Test of Moderators (coefficients 2:3):
#> QM(df = 2) = 1.7675, p-val = 0.4132
#>
#> Model Results:
#>
#> estimate se zval pval ci.lb ci.ub
#> intrcpt -0.5180 0.4412 -1.1740 0.2404 -1.3827 0.3468
#> factor(alloc)random -0.4478 0.5158 -0.8682 0.3853 -1.4588 0.5632
#> factor(alloc)systematic 0.0890 0.5600 0.1590 0.8737 -1.0086 1.1867
#>
#> ---
#> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#>
res.a <- rma(yi, vi, data=dat, subset=(alloc=="alternate"))
res.r <- rma(yi, vi, data=dat, subset=(alloc=="random"))
res.s <- rma(yi, vi, data=dat, subset=(alloc=="systematic"))
res.a$QE ### Q-value within subgroup "alternate"
#> [1] 5.562514
res.r$QE ### Q-value within subgroup "random"
#> [1] 110.2133
res.s$QE ### Q-value within subgroup "systematic"
#> [1] 16.59186
res$QE
#> [1] 132.3676
res.a$QE + res.r$QE + res.s$QE
#> [1] 132.3676
############################################################################
### an example of a location-scale model
dat <- dat.bangertdrowns2004
### fit a standard random-effects model
res <- rma(yi, vi, data=dat)
res
#>
#> Random-Effects Model (k = 48; tau^2 estimator: REML)
#>
#> tau^2 (estimated amount of total heterogeneity): 0.0499 (SE = 0.0197)
#> tau (square root of estimated tau^2 value): 0.2235
#> I^2 (total heterogeneity / total variability): 58.37%
#> H^2 (total variability / sampling variability): 2.40
#>
#> Test for Heterogeneity:
#> Q(df = 47) = 107.1061, p-val < .0001
#>
#> Model Results:
#>
#> estimate se zval pval ci.lb ci.ub
#> 0.2219 0.0460 4.8209 <.0001 0.1317 0.3122 ***
#>
#> ---
#> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#>
### fit the same model as a location-scale model
res <- rma(yi, vi, scale = ~ 1, data=dat)
res
#>
#> Location-Scale Model (k = 48; tau^2 estimator: REML)
#>
#> Test for Heterogeneity:
#> Q(df = 47) = 107.1061, p-val < .0001
#>
#> Model Results (Location):
#>
#> estimate se zval pval ci.lb ci.ub
#> 0.2219 0.0460 4.8210 <.0001 0.1317 0.3122 ***
#>
#> Model Results (Scale):
#>
#> estimate se zval pval ci.lb ci.ub
#> -2.9970 0.4603 -6.5107 <.0001 -3.8992 -2.0948 ***
#>
#> ---
#> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#>
### check that we obtain the same estimate for tau^2
predict(res, newscale=1, transf=exp)
#>
#> pred ci.lb ci.ub
#> 0.0499 0.0203 0.1231
#>
### add the total sample size (per 100) as a location and scale predictor
dat$ni100 <- dat$ni/100
res <- rma(yi, vi, mods = ~ ni100, scale = ~ ni100, data=dat)
res
#>
#> Location-Scale Model (k = 48; tau^2 estimator: REML)
#>
#> Test for Residual Heterogeneity:
#> QE(df = 46) = 95.1352, p-val < .0001
#>
#> Test of Location Coefficients (coefficient 2):
#> QM(df = 1) = 7.8268, p-val = 0.0051
#>
#> Model Results (Location):
#>
#> estimate se zval pval ci.lb ci.ub
#> intrcpt 0.3017 0.0661 4.5629 <.0001 0.1721 0.4313 ***
#> ni100 -0.0553 0.0198 -2.7976 0.0051 -0.0940 -0.0165 **
#>
#> Test of Scale Coefficients (coefficient 2):
#> QM(df = 1) = 3.1850, p-val = 0.0743
#>
#> Model Results (Scale):
#>
#> estimate se zval pval ci.lb ci.ub
#> intrcpt -1.9209 0.6690 -2.8713 0.0041 -3.2321 -0.6097 **
#> ni100 -0.9174 0.5141 -1.7847 0.0743 -1.9250 0.0901 .
#>
#> ---
#> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#>
### variables in the location and scale parts can differ
res <- rma(yi, vi, mods = ~ ni100 + meta, scale = ~ ni100 + imag, data=dat)
#> Warning: 2 studies with NAs omitted from model fitting.
res
#>
#> Location-Scale Model (k = 46; tau^2 estimator: REML)
#>
#> Test for Residual Heterogeneity:
#> QE(df = 43) = 82.2711, p-val = 0.0003
#>
#> Test of Location Coefficients (coefficients 2:3):
#> QM(df = 2) = 12.4826, p-val = 0.0019
#>
#> Model Results (Location):
#>
#> estimate se zval pval ci.lb ci.ub
#> intrcpt 0.2303 0.0655 3.5166 0.0004 0.1019 0.3586 ***
#> ni100 -0.0565 0.0188 -3.0113 0.0026 -0.0933 -0.0197 **
#> meta 0.1469 0.0690 2.1305 0.0331 0.0118 0.2820 *
#>
#> Test of Scale Coefficients (coefficients 2:3):
#> QM(df = 2) = 5.0289, p-val = 0.0809
#>
#> Model Results (Scale):
#>
#> estimate se zval pval ci.lb ci.ub
#> intrcpt -2.3456 0.8354 -2.8079 0.0050 -3.9829 -0.7084 **
#> ni100 -0.9995 0.6087 -1.6421 0.1006 -2.1925 0.1935
#> imag 2.1258 1.1857 1.7929 0.0730 -0.1981 4.4497 .
#>
#> ---
#> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#>