The function can be used to aggregate multiple effect sizes or outcomes belonging to the same study (or to the same level of some other clustering variable) into a single combined effect size or outcome.

# S3 method for escalc
aggregate(x, cluster, time, V, struct="CS", rho, phi,
          fun, na.rm=TRUE, subset, select, digits, ...)

Arguments

x

an object of class "escalc".

cluster

vector to specify the clustering variable (e.g., study).

time

optional vector to specify the time points (only relevant when struct="CAR" or struct="CS+CAR").

V

optional argument to specify the variance-covariance matrix of the sampling errors. If not specified, argument struct is used to specify the variance-covariance structure.

struct

character string to specify the variance-covariance structure of the sampling errors within the same cluster (either "ID", "CS", "CAR", or "CS+CAR"). See ‘Details’.

rho

value of the correlation of the sampling errors within clusters (when struct="CS" or struct="CS+CAR"). Can also be a vector with the value of the correlation for each cluster.

phi

value of the autocorrelation of the sampling errors within clusters (when struct="CAR" or struct="CS+CAR"). Can also be a vector with the value of the autocorrelation for each cluster.

fun

optional list with three functions for aggregating other variables besides the effect sizes or outcomes within clusters (for numeric/integer variables, for logicals, and for all other types, respectively).

na.rm

logical to specify whether NA values should be removed before aggregating values within clusters. Can also be a vector with two logicals (the first pertains to the effect sizes or outcomes, the second to all other variables).

subset

optional (logical or numeric) vector to specify the subset of rows to include when aggregating the effect sizes or outcomes.

select

optional vector to specify the names of the variables to include in the aggregated dataset.

digits

integer to specify the number of decimal places to which the printed results should be rounded (the default is to take the value from the object).

...

other arguments.

Details

In many meta-analyses, multiple effect size estimates or outcomes can be extracted from the same study. Ideally, such structures should be analyzed using an appropriate multilevel/multivariate model as can be fitted with the rma.mv function. However, there may occasionally be reasons for aggregating multiple effect sizes or outcomes belonging to the same study (or to the same level of some other clustering variable) into a single combined effect size or outcome. The present function can be used for this purpose.

The input must be an object of class "escalc". The error ‘Error in match.fun(FUN): argument "FUN" is missing, with no default’ indicates that a regular data frame was passed to the function, which is not supported. One can turn a regular data frame (containing the effect sizes or outcomes and the corresponding sampling variances) into an "escalc" object with the escalc function; see the ‘Examples’ below for an illustration.

The cluster variable is used to specify which estimates/outcomes belong to the same study/cluster.

In the simplest case, the estimates/outcomes within clusters (or, to be precise, their sampling errors) are assumed to be independent. This is usually a safe assumption as long as each study participant (or whatever the unit of analysis is) only contributes data to a single estimate/outcome. For example, if a study provides effect size estimates for male and female subjects separately, then the sampling errors can usually be assumed to be independent. In this case, one can set struct="ID" and multiple estimates/outcomes within the same cluster are combined using standard inverse-variance weighting under the assumption of independence.
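
To make this weighting concrete, here is a minimal base-R sketch (an illustration of the computation, not the function's internal code) that combines the four district 11 estimates from the first example below under independence:

```r
# inverse-variance weighting of independent estimates (struct="ID")
yi <- c(-0.18, -0.22, 0.23, -0.30)   # estimates for district 11 (see 'Examples')
vi <- c(0.118, 0.118, 0.144, 0.144)  # corresponding sampling variances
wi <- 1/vi                           # inverse-variance weights
yi.agg <- sum(wi * yi) / sum(wi)     # combined estimate
vi.agg <- 1 / sum(wi)                # sampling variance of the combined estimate
round(c(yi.agg, vi.agg), 3)          # -0.126 0.032
```

These values match the district 11 row of the aggregated dataset shown in the ‘Examples’.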

In other cases, the estimates/outcomes within clusters cannot be assumed to be independent. For example, if multiple effect size estimates are computed for the same group of subjects (e.g., for different dependent variables), then the estimates are likely to be correlated. If the actual correlation between the estimates is unknown, one can often still make an educated guess and set argument rho to this value, which is then assumed to be the same for all pairs of estimates within clusters when struct="CS" (for a compound symmetric structure). Multiple estimates/outcomes within the same cluster are then combined using inverse-variance weighting taking their correlation into consideration (i.e., using generalized least squares). One can also specify a different value of rho for each cluster by passing a vector (of the same length as the number of clusters) to this argument.
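
The GLS combination can be sketched in a few lines of base R (using the first three estimates of dat.assink2016 from the ‘Examples’ and an assumed rho of 0.6; again an illustration of the computation, not the function's internal code):

```r
# GLS combination under a compound symmetric (CS) structure
yi  <- c(0.9066, 0.4295, 0.2679)  # first three estimates of dat.assink2016
vi  <- c(0.0740, 0.0398, 0.0481)  # corresponding sampling variances
rho <- 0.6                        # assumed within-cluster correlation
R <- matrix(rho, nrow=3, ncol=3)  # CS correlation matrix ...
diag(R) <- 1                      # ... with 1s on the diagonal
V <- sqrt(vi) %o% sqrt(vi) * R    # variance-covariance matrix of the estimates
Vinv <- solve(V)
vi.agg <- 1 / sum(Vinv)           # = 1 / (1' V^-1 1)
yi.agg <- vi.agg * sum(Vinv %*% yi)
```

Since the GLS combination is the best linear unbiased estimator, its variance can never exceed that of any single estimate.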

If multiple effect size estimates are computed for the same group of subjects at different time points, then it may be more sensible to assume that the correlation between estimates decreases as a function of the distance between the time points. If so, one can specify struct="CAR" (for a continuous-time autoregressive structure), set phi to the autocorrelation (for two estimates one time-unit apart), and use argument time to specify the actual time points corresponding to the estimates. The correlation between two estimates, \(y_{ij}\) and \(y_{ij'}\), in the \(i\)th cluster, with time points \(t_{ij}\) and \(t_{ij'}\), is then given by \(\phi^{|t_{ij} - t_{ij'}|}\). One can also specify a different value of phi for each cluster by passing a vector (of the same length as the number of clusters) to this argument.
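
For example, with made-up time points, the implied correlation matrix can be computed as:

```r
# CAR structure: cor(e_ij, e_ij') = phi^|t_ij - t_ij'|
phi  <- 0.9
time <- c(1, 2, 4)  # made-up time points
R <- outer(time, time, function(t1, t2) phi^abs(t1 - t2))
round(R, 3)         # e.g., R[1,3] = 0.9^3 = 0.729
```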

One can also combine the compound symmetric and autoregressive structures by specifying struct="CS+CAR". In this case, one must specify both rho and phi. The correlation between two estimates, \(y_{ij}\) and \(y_{ij'}\), in the \(i\)th cluster, with time points \(t_{ij}\) and \(t_{ij'}\), is then given by \(\rho + (1 - \rho) \phi^{|t_{ij} - t_{ij'}|}\).
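
Using made-up time points again, the combined structure yields correlations that decay with the time lag but level off at rho rather than approaching zero:

```r
# CS+CAR structure: cor = rho + (1 - rho) * phi^|t_ij - t_ij'|
rho  <- 0.6
phi  <- 0.9
time <- c(1, 2, 4)  # made-up time points
R <- rho + (1 - rho) * outer(time, time, function(t1, t2) phi^abs(t1 - t2))
```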

Finally, if one actually knows the correlation (and hence the covariance) between each pair of estimates, one can also specify the entire variance-covariance matrix of the estimates (or more precisely, their sampling errors) via the V argument. In this case, arguments struct, rho, and phi are ignored.
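
For instance, if the correlation between two estimates is known (the 0.8 below is purely illustrative), the corresponding V matrix can be constructed from the correlations and sampling variances; the aggregate() call is shown commented out, since it requires the full dataset:

```r
# constructing V directly from known correlations (the 0.8 is made up)
vi <- c(0.0740, 0.0398)           # sampling variances of two estimates
R  <- matrix(c(1.0, 0.8,
               0.8, 1.0), nrow=2) # known correlation matrix (illustrative)
V  <- sqrt(vi) %o% sqrt(vi) * R   # covariances = rho_jj' * sqrt(v_j * v_j')
## agg <- aggregate(dat, cluster=study, V=V)  # struct, rho, phi then ignored
```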

Other variables (besides the estimates) will also be aggregated to the cluster level. By default, numeric/integer variables are averaged, logicals are also averaged (yielding the proportion of TRUE values), and for all other types of variables (e.g., character variables or factors) the most frequent category/level is returned. One can also specify a list of three functions via the fun argument for aggregating variables belonging to these three types.
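
These defaults can be sketched as follows (an approximation of the behavior described above, not the function's actual code); a list of three functions in this order can be passed via fun to override them:

```r
# approximate defaults for aggregating the other variables
agg.num   <- function(x) mean(x)                     # numeric/integer: mean
agg.lgl   <- function(x) mean(x)                     # logical: proportion of TRUEs
agg.other <- function(x) names(which.max(table(x)))  # other: most frequent level
agg.other(c("general", "overt", "general"))          # "general"
## aggregate(dat, cluster=study, fun=list(agg.num, agg.lgl, agg.other))
```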

Argument na.rm controls how missing values should be handled. By default, any missing estimates are first removed before aggregating the non-missing values within each cluster. The same applies when aggregating the other variables. One can also specify a vector with two logicals for the na.rm argument to control how missing values should be handled when aggregating the estimates and when aggregating all other variables.

Value

An object of class c("escalc","data.frame") that contains the (selected) variables aggregated to the cluster level.

The object is formatted and printed with the print.escalc function.

Author

Wolfgang Viechtbauer wvb@metafor-project.org http://www.metafor-project.org

References

Viechtbauer, W. (2010). Conducting meta-analyses in R with the metafor package. Journal of Statistical Software, 36(3), 1--48. https://doi.org/10.18637/jss.v036.i03

See also

escalc for creating objects of class "escalc" and rma.mv for fitting multilevel/multivariate models that can properly account for multiple dependent estimates per study.

Examples

### copy data into 'dat' and examine data
dat <- dat.konstantopoulos2011
dat
#>    district school study year     yi    vi
#>  1       11      1     1 1976 -0.180 0.118
#>  2       11      2     2 1976 -0.220 0.118
#>  3       11      3     3 1976  0.230 0.144
#>  4       11      4     4 1976 -0.300 0.144
#>  5       12      1     5 1989  0.130 0.014
#>  6       12      2     6 1989 -0.260 0.014
#>  7       12      3     7 1989  0.190 0.015
#>  8       12      4     8 1989  0.320 0.024
#>  9       18      1     9 1994  0.450 0.023
#> 10       18      2    10 1994  0.380 0.043
#> 11       18      3    11 1994  0.290 0.012
#> 12       27      1    12 1976  0.160 0.020
#> 13       27      2    13 1976  0.650 0.004
#> 14       27      3    14 1976  0.360 0.004
#> 15       27      4    15 1976  0.600 0.007
#> 16       56      1    16 1997  0.080 0.019
#> 17       56      2    17 1997  0.040 0.007
#> 18       56      3    18 1997  0.190 0.005
#> 19       56      4    19 1997 -0.060 0.004
#> 20       58      1    20 1976 -0.180 0.020
#> 21       58      2    21 1976  0.000 0.018
#> 22       58      3    22 1976  0.000 0.019
#> 23       58      4    23 1976 -0.280 0.022
#> 24       58      5    24 1976 -0.040 0.020
#> 25       58      6    25 1976 -0.300 0.021
#> 26       58      7    26 1976  0.070 0.006
#> 27       58      8    27 1976  0.000 0.007
#> 28       58      9    28 1976  0.050 0.007
#> 29       58     10    29 1976 -0.080 0.007
#> 30       58     11    30 1976 -0.090 0.007
#> 31       71      1    31 1997  0.300 0.015
#> 32       71      2    32 1997  0.980 0.011
#> 33       71      3    33 1997  1.190 0.010
#> 34       86      1    34 1997 -0.070 0.001
#> 35       86      2    35 1997 -0.050 0.001
#> 36       86      3    36 1997 -0.010 0.001
#> 37       86      4    37 1997  0.020 0.001
#> 38       86      5    38 1997 -0.030 0.001
#> 39       86      6    39 1997  0.000 0.001
#> 40       86      7    40 1997  0.010 0.001
#> 41       86      8    41 1997 -0.100 0.001
#> 42       91      1    42 2000  0.500 0.010
#> 43       91      2    43 2000  0.660 0.011
#> 44       91      3    44 2000  0.200 0.010
#> 45       91      4    45 2000  0.000 0.009
#> 46       91      5    46 2000  0.050 0.013
#> 47       91      6    47 2000  0.070 0.013
#> 48      108      1    48 2000 -0.520 0.031
#> 49      108      2    49 2000  0.700 0.031
#> 50      108      3    50 2000 -0.030 0.030
#> 51      108      4    51 2000  0.270 0.030
#> 52      108      5    52 2000 -0.340 0.030
#> 53      644      1    53 1995  0.120 0.087
#> 54      644      2    54 1995  0.610 0.082
#> 55      644      3    55 1994  0.040 0.067
#> 56      644      4    56 1994 -0.050 0.067
### aggregate estimates to the district level, assuming independent sampling
### errors for multiple studies/schools within the same district
agg <- aggregate(dat, cluster=district, struct="ID")
agg
#>    district school study   year     yi    vi
#>  1       11    2.5   2.5 1976.0 -0.126 0.032
#>  2       12    2.5   6.5 1989.0  0.067 0.004
#>  3       18    2.0  10.0 1994.0  0.350 0.007
#>  4       27    2.5  13.5 1976.0  0.500 0.001
#>  5       56    2.5  17.5 1997.0  0.051 0.002
#>  6       58    6.0  25.0 1976.0 -0.042 0.001
#>  7       71    2.0  32.0 1997.0  0.886 0.004
#>  8       86    4.5  37.5 1997.0 -0.029 0.000
#>  9       91    3.5  44.5 2000.0  0.250 0.002
#> 10      108    3.0  50.0 2000.0  0.015 0.006
#> 11      644    2.5  54.5 1994.5  0.162 0.019
### copy data into 'dat' and examine data
dat <- dat.assink2016
dat
#>     study esid  id      yi     vi pubstatus  year deltype
#>   1     1    1   1  0.9066 0.0740         1   4.5 general
#>   2     1    2   2  0.4295 0.0398         1   4.5 general
#>   3     1    3   3  0.2679 0.0481         1   4.5 general
#>   4     1    4   4  0.2078 0.0239         1   4.5 general
#>   5     1    5   5  0.0526 0.0331         1   4.5 general
#>   6     1    6   6 -0.0507 0.0886         1   4.5 general
#>   7     2    1   7  0.5117 0.0115         1   1.5 general
#>   8     2    2   8  0.4738 0.0076         1   1.5 general
#>   9     2    3   9  0.3544 0.0065         1   1.5 general
#>  10     3    1  10  2.2844 0.3325         1  -8.5 general
#>  11     3    2  11  2.1771 0.3073         1  -8.5 general
#>  12     3    3  12  1.7777 0.2697         1  -8.5 general
#>  13     3    4  13  1.5480 0.4533         1  -8.5 general
#>  14     3    5  14  1.4855 0.1167         1  -6.5 general
#>  15     3    6  15  1.4836 0.1706         1  -6.5 general
#>  16     3    7  16  1.2777 0.1538         1  -8.5 general
#>  17     3    8  17  1.0311 0.3132         1  -8.5 general
#>  18     3    9  18  0.9409 0.1487         1  -6.5 general
#>  19     3   10  19  0.6263 0.2139         1  -6.5 general
#>  20     4    1  20 -0.0447 0.0331         1  -7.5 general
#>  21     5    1  21  1.5490 0.1384         1 -11.5 general
#>  22     6    1  22  0.7516 0.0477         1   5.5 general
#>  23     6    2  23  0.3979 0.0449         1   5.5 general
#>  24     6    3  24  0.2062 0.3533         1   5.5 general
#>  25     6    4  25 -0.2927 0.0758         1   5.5 general
#>  26     6    5  26 -0.5067 0.0314         1   5.5 general
#>  27     7    1  27  1.7083 0.0729         1   3.5 general
#>  28     7    2  28  0.8003 0.0815         1   3.5 general
#>  29     7    3  29  0.7268 0.1394         1   3.5 general
#>  30     7    4  30  0.6569 0.0943         1   3.5 general
#>  31     7    5  31  0.5569 0.1355         1   3.5 general
#>  32     7    6  32  0.3504 0.0763         1   3.5 general
#>  33     8    1  33  0.3695 0.0199         1   1.5 general
#>  34     9    1  34  0.1048 0.0331         1   5.5 general
#>  35     9    2  35  0.1748 0.0346         1   5.5 general
#>  36    10    1  36  0.4549 0.0278         1   4.5 general
#>  37    10    2  37  0.3161 0.0185         1   4.5 general
#>  38    10    3  38  0.0826 0.0328         1   4.5 general
#>  39    10    4  39  0.0221 0.0612         1   4.5 general
#>  40    10    5  40 -0.0199 0.0127         1   4.5 general
#>  41    11    1  41  2.3580 0.1226         0  -6.5 general
#>  42    11    2  42  2.3580 0.1226         0  -6.5 general
#>  43    11    3  43  2.3157 0.2416         0  -6.5 general
#>  44    11    4  44  2.0765 0.0844         0  -6.5 general
#>  45    11    5  45  2.0034 0.0780         0  -6.5 general
#>  46    11    6  46  1.7716 0.0723         0  -6.5 general
#>  47    11    7  47  1.4918 0.0701         0  -6.5 general
#>  48    11    8  48  1.4476 0.0422         0  -6.5 general
#>  49    11    9  49  1.2897 0.0663         0  -6.5   overt
#>  50    11   10  50  1.2637 0.0935         0  -6.5   overt
#>  51    11   11  51  1.2423 0.0392         0  -6.5 general
#>  52    11   12  52  1.1769 0.0387         0  -6.5 general
#>  53    11   13  53  1.0760 0.0391         0  -6.5 general
#>  54    11   14  54  1.0653 0.0380         0  -6.5   overt
#>  55    11   15  55  1.0379 0.1651         0  -6.5 general
#>  56    11   16  56  1.0101 0.0586         0  -6.5   overt
#>  57    11   17  57  1.0087 0.0386         0  -6.5   overt
#>  58    11   18  58  0.8765 0.0893         0  -6.5 general
#>  59    11   19  59  0.8615 0.1591         0  -6.5   overt
#>  60    11   20  60  0.8531 0.0350         0  -6.5 general
#>  61    11   21  61  0.6999 0.0353         0  -6.5 general
#>  62    11   22  62  0.6947 0.0353         0  -6.5 general
#>  63    12    1  63  0.2994 0.0041         0   4.5   overt
#>  64    12    2  64  0.2992 0.0042         0   4.5 general
#>  65    12    3  65  0.2989 0.0041         0   4.5   overt
#>  66    12    4  66  0.2910 0.0042         0   4.5   overt
#>  67    12    5  67  0.2170 0.0041         0   4.5   overt
#>  68    13    1  68  0.3874 0.0138         1  -3.5 general
#>  69    13    2  69  0.2178 0.0137         1  -3.5 general
#>  70    14    1  70  0.4571 0.0100         1  -1.5 general
#>  71    14    2  71  0.1947 0.0059         1  -1.5 general
#>  72    14    3  72  0.0239 0.0025         1  -1.5 general
#>  73    14    4  73 -0.0575 0.0032         1  -1.5 general
#>  74    14    5  74 -0.1190 0.0789         1  -1.5 general
#>  75    14    6  75 -0.2945 0.0179         1  -1.5 general
#>  76    15    1  76  0.7130 0.1537         1 -13.5 general
#>  77    15    2  77  0.3990 0.2334         1 -13.5 general
#>  78    15    3  78 -0.7645 0.1674         1 -13.5 general
#>  79    16    1  79  0.7156 0.0914         1   2.5   overt
#>  80    16    2  80  0.7067 0.0875         1   2.5  covert
#>  81    16    3  81  0.6475 0.0330         1   2.5 general
#>  82    16    4  82  0.6428 0.0861         1   2.5  covert
#>  83    16    5  83  0.6271 0.0400         1   2.5 general
#>  84    16    6  84  0.6238 0.0680         1   2.5 general
#>  85    16    7  85  0.6025 0.1287         1   2.5   overt
#>  86    16    8  86  0.5763 0.0332         1   2.5 general
#>  87    16    9  87  0.5171 0.0517         1   2.5  covert
#>  88    16   10  88 -0.3797 0.0390         1   2.5  covert
#>  89    16   11  89 -0.4228 0.0664         1   2.5  covert
#>  90    16   12  90 -0.4245 0.0809         1   2.5  covert
#>  91    16   13  91 -0.4671 0.0667         1   2.5  covert
#>  92    16   14  92 -0.5230 0.0988         1   2.5   overt
#>  93    16   15  93 -0.5675 0.0340         1   2.5  covert
#>  94    16   16  94 -0.7586 0.0437         1   2.5  covert
#>  95    17    1  95  0.3453 0.0340         1   5.5 general
#>  96    17    2  96  0.1221 0.0158         1   5.5 general
#>  97    17    3  97  0.0906 0.0107         1   5.5 general
#>  98    17    4  98  0.0040 0.0208         1   5.5 general
#>  99    17    5  99 -0.0207 0.0123         1   5.5 general
#> 100    17    6 100 -0.0660 0.0100         1   5.5 general
### note: 'dat' is a regular data frame
class(dat)
#> [1] "data.frame"
### turn data frame into an 'escalc' object
dat <- escalc(yi=yi, vi=vi, data=dat)
class(dat)
#> [1] "escalc"     "data.frame"
### aggregate the estimates to the study level, assuming a CS structure for
### the sampling errors within studies with a correlation of 0.6
agg <- aggregate(dat, cluster=study, rho=0.6)
agg
#>    study esid   id      yi     vi pubstatus  year deltype
#>  1     1  3.5  3.5  0.1629 0.0197         1   4.5 general
#>  2     2  2.0  8.0  0.4060 0.0056         1   1.5 general
#>  3     3  5.5 14.5  1.0790 0.0832         1  -7.7 general
#>  4     4  1.0 20.0 -0.0447 0.0331         1  -7.5 general
#>  5     5  1.0 21.0  1.5490 0.1384         1 -11.5 general
#>  6     6  3.0 24.0 -0.0550 0.0214         1   5.5 general
#>  7     7  3.5 29.5  1.0072 0.0545         1   3.5 general
#>  8     8  1.0 33.0  0.3695 0.0199         1   1.5 general
#>  9     9  1.5 34.5  0.1379 0.0271         1   5.5 general
#> 10    10  3.0 38.0  0.1167 0.0107         1   4.5 general
#> 11    11 11.5 51.5  0.5258 0.0114         0  -6.5 general
#> 12    12  3.0 65.0  0.2805 0.0028         0   4.5   overt
#> 13    13  1.5 68.5  0.3018 0.0110         1  -3.5 general
#> 14    14  3.5 72.5  0.0356 0.0014         1  -1.5 general
#> 15    15  2.0 77.0  0.0908 0.1269         1 -13.5 general
#> 16    16  8.5 86.5  0.0181 0.0169         1   2.5  covert
#> 17    17  3.5 97.5 -0.0552 0.0072         1   5.5 general
### reshape 'dat.ishak2007' into long format
dat <- dat.ishak2007
dat <- reshape(dat.ishak2007, direction="long", idvar="study", v.names=c("yi","vi"),
               varying=list(c(2,4,6,8), c(3,5,7,9)))
dat <- dat[order(dat$study, dat$time),]
is.miss <- is.na(dat$yi)
dat <- dat[!is.miss,]
rownames(dat) <- NULL
dat
#>                           study mdur mbase time    yi    vi
#>  1               Alegret (2001) 16.1  53.6    1 -33.4  14.3
#>  2            Barichella (2003) 13.5  45.3    1 -20.0   7.3
#>  3            Barichella (2003) 13.5  45.3    3 -30.0   5.7
#>  4                Berney (2002) 13.6  45.6    1 -21.1   7.3
#>  5              Burchiel (1999) 13.6  48.0    1 -20.0   8.0
#>  6              Burchiel (1999) 13.6  48.0    2 -20.0   8.0
#>  7              Burchiel (1999) 13.6  48.0    3 -18.0   5.0
#>  8                  Chen (2003) 12.1  65.7    2 -32.9 125.0
#>  9 DBS for PD Study Grp. (2001) 14.4  54.0    1 -25.6   4.2
#> 10 DBS for PD Study Grp. (2001) 14.4  54.0    2 -28.3   4.6
#> 11              Dujardin (2001) 13.1  65.0    1 -30.3  88.2
#> 12              Dujardin (2001) 13.1  65.0    3 -24.5 170.7
#> 13              Esselink (2004) 12.0  51.5    2 -25.0  17.0
#> 14            Funkiewiez (2003) 14.0  56.0    3 -36.0   5.0
#> 15                Herzog (2003) 15.0  44.9    2 -22.5   6.8
#> 16                Herzog (2003) 15.0  44.9    3 -25.2  11.0
#> 17                Herzog (2003) 15.0  44.9    4 -25.7  15.4
#> 18                Iansek (2002) 13.0  27.6    2  -8.6  41.0
#> 19                  Just (2002) 14.0  44.0    1 -26.0  22.4
#> 20                  Just (2002) 14.0  44.0    2 -30.0  20.6
#> 21        Kleiner-Fisman (1999) 13.4  50.1    3 -25.5   8.2
#> 22        Kleiner-Fisman (1999) 13.4  50.1    4 -19.5  13.0
#> 23                 Krack (2003) 14.6  55.7    3 -36.7   5.8
#> 24                 Krack (2003) 14.6  55.7    4 -32.9   6.1
#> 25                Krause (2001) 13.7  59.0    1 -27.5   3.8
#> 26                Krause (2001) 13.7  59.0    2 -23.5   3.8
#> 27                Krause (2001) 13.7  59.0    3 -29.0   3.8
#> 28                Krause (2004) 14.4  60.0    3 -25.0  13.0
#> 29                Krause (2004) 14.4  60.0    4 -23.0  15.4
#> 30                 Kumar (1998) 14.3  55.7    2 -36.3  27.3
#> 31              Lagrange (2002) 14.0  53.7    3 -29.4  10.7
#> 32              Limousin (1998) 14.0  57.0    1 -31.0   2.6
#> 33              Limousin (1998) 14.0  57.0    2 -34.0   2.0
#> 34              Limousin (1998) 14.0  57.0    3 -32.5   2.0
#> 35            Linazasoro (2003) 13.7  47.7    3 -20.6  25.3
#> 36               Lopiano (2001) 15.4  59.8    1 -33.9  20.1
#> 37                 Macia (2004) 15.0  55.2    3 -35.4  21.2
#> 38       Martinez-Martin (2002) 16.4  55.7    2 -34.9  18.0
#> 39             Molinuevo (2000) 15.8  49.6    2 -32.7  16.3
#> 40                  Moro (1999) 15.4  67.6    1 -23.0  38.1
#> 41                  Moro (1999) 15.4  67.6    2 -24.1  32.9
#> 42                  Moro (1999) 15.4  67.6    3 -27.8  31.0
#> 43                  Moro (1999) 15.4  67.6    4 -28.3  34.6
#> 44            Ostergaard (2002) 15.0  51.3    1 -31.2  12.7
#> 45            Ostergaard (2002) 15.0  51.3    3 -33.0   9.5
#> 46                 Pahwa (2003) 12.1  41.3    1 -16.2   5.9
#> 47                 Pahwa (2003) 12.1  41.3    3 -16.3   7.0
#> 48                 Pahwa (2003) 12.1  41.3    4 -11.5  12.7
#> 49                 Patel (2003) 10.0  47.8    3 -29.2   5.8
#> 50               Perozzo (2001) 15.4  59.7    2 -31.7  12.4
#> 51      Pinter (1999) - Long FU 11.3  60.0    1 -32.2  26.5
#> 52      Pinter (1999) - Long FU 11.3  60.0    3 -32.9  29.0
#> 53     Pinter (1999) - Short FU 11.5  59.7    1 -31.7  19.1
#> 54        Rodriguez-Oroz (2000) 16.5  51.5    1 -29.3  22.9
#> 55        Rodriguez-Oroz (2000) 16.5  51.5    2 -32.0  20.0
#> 56        Rodriguez-Oroz (2000) 16.5  51.5    3 -36.7  17.8
#> 57                Romito (2003) 13.8  63.9    1 -30.1   9.4
#> 58                Romito (2003) 13.8  63.9    2 -30.5   8.7
#> 59                Romito (2003) 13.8  63.9    3 -29.7  10.4
#> 60                Romito (2003) 13.8  63.9    4 -31.9  13.3
#> 61             Rousseaux (2004) 12.0  52.3    1 -17.6  28.4
#> 62         Russman (2004) (21m) 15.9  47.1    4 -22.9  20.0
#> 63             Schneider (2003) 17.0  51.3    3 -36.0  27.7
#> 64          Seif (2004) (17.5m) 15.0  44.2    4 -22.5  20.3
#> 65                Simuni (2002) 16.7  43.5    1 -19.4   1.6
#> 66                Simuni (2002) 16.7  43.5    2 -18.0   1.7
#> 67                Simuni (2002) 16.7  43.5    3 -20.5   1.5
#> 68       Straits-Troster (2000)  8.0  47.4    1  -9.3  85.2
#> 69               Thobois (2002) 13.5  44.9    2 -24.7  15.5
#> 70               Thobois (2002) 13.5  44.9    3 -27.9  17.1
#> 71               Troster (2003)  9.5  41.6    1 -16.7   9.8
#> 72          Valldeoriola (2002) 15.6  49.0    2 -31.2 196.0
#> 73          Valldeoriola (2002) 15.6  49.0    4 -27.4 201.6
#> 74                Vesper (2002) 14.0  53.0    1 -27.0   5.5
#> 75                Vesper (2002) 14.0  53.0    2 -30.0   3.5
#> 76           Vingerhoets (2002) 16.0  48.8    1 -19.7  18.5
#> 77           Vingerhoets (2002) 16.0  48.8    2 -22.1  18.1
#> 78           Vingerhoets (2002) 16.0  48.8    3 -24.3  18.2
#> 79           Vingerhoets (2002) 16.0  48.8    4 -21.9  16.7
#> 80               Volkman (2001) 13.1  56.4    2 -37.8  20.9
#> 81               Volkman (2001) 13.1  56.4    3 -34.0  26.4
#> 82           Weselburger (2002) 14.0  50.3    1 -22.1  40.8
### aggregate the estimates to the study level, assuming a CAR structure for
### the sampling errors within studies with an autocorrelation of 0.9
agg <- aggregate(dat, cluster=study, struct="CAR", time=time, phi=0.9)
agg
#>                           study mdur mbase     time    yi    vi
#>  1               Alegret (2001) 16.1  53.6 1.000000 -33.4  14.3
#>  2            Barichella (2003) 13.5  45.3 2.000000 -28.1   5.6
#>  3                Berney (2002) 13.6  45.6 1.000000 -21.1   7.3
#>  4              Burchiel (1999) 13.6  48.0 2.000000 -17.2   4.6
#>  5                  Chen (2003) 12.1  65.7 2.000000 -32.9 125.0
#>  6 DBS for PD Study Grp. (2001) 14.4  54.0 1.500000 -26.3   4.1
#>  7              Dujardin (2001) 13.1  65.0 2.000000 -31.4  86.1
#>  8              Esselink (2004) 12.0  51.5 2.000000 -25.0  17.0
#>  9            Funkiewiez (2003) 14.0  56.0 3.000000 -36.0   5.0
#> 10                Herzog (2003) 15.0  44.9 3.000000 -21.3   6.3
#> 11                Iansek (2002) 13.0  27.6 2.000000  -8.6  41.0
#> 12                  Just (2002) 14.0  44.0 1.500000 -28.8  20.2
#> 13        Kleiner-Fisman (1999) 13.4  50.1 3.500000 -28.0   7.7
#> 14                 Krack (2003) 14.6  55.7 3.500000 -35.3   5.6
#> 15                Krause (2001) 13.7  59.0 2.000000 -28.0   3.4
#> 16                Krause (2004) 14.4  60.0 3.500000 -24.8  13.0
#> 17                 Kumar (1998) 14.3  55.7 2.000000 -36.3  27.3
#> 18              Lagrange (2002) 14.0  53.7 3.000000 -29.4  10.7
#> 19              Limousin (1998) 14.0  57.0 2.000000 -33.6   1.9
#> 20            Linazasoro (2003) 13.7  47.7 3.000000 -20.6  25.3
#> 21               Lopiano (2001) 15.4  59.8 1.000000 -33.9  20.1
#> 22                 Macia (2004) 15.0  55.2 3.000000 -35.4  21.2
#> 23       Martinez-Martin (2002) 16.4  55.7 2.000000 -34.9  18.0
#> 24             Molinuevo (2000) 15.8  49.6 2.000000 -32.7  16.3
#> 25                  Moro (1999) 15.4  67.6 2.500000 -26.5  29.8
#> 26            Ostergaard (2002) 15.0  51.3 2.000000 -32.8   9.4
#> 27                 Pahwa (2003) 12.1  41.3 2.666667 -18.4   5.2
#> 28                 Patel (2003) 10.0  47.8 3.000000 -29.2   5.8
#> 29               Perozzo (2001) 15.4  59.7 2.000000 -31.7  12.4
#> 30      Pinter (1999) - Long FU 11.3  60.0 2.000000 -32.5  25.0
#> 31     Pinter (1999) - Short FU 11.5  59.7 1.000000 -31.7  19.1
#> 32        Rodriguez-Oroz (2000) 16.5  51.5 2.000000 -35.3  17.5
#> 33                Romito (2003) 13.8  63.9 2.500000 -30.2   8.5
#> 34             Rousseaux (2004) 12.0  52.3 1.000000 -17.6  28.4
#> 35         Russman (2004) (21m) 15.9  47.1 4.000000 -22.9  20.0
#> 36             Schneider (2003) 17.0  51.3 3.000000 -36.0  27.7
#> 37          Seif (2004) (17.5m) 15.0  44.2 4.000000 -22.5  20.3
#> 38                Simuni (2002) 16.7  43.5 2.000000 -20.7   1.4
#> 39       Straits-Troster (2000)  8.0  47.4 1.000000  -9.3  85.2
#> 40               Thobois (2002) 13.5  44.9 2.500000 -25.5  15.3
#> 41               Troster (2003)  9.5  41.6 1.000000 -16.7   9.8
#> 42          Valldeoriola (2002) 15.6  49.0 3.000000 -29.4 179.8
#> 43                Vesper (2002) 14.0  53.0 1.500000 -31.2   3.3
#> 44           Vingerhoets (2002) 16.0  48.8 2.500000 -20.7  15.1
#> 45               Volkman (2001) 13.1  56.4 2.500000 -38.0  20.9
#> 46           Weselburger (2002) 14.0  50.3 1.000000 -22.1  40.8