dat.michael2013.Rd
Results from studies exploring how a superfluous fMRI brain image influences the persuasiveness of a scientific claim.
dat.michael2013
The data frame contains the following columns:
Study             | character | name of the study (Citation - Experiment - Subgroup)
No_brain_n        | numeric   | sample size in the no-brain-image condition
No_brain_m        | numeric   | mean agreement rating in the no-brain-image condition
No_brain_s        | numeric   | standard deviation of the agreement ratings in the no-brain-image condition
Brain_n           | numeric   | sample size in the brain-image condition
Brain_m           | numeric   | mean agreement rating in the brain-image condition
Brain_s           | numeric   | standard deviation of the agreement ratings in the brain-image condition
Included_Critique | character | ‘Critique’ if the article included critical commentary on its conclusions, otherwise ‘No_critique’
Medium            | character | ‘Paper’ if the study was conducted in person; ‘Online’ if conducted online
Compensation      | character | notes on the compensation provided to participants
Participant_Pool  | character | notes on where participants were recruited
yi                | numeric   | raw mean difference, calculated as Brain_m - No_brain_m
vi                | numeric   | corresponding sampling variance
The dataset contains the data from the meta-analysis by Michael et al. (2013) of experiments on the persuasive power of a brain image. The meta-analysis analyzed an original study by McCabe and Castel (2008) as well as 10 replication attempts conducted by the authors of the meta-analysis.
In each study, participants read an article about using brain imaging as a lie detector. The article either included a superfluous fMRI image of a brain (the brain-image condition) or did not (the no-brain-image condition). After reading the article, all participants answered the question “Do you agree or disagree with the conclusion that brain imaging can be used as a lie detector?” on a scale from 1 (strongly disagree) to 4 (strongly agree).
The original study by McCabe and Castel (2008) reported a relatively large increase in agreement due to the presence of brain images. Meta-analysis of the original study with the 10 replications suggests, however, a small, possibly null effect: an estimated average raw mean difference of 0.07 points, 95% CI [-0.00, 0.14], under a random-effects model.
In some studies, the article also included a passage critiquing its primary claims; this is coded in the Included_Critique column for analysis as a possible moderator. Note that Experiment 3 of McCabe and Castel (2008) used a 2x2 between-subjects design in which both the presence of a brain image and the inclusion of a critique were manipulated; the two critique conditions are therefore recorded as separate rows in this dataset. Analysis of this dataset with metafor reproduces (up to rounding) the results reported in the manuscript.
Michael, R. B., Newman, E. J., Vuorre, M., Cumming, G., & Garry, M. (2013). On the (non)persuasive power of a brain image. Psychonomic Bulletin & Review, 20(4), 720–725. https://doi.org/10.3758/s13423-013-0391-6
McCabe, D. P., & Castel, A. D. (2008). Seeing is believing: The effect of brain images on judgments of scientific reasoning. Cognition, 107(1), 343–352. https://doi.org/10.1016/j.cognition.2007.07.017
psychology, persuasion, raw mean differences
### copy data into 'dat' and examine data
dat <- dat.michael2013
dat
#> Study No_brain_n No_brain_m No_brain_s Brain_n
#> 1 McCabe and Castel, 2008 - Experiment 3 - No Critique 28 2.89 0.79 26
#> 2 Michael et al., 2013 - Experiment 1 99 2.90 0.58 98
#> 3 Michael et al., 2013 - Experiment 2 42 2.62 0.54 33
#> 4 Michael et al., 2013 - Experiment 3 24 2.96 0.36 21
#> 5 Michael et al., 2013 - Experiment 4 184 2.93 0.60 184
#> 6 Michael et al., 2013 - Experiment 5 274 2.86 0.59 255
#> 7 McCabe and Castel, 2008 - Experiment 3 - Critique 26 2.69 0.55 28
#> 8 Michael et al., 2013 - Experiment 6 58 2.50 0.84 55
#> 9 Michael et al., 2013 - Experiment 7 34 2.41 0.78 34
#> 10 Michael et al., 2013 - Experiment 8 98 2.73 0.67 93
#> 11 Michael et al., 2013 - Experiment 9 99 2.54 0.66 95
#> 12 Michael et al., 2013 - Experiment 10 94 2.66 0.65 97
#> Brain_m Brain_s Included_Critique Medium Compensation Participant_Pool
#> 1 3.12 0.65 No_critique Paper Course credit Colorado State University undergraduates
#> 2 2.86 0.61 No_critique Online US$0.30 Mechanical Turk
#> 3 2.85 0.57 No_critique Online Course credit Victoria undergraduate subject pool
#> 4 3.07 0.55 No_critique Paper Movie voucher Wellington high school students
#> 5 2.89 0.60 No_critique Online US$0.50 Mechanical Turk
#> 6 2.91 0.52 No_critique Paper Course credit Victoria Intro Psyc subject pool
#> 7 3.00 0.54 Critique Paper Course credit Colorado State University undergraduates
#> 8 2.60 0.83 Critique Online US$0.50 Mechanical Turk
#> 9 2.74 0.51 Critique Paper None General Public
#> 10 2.68 0.69 Critique Online US$0.50 Mechanical Turk
#> 11 2.72 0.68 Critique Online US$0.50 Mechanical Turk
#> 12 2.64 0.71 Critique Online US$0.50 Mechanical Turk
#> yi vi
#> 1 0.23 0.038539286
#> 2 -0.04 0.007194919
#> 3 0.23 0.016788312
#> 4 0.11 0.019804762
#> 5 -0.04 0.003913043
#> 6 0.05 0.002330830
#> 7 0.31 0.022048901
#> 8 0.10 0.024690972
#> 9 0.33 0.025544118
#> 10 -0.05 0.009699967
#> 11 0.18 0.009267368
#> 12 -0.02 0.009691588
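### optional check: the yi and vi values can be reproduced from the summary
### statistics (assuming vi is the usual large-sample sampling variance of a
### raw mean difference, s1^2/n1 + s2^2/n2)
yi.check <- with(dat, Brain_m - No_brain_m)
vi.check <- with(dat, Brain_s^2 / Brain_n + No_brain_s^2 / No_brain_n)
round(cbind(yi.check, vi.check), 4)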
### load metafor package
library(metafor)
### data preparation
# yi and vi are already included in the dataset, but they can be recomputed
# with escalc(); setting measure="MD" requests the raw mean difference as the
# outcome measure, together with its sampling variance
dat <- escalc(measure="MD", m1i=Brain_m, sd1i=Brain_s, n1i=Brain_n,
              m2i=No_brain_m, sd2i=No_brain_s, n2i=No_brain_n, data=dat)
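### the same call with measure="SMD" would instead compute standardized mean
### differences (shown here only as an illustrative alternative scale; the
### published analysis uses raw mean differences)
dat.smd <- escalc(measure="SMD", m1i=Brain_m, sd1i=Brain_s, n1i=Brain_n,
                  m2i=No_brain_m, sd2i=No_brain_s, n2i=No_brain_n,
                  data=dat.michael2013)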
### meta-analysis using a random-effects model of the raw mean differences
res <- rma(yi, vi, data=dat)
print(res, digits=2)
#>
#> Random-Effects Model (k = 12; tau^2 estimator: REML)
#>
#> tau^2 (estimated amount of total heterogeneity): 0.00 (SE = 0.01)
#> tau (square root of estimated tau^2 value): 0.06
#> I^2 (total heterogeneity / total variability): 28.92%
#> H^2 (total variability / sampling variability): 1.41
#>
#> Test for Heterogeneity:
#> Q(df = 11) = 15.73, p-val = 0.15
#>
#> Model Results:
#>
#> estimate se zval pval ci.lb ci.ub
#> 0.07 0.03 1.94 0.05 -0.00 0.14 .
#>
#> ---
#> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#>
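### forest plot of the raw mean differences (a sketch; the argument settings
### below, such as the study labels and x-axis label, are illustrative and can
### be adjusted as needed)
forest(res, slab=dat$Study, xlab="Raw Mean Difference", header=TRUE)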
### examine if Included_Critique is a potential moderator
res <- rma(yi, vi, mods = ~ Included_Critique, data=dat)
print(res, digits=2)
#>
#> Mixed-Effects Model (k = 12; tau^2 estimator: REML)
#>
#> tau^2 (estimated amount of residual heterogeneity): 0.00 (SE = 0.01)
#> tau (square root of estimated tau^2 value): 0.06
#> I^2 (residual heterogeneity / unaccounted variability): 27.82%
#> H^2 (unaccounted variability / sampling variability): 1.39
#> R^2 (amount of heterogeneity accounted for): 1.33%
#>
#> Test for Residual Heterogeneity:
#> QE(df = 10) = 14.41, p-val = 0.16
#>
#> Test of Moderators (coefficient 2):
#> QM(df = 1) = 0.84, p-val = 0.36
#>
#> Model Results:
#>
#> estimate se zval pval ci.lb ci.ub
#> intrcpt 0.11 0.05 1.94 0.05 -0.00 0.21 .
#> Included_CritiqueNo_critique -0.06 0.07 -0.92 0.36 -0.20 0.07
#>
#> ---
#> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#>
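### predicted average raw mean difference in each critique condition (a sketch;
### with 'Critique' as the reference level of the moderator, newmods=0
### corresponds to the Critique condition and newmods=1 to No_critique)
predict(res, newmods=c(0,1), digits=2)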