
Fixed versus Random-effects Meta-analyses

 

It has been a while since my last blog post and lots has been happening in and around the lab to keep everyone busy! Indeed, I just got back from a productive trip to Lund University to visit Tobias Uller to progress some work Shinichi, Tobias and I have been planning.

I’ve also been busy with Losia, Shinichi and Rose writing about meta-analyses and delving into the details. As such, I thought I would devote this blog post to demonstrating what fixed and random-effects meta-analytic models are and how we can understand what is spit out in model outputs. We often gather data and then build models using these data to get answers. However, it’s fun – and important – to take a few steps back and discover what is really going on under the hood!

Before we start I’ll need to generate some simulated data that we can use to help us understand things along the way.

# Generate some simulated effect size data with known sampling variance,
# assumed to come from a common underlying distribution
set.seed(86) # Set seed so that we all get the same simulated results
# We will have 5 studies
    stdy   <- 1:5
# We know the sampling variance for each effect size
    Ves    <- c(0.05, 0.10, 0.02, 0.10, 0.09)
# We'll need these later: the inverse-variance weights
    W      <- 1 / Ves
# We assume effects are sampled from a normal distribution with a mean effect size of 2
    es     <- rnorm(length(Ves), 2, sqrt(Ves))
# Data for our fixed effect meta-analysis
    dataFE <- data.frame(stdy = stdy, es, Ves)

# Generate a second set of effect sizes, but now assume that each study's effect
# does not come from a single common distribution, but from a population of effect sizes.

# Adding 0.8 to Ves adds a between-study variance of 0.8. In other words, each
# study's true effect is itself drawn from a distribution of effects with a
# variance of 0.8 around the overall mean of 2.
    esRE   <- rnorm(length(Ves), 2, sqrt(Ves + 0.8))
# Data for our random effects meta-analysis
    dataRE <- data.frame(stdy = stdy, esRE, Ves)

We can see what these two datasets look like in the figure below. The red circles are the effect sizes and their standard errors (the square root of the sampling error variance) from our fixed effect (FE) dataset. In contrast, the black circles are the effect sizes and standard errors from the random effects (RE) dataset. The red line is the true average effect size (2).
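The original plotting code isn’t shown in this post, but if you want to recreate a similar figure yourself, here is a minimal base-R sketch (the 0.1 x-offset is just my way of stopping the two datasets from overlapping):

# A minimal base-R sketch of the figure described above
    ylims <- range(c(dataFE$es - sqrt(Ves), dataFE$es + sqrt(Ves),
                     dataRE$esRE - sqrt(Ves), dataRE$esRE + sqrt(Ves)))
    plot(dataRE$stdy, dataRE$esRE, pch = 19, col = "black",
         xlab = "Study", ylab = "Effect size (+/- SE)", ylim = ylims)
    arrows(dataRE$stdy, dataRE$esRE - sqrt(dataRE$Ves),
           dataRE$stdy, dataRE$esRE + sqrt(dataRE$Ves),
           angle = 90, code = 3, length = 0.05)
    points(dataFE$stdy + 0.1, dataFE$es, pch = 19, col = "red")
    arrows(dataFE$stdy + 0.1, dataFE$es - sqrt(dataFE$Ves),
           dataFE$stdy + 0.1, dataFE$es + sqrt(dataFE$Ves),
           angle = 90, code = 3, length = 0.05, col = "red")
    abline(h = 2, col = "red") # the true average effect size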

We notice a few important differences here. In the RE dataset the variance across studies is a lot larger than in the FE dataset. This is what we would expect, because we have added in between-study variance. Now, let’s use a common meta-analysis package, metafor, to analyse these datasets.

# Run a fixed effect meta-analysis using the FE dataset. 
    metafor::rma(yi = es, vi = Ves, method = "FE", data = dataFE)
## 
## Fixed-Effects Model (k = 5)
## 
## Test for Heterogeneity: 
## Q(df = 4) = 2.2340, p-val = 0.6928
## 
## Model Results:
## 
## estimate       se     zval     pval    ci.lb    ci.ub          
##   2.0731   0.0994  20.8459   <.0001   1.8782   2.2680      *** 
## 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

So this suggests that the average estimate is 2.07 with a standard error of 0.1. It also provides us with a Q statistic, which measures the amount of heterogeneity among effect sizes. If Q is large and its p-value small, it suggests substantial heterogeneity among effect sizes beyond what we would expect if these effects were generated from a common underlying distribution. Homogeneity is a major assumption of fixed effect meta-analyses, and in this case it is supported (Q is small and the p-value large). This is a good thing, because we specifically generated these data under the assumption that they come from the same distribution. We can see this if we look at how we generated the data: rnorm(length(Ves), 2, sqrt(Ves)). This draws random effect sizes from a normal distribution with a standard deviation (sqrt(Ves)) for each effect size that is defined only by its sampling variability.

What is this model doing, though, and what is the logic behind the calculations? The best way to understand is to hand-calculate these values. Basically, the effect sizes are weighted by the inverse of their sampling error variance when deriving the pooled estimate and its variance. Let’s write down the formulas and then calculate this by hand to see what’s happening:
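These are the standard inverse-variance formulas, where $w_i = 1/v_i$ is the weight for study $i$ with effect size $es_i$ and sampling variance $v_i$:

$$\hat{\mu} = \frac{\sum_i w_i \, es_i}{\sum_i w_i}, \qquad \mathrm{Var}(\hat{\mu}) = \frac{1}{\sum_i w_i}, \qquad \mathrm{SE}(\hat{\mu}) = \sqrt{\mathrm{Var}(\hat{\mu})}$$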

    # Calculate the pooled effect size
    EsP.FE <- sum(W * dataFE$es) / sum(W)
    EsP.FE
## [1] 2.073105
    # Calculate the pooled variance around the estimate
    VarEsP.FE <- 1 / sum(W)
    VarEsP.FE
## [1] 0.00989011
    # Calculate the standard error around the estimate
    SE.EsP.FE <- sqrt(VarEsP.FE)
    SE.EsP.FE
## [1] 0.09944903

Wow! This is so cool. We just did a fixed effect meta-analysis by hand…and, look, it matches the model output perfectly. The math is not so scary after all! But what about this extra statistic, Q? How do we derive this?
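Q is just the weighted sum of squared deviations of each effect size from the pooled estimate. Expanding it gives the computational form used in the code below:

$$Q = \sum_i w_i \, (es_i - \hat{\mu})^2 = \sum_i w_i \, es_i^2 - \frac{\left(\sum_i w_i \, es_i\right)^2}{\sum_i w_i}$$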

# Now let's calculate Q, the total amount of heterogeneity in our data
    Q.fe <- sum(W * (dataFE$es^2)) - (sum(W * dataFE$es)^2 / sum(W))
    Q.fe
## [1] 2.234034

Cool. So this value also matches up nicely. Now let’s move on to a more complicated situation: a random effects meta-analysis using the RE dataset. Remember, we know that in this dataset each study’s true effect is drawn from an overall distribution of effects. How might Q change? (Hint: we expect it to get larger!)

    metafor::rma(yi = esRE, vi = Ves, method = "REML", data = dataRE)
## 
## Random-Effects Model (k = 5; tau^2 estimator: REML)
## 
## tau^2 (estimated amount of total heterogeneity): 0.3331 (SE = 0.2843)
## tau (square root of estimated tau^2 value):      0.5771
## I^2 (total heterogeneity / total variability):   85.22%
## H^2 (total variability / sampling variability):  6.76
## 
## Test for Heterogeneity: 
## Q(df = 4) = 24.4015, p-val < .0001
## 
## Model Results:
## 
## estimate       se     zval     pval    ci.lb    ci.ub          
##   2.0199   0.2837   7.1199   <.0001   1.4639   2.5759      *** 
## 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

What happened here? What does all this mumbo jumbo actually mean?! As we predicted, our Q has jumped up a lot! Why? Because we added extra variability to the mix: each effect size now comes from a distribution of effect sizes. Because each study has its own true mean, the effect sizes can differ substantially from one another, and this adds more variance (i.e. heterogeneity) to our data overall.

Now let’s see if we can reproduce these results. Remember that for this model we need to add the between-study variance into our weighting of the effect sizes. To do this, we first need to estimate how much heterogeneity there is between studies above what we would expect from sampling variability alone. This is the tau^2 statistic, the variance of the ‘true’ effect sizes, and we can estimate it from Q using the same inverse sampling-variance weights as before.
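Written out, this is the DerSimonian and Laird moment estimator, where $k$ is the number of studies (so $k - 1$ is our degrees of freedom):

$$\hat{\tau}^2 = \frac{Q - (k - 1)}{C}, \qquad C = \sum_i w_i - \frac{\sum_i w_i^2}{\sum_i w_i}$$

(When $Q < k - 1$, the estimate is truncated at zero.)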

# Calculate our Q statistic again, this time with the RE effect sizes
    Q <- sum(W * (dataRE$esRE^2)) - (sum(W * dataRE$esRE)^2 / sum(W))
    Q
## [1] 24.40149
# Calculate tau2
    C <- sum(W) - ((sum(W^2))/sum(W))
    C
## [1] 69.23077
    df <- nrow(dataRE) - 1 # degrees of freedom (k - 1)
    T2 <- (Q - df) / C
    T2
## [1] 0.2946883

Now we can re-calculate our weights by adding in the between-study heterogeneity.

    # RE weights now include tau^2
    W.re <- 1 / (T2 + dataRE$Ves)
    # Pooled effect size for the random effects meta-analysis
    esPoolRE <- sum(W.re * dataRE$esRE) / sum(W.re)
    esPoolRE
## [1] 2.016261
    # Calculate the pooled variance around the estimate
    VarES <- 1 / sum(W.re)
    # Calculate the standard error around the estimate
    SE.ES.RE <- sqrt(VarES)
    SE.ES.RE
## [1] 0.2697219

OK. What happened here? Our Q statistic is correct, but tau^2, our mean estimate and its standard error are all slightly different. Why? Because the metafor model we ran uses a different estimation method (REML) than our hand calculations. Still, we are awfully close on all our estimates! If we re-run the model using the same estimator as our hand calculations, the DerSimonian and Laird (DL) method, we get a nice match between our calculations and the model:

    metafor::rma(yi = esRE, vi = Ves, method="DL", data = dataRE)
## 
## Random-Effects Model (k = 5; tau^2 estimator: DL)
## 
## tau^2 (estimated amount of total heterogeneity): 0.2947 (SE = 0.2731)
## tau (square root of estimated tau^2 value):      0.5429
## I^2 (total heterogeneity / total variability):   83.61%
## H^2 (total variability / sampling variability):  6.10
## 
## Test for Heterogeneity: 
## Q(df = 4) = 24.4015, p-val < .0001
## 
## Model Results:
## 
## estimate       se     zval     pval    ci.lb    ci.ub          
##   2.0163   0.2697   7.4753   <.0001   1.4876   2.5449      *** 
## 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
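As a final check, we can also reproduce the I^2 and H^2 statistics reported above. I’m assuming here that metafor follows Higgins and Thompson (2002) and plugs a ‘typical’ within-study sampling variance (s^2) into these calculations; under that assumption, our hand calculations line up with the DL output:

# The 'typical' within-study sampling variance (Higgins & Thompson 2002)
    s2 <- (nrow(dataRE) - 1) * sum(W) / (sum(W)^2 - sum(W^2))
# I^2: percentage of total variability due to between-study heterogeneity
    I2 <- 100 * T2 / (T2 + s2)
    I2 # ~83.61, matching the I^2 in the DL output
# H^2: total variability relative to sampling variability
    H2 <- (T2 + s2) / s2
    H2 # ~6.10, matching the H^2 in the DL output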

Now we have a better understanding of the difference between fixed and random-effects meta-analyses, and we have even been able to do our own analysis by hand! How cool is that?!


 

Visit to the Uller lab at Lund University

It has been ages since my last blog post. I promise to correct this lull with more frequent blog posts from now on!

I’ve been very busy with the i-deel team at UNSW and also just got back from a wonderful trip to visit Tobias Uller and the Uller lab at Lund University. It was an absolutely fantastic trip, filled with loads of science discussion and productive work towards a paper Tobias, Shinichi and I have been planning.

The Uller lab is growing fast and there are all kinds of cool and exciting projects happening from Anole (Anolis) lizards to Daphnia to good old Wall Lizards (Podarcis). Of course, I got a chance to enjoy the sights on my way to work every day and even visited Southern France for a few days to help with some field work and give a talk at the CNRS in Moulis. It was great to catch up with Fabien Aubret again too!

Some pictures from my adventures below!

My favourite…French wine and dessert!

A crazy old church in France!

Members of the Uller lab doing some Tim Tam Slams!

Beautiful buildings and flowers in Lund

Fonti Kar's Thesis Completion

It’s been a massive couple of months for Fonti Kar, who has pulled off an amazing thesis. Congratulations! She remained cool, calm and collected right to the end and wrapped up her thesis with plenty of time to spare!


Fonti did two huge experiments that complemented each other nicely. Her first chapter explored the predictors of contest outcome in our favourite lizard, Eulamprus quoyii. This is an area where we lack a lot of information, and Fonti has shown that there are some very interesting assessment stages that Eulamprus proceed through before escalating to fights. In fact, escalated contests that resulted in physical fighting and biting between two males occurred in only 38% of contest pairings. Given the differences between escalated and non-escalated contests (i.e. contests resolved through signalling), she predicted that the predictors of contest outcome should vary depending on these assessment stages. While most of her predicted traits did not affect contest outcome (which is interesting given she ran 116 contests!), she did find a prior winning effect that influenced the probability of initiating a contest, which in turn was a strong predictor of contest outcome (if you initiated, you won!). In support of her prediction, she also showed that the importance of initiating a contest depended on whether the contest escalated or not. Males that initiated in non-escalated contests had a much higher probability of winning, whereas initiating didn’t seem to make a difference in contests that escalated to physical fighting.

Fonti’s second chapter made use of this huge contest experiment to look at the role of social dominance (whether you win against other males during a contest) in social information use. Using the contest data, she paired males of known contest history together and randomly assigned one male of each pair (dominant or subordinate) to be the experimental lizard; the other was designated the demonstrator lizard. Dominant males were the winners and subordinate males the losers of contests. The demonstrator was trained to flip lids to access a mealworm under a rewarded lid and was then allowed to show the experimental lizard this task. Importantly, we have shown that E. quoyii do use social information to learn such a task (click here to learn more about that work), but only young lizards did so. However, age and dominance often go hand in hand in lizards, so Fonti attempted to control for age by assigning equal numbers of large and small males across three treatment groups: 1) the dominant male was the demonstrator; 2) the subordinate male was the demonstrator; and 3) a control group, a mix of dominant and subordinate experimental lizards that were shown the other lizard but not the task. She specifically predicted that watching a male dominant or subordinate to yourself would affect how you use social information. In other words, subordinate males would pay closer attention to big macho males, while macho males wouldn’t care what some little male was up to. Her results were interesting, but to make a long story short, social dominance didn’t seem to affect social information use in any way. In fact, her results were similar to those found in previous work on old males – they didn’t seem to use social information at all! This is pretty cool because it suggests that age may really be the big driver of social information use for these particular tasks.

All this hard work called for a massive celebration (a night out on the town) and a special trophy, courtesy of her fellow MSc friends, to remind Fonti of her lizard contests (see below).
