Skinks and Ladders: A family-living lizard’s learning ability is not affected by its home environment

by Julia Riley

Tree Skink mother and juveniles

We have found that the learning ability of the Tree Skink, a lizard that lives with family, is not linked to growing up with others. These lizards were able to learn to navigate a complex spatial maze whether they lived by themselves or with a ‘roommate’.

This result was surprising because previous studies have shown that for social animals, like humans, rats, rhesus macaques, and chickens, being removed from social contact during development negatively affects how they grow up. Individuals raised alone are more fearful and anxious, more sedentary, less social, and have a harder time learning. Yet, no one has studied how being raised away from the social group affects development in other, less obviously social animals – like reptiles.

Australian Tree Skinks (Egernia striolata) are a common lizard found throughout southeastern Australia. In the wild, these lizards are often found in family groups – most often parent(s) and offspring are found together. However, each lizard varies in how social it likes to be – there are loners, lizards that tend to spend most of their time alone, and social butterflies, lizards that are always found with other lizards. This variable social nature makes the Australian Tree Skink a desirable species for studying how the social environment can affect behaviour during development.

My PhD supervisor, Martin Whiting, from Macquarie University’s Department of Biological Sciences, and I teamed up with Dan Noble from the University of New South Wales in Sydney and Richard Byrne from the University of St. Andrews in Scotland to conduct this first examination of how the social environment can affect lizard learning ability.

We raised Tree Skinks either alone or paired with another juvenile, from just after birth until they were about a year old. Then we presented each lizard with a complex spatial maze – each lizard had to navigate a set of five ladders and three ledges to access a food reward. Only two of the five ladders were climbable, and we measured how long it took each lizard to learn the correct path.

Video: www.youtube.com/watch?v=w0R17R_OBl0

Our study’s findings, published in Animal Cognition, were unexpected! Unlike in social mammals and birds, we found no evidence that the social environment skinks were raised in affected their learning ability – almost exactly the same number of skinks from each treatment learnt this complex task!

We think there are a couple of likely explanations for this unexpected result. Tree Skinks naturally vary in their individual sociability, so growing up alone may be a normal option in the wild and less stressful for them. Alternatively, the presence of a parent while growing up (which we did not measure) may be what affects the development of Tree Skink learning ability.

Article reference: Riley JL, Noble DWA, Byrne RW, Whiting MJ (2016) Does social environment influence learning ability in a family-living lizard? Animal Cognition. doi:10.1007/s10071-016-1068-0

Why do winners keep winning?

by Fonti Kar

Animals often find themselves in direct competition with other individuals for resources and mates. Because fighting is costly, many species honestly signal their fighting ability to avoid injury (non-escalated fights). For example, in flat lizards (Platysaurus broadleyi), males can resolve dominance status by displaying their UV-reflective throats to their opponents. However, when opponents can’t resolve conflict through these displays, interactions often escalate to physical fighting, which can be quite costly to both parties (escalated fights).

Male flat lizard showing off his colourful belly to ward off rivals. Photo credit – Martin Whiting

Many attributes of an individual can influence its chance of winning a fight. Studies have shown that body dimensions, athletic ability, and weapon size are all key predictors of contest outcome. What is not clear, though, is the role of prior contest experience. Prior contest experience can give rise to “winner–loser effects”, where winners tend to keep winning subsequent bouts and, similarly, losers tend to keep losing. For a long time, prior contest experience was assumed to be the direct cause of winner–loser effects, and the underlying behavioural mechanisms have often been neglected. So we wanted to know: how exactly does contest experience influence an animal’s behaviour?

Two Eastern Water Skinks giving no quarter in an escalated contest

Our recent paper published in Behavioral Ecology and Sociobiology explored the effects of prior contest experience (whether you won or lost your previous fight) on fighting behaviour and how this may influence future contest success in Eastern Water Skinks.

We hypothesised that the effects of contest experience should act directly on an individual’s behaviour in order to influence contest outcome. We specifically predicted that the effects of experience on behaviour should depend on how intense the fight is, because different fighting behaviours are used at different stages of a contest.

We staged contests between pairs of size-matched males and recorded 1) which individual initiated the interaction; 2) whether the fight escalated to physical biting; 3) if so, how many times each contestant bit his rival; and 4) the winner.

Interestingly, we found that contest initiation was the best predictor of contest outcome in non-escalated contests. Lizards that initiated contests also tended to win that fight. And prior winners also tended to initiate the subsequent fight!

Contrary to what we predicted, the total number of times a lizard bit his rival was not an important predictor of contest outcome in escalated contests. Biting behaviour was not influenced by prior contest experience either! Instead, body mass differences between the contestants seemed to be more informative: heavier males had the upper hand in escalated contests.

What these results suggest is that the effects of contest experience on fighting behaviour depend on whether a contest escalates. The effects of experience seem to act most strongly when the contest is non-escalated, which implies that prior contest experience may alter an individual’s perception of its own fighting ability. Prior winners may feel better about their ability and thus be more motivated to engage first in the next aggressive interaction – and this is likely very intimidating to a rival!

Check out the published study here:

Kar F, Whiting MJ, Noble DWA (2016) Influence of prior contest experience and level of escalation on contest outcome. Behavioral Ecology and Sociobiology. doi:10.1007/s00265-016-2173-4


Fixed versus Random-effects Meta-analyses

It has been a while since my last blog post and lots has been happening in and around the lab to keep everyone busy! Indeed, I just got back from a productive trip to Lund University to visit Tobias Uller to progress some work Shinichi, Tobias and I have been planning.

I’ve also been busy with Losia, Shinichi and Rose writing about meta-analyses and delving into the details of many things. As such, I thought I would devote this blog post to demonstrating what random effect meta-analytic models are and how we can make sense of what is spit out in model outputs. We often gather data and then build models using these data to get answers. However, it’s fun – and important – to take a few steps back and discover what is really going on under the hood!

Before we start I’ll need to generate some simulated data that we can use to help us understand things along the way.

# Generate some simulated effect size data with known sampling variance
# assumed to come from a common underlying distribution
set.seed(86) # Set seed so that we all get the same simulated results
 # We will have 5 studies
    stdy  <- 1:5                                      
# We know the variance for each effect
    Ves     <- c(0.05, 0.10, 0.02, 0.10, 0.09)    
# We'll need this later but these are weights
    W     <- 1 / Ves                                          
# We assume effects are sampled from a normal distribution with a mean effect size of 2
    es     <- rnorm(length(Ves), 2, sqrt(Ves))          
# Data for our fixed effect meta-analysis
    dataFE <- data.frame(stdy = stdy, es, Ves)

# Generate a second set of effect sizes, but now assume that each study effect does not come from the same distribution, but from a population of effect sizes. 

# Adding 0.8 to the sampling variances introduces between-study variability: each study's true effect is drawn from a distribution of effect sizes with variance 0.8 around the overall mean of 2.
    esRE        <- rnorm(length(Ves), 2, sqrt(Ves + 0.8)) 
# Data for our random effect meta-analysis 
    dataRE <-  data.frame(stdy = stdy, esRE, Ves)

We can get a look at what these two datasets look like in the figure below. The red circles are the effect sizes and their standard errors (square root of the sampling error variance) from our fixed effect meta-analysis (FE) dataset. In contrast, the black circles are the effect sizes and standard errors from the data generated in the random effect meta-analysis (RE) dataset. The red line is the true average effect size (2).
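
If you want to draw a plot like this yourself, here is a minimal base-R sketch (not the original figure code) using the dataFE and dataRE objects created above; the helper objects se.FE, se.RE and ylims, and the small horizontal offset for the red points, are just for this sketch.

# Standard errors are the square roots of the sampling variances
    se.FE <- sqrt(dataFE$Ves)
    se.RE <- sqrt(dataRE$Ves)
    ylims <- range(c(dataFE$es - se.FE, dataFE$es + se.FE,
                     dataRE$esRE - se.RE, dataRE$esRE + se.RE, 2))
# RE data in black, with error bars drawn as arrows
    plot(dataRE$stdy, dataRE$esRE, pch = 19, col = "black", ylim = ylims,
         xlab = "Study", ylab = "Effect size")
    arrows(dataRE$stdy, dataRE$esRE - se.RE, dataRE$stdy, dataRE$esRE + se.RE,
           angle = 90, code = 3, length = 0.05)
# FE data in red, offset slightly so the two datasets don't overlap
    points(dataFE$stdy + 0.1, dataFE$es, pch = 19, col = "red")
    arrows(dataFE$stdy + 0.1, dataFE$es - se.FE, dataFE$stdy + 0.1, dataFE$es + se.FE,
           angle = 90, code = 3, length = 0.05, col = "red")
# True average effect size
    abline(h = 2, col = "red")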

We notice a few important differences here. In the RE dataset the variance across the studies is a lot larger compared to the FE dataset. This is what we would expect, because we added in a between-study variance. Now, let’s use a common meta-analysis package, metafor, to analyse these datasets.

# Run a fixed effect meta-analysis using the FE dataset. 
    metafor::rma(yi = es, vi = Ves, method = "FE", data = dataFE)
## 
## Fixed-Effects Model (k = 5)
## 
## Test for Heterogeneity: 
## Q(df = 4) = 2.2340, p-val = 0.6928
## 
## Model Results:
## 
## estimate       se     zval     pval    ci.lb    ci.ub          
##   2.0731   0.0994  20.8459   <.0001   1.8782   2.2680      *** 
## 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

So this suggests that the average estimate is 2.07 and it has a standard error of 0.1. It also provides us with a Q statistic, which measures the amount of heterogeneity among effect sizes. If Q is large and the p-value is small, it suggests substantial heterogeneity among effect sizes beyond what we would expect if these effects were generated from a common underlying distribution. A common underlying distribution is the major assumption of fixed effect meta-analysis, and in this case it is supported (Q is small and the p-value is large). This is a good thing because we specifically generated these data under that assumption. We can see this if we look at how we generated the data: rnorm(length(Ves), 2, sqrt(Ves)). This draws random effect sizes from a normal distribution with a standard deviation (sqrt(Ves)) for each effect size that is defined only by its sampling variability.

What is this model doing, though, and what is the logic behind the calculations? The best way to understand this is to hand calculate these values. Basically, the effect sizes are being weighted by their sampling error variance when deriving the pooled estimate and its variance. Let’s calculate this by hand and see what’s happening:
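
In symbols, this is standard inverse-variance weighting: with effect sizes y_i, sampling variances v_i and weights w_i = 1/v_i, the pooled estimate, its variance and its standard error are

$$\hat{\mu}_{FE} = \frac{\sum_i w_i y_i}{\sum_i w_i}, \qquad \mathrm{Var}(\hat{\mu}_{FE}) = \frac{1}{\sum_i w_i}, \qquad \mathrm{SE}(\hat{\mu}_{FE}) = \sqrt{\mathrm{Var}(\hat{\mu}_{FE})}.$$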

  # Calculate pooled effect size
  EsP.FE      <- sum(W*dataFE$es) / sum(W)
  EsP.FE
## [1] 2.073105
    # Calculate the pooled variance around estimate
    VarEsP.FE          <- 1 / sum(W)
    VarEsP.FE
## [1] 0.00989011
    # Calculate the standard error around estimate
    SE.EsP.FE    <- sqrt(VarEsP.FE)
    SE.EsP.FE
## [1] 0.09944903

Wow! This is so cool. We just did a fixed effect meta-analysis by hand…and, look, it matches the model output perfectly. The math is not so scary after all! But what about this extra statistic, Q? How do we derive this?
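
For reference, Q is just the weighted sum of squared deviations of the effect sizes from the pooled estimate, which can be rearranged into the computational form used in the code below:

$$Q = \sum_i w_i \left(y_i - \hat{\mu}_{FE}\right)^2 = \sum_i w_i y_i^2 - \frac{\left(\sum_i w_i y_i\right)^2}{\sum_i w_i}.$$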

# Now let's calculate our Q. Q is the total amount of heterogeneity in our data.
    Q.fe <- sum(W*(dataFE$es^2) ) - (sum(W*dataFE$es)^2 / sum(W))
    Q.fe
## [1] 2.234034

Cool. So this value also matches up nicely. Now let’s move on to a more complicated situation: a random effect meta-analysis using the RE dataset. Remember, we built this dataset so that each effect size includes between-study variance on top of its sampling variance. How might Q change? (Hint: we expect it to get larger!)

metafor::rma(yi = esRE, vi = Ves, method="REML", data = dataRE)
## 
## Random-Effects Model (k = 5; tau^2 estimator: REML)
## 
## tau^2 (estimated amount of total heterogeneity): 0.3331 (SE = 0.2843)
## tau (square root of estimated tau^2 value):      0.5771
## I^2 (total heterogeneity / total variability):   85.22%
## H^2 (total variability / sampling variability):  6.76
## 
## Test for Heterogeneity: 
## Q(df = 4) = 24.4015, p-val < .0001
## 
## Model Results:
## 
## estimate       se     zval     pval    ci.lb    ci.ub          
##   2.0199   0.2837   7.1199   <.0001   1.4639   2.5759      *** 
## 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

What happened here? What does all this mumbo jumbo actually mean?! As we predicted, our Q has jumped up a lot! Why is this so? That’s because we added extra variability to the mix: each effect size is now drawn from a distribution of true effects. Because each study can have its own true mean, the effect sizes can differ substantially from one another, and this adds more variance (i.e. heterogeneity) to our data overall.

Now let’s see if we can reproduce these results. Remember that for this model we need to add the between-study variance to our weighting of effect sizes. To do this we first estimate how much heterogeneity there is between studies beyond what we would expect from sampling variability alone. This is the tau^2 statistic, the variance of the ‘true’ effect sizes. To estimate it we calculate Q using the fixed-effect weights (the same W as before, because the sampling variances are known).
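
The hand calculation below is the DerSimonian–Laird (method of moments) estimator of tau^2 – the same estimator metafor uses when method = "DL". With k studies and fixed-effect weights w_i, it is

$$\hat{\tau}^2 = \max\!\left(0,\; \frac{Q - (k - 1)}{C}\right), \qquad C = \sum_i w_i - \frac{\sum_i w_i^2}{\sum_i w_i},$$

and the random-effects weights then become 1 / (v_i + tau^2).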

# Calculate our Q statistic again
    Q <- sum(W*(dataRE$esRE^2) ) - (sum(W*dataRE$esRE)^2 / sum(W))
    Q
## [1] 24.40149
# Calculate tau2
    C <- sum(W) - ((sum(W^2))/sum(W))
    C
## [1] 69.23077
    df <- nrow(dataRE) - 1   
    T2 <- (Q - df) / C
    T2
## [1] 0.2946883

Now we can re-calculate our weights by adding in the between-study heterogeneity.

  # RE weights
    W.re <- 1 / (T2 + dataRE$Ves)
    # Pooled effect size for random effect meta-analysis
    esPoolRE      <- sum(W.re*dataRE$esRE) / sum(W.re) 
    esPoolRE
## [1] 2.016261
    # Calculate the pooled variance around estimate
    VarES          <- 1 / sum(W.re)
    # Calculate the standard error around estimate
    SE.ES.RE    <- sqrt(VarES)
    SE.ES.RE
## [1] 0.2697219

OK. What happened here? Our Q statistic is correct, but tau^2, our mean estimate and its standard error are slightly different. Why? This is because the metafor model we ran uses a different estimation method (REML) to estimate these statistics than our hand calculations did. Still, we are awfully close on all our estimates! If we re-run the model using the same estimator as our hand calculations (DerSimonian–Laird, method = "DL"), we get a nice match between our calculations and the model.

    metafor::rma(yi = esRE, vi = Ves, method="DL", data = dataRE)
## 
## Random-Effects Model (k = 5; tau^2 estimator: DL)
## 
## tau^2 (estimated amount of total heterogeneity): 0.2947 (SE = 0.2731)
## tau (square root of estimated tau^2 value):      0.5429
## I^2 (total heterogeneity / total variability):   83.61%
## H^2 (total variability / sampling variability):  6.10
## 
## Test for Heterogeneity: 
## Q(df = 4) = 24.4015, p-val < .0001
## 
## Model Results:
## 
## estimate       se     zval     pval    ci.lb    ci.ub          
##   2.0163   0.2697   7.4753   <.0001   1.4876   2.5449      *** 
## 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Now we have a better understanding of the difference between fixed and random-effects meta-analysis, and we have even been able to do our own analysis by hand! How cool is that?!