
“Dogs and mice appear to have gotten much smarter as investigators became more willing to put in the necessary observation time and work required to elicit their abilities.”—Thomas Bouchard, Jr. (2014). Genes, Evolution and Intelligence. Behavior Genetics, 1–29. doi:10.1007/s10519-014-9646-x

“so long as we allow private entities freely to engage in these practices [of Facebook-like manipulation], we ought not unduly restrain academics trying to determine their effects.”—Michelle N. Meyer,
How an IRB Could Have Legitimately Approved the Facebook Experiment—and Why that May Be a Good Thing

“At least once almost every 24 hours we undergo a remarkable transformation in which we cease common activities, become relatively unresponsive to external stimuli, and recall little other than fragments of dreams.”—Bareham et al., Losing the left side of the world: rightward shift in human spatial attention with sleep onset

“And that which is more to be wondered at, [melancholy] skips in some families the father, and goes to the son, “or takes every other, and sometimes every third in a lineal descent, and doth not always produce the same, but some like, and a symbolizing disease.””—Robert Burton, The Anatomy of Melancholy (1621) via Scott Stossel, My Age of Anxiety

Block diagonal R-structures in MCMCglmm

A new feature in MCMCglmm version 2.18 is block diagonal residual structures.

Jarrod Hadfield:

Block Diagonal R-structures are now allowed. For example imagine a bivariate model where pairs of observations are made on individuals of different sex. There may be a need to fit different 2x2 residual covariance matrices for the two sexes.

`rcov=~us(trait:sex):units`

would fit a 4x4 covariance matrix, but the between-sex residual covariances would be estimated despite not being identifiable (no individual can be both sexes). Now, models of the form `rcov=~us(trait:at(sex, "M")):units+us(trait:at(sex, "F")):units`

can be fitted that allow the non-identified covariances to be effectively set to zero.

In other words, previously if you had 2 traits measured for females and males and you wanted to fit a sex-specific residual with `rcov=~us(trait:sex):units`

you would end up with a residual matrix like

$$\begin{pmatrix}
\mathbf{V_{f_1}} & \mathbf{cov_{f_1,f_2}} & {\color{red} cov_{f_1,m_1}} & {\color{red} cov_{f_1,m_2}} \\
\mathbf{cov_{f_1,f_2}} & \mathbf{V_{f_2}} & {\color{red} cov_{f_2,m_1}} & {\color{red} cov_{f_2,m_2}} \\
{\color{red} cov_{f_1,m_1}} & {\color{red} cov_{f_2,m_1}} & \mathbf{V_{m_1}} & \mathbf{cov_{m_1,m_2}} \\
{\color{red} cov_{f_1,m_2}} & {\color{red} cov_{f_2,m_2}} & \mathbf{cov_{m_1,m_2}} & \mathbf{V_{m_2}}
\end{pmatrix}$$

for the variances (V) and covariances (cov) of female traits f_{1} and f_{2} and male traits m_{1} and m_{2}. The bold components are estimable from the data while the red ones are not. However, because of the way the residual is set up, MCMCglmm will try to estimate the red parameters as well.

With block diagonals, you are specifying a matrix like

$$\begin{pmatrix}
V_{f_1} & cov_{f_1,f_2} & 0 & 0 \\
cov_{f_1,f_2} & V_{f_2} & 0 & 0 \\
0 & 0 & V_{m_1} & cov_{m_1,m_2} \\
0 & 0 & cov_{m_1,m_2} & V_{m_2}
\end{pmatrix}$$

where the between-sex cells are now fixed to zero and ignored. This is accomplished by fitting two 2 × 2 matrices for the R structure, one per sex, with separate terms in the `rcov` formula.
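To make the structure concrete, here is a minimal base-R sketch of the block-diagonal residual matrix that the two-term `rcov` formula implies (the covariance values below are invented for illustration):

```
# Hypothetical 2x2 residual covariance matrices, one per sex
Rf <- matrix(c(1.0, 0.3,
               0.3, 0.8), 2, 2)  # females: traits f1, f2
Rm <- matrix(c(1.2, 0.5,
               0.5, 0.9), 2, 2)  # males: traits m1, m2

# Combine into the 4x4 block-diagonal R-structure: the between-sex
# covariances stay at zero rather than being estimated
traits <- c("f1", "f2", "m1", "m2")
R <- matrix(0, 4, 4, dimnames = list(traits, traits))
R[1:2, 1:2] <- Rf
R[3:4, 3:4] <- Rm
R
```

The zero off-diagonal blocks are exactly the non-identified between-sex covariances that the old single-`us` specification would have tried to estimate anyway.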

Helping in cooperatively breeding long-tailed tits: a test of Hamilton's rule

Ben J. Hatchwell, Philippa R. Gullett and Mark J. Adams

*Phil. Trans. R. Soc. B* 19 May 2014 vol. 369 no. 1642 20130565 doi:10.1098/rstb.2013.0565

When asked if he would lay down his life for his brother, the population geneticist J. B. S. Haldane is said to have quipped “No, but I would to save two brothers or eight cousins.” Haldane’s response was an early invocation of what has been mathematically codified as “Hamilton’s Rule”:

$rB > C$

which says that an altruistic behavior will evolve if the benefit to the recipient of the help ($B$) times the relatedness between the helper and the recipient ($r$) is greater than the cost to the helper ($C$).

How well Hamilton’s Rule describes social evolution has been the subject of some controversy among evolutionary biologists but there have been very few empirical tests of it.

We set out to do just that with data from our long-term study of long-tailed tits.

Long-tailed tits are cooperative breeders, meaning that related individuals help each other to raise their offspring. Birds that fail to breed in a particular year join the nest of one of their relatives as a helper. This behavior is altruistic because helpers pay a cost: they have lower survival to the next year when they help, thus imperiling their future chances of having their own offspring. However, they also get a fitness benefit: the chicks that they help are more likely to survive. Because the helper is on average related to these chicks, the benefit to the helper’s ultimate fitness (in terms of the number of copies of their genes that make it into the next generation) outweighs the cost.
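As a toy illustration (the numbers below are invented, not estimates from the paper), Hamilton’s rule reduces to a one-line comparison:

```
# Hamilton's rule: altruism is favored when r * B > C
hamilton_satisfied <- function(r, B, C) r * B > C

# Hypothetical values for a helper at a relative's nest (r = 0.25)
hamilton_satisfied(r = 0.25, B = 0.8, C = 0.1)  # TRUE: helping can evolve
hamilton_satisfied(r = 0.25, B = 0.2, C = 0.1)  # FALSE: the cost is too high
```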

**Abstract**

Inclusive fitness theory provides the conceptual framework for our current understanding of social evolution, and empirical studies suggest that kin selection is a critical process in the evolution of animal sociality. A key prediction of inclusive fitness theory is that altruistic behaviour evolves when the costs incurred by an altruist (c) are outweighed by the benefit to the recipient (b), weighted by the relatedness of altruist to recipient (r), i.e. Hamilton’s rule rb > c. Despite its central importance in social evolution theory, there have been relatively few empirical tests of Hamilton’s rule, and hardly any among cooperatively breeding vertebrates, leading some authors to question its utility. Here, we use data from a long-term study of cooperatively breeding long-tailed tits Aegithalos caudatus to examine whether helping behaviour satisfies Hamilton’s condition for the evolution of altruism. We show that helpers are altruistic because they incur survival costs through the provision of alloparental care for offspring. However, they also accrue substantial benefits through increased survival of related breeders and offspring, and despite the low average relatedness of helpers to recipients, these benefits of helping outweigh the costs incurred. We conclude that Hamilton’s rule for the evolution of altruistic helping behaviour is satisfied in this species.

Shrout & Fleiss's ICC in mixed model form

Shrout & Fleiss give guidelines for calculating intraclass correlation coefficients for estimating interrater reliability and agreement. The original paper shows how to calculate them using ANOVA, but they can also be estimated in a mixed model framework. Noting these down so I remember them.

- "Repeatability"

  $y = a + e$

  $ICC(1, k) = \frac{\sigma_a^2}{\sigma_a^2 + \frac{\sigma_e^2}{k}}$

- "Agreement"

  $y = a + r + \epsilon$

  $ICC(2, k) = \frac{\sigma_a^2}{\sigma_a^2 + \frac{\sigma_r^2 + \sigma_\epsilon^2}{k}}$

- "Consistency"

  $y = a + r + \epsilon$

  $ICC(3, k) = \frac{\sigma_a^2}{\sigma_a^2 + \frac{\sigma_\epsilon^2}{k}}$
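The three formulas translate directly into functions of the variance components (the argument names here are mine: `va` is the target variance, `vr` the rater variance, `ve` the residual variance, and `k` the number of raters averaged over):

```
# Shrout & Fleiss ICCs from mixed-model variance components
icc1k <- function(va, ve, k) va / (va + ve / k)            # repeatability
icc2k <- function(va, vr, ve, k) va / (va + (vr + ve) / k) # agreement
icc3k <- function(va, vr, ve, k) va / (va + ve / k)        # consistency

icc2k(va = 2, vr = 0.5, ve = 1, k = 3)  # agreement over 3 raters: 0.8
```

Note that agreement penalizes systematic rater differences (`vr` stays in the denominator) while consistency ignores them, so $ICC(3, k) \ge ICC(2, k)$.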

“Every speciality changes its classification of illnesses every few years, as we learn more about illnesses, but only psychiatry gets abuse for doing so.”—Alex Langford, Categorically Ill: My argument in favour of the diagnosis of mental illnesses

“Think of overfitting as memorizing as opposed to learning.”—James Faghmous, New to Machine Learning? Avoid these three mistakes

Feasibility and Uncertainty in Behavior Genetics for the Nonhuman Primate

M. J. Adams. *International Journal of Primatology*, February 2014, Volume 35, Issue 1, pp. 156–168. doi:10.1007/s10764-013-9722-8

Nonhuman primates are good species for studying the genetic underpinnings of behavior, especially because their behaviors are so similar to ours. One reason primates are good to study is that individuals are identifiable and we tend to know which individuals are related (or we can discover this using genotyping).

However, studying primates offers several challenges. The main one is that when studying nonhuman primates the sample sizes (dozens or at most hundreds) are much smaller than what is typical in studies of humans (hundreds to tens of thousands). This small sample size diminishes statistical power and limits the types of questions that can be investigated.

I conducted some simulations to see just how gloomy the prospects are. I found that statistical power can be increased slightly by multiple measurements of each individual, something that is feasible since primates are long-lived.
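The intuition behind repeated measurement can be sketched with the Spearman–Brown formula (this is a back-of-envelope illustration, not the simulation from the paper): averaging k measurements of a trait with repeatability t yields a more reliable phenotype, which is where the gain in precision comes from.

```
# Repeatability of the mean of k measurements (Spearman-Brown)
mean_repeatability <- function(t, k) k * t / (1 + (k - 1) * t)

mean_repeatability(t = 0.4, k = 1)  # 0.4: a single observation per animal
mean_repeatability(t = 0.4, k = 4)  # ~0.73: four observations per animal
```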

**Abstract**

The analysis of phenotypic covariances among genetically related individuals is the basis for estimations of genetic and phenotypic effects on phenotypes. Beyond heritability, there are several other estimates that can be made with behavior genetic models of interest to primatologists. Some of these estimates are feasible with primate samples because they take advantage of the types of relatives available to compare in primate species and because most behaviors are expressed orders of magnitude more often and in a greater variety of contexts than morphological or life-history traits. The hypotheses that can be tested with these estimates are contrasted with hypotheses that will be difficult to achieve in primates because of sample size limitations. Feasible comparisons include the proportion of variance from interaction effects, the variation of genetic effects across environments, and the genetics of growth and development. Simulation shows that uncertainty of genetic parameters can be reduced by sampling each individual more than once. Because sample sizes are likely to remain relatively small in most primate behavior genetics, expressing uncertainty in parameter estimates is needed to move our inferences forward.

“Practically no experimental work has been done upon individual differences and family resemblances in animal behavior. In most cases the behaviorist has been content to study the mass reaction of a group of animals to external stimuli, and in the main, has not attempted to treat the variability of his group because of the relatively small number of animals tested.”—Halsey Bagg, 1920, Archives of Genetics Mono. Vol 43, p.1 (via Rosalind Arden)

“…the shy-animal mental model of experimentation. The effect is there; you just need to create the right circumstances to coax it out of its hiding place.”—Rolf Zwaan

“

Natural selection of phenotypes cannot in itself produce cumulative change, because phenotypes are extremely temporary manifestations. They are the result of interactions between genotype and environment that produces what we recognize as an individual. Such an individual consists of genotypic information and information recorded since conception. Socrates consisted of the genes his parents gave him, the experiences they and his environment later provided, and the growth and development mediated by numerous meals. For all I know, he may have been very successful in the evolutionary sense of leaving numerous offspring. His phenotype, nevertheless, was utterly destroyed by the hemlock and has never since been duplicated. If the hemlock had not killed him, something else soon would have. So however natural selection may have been acting on Greek phenotypes in the fourth century B.C., it did not of itself produce any cumulative effect.

The same argument holds also for genotypes. With Socrates’ death, not only did his phenotype disappear, but also his genotype. […] The loss of Socrates’ genotype is not assuaged by any consideration of how prolifically he may have reproduced. Socrates’ genes may be with us yet, but not his genotype, because meiosis and recombination destroy genotypes as surely as death.

”—G. C. Williams, Adaptation and Natural Selection, pp. 23–24, via Graham Coop

Genetic and environmental deviations in GCSE

A twin study of GCSE results found that over half of the variance in grades could be attributed to genetic factors:

heritability was substantial for overall GCSE performance for compulsory core subjects (58%) as well as for each of them individually: English (52%), mathematics (55%) and science (58%). In contrast, the overall effects of shared environment, which includes all family and school influences shared by members of twin pairs growing up in the same family and attending the same school, accounts for about 36% of the variance of mean GCSE scores. The significance of these findings is that individual differences in educational achievement at the end of compulsory education are not primarily an index of the quality of teachers or schools: much more of the variance of GCSE scores can be attributed to genetics than to school or family environment.

While the shared environment is rather small relative to the genetic variance, we can consider how big a difference in scores you could make if you could magically change someone’s genes or environment but leave everything else about them the same.

First let’s look at the distribution of scores. I took numbers from main tables (XLS) in the 2013 results

```
EnglishGradeCount <- list(Astar = 20304, A = 68114, B = 126027, C = 174106,
D = 94935, E = 42310, F = 16836, G = 5138)
MathsGradeCount <- list(Astar = 37075, A = 69429, B = 110726, C = 188947, D = 58911,
E = 35678, F = 28273, G = 18912)
```

In the Shakeshaft et al study they coded the highest grade (A*) as 11 and the lowest (G) as 4.

```
codings <- 11:4
plot(codings, unlist(EnglishGradeCount)/sum(unlist(EnglishGradeCount)), type = "h",
xlab = "GCSE English score", ylab = "Frequency")
```

```
plot(codings, unlist(MathsGradeCount)/sum(unlist(MathsGradeCount)), type = "h",
xlab = "GCSE Maths score", ylab = "Frequency")
```

The results from the paper give the mean and SD of scores in their sample

```
englishMean = 8.93
englishSD = 1.17
mathsMean = 8.96
mathsSD = 1.4
```

and the heritabilities (called $h^2$ or $a^2$) and proportions of shared ($c^2$) and unique ($e^2$) environmental variance

```
englishA2 = 0.52
englishC2 = 0.31
englishE2 = 0.18
mathsA2 = 0.55
mathsC2 = 0.26
mathsE2 = 0.18
```

We can also calculate the weighted mean and SD from the 2013 data

```
EnglishScores <- cov.wt(matrix(codings), unlist(EnglishGradeCount))
MathsScores <- cov.wt(matrix(codings), unlist(MathsGradeCount))
sqrt(EnglishScores$cov)
## [,1]
## [1,] 1.57
```

The combined genetic or shared environment effects are deviations from the mean with a standard deviation of $\sqrt{(v^2\sigma^2)}$ where $v^2$ is the proportion of variance and $\sigma^2$ is the variance (= the standard deviation squared). *NB:* I’m using the heritabilities from the study but the standard deviations and means from the 2013 results.

```
component_sd <- function(v2, sd) sqrt(v2 * sd^2)
# calculate genetic, shared environment, and unique environment standard
# deviations
englishASD <- component_sd(englishA2, sqrt(EnglishScores$cov))
englishCSD <- component_sd(englishC2, sqrt(EnglishScores$cov))
englishESD <- component_sd(englishE2, sqrt(EnglishScores$cov))
```

To make this concrete, think about drawing your GCSE scores from a genetic and environmental lottery. You start with the mean score, then pick a number from hats *A*, *C*, and *E*. The hats vary in size based on how much of the differences in scores each contributes. The bigger the hat (like the genetic hat, *A*), the farther from zero the numbers it yields will potentially be. With smaller hats, like the unique environment hat *E*, the numbers will be more clustered around 0. The numbers you get from the three hats are added to the mean to produce your score.

```
set.seed(34450)
draw1 <- rnorm(3, mean = 0, sd = c(englishASD, englishCSD, englishESD))
draw1
## [1] -0.6979 -0.4289 -0.4988
EnglishScores$center + sum(draw1)
## [1] 6.507
```

So our first hypothetical person draws from the hats and gets genetic, shared environment, and unique environment deviations that are all below the mean (e.g., ‘bad genes’, ‘poor schooling’, ‘bad experiences’). These add up to a GCSE score of 6.5 (between an E and a D).

```
draw2 <- rnorm(3, mean = 0, sd = c(englishASD, englishCSD, englishESD))
draw2
## [1] 1.1156 -0.5552 0.7212
EnglishScores$center + sum(draw2)
## [1] 9.414
```

The next person gets positive genetic and unique environment deviations but a negative shared environment deviation (= ‘good genes’, ‘poor school’, ‘good experiences’). These factors combine to give a score of 9.4 (a B).

We can look at the distribution of deviations on each of these factors in comparison to the phenotypic variance (P)

```
scores <- seq(4, 11, by = 0.01)
englishADensity <- dnorm(scores, mean = EnglishScores$center, sd = englishASD)
englishCDensity <- dnorm(scores, mean = EnglishScores$center, sd = englishCSD)
englishEDensity <- dnorm(scores, mean = EnglishScores$center, sd = englishESD)
englishPDensity <- dnorm(scores, mean = EnglishScores$center, sd = component_sd(1,
sqrt(EnglishScores$cov)))
plot(codings, unlist(EnglishGradeCount)/sum(unlist(EnglishGradeCount)), type = "h",
xlab = "GCSE English score", ylab = "Frequency", ylim = c(0, 0.6))
lines(scores, englishADensity, col = "blue")
lines(scores, englishCDensity, col = "orange")
lines(scores, englishEDensity, col = "red")
lines(scores, englishPDensity, col = "purple")
legend("topright", c("A", "C", "E", "P"), col = c("blue", "orange", "red", "purple"),
lwd = 3)
```

The plot shows the distribution of deviations around the mean for each of the factors compared to the phenotypic variance (purple). If your genetic (blue) and shared environment (orange) factors are average (with values of 0), then your unique environment deviation could not move you very far from the mean. This is what the paper means when it says that the unique environment contributes very little to the variance. The genetic factor, in contrast, is more spread out, so having genes that contribute very positively or very negatively to your score can move you far away from the mean.

Yet the environment can still make a big difference to test results if we were able to manipulate it. For example, if we took someone whose genetics and individual experience were average and moved them into a great family/school environment (2 SD above the mean), we could potentially raise their English GCSE score from an 8 (a C) to $8 + 2 \times \sqrt{.31 \times 1.6^2} = 9.8$ (which is almost an A). A student whose genes predispose them to the high intelligence and motivation that would result in an A* could be pulled down to a B by an unsupportive family/school environment. And someone whose genetic endowment destines them for a G could potentially be pulled up to $4 + 2 \times \sqrt{.31 \times 1.6^2} + 2 \times \sqrt{.18 \times 1.6^2} = 7.1$ (a D) with the right mix of interventions.
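Those back-of-envelope numbers can be checked in R (using the rounded 2013 SD of 1.6 from above):

```
# Shift from moving n_sd standard deviations on a component that
# explains proportion v2 of the phenotypic variance
component_shift <- function(v2, sd, n_sd = 2) n_sd * sqrt(v2 * sd^2)

8 + component_shift(0.31, 1.6)  # average student, 2-SD better shared environment
4 + component_shift(0.31, 1.6) + component_shift(0.18, 1.6)  # boosting individual experience too
```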

“The real issue, as I see it, is that we’re getting Bayes factors and posterior probabilities we don’t believe, because we’re assuming flat priors that don’t really make sense.”—Andrew Gelman

Typesetting confidence intervals

When summarizing parameters from a Bayesian analysis I usually write, say for a parameter *θ* with a posterior mode of 2 and a coverage interval (CI) from 1 to 3:

θ = 2 (CI = 1, 3)

The reason I use a comma ‘,’ instead of an en-dash ‘–’ is for when the upper end of the range is negative. For example, if the CI was -3 to -1 versus -3 to +1:

θ = -2 (CI = -3–-1)
θ = -2 (CI = -3–1)

is typographically unclear but

θ = -2 (CI = -3, -1)
θ = -2 (CI = -3, 1)

is easier to distinguish. However, using a comma doesn’t visually show that we are talking about a range. My first thought was to give more space around the en-dash:

θ = -2 (CI = -3 – -1)
θ = -2 (CI = -3 – 1)

But that could still be confusing as the en-dash ‘–’ and minus sign ‘-’ look very similar. Could some other symbol, like a left right arrow ‘↔’, work?

θ = -2 (CI = -3 ↔ -1)
θ = -2 (CI = -3 ↔ 1)

**[Update 12 Dec 2013]** Alex Weiss suggests using ‘to’ as the separator:

θ = -2 (CI = -3 to -1)
θ = -2 (CI = -3 to 1)

“Models of data have a deep influence on the kinds of theorising that researchers do. A structural equation model with latent variables named Shifting, Updating, and Inhibition (Miyake et al. 2000) might suggest a view of the mind as inter-connected Gaussian distributed variables. These statistical constructs are driven by correlations between variables, rather than by the underlying cognitive processes – though the latter were used to select the measures used. Davelaar and Cooper (2010) argued, using a more cognitive-process-based mathematical model of the Stop Signal task and the Stroop task, that the inhibition part of the statistical model does not actually model inhibition, but rather models the strength of the pre-potent response channel. Returning to the older example introduced earlier of *g* (Spearman 1904), although the scores from a variety of tasks are positively correlated, this need not imply that the correlations are generated by a single cognitive (or social, or genetic, or whatever) process. The dynamical model proposed by van der Maas et al. (2006) shows that correlations can emerge due to mutually beneficial interactions between quite distinct processes.”—Fugard, A. J. B & Stenning, K. (2013). Statistical models as cognitive models of individual differences in reasoning. Argument & Computation, 4(1), 89–102 wherein @indicutivestep has a “brief whinge about two latent variable models often used in psychology”

“

Psychology is an empiricist discipline; it has no core theory and so it leans heavily on its empirical results to prove that it’s doing something interesting.

The Bem experiments demonstrate how, without a theory, psychology is unable to deal rigorously with anomalous results.

”—from Golonka, S., & Wilson, A. D., Gibson’s ecological approach - a model for the benefits of a theory driven psychology and Replication will not save psychology

“Open source…is not open science. Methods matter.”—Lior Pachter

“Taken together, these nine predictions represent a very stringent test of the theory, because all nine originate from a single theoretical framework. None of these predictions can be varied independently of the other eight in order to fit a particular set of data—the data must confirm them all, or the theory is ruled out.”—

Laura Mersini-Houghton, Beyond the Horizon of the Universe

Amazing how theories in physics fit together. To be able to murder hypotheses so definitively.

“[research] shows how good teachers improve reading standards for all but this means that *the variance that remains is more due to genetic differences*. This leads to a conclusion almost completely at odds with prevailing conventional wisdom in political and academic debates over education.”—Dominic Cummings, *Some thoughts on education and political priorities*