Genetics

  • A Note on Jöreskog’s ACDE Twin Model: A Solution, Not the Solution by Dolan et al. This critique was published on the heels of my own recent, critical post on Jöreskog’s twin model. Using Mendelian algebra and a simple one-locus model, Dolan et al. show that Jöreskog’s estimates are biased. They also note that the combination of MZ and DZ covariances that Jöreskog proposes as an estimator of additive genetic variance does not have the correct expected value. While these arguments are true and on point, in their short article Dolan et al. do not go into what I think is the main problem with Jöreskog’s model: the absurdity of the idea that minimizing the Euclidean norm would produce meaningful behavioral genetic estimates (see the first sketch after this list). They note that Jöreskog’s ACDE estimates may sometimes be less biased than ACE and ADE estimates, but that would be pure happenstance because the data-generating mechanism implied by Jöreskog’s model is never realistic. In contrast, the ACE model (or its submodel, AE) is often a realistic approximation of the true data-generating mechanism, and even when it is not, the amount of bias is usually tolerably low, whereas the biases of Jöreskog’s estimates can be severe in typical datasets (e.g., when AE is the true model).
  • Polygenic Health Index, General Health, and Disease Risk by Widen et al. This is a paper from people associated with Steve Hsu’s eugenics biotechnology startup. With UK Biobank data, they build an index from polygenic risk scores for twenty diseases (e.g., diabetes, heart disease, schizophrenia; a toy version of such a composite is sketched after this list), and show that lower values on this index are associated with a lower risk for almost all of the included diseases and a higher risk for none. The index also predicts a longer lifespan, and it works, with lower accuracy, within families (between siblings) as well. Thus the index is a candidate for use in embryo selection. A common anti-eugenic argument is that by artificially selecting for something positive one may inadvertently select for something negative. The paper shows that in fact one can simultaneously decrease the risk of many diseases without increasing that of any of them. Generally, the argument about accidental adverse selection rests on the tacit assumption that the status quo, where eugenic and dysgenic concerns are ignored, is somehow natural, neutral, and harmless. However, every society selects for something, and it seems unlikely that, say, embryo selection based on polygenic index scores would have worse consequences than the status quo. For example, selection against educational attainment and for increased criminal offending happens in some contemporary societies, but that is hardly some inevitable state of affairs that should not be tampered with.
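
To see the identification problem behind the Dolan et al. critique, here is a minimal numpy sketch of my own (not Dolan et al.’s or Jöreskog’s code, and the generating values are made up): the classical twin design supplies only the phenotypic variance and the MZ and DZ covariances, so the four ACDE components are underdetermined, and a minimum-Euclidean-norm (pseudoinverse) solution of the kind Jöreskog’s approach relies on need not recover the generating values even when the true model is a simple AE model.

```python
import numpy as np

# Standardized ACDE expectations in the classical twin design:
#   Var(P)  = a2 + c2 + d2 + e2
#   Cov(MZ) = a2 + c2 + d2
#   Cov(DZ) = 0.5*a2 + c2 + 0.25*d2
# Three observed statistics, four unknowns: the system is underdetermined.
M = np.array([
    [1.0, 1.0, 1.0,  1.0],   # phenotypic variance
    [1.0, 1.0, 1.0,  0.0],   # MZ covariance
    [0.5, 1.0, 0.25, 0.0],   # DZ covariance
])

# Suppose the true model is AE with a2 = 0.6, e2 = 0.4 (c2 = d2 = 0).
true_acde = np.array([0.6, 0.0, 0.0, 0.4])
observed = M @ true_acde                    # [1.0, 0.6, 0.3]

# Minimum-Euclidean-norm solution via the Moore-Penrose pseudoinverse.
min_norm = np.linalg.pinv(M) @ observed

print("true ACDE:    ", true_acde)
print("min-norm ACDE:", np.round(min_norm, 3))
# The minimum-norm solution smears variance across A, C, and D, so a2 is
# badly underestimated and spurious C and D components appear even though
# the data were generated by a pure AE model.
```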
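And a minimal sketch of how a composite health index of the general kind studied by Widen et al. could be formed from individual polygenic scores. The disease list, the weights, and the simple weighted-sum construction are illustrative assumptions for the sketch, not the paper’s actual procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical standardized polygenic risk scores (higher = higher risk) for
# a handful of diseases; Widen et al. use scores for twenty diseases.
diseases = ["type2_diabetes", "coronary_artery_disease", "schizophrenia", "breast_cancer"]
n_people = 1000
prs = rng.standard_normal((n_people, len(diseases)))

# Illustrative burden weights (e.g., roughly proportional to life-years lost
# per unit of risk); these numbers are made up for the sketch.
weights = np.array([1.0, 1.5, 0.8, 0.7])

# Higher index = better expected health, so negate the weighted risk sum and
# standardize it.
raw = -(prs @ weights)
health_index = (raw - raw.mean()) / raw.std()

# Ranking embryos/people on this index lowers the expected standardized risk
# score of every positively weighted disease simultaneously; no disease with
# a positive weight is selected upward.
top = np.argmax(health_index)
print("top-ranked individual:", top)
print("their standardized PRSs:", np.round(prs[top], 2))
```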

Cognitive abilities

  • Brain size and intelligence: 2022 by Emil Kirkegaard. A good review of the state of the brain size and IQ literature. It seems that the true correlation is around 0.30.
  • General or specific abilities? Evidence from 33 countries participating in the PISA assessments by Pokropek et al. Arthur Jensen coined the term specificity doctrine to refer to the notion that cognitive ability tests derive their meaning and validity from the manifest surface content of the tests (e.g., a vocabulary test must solely or primarily measure the size of one’s vocabulary or, perhaps, verbal ability). He contrasted this view with the latent variable perspective, according to which the specific content of tests is not that relevant because all tests are measures of a small number of latent abilities, most importantly the g factor, which can be assessed with any kind of cognitive tests (see Jensen, 1984). While the specificity doctrine has very little to recommend it (see also, e.g., Canivez, 2013), it remains a highly popular approach to interpreting test scores. For example, in research on the PISA student achievement tests, the focus is almost always on specific tests or skills like math and reading rather than on the common variance underlying the different tests. Pokropek et al. analyze the PISA tests and show that in all 33 OECD countries a g factor model fits the data much better than the non-g models that have been proposed in the literature. The PISA items are close to being congeneric (i.e., having a single common factor), with the specific factors correlating with each other at close to 0.90 on average (a toy illustration of what this implies follows this list). The amount of reliable non-g variance is so low that the subtests cannot be treated as measures of specific skill domains like math, reading, or science. The correct way to interpret the PISA tests is at the general factor level, which is where the reliability and predictive validity of the tests are concentrated. The relevance, if any, of specific abilities lies in their possible incremental validity over the g factor.
  • Training working memory for two years—No evidence of transfer to intelligence by Watrin et al. Another study showing that training cognitive skills improves the trained skills but has no effect on other skills or on intelligence in general. This is a further datum supporting the existence of a reflective, causal general factor of intelligence, and it contradicts the idea that general intelligence is an epiphenomenon arising from a sampling of specific abilities.
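
A toy numerical illustration of the Pokropek et al. point, with made-up loadings rather than their model or data: when the domain factors behind math, reading, and science correlate at around 0.90, the implied subtest correlation matrix is nearly the same as one produced by a single common factor.

```python
import numpy as np

# Toy structure: 9 subtests, 3 per domain (math, reading, science), each
# loading 0.8 on its domain factor; the domain factors correlate 0.90.
loadings = np.zeros((9, 3))
loadings[0:3, 0] = 0.8
loadings[3:6, 1] = 0.8
loadings[6:9, 2] = 0.8

phi = np.full((3, 3), 0.90)      # domain-factor correlation matrix
np.fill_diagonal(phi, 1.0)

# Model-implied subtest correlation matrix (unit variances).
R = loadings @ phi @ loadings.T
np.fill_diagonal(R, 1.0)

print(f"within-domain r: {R[0, 1]:.3f}")   # 0.640
print(f"cross-domain r:  {R[0, 3]:.3f}")   # 0.576

# Share of total variance captured by the first principal component.
evals = np.linalg.eigvalsh(R)[::-1]
print(f"first eigenvalue share: {evals[0] / evals.sum():.2f}")
# Cross-domain correlations are almost as large as within-domain ones, so the
# matrix is close to congeneric and the domain distinction carries little
# reliable information beyond the general factor.
```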

Group differences

  • How useful are national IQs? by Noah Carl. A nice defense of research on national IQs. Interesting point: “If the measured IQ in Sub-Saharan Africa is 80, this would mean the massive difference in environment between Sub-Saharan Africa and the US reduces IQ by only 5 points, yet the comparatively small difference in environment between black and white Americans somehow reduces it by 15 points.”
  • Analyzing racial disparities in socioeconomic outcomes in three NCES datasets by Jay M. A lucidly written analysis of the main drivers of racial/ethnic disparities in educational attainment, occupational prestige, and income in America, based on several large longitudinal datasets. Some stylized facts from the many models reported (a toy version of this kind of covariate-adjustment exercise is sketched after this list): outcome gaps strongly favor whites over blacks in unconditional analyses, but these gaps are eliminated or reversed after controlling for just high school test scores and grades; Asians outachieve whites to a similar degree regardless of whether analyses are adjusted for test scores and grades; Hispanics and Native Americans are as disadvantaged as blacks in unconditional analyses, and while controlling for test scores and grades typically makes them statistically indistinguishable from whites, the effect of these covariates is clearly weaker for them than for blacks; the effect of cognitive skills is larger for educational attainment and occupational prestige than for income (although this may be partly because the analysis platform used does not permit the more appropriate log-normal functional form).
  • Examination of differential effects of cognitive abilities on reading and mathematics achievement across race and ethnicity: Evidence with the WJ IV by Hajovsky & Chesnut. This study finds that scalar invariance with respect to white, black, Hispanic, and Asian Americans holds for the Woodcock-Johnson IV IQ test. For the most part, the test also predicts achievement test scores similarly across races. The achievement tests were also invariant with respect to race/ethnicity. While these results are plausible, several aspects of this study make it relatively uninformative. Firstly, they fit a model with seven first-order factors, which is the test publisher’s preferred model, but, as usual with these things, it is an overfactored model and its fit is pretty marginal. Secondly, they don’t test for strict invariance, i.e., they never constrain the residual variances to be equal across groups (the invariance hierarchy is written out after this list). Thirdly, the white sample is much larger than the non-white samples, which means that the fit in whites contributes disproportionately to the invariance tests. Fourthly, and most damagingly, they adjust all the test scores for parental education, which removes unknown amounts of genetic and environmental variance from the scores. The results reported therefore concern a poorly fitting model based on test scores part of whose variance has been removed in a way that may itself be racially non-invariant. I would like to see a methodologically more thoughtful analysis of this dataset.
  • Race and the Mismeasure of School Quality by Angrist et al. Students in schools with larger white and Asian student shares have superior academic outcomes. This instrumental variable analysis suggests that this is not because such schools offer superior instruction but simply because of selection effects, so that if students were randomized to attend schools with different racial compositions, they would be expected to achieve at similar levels (a generic sketch of the IV logic follows below). This seems plausible enough, but Angrist et al. also suggest that this information should “increase the demand for schools with lower white enrollment.” That does not seem plausible to me because, as they themselves note, “school choice may respond more to peer characteristics than to value-added.” A “good school” is good primarily because of the quality of its students, not the quality of its teaching.
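
On the Jay M analysis above, a toy simulation of the basic covariate-adjustment exercise (made-up parameters and simulated data, not his models): a large unconditional group gap in the outcome shrinks to roughly zero once a test-score composite that differs between the groups is controlled.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000

# Simulated data: a group indicator, a test-score composite that differs
# between the groups, and an outcome driven entirely by the score.
group = rng.integers(0, 2, n).astype(float)
score = rng.standard_normal(n) - 0.8 * group
outcome = 0.6 * score + rng.standard_normal(n)

# Unconditional gap: regress the outcome on the group indicator alone.
m1 = sm.OLS(outcome, sm.add_constant(group)).fit()

# Conditional gap: add the test-score composite as a covariate.
m2 = sm.OLS(outcome, sm.add_constant(np.column_stack([group, score]))).fit()

print("unconditional group gap:   ", round(m1.params[1], 2))  # about -0.48
print("gap controlling for scores:", round(m2.params[1], 2))  # about 0
# When the outcome works through measured skills, the raw gap disappears once
# scores are controlled, which is the pattern reported for black-white gaps.
```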
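On the Hajovsky & Chesnut study, here is the standard multi-group invariance hierarchy in the usual CFA notation, for reference (nothing in it is specific to the WJ IV); strict invariance is the level that additionally equates residual variances, and that is the constraint the authors leave untested.

```latex
% Factor model for the scores x of person i in group g:
\[
x_{ig} = \tau_g + \Lambda_g \eta_{ig} + \varepsilon_{ig},
\qquad \operatorname{Cov}(\varepsilon_{ig}) = \Theta_g
\]
% configural: the same pattern of free loadings in every group
% metric:     $\Lambda_g = \Lambda$
% scalar:     $\Lambda_g = \Lambda$, $\tau_g = \tau$
% strict:     $\Lambda_g = \Lambda$, $\tau_g = \tau$, $\Theta_g = \Theta$
```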
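Finally, on Angrist et al., a generic two-stage least squares sketch with simulated, made-up data (not their actual design or estimates): naive OLS attributes the advantage of the selective school to the school itself, while instrumenting attendance with a random admission offer recovers the zero causal effect built into the simulation.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 100_000

# Simulated setup: students with higher prior ability are more likely to
# attend the high-achieving school, but attending it has no causal effect.
ability = rng.standard_normal(n)
offer = rng.integers(0, 2, n).astype(float)          # random admission offer
attend = (0.8 * offer + 0.8 * ability + rng.standard_normal(n) > 0.5).astype(float)
outcome = 1.0 * ability + rng.standard_normal(n)     # true school effect = 0

# Naive OLS: attendance looks beneficial purely because of selection.
ols = sm.OLS(outcome, sm.add_constant(attend)).fit()

# 2SLS by hand: instrument attendance with the random offer.
stage1 = sm.OLS(attend, sm.add_constant(offer)).fit()
iv = sm.OLS(outcome, sm.add_constant(stage1.fittedvalues)).fit()

print("naive OLS 'school effect':", round(ols.params[1], 2))  # clearly positive
print("2SLS 'school effect':     ", round(iv.params[1], 2))   # near zero
# The random offer is unrelated to ability, so the IV estimate reflects only
# the causal effect of attendance, which is zero in this simulation.
```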