Politics
- Psychologist Bryan Pesta was fired from his tenured position at Cleveland State University. The reason given was careless handling of protected data, but, as detailed in the linked article, the firing was the culmination of many years of campaigning against Pesta by activists who were incensed by his research on racial differences. Among other things, Pesta is a coauthor of Global Ancestry and Cognitive Ability, a seminal study that found that IQ increases linearly as a function of European ancestry in black Americans, independently of skin color (a result that has been replicated in independent samples).
- The academic journal Nature Human Behaviour announced that it seeks to suppress all research with even the slightest whiff of HBD, possibly including research on purely cultural differences. Of course, many other journals already follow something like this policy, only less openly (e.g., Behavioral Sciences), so this is more a codification of a fait accompli than something new. As bad as it is, it could easily get much worse. Universities have hired thousands upon thousands of faculty and administrators wedded to blank slate dogma and its attendant conspiracy theories about group differences, and these people may well be able to throw their weight around in the coming years, plumbing new depths of foolishness. Meanwhile, the more level-headed researchers will be easily cowed into silence, and ever wider areas of human behavior will be closed off to honest research.
- Don’t Even Go There by James Lee. A case in point regarding the widening circle of censorship is the blocking of researchers who study “dangerous” topics such as human intelligence from accessing publicly funded genomic data. As explained in the article, this suppression extends to analyses focused strictly on individual differences, not just group differences. Vague insinuations that amorphous harms could occur if human differences were freely studied are now enough to stop research.
- The ISIR Vienna affair by Noah Carl. A post-mortem of the cancellation of Emil Kirkegaard at an intelligence research conference last summer. As Carl notes, the instigator, geneticist Abdel Abdellaoui, has himself been subjected to attacks by activists for some of the same offenses that he took Emil to task for. Abdellaoui’s leftist detractors reject his protestations to the contrary and treat his research as a stalking horse for the racial questions that are explicit in Emil’s writings. In this (and only this) respect they are onto something, I think. Abdellaoui tries to draw a bright line between the good, moral individual differences research he is engaged in and the bad, immoral group differences research of Emil and others. However, individual differences and group differences are made of the same stuff, and trying to stave off the latter while championing the former cannot be done with intellectual consistency. A good starting point for research on group differences is Cheverud’s conjecture, which asserts that if there is a phenotypic correlation between two traits, the expectation should be that there is also a genetic correlation of a similar magnitude between them. So, if there is a phenotypic correlation between racial identity and IQ, one should bet on there being a genetic correlation, too.
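In standard quantitative-genetic notation (a textbook decomposition, not specific to any paper discussed here), the phenotypic correlation between two traits X and Y splits into a genetic and an environmental path:

```latex
r_P = h_X h_Y r_G + e_X e_Y r_E
```

Here h_X and h_Y are the square roots of the two heritabilities, e_X and e_Y the square roots of the environmentalities, and r_G and r_E the genetic and environmental correlations. Cheverud's conjecture amounts to the default expectation that r_G ≈ r_P.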
- Gender Gaps at the Academies by Card et al. This paper analyzed the publication and citation records of male and female psychologists, economists, and mathematicians elected as members of the National Academy of Sciences or the American Academy of Arts and Sciences over the last sixty years. In a sample of all authors who had published in the top dozen or so journals in each field, women in the 1960s and 1970s were equally or (non-significantly) less likely than men to be elected to an academy, conditional on publications and citations. In the 1980s and 1990s, there was gender parity or some female advantage, but in the last twenty years a large gender gap has emerged, with women being 3–15 times more likely to become members conditional on publications and citations. While this kind of study design is vulnerable to omitted variable bias, the female advantage is now so large that the men elected to membership in these organizations are likely to be of clearly higher caliber than the women.
- Septimius Severus Was Not Black, Who Cares? by Razib Khan. In today’s academia, centering the black experience is seen as a moral imperative, so given that blacks have been non-players in most of the world’s history, there is now a strong incentive to transmogrify historical figures of uncertain ancestry into blacks, a practice with a long tradition in Afrocentric pseudoscholarship. Razib’s post is a nice evisceration of an article by a history professor claiming that many prominent figures in Ancient Rome were black Africans, and even that “Black Romans were central to Classical culture”.
Genetics
- On the causal interpretation of heritability from a structural causal modeling perspective by Lu & Bourrat. According to the authors, the “current consensus among philosophers of biology is that heritability analysis has minimal causal implications.” Except for rare dissenters like the great Neven Sesardic, philosophers seem never to have been able to move on from the arguments against heritability estimation that Richard Lewontin made in the 1970s. Fortunately, quantitative and behavioral geneticists have paid no attention to philosophers’ musings on the topic, and have instead soldiered on, collecting tons of new genetically informative data and developing numerous methods to analyze genetic causation. Lu & Bourrat’s critique of behavioral genetic decompositions of phenotypic variance centers on gene-environment interactions and correlations. They write that “there is emerging evidence of substantial interaction in psychiatric disorders; therefore, deliberate testing of interaction hypotheses involving meta-analysis has been suggested (Moffitt et al., 2005).” That they cite a 17-year-old paper from the candidate gene era as “emerging evidence” in 2022 underlines the fact that the case for gene-environment interactions remains empirically very thin, despite its centrality to the worldview of the critics of behavioral genetics (see Border et al., 2019 regarding the fate of the research program of Moffitt et al.). As to gene-environment correlations, twin and family studies are well-equipped to deal with passive gene-environment correlations and population stratification, whereas active and reactive correlations (for definitions, see Plomin et al., 1977) are quite naturally regarded as ways for the genotype to express itself, and can thus be subsumed under genetic variance.
One can imagine a draconian experimental scenario where gene-environment correlations are prevented from occurring during individual development, so that, say, bookish people are not allowed to read more books, or extraverted people cannot befriend more people, or athletic people are prevented from playing sports. Behavioral genetic estimates based on individuals raised in such unnatural, circumscribed environments would hardly be more meaningful than, say, ordinary twin estimates which are based on comparatively uncontrolled environments. Lu & Bourrat also provide an extended treatment of heritability using Judea Pearl’s causal calculus, but I do not think Pearl’s machinery sheds new light on the topic. R.A. Fisher’s traditional definition of genetic causation as the average effect of gene substitution, i.e., what would happen, on average, if an individual had this rather than that variant of a gene, is in agreement with modern counterfactual frameworks like Pearl’s. Thus (additive) heritability is the standardized variance of the sum of the substitution effects of all loci.
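In symbols (standard quantitative-genetic notation, not taken from Lu & Bourrat): if x_i is an individual's count of the reference allele at locus i and α_i is that locus's average effect of substitution, then the additive genetic (breeding) value and the narrow-sense heritability are

```latex
A = \sum_i \alpha_i x_i, \qquad h^2 = \frac{\operatorname{Var}(A)}{\operatorname{Var}(P)}
```

where P is the phenotype. The counterfactual content lives in the α_i: each is the expected change in the phenotype if one allele were substituted for the other.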
- Causal effects on complex traits are similar across segments of different continental ancestries within admixed individuals by Hou et al. Phased genotype data includes information on the origin of each allele, i.e., whether it came from a paternal or maternal gamete. In an admixed population like African-Americans, phased data enables the determination of whether a given variant, along with all the DNA linked to it, was inherited from a black or white ancestor. With this information it is possible to compare associations between a trait and the same SNP inherited from black and white ancestors in the same individual, provided that he or she is homozygous for the locus. It turns out that the effect sizes are correlated at 0.95 between variants inherited from white and black ancestors. If I am interpreting this correctly, this finding goes against the popular argument that the decay of GWAS effect sizes (and polygenic scores) in samples that are ancestrally distant from the GWAS discovery population is primarily due to the same SNPs failing to tag the true causal variants in different populations. Instead, the paper suggests that the main culprit for the decay effect is differences in allele frequencies between populations, the average effect of an allele being a function of both its actual effect and its frequency. The within-individual method of this study could also be used to resolve some race difference problems, such as the one I discussed in this note.
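As a toy illustration of the frequency point (my own sketch, not the paper's method): for a purely additive biallelic locus, the trait variance contributed by an allele with per-allele effect β and frequency p is 2p(1−p)β², so the very same causal effect accounts for different amounts of variance, and is estimated with different precision, in populations with different allele frequencies.

```python
def var_explained(p, beta):
    """Trait variance contributed by an additive biallelic locus with
    allele frequency p and per-allele effect beta: 2*p*(1-p)*beta**2."""
    return 2 * p * (1 - p) * beta ** 2

# Identical causal effect, different allele frequencies:
v_common = var_explained(0.50, 0.2)  # frequency in, say, the discovery population
v_rare = var_explained(0.05, 0.2)    # same allele at a lower frequency elsewhere
# The rarer the allele, the less variance it explains and the harder it is
# for a GWAS of fixed size to pin down its effect.
```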
Psychometrics
- Theory-driven Game-based Assessment of General Cognitive Ability: Design Theory, Measurement, Prediction of Performance, and Test Fairness by Landers et al. In this study, a sample of >600 students completed both a traditional IQ test battery and a gamified battery of video games designed to assess cognitive skills. The correlation between the g factors from the two batteries was 0.97, indicating that general mental ability can be measured equally well in these two quite different modalities. This is a nice demonstration of Charles Spearman’s principle of the indifference of the indicator: any task that requires cognitive effort and discrimination between correct and incorrect responses can be used to measure g. The study also looked into black-white gaps in the sample. The gap on the game-based assessment was 0.77 SDs, while the gap on the traditional test was 0.95 SDs; however, the difference between the two gaps was non-significant. I think the data from the study are openly available, so you could fit latent variable models to see whether the difference has a substantive cause or whether it is just noise.
- Are Piagetian Scales Just Intelligence Tests? by Jordan Lasker. The latent correlation between g from IQ tests and the general factor of Piagetian tests was found to be 0.85 in a meta-analysis, so the two were highly similar and might have been completely indistinguishable if better, larger test batteries had been available.
- Stop Worrying About Multiple-Choice: Fact Knowledge Does Not Change With Response Format by Staab et al. Arguably, the multiple-choice format typically used in standardized tests is suboptimal, contributing irrelevant method variance to test scores. This study compared multiple-choice and open-ended items in tests of knowledge in natural sciences, life sciences, humanities, and social sciences. While the open-ended items were somewhat more difficult, the two types of items ranked individuals in identical order (latent correlation ~1), meaning that “method factors turned out to be irrelevant, whereas trait factors accounted for all of the individual differences.”
- Cognitive Training: A Field in Search of a Phenomenon by Gobet & Sala. The Holy Grail of cognitive training research is far transfer, which means that the training produces generalized improvements across different abilities, and not just near transfer, or better performance on the trained task and closely related tasks. As detailed in the article, this goal has not panned out: regardless of the type of training, only near transfer is achieved. The implication for education is that it is best to focus on mastering specific content domains rather than trying to improve general reasoning abilities. On the other hand, this throws the importance of general intelligence into high relief: while human learning is content-specific, higher g makes it easier to gain an understanding of any specific topic, enabling both superior performance on novel tasks and a shorter path to mastery over any given knowledge domain.
- Personality and Intelligence: A Meta-Analysis by Anglim et al. This very large-N meta-analysis found the reliability-corrected correlations of general intelligence with openness and neuroticism to be 0.20 and -0.09, respectively, while the correlations with extraversion, agreeableness, and conscientiousness were essentially zero. Personality types of all kinds are found at every level of intelligence with nearly the same probability. These estimates are almost identical to those reported in a previous meta-analysis by Judge et al. (2007), so no new ground was broken in this respect, but at least some things replicate in psychology. Anglim et al. also meta-analyzed the relationship of intelligence to narrower aspects of personality, as well as sex differences in personality (for example, women score about 0.30 SDs higher in neuroticism and agreeableness).
Group differences
- Measurement Invariance, Selection Invariance, and Fair Selection Revisited by Heesen & Romeijn. If two groups differ in their mean values (or variances) for some trait, an unbiased test measuring that trait will generally not be able to predict the performance of the members of those groups (in, say, a job or school) without bias with respect to group membership. This may lead to unfairness when the test is used to select individuals from different groups. I have previously discussed this phenomenon (Kelley’s paradox), and some of the related history, which goes back to the 1970s and even earlier, here. Heesen & Romeijn revisit this argument and express it in a more general form. They also note that Kelley’s paradox has been recently rediscovered outside of psychometrics, in research on algorithmic bias in machine learning. The paradox entails that when people are selected based on an unbiased test, and one group has a higher mean than the other but variances are the same, the higher-scoring group will generally have a higher true positive rate (sensitivity), and a higher positive predictive value, while the lower-scoring group will have a higher true negative rate (specificity), and a higher negative predictive value. When variances differ, too, the pattern of expected differences in error rates is more complicated, but if the higher-scoring group also has a higher variance, the differences would typically be in the same direction as when only means differ. These results apply not only to psychometric tests but to any (less than perfectly reliable) procedures for assessing and selecting individuals (or other units of analysis).
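The equal-variance case is easy to verify numerically. Below is a minimal Monte Carlo sketch (my own illustration with arbitrary thresholds, not Heesen & Romeijn's formalism): two groups differing by 1 SD in the true trait, an unbiased but noisy test, and a common selection cutoff.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000     # individuals per group
CUT = 1.0       # selection threshold on the test
CRIT = 1.0      # threshold on the true trait that defines "success"

def error_rates(group_mean):
    ability = rng.normal(group_mean, 1.0, N)   # true trait value
    test = ability + rng.normal(0.0, 1.0, N)   # unbiased test: ability plus noise
    selected, success = test > CUT, ability > CRIT
    sens = (selected & success).sum() / success.sum()        # true positive rate
    spec = (~selected & ~success).sum() / (~success).sum()   # true negative rate
    ppv = (selected & success).sum() / selected.sum()        # positive predictive value
    npv = (~selected & ~success).sum() / (~selected).sum()   # negative predictive value
    return sens, spec, ppv, npv

sens_hi, spec_hi, ppv_hi, npv_hi = error_rates(1.0)  # higher-scoring group
sens_lo, spec_lo, ppv_lo, npv_lo = error_rates(0.0)  # lower-scoring group
# Expected pattern: sens_hi > sens_lo and ppv_hi > ppv_lo,
# while spec_lo > spec_hi and npv_lo > npv_hi.
```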
- Role Models Revisited: HBCUs, Same-Race Teacher Effects, and Black Student Achievement by Lavar Edmonds. This study found that elementary school teachers who graduated from Historically Black Colleges and Universities (HBCUs) have a small positive effect (about 0.03 standard deviations) on the math test scores of their black students compared to non-HBCU graduates. This was found for both black and non-black HBCU teachers, but there were no effects either way on non-black students. Overall, having a black teacher was not associated with superior black student performance, because non-HBCU-trained black teachers had a significant negative effect (-0.02 SDs) on their black students. However, I do not quite buy these estimates because the study has shortcomings that are common in observational studies, especially in economics. In particular, the study has a huge sample (thousands of teachers, hundreds of thousands of students), and even the simplest models reported contain two fixed effects and ten control variables, yet the reported effects are small and in some models barely significant (despite a no doubt extensive, even if unacknowledged, specification search). With such large models it is difficult to say what is even being estimated. My ideal model would be one where the theory is so well-developed and the data so deftly collected that a bivariate regression will yield a plausible causal estimate. The further away you move from this ideal, the less credible your causal claims become, so every additional variable is potentially problematic. I do not believe that drawing DAGs showing the causal pathways, or lack thereof, between your variables is nearly as useful as Judea Pearl and his acolytes think, but if you would not even be capable of drawing one because of how complex your model is, I do not think you should be making causal claims.
- Racial and Ethnic Differences in Homework Time among U.S. Teens by Dunatchik & Park. According to time diary data, Asian-American high-schoolers spend an average of 2 hours and 14 minutes a day on homework, while the averages for white, Hispanic, and black students are 56 minutes, 50 minutes, and 36 minutes, respectively. On the other hand, a 2015 meta-analysis found a correlation of less than 0.20 between homework time and standardized test scores, so homework is not a strong predictor of achievement even before accounting for reverse causality. Then again, looking at, say, the skyrocketing SAT performance of Asian-Americans, I believe that they do get some returns on their Stakhanovite attitude to school.
- National Intelligence and Economic Growth: A Bayesian Update by Francis & Kirkegaard. In 2002, Lynn & Vanhanen published IQ and the Wealth of Nations, in which the predictive power of national IQ was shown to be superior to that of the traditional predictors of growth used by economists. The most common response to this finding has been to ignore it, while the second-most common response has been to dispute the validity of Lynn & Vanhanen’s data. The problem with the latter approach is that even with all their shortcomings, national IQ data predict GDP and all other indices of development extraordinarily well, so it is unwise to dismiss them out of hand. Moreover, new and more carefully curated test score collections, such as the World Bank’s harmonized test scores, show strong convergent validity with national IQs. An exception to the neglect of IQ in the econometric growth literature is Jones & Schneider (2006), in which national IQ was put to a severe test through Bayesian model averaging. They ran thousands of growth regressions with different sets of predictors, and found that the effect of national IQ was extremely robust to differences in model specification, indicating that IQ must be treated as an independent predictor of growth, not a proxy for something else. Francis & Kirkegaard’s recent study is an update and extension of Jones & Schneider’s analysis. They use more data and new robustness checks while explicitly comparing IQ and competing predictors. Across millions of growth regression models with different sets of predictors, they find that national IQ blows all other variables out of the water (with the exception of previous GDP, which seems to predict as well as IQ but negatively, reflecting the “advantage of backwardness”).
The authors also use three instruments (cranial capacity, ancestry-adjusted UV radiation, and 19th-century numeracy scores, i.e., age heaping) in an attempt to rule out confounding, but whether the instruments really are exogenous in the GDP~IQ regression can be questioned. Reverse causality seems to be, to some extent, baked into national IQ estimates because of the Flynn effect. Many questions of causality remain unsettled, but given the incomparable predictive power of national IQs, no serious study of the wealth of nations should ignore them. The causal influence of national IQs is prima facie more credible than that of many other predictors because of the robust influence of individual IQs on socioeconomic outcomes.
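To make the model-averaging idea concrete, here is a toy sketch of BIC-based Bayesian model averaging over all predictor subsets. The data and variable names are synthetic inventions of mine; the actual studies use far richer specifications and priors.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 120
names = ["iq", "invest", "openness", "climate"]          # hypothetical predictors
X = rng.normal(size=(n, len(names)))
y = 0.8 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(size=n)   # "growth" driven mainly by "iq"

def bic(y, Xsub):
    """BIC of an OLS regression of y on a constant plus the columns of Xsub."""
    X1 = np.column_stack([np.ones(len(y))] + ([Xsub] if Xsub.size else []))
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return len(y) * np.log(resid @ resid / len(y)) + X1.shape[1] * np.log(len(y))

# Enumerate every subset of predictors and weight each model by exp(-BIC/2).
models = [combo for r in range(len(names) + 1)
          for combo in itertools.combinations(range(len(names)), r)]
bics = np.array([bic(y, X[:, list(m)]) for m in models])
weights = np.exp(-0.5 * (bics - bics.min()))
weights /= weights.sum()

# Posterior inclusion probability: total weight of the models containing a predictor.
pip = {names[j]: float(sum(w for w, m in zip(weights, models) if j in m))
       for j in range(len(names))}
```

A predictor whose effect is robust across specifications, as national IQ was found to be, ends up with a posterior inclusion probability near 1, while a spurious one is dragged down by the BIC penalty.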
- Understanding Greater Male Variability by Inquisitive Bird. A lucidly written overview of greater male variability in cognitive tests. The post makes the interesting observation that the male-female variance ratio increases as the test’s mean difference in favor of males increases, but that male variance is larger even when the means are equal.
- Skill deficits among foreign-educated immigrants: Evidence from the U.S. PIAAC by Jason Richwine. Using test scores from the PIAAC survey, this study found that immigrants to the U.S. score 0.82 and 0.54 SDs lower on measures of literacy and numeracy, respectively, compared to natives, after controlling for age and educational attainment. The gaps are somewhat reduced but remain significant after controlling for self-assessed English reading ability. Test score differences explain at least half of the wage penalty and “underemployment” (i.e., holding a job below one’s apparent skill level) experienced by foreign-educated immigrants. Richwine does not report country of origin effects, arguing that the sample is too small. However, the immigrant sample size is about 1500, so some analyses at the continental level would have been feasible.
- Replotting the College Board’s 2011 Race and SES Barplot by Jordan Lasker. Mean SAT scores by race and parental income in 2011: