Links for April ’22

IQ and psychometrics

  • On the Continued Misinterpretation of Stereotype Threat as Accounting for Black-White Differences on Cognitive Tests by Tomeh & Sackett. A common misconception about stereotype threat, and a major reason for the popularity of the idea, is that in the absence of threat in the testing situation, the black-white IQ gap is eliminated. This is of course not the case; rather, the experimental activation of stereotypes has (sometimes) been found to make the black-white gap larger than it normally is. In an analysis of early writings on stereotype threat, Sackett et al. (2004) reported that this misinterpretation was found in the majority of journal articles, textbooks, and popular press articles discussing the effect. In the new article, Tomeh and Sackett find that more recent textbooks and journal articles are still about as likely to misinterpret stereotype threat in this way as to describe it correctly. I had hoped that the large multi-lab study of the effect would have put the whole idea to bed by now, but that study has unfortunately been delayed.
  • Invariance: What Does Measurement Invariance Allow us to Claim? by John Protzko. In this study, people were randomized to complete either a scale aiming to measure “search for meaning in life”, or an altered nonsense version of the same scale where the words “meaning” and “purpose” had been replaced with the word “gavagai”. The respondents indicated their level of agreement or disagreement with statements such as “I am searching for meaning/gavagai in my life”. Both groups also completed an unaltered “free will” scale, and confirmatory factor models where a single factor underlay the “meaning/gavagai” items while another factor underlay the “free will” items were estimated. The two groups showed not only configural but also metric and scalar invariance for these factors. Given the usual interpretation of factorial invariance in psychometrics, this would suggest that the mean difference observed between the two groups on the “meaning/gavagai” scale reflects a mean difference on a particular latent construct. The data used were made available online, and I was able to replicate the finding of configural, metric, and scalar invariance, given the ΔCFI/RMSEA criteria (strict invariance was not supported). The paradox appears to stem from the fact that individual differences on the “meaning in life” scale mostly reflect the wording and format of the items as well as response styles, rather than tapping into a specific latent attitude, which may not even exist, given the vagueness of the “meaning in life” scale. I found that I could move from scalar invariance to a more constrained model where all of the “meaning/gavagai” items had the same values for loadings and intercepts without worsening the model fit. So it seems that all the items were measuring the same thing (or things), but what that is cannot be discerned from a surface analysis of the items.
Jordan Lasker has written a long response to Protzko, taking issue with the idea that two scales can have the same meaning without strict invariance, as well as with the specific fit indices used. While I agree that strict invariance should always be pursued, Protzko’s discovery of scalar invariance using the conventional fit criteria is nevertheless interesting and requires an explanation. I think Lasker also makes a mistake in his analysis by setting the variances of the “meaning in life/gavagai” factors both to 1, even though this is not a constraint required for any level of factorial invariance. The extraneous constraint distorts his loading estimates.
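One way to see how a nonsense scale can behave just like the original is to note that if item responses are driven mainly by a shared response-style factor plus item wording and format, the content word never enters the data-generating process at all. A toy simulation of this idea (all parameters invented for illustration, not estimated from Protzko’s data): both “scales” load on a single response-style factor, so both groups show the same one-factor-like correlation structure regardless of whether the items say “meaning” or “gavagai”.

```python
import numpy as np

rng = np.random.default_rng(7)
n, items = 5_000, 5

def simulate_group():
    # responses driven by a response-style factor plus item-specific noise;
    # the content word ("meaning" vs. "gavagai") never enters the model
    style = rng.standard_normal((n, 1))
    return 0.6 * style + 0.8 * rng.standard_normal((n, items))

meaning = simulate_group()
gavagai = simulate_group()

# both groups show nearly identical inter-item correlations (~0.36),
# consistent with a single common factor in each group
print(np.corrcoef(meaning.T).round(2))
print(np.corrcoef(gavagai.T).round(2))
```

Under this data-generating model, multi-group factor analyses of the two “scales” would be expected to recover the same loadings and intercepts, which is the pattern Protzko reports.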
  • Effort impacts IQ test scores in a minor way: A multi-study investigation with healthy adult volunteers by Bates & Gignac. In three experiments (total N = 1201), adult participants first took a short spatial ability test (like this one) and were randomly assigned either to a treatment group or to a control group. Both groups then completed another version of the same test, with the treatment group participants promised a monetary reward if they improved their score by at least 10%. The effect of the incentives on test scores was small, d = 0.166, corresponding to 2.5 points on a standard IQ scale. This suggests that the effect size of d = 0.64 (or 9.6 points) reported in the meta-analysis by Duckworth et al. is strongly upwardly biased, as has been suspected. A limitation of the study is that the incentives were small, £10 at most. However, the participants were recruited through a crowdsourcing website and paid £1.20 for their participation (excluding the incentive bonuses), so it is possible that the rewards were substantial to them. Nevertheless, I would have liked to see whether a genuinely large reward had a larger effect. Bates & Gignac also conducted a series of large observational studies (total N = 3007) where the correlation between test performance and a self-report measure of test motivation was 0.28. However, this correlation is ambiguous because self-reported motivation may be related to how easy or hard the respondent finds the test.
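The conversions between Cohen’s d and IQ points quoted above are just rescalings by the conventional IQ standard deviation of 15; a minimal sketch:

```python
def d_to_iq_points(d: float, sd: float = 15.0) -> float:
    """Convert a standardized mean difference (Cohen's d) to IQ points."""
    return d * sd

# Bates & Gignac's incentive effect vs. the Duckworth et al. meta-analysis
print(round(d_to_iq_points(0.166), 1))  # 2.5 points
print(round(d_to_iq_points(0.64), 1))   # 9.6 points
```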

Education

  • The Coin Flip by Spotted Toad. This is an illuminating commentary on the Tennessee Pre-K study (on which I commented here) and the difficulty of causal inference in long-term experiments.
  • Do Meta-Analyses Oversell the Longer-Term Effects of Programs? Part 1 & Part 2 by Bailey & Weiss. This analysis found that in a meta-analytic sample of postsecondary education RCTs seeking to improve student outcomes, trials that reported larger initial effects were more likely to have long-term follow-up data collected and published. While this could be innocuous, with more effective interventions being selected for further study, it could also simply mean that studies more biased to the positive direction by sampling error were selected. So when you see a study touting the long-term benefits of some educational intervention, keep in mind that the sample may have been followed up only because the initial results were more promising than in other samples subjected to the same or similar interventions.
  • An Anatomy of the Intergenerational Correlation of Educational Attainment – Learning from the Educational Attainments of Norwegian Twins and their Children by Baier et al. Using Norwegian register data on the educational attainment of twins and their children, this study finds that the intergenerational correlation for education is entirely genetically mediated in Norway. The heritability of education was about 60% in both parents and children, while the shared environmental variance was 16% in parents and only 2% in children. This indicates that the shared environment is much less important for educational attainment in Norway than elsewhere (cf. Silventoinen et al., 2020), although this is partly a function of how assortative mating is modeled.
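The selection mechanism Bailey & Weiss describe is easy to simulate: even when every intervention has exactly the same true effect, following up only the trials with the largest initial estimates yields an inflated picture of effectiveness. A minimal sketch (all numbers are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

n_trials = 10_000
true_effect = 0.10   # identical true effect in every trial (assumed)
se = 0.10            # sampling error of each initial estimate (assumed)

initial = rng.normal(true_effect, se, n_trials)

# suppose only trials whose initial estimate cleared some threshold
# get long-term follow-up data collected and published
followed_up = initial[initial > 0.20]

print(initial.mean())      # close to 0.10: the full set is unbiased
print(followed_up.mean())  # well above 0.10: the selected set is inflated
```

The point is purely statistical: the followed-up trials look better even though nothing distinguishes their interventions, which is exactly why initial-effect-conditional follow-up is worrying.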


Links for March ’22

Genetics

  • Polygenic prediction of educational attainment within and between families from genome-wide association analyses in 3 million individuals by Okbay et al. This is the newest iteration of the educational attainment GWAS by the SSGAC consortium, now with a sample of three million people. It was published today and I have only skimmed it. The number of SNPs identified is about 4,000 now, up from 1,300 in the previous GWAS, while the $R^2$ increased from 11–13% to 12–16%, depending on the validation sample. They also conclude that there are no common SNPs with substantial dominance effects for educational attainment, underlining the validity of the additive genetic model. The within-family effect sizes are about 56% of the population effect sizes for educational attainment, while the same ratio is 82% for IQ and more than 90% for height and BMI. The discrepancy between the within-family and population estimates is probably mostly due to indirect genetic effects (“genetic nurture”) and assortative mating. Replicating SNP effects from the previous, smaller education GWAS sample, they find that 99.7% of the SNP effects have matching signs in the new data, and that 93% are significant at the 1% level or lower, which fits theoretical predictions well (it seems that the GWAS enterprise has vindicated the much-derided null hypothesis significance testing paradigm).
  • Cross-trait assortative mating is widespread and inflates genetic correlation estimates by Border et al. The genetic correlation is a statistic measuring the extent to which genetic effects on two different traits are correlated. It is easy enough to calculate, but not easy to interpret, because while the simplest interpretation is pleiotropy, several other causal and non-causal explanations are possible. This paper suggests that many genetic correlations are non-causal and result from cross-trait assortative mating, e.g., smarter than average women preferring to have children with taller than average men, which leads to a genetic correlation between IQ and height genes in the next generation even if height genes have no effect on IQ and IQ genes have none on height. Among other findings, the paper suggests that the importance of the general factor of psychopathology has been overestimated due to a failure to consider cross-trait assortative mating.
  • Modeling assortative mating and genetic similarities between partners, siblings, and in-laws by Torvik et al. This is a nice example of using psychometric methods to infer latent genetic parameters.
  • Behavioral geneticist Lindon Eaves has died. He was one of the major creative forces behind the methodology of modern twin and family studies. I did not know that he was also an ordained Anglican priest. You do not see many men of the cloth in science these days (or at least their creed is rather different now). Eric Turkheimer says that he never heard Eaves utter an illiberal word, but I do notice some forbidden literature on his bookshelf at the first link.
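The mechanism in Border et al. can be illustrated with a toy simulation: if spouses match on different traits (here, mothers’ “IQ genes” with fathers’ “height genes”), the two sets of genes become correlated in the offspring even though neither has any effect on the other trait. A minimal sketch with made-up parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# genetic values for two traits, independent within each parent
iq_m, h_m = rng.standard_normal(n), rng.standard_normal(n)
iq_f, h_f = rng.standard_normal(n), rng.standard_normal(n)

# cross-trait assortment: pair mothers and fathers by rank on noisy
# versions of mother's IQ and father's height (spousal r of ~0.5)
order_m = np.argsort(iq_m + rng.standard_normal(n))
order_f = np.argsort(h_f + rng.standard_normal(n))
iq_m, h_m = iq_m[order_m], h_m[order_m]
iq_f, h_f = iq_f[order_f], h_f[order_f]

# offspring genetic values: midparent average plus segregation noise
iq_c = (iq_m + iq_f) / 2 + rng.normal(0, np.sqrt(0.5), n)
h_c = (h_m + h_f) / 2 + rng.normal(0, np.sqrt(0.5), n)

print(np.corrcoef(iq_m, h_m)[0, 1])  # ~0: no within-person correlation
print(np.corrcoef(iq_c, h_c)[0, 1])  # clearly positive in the offspring
```

A naive analysis of the offspring generation would read the positive correlation as pleiotropy, when it is entirely an artifact of how the parents paired up.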

Miscellaneous

  • More waves in longitudinal studies do not help against initial sampling error by Emil Kirkegaard. Speaking of James Heckman, he has published another one of his endless reanalyses of the Perry Preschool study. Emil has a fun takedown of this absurd enterprise.
  • Assortative Mating and the Industrial Revolution: England, 1754-2021 by Clark & Cummings. In another installment of Greg Clark’s studies into the persistence of social status across generations, he has apparently found a constant, latent status correlation of 0.80 between spouses in England over the last few centuries. This suggests that grooms and brides matched tightly on underlying educational and occupational abilities even when higher education was rare and female participation in the labor market was limited. I have previously commented briefly on Clark’s work and the role of assortative mating in it here.


Links for February ’22

Genetics

  • The “Golden Age” of Behavior Genetics? by Evan Charney. The author is a political scientist best known for his anti-hereditarian screeds, of which this is the latest. He likes to discuss various random phenomena from molecular biology, describing them as inscrutably complex, a hopeless tangle in the face of which genetic analyses are futile. Unfortunately, his understanding of the statistical models designed to cut through that tangle is very limited. For example, he endorses Eric Turkheimer’s howler that genome-wide association studies are “p-hacking”, and makes a ridiculous argument about GWAS findings being non-replicable (p. 8); he does not appear to know, among other things, that statistical power is proportional to sample size (Ns in the studies he cites range from ~100k to ~1000k), that the p-value is a random variable, or that SNP “hits” are subject to the winner’s curse (he cites but evidently never read Okbay et al., 2016 and Lee et al., 2018, wherein it is shown that GWAS replication rates match theoretical expectations). He seeks to identify and amplify all possible sources of bias that could inflate genetic estimates, while ignoring biases in the opposite direction (e.g., the attenuating effect of assortative mating on within-sibship genomic estimates). Often the article is weirdly disjointed, e.g., Charney first discusses how sibling models have been used to control for population stratification, and then a couple of pages later says that it is impossible to know whether differences in religious affiliation are due to heritability or stratification. All in all, the article is a good example of what Thomas Bouchard has called pseudo-analysis.
  • Neither nature nor nurture: Using extended pedigree data to elucidate the origins of indirect genetic effects on offspring educational outcomes by Nivard et al. Contra naysayers like Charney, we are in the midst of a genuine golden age in behavior genetics. The underlying reason is the abundance of genomic data, which has spurred the development of so many new methods that it is hard to keep up with them. This preprint is the latest salvo in the debate about indirect genetic effects. Previous research has found indirect parental genetic effects in models where child phenotypes are regressed on child and parent polygenic scores. This study refines the design by doing the regression on adjusted parental polygenic scores that capture the personal deviations of parents’ scores from the mean scores of the parents and their own siblings. This refined design finds scant evidence for indirect parental genetic effects on children’s test scores, suggesting instead that apparent indirect effects are grandparental or “dynastic” effects of some sort. I think assortative mating is the most likely culprit. A limitation of this study is that even with a big sibling sample, the power to discriminate between different models is not high. Moreover, the study does not actually test the difference between the βs of the sibship and personal polygenic scores, and instead reasons from differences in significance, which is bad form.
  • The genetics of specific cognitive abilities by Procopio et al. This impressively large meta-analysis finds the heritability of specific abilities to be similar to that of g. That may be the case, although most measures of non-g abilities in the analysis are confounded by g. They can formally separate g from non-g only in the TEDS cohort, which has psychometrically rather weak measures of g.
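The winner’s curse mentioned in the Charney discussion above is easy to demonstrate by simulation: among estimates selected for genome-wide significance, effect sizes are systematically overestimated, which by itself predicts shrunken (but same-sign) replication estimates. A toy sketch, with parameters invented for illustration rather than modeled on any particular GWAS:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

n_snps = 100_000
true_beta = 0.02   # same small true effect at every SNP (assumed)
se = 0.01          # standard error in the discovery sample (assumed)

discovery = rng.normal(true_beta, se, n_snps)
z = discovery / se
p = 2 * stats.norm.sf(np.abs(z))

# keep only the SNPs that clear the genome-wide significance threshold
hits = discovery[p < 5e-8]

# significant estimates overstate the true effect by a large factor
print(hits.mean() / true_beta)
```

This is also why replication estimates for genome-wide significant hits are expected to be smaller than the discovery estimates even when every hit is real, exactly the pattern the Okbay and Lee papers document.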

IQ

  • Ian Deary and Robert Sternberg answer five self-inflicted questions about human intelligence. The two interlocutors in this discussion are mismatched: Deary is the most important intelligence researcher of his generation, known for his careful, wide-ranging empirical work, while Sternberg is one of the greatest blowhards and empty suits in psychology, known for generating mountains of repetitive, grandiose verbiage and for his disdain for anything but the most perfunctory tests of the theoretical entities that proliferate in his writings. Sternberg’s entries provide little insight, but there is some comedy in first reading his bloviations and then Deary’s courteous but often quietly savage responses. Deary emphasizes the value of establishing empirical regularities before or even instead of formulating psychological theories; notes the ubiquity of the jangle fallacy in cognitive research; and argues that cognitive psychological approaches have not generated any reductionist traction in explaining intelligence. According to Deary, a hard problem in intelligence research is one of public relations, that is, getting “across all the empirical regularities known about intelligence test scores”, the establishment of which has been “a success without equal in psychology.”
  • More articles by Stephen Breuning that need retraction by Russell Warne. Stephen Breuning is an erstwhile academic psychologist who was caught fabricating data on a large scale. He received the first US criminal conviction for research fraud in 1988. Nevertheless, many of his publications have not been retracted and continue to be cited, e.g., in the influential meta-analysis of the effects of test motivation on IQ by Duckworth et al. (2011). Warne reviews four of Breuning’s unretracted studies and identifies a number of inconsistencies and implausibilities that point to data fraud. It might be useful to further analyze these studies with GRIM, SPRITE, and the like.
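The GRIM test Warne alludes to is simple enough to sketch: for integer-valued responses, the mean of n scores must be k/n for some integer total k, so many impossible reported means can be flagged from the summary statistics alone. A minimal implementation, assuming means reported to two decimal places:

```python
def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """Check whether a reported mean is achievable from n integer scores.

    The true mean must equal k/n for some integer total k; the report is
    consistent if some k/n rounds to the reported mean at its precision.
    """
    k = round(reported_mean * n)
    # check neighboring integer totals to guard against rounding edge cases
    return any(
        round(kk / n, decimals) == round(reported_mean, decimals)
        for kk in (k - 1, k, k + 1)
    )

# With n = 25, achievable means are multiples of 0.04
print(grim_consistent(3.48, 25))  # True  (87 / 25 = 3.48)
print(grim_consistent(3.49, 25))  # False (no integer total yields 3.49)
```

SPRITE extends the same idea to reported standard deviations, reconstructing whole candidate distributions rather than just checking the mean.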

Sex and race

  • Sex differences in adolescents’ occupational aspirations: Variations across time and place by Stoet & Geary. More evidence for the gender equality paradox which postulates that sex differences are larger in wealthier and freer societies because heritable sex differences are suppressed in poorer societies where individuals have less choice.
  • Why Are Racial Problems in the United States So Intractable? by Joseph Heath. Most modern societies have dealt with ethnic and racial diversity either by trying to integrate minorities to the majority population, or by recognizing the separateness of minorities and devolving political power to them. Some countries have judged the success of these efforts through the lens of equal opportunity, while others have sought outcome equality. Heath argues that race problems involving African-Americans are so intractable and acrimonious because there is no agreement on whether integration or separatism should be pursued, nor on how success and failure in racial affairs are to be judged. He manages to squeeze a good deal of analytic juice from this simple model while avoiding “bad actor” explanations which attribute all racial problems either to white malevolence or black incompetence. Money quote: “[T]he best way to describe the current American approach to racial inclusion would be to say that it is attempting to achieve Singaporean outcomes using Canadian methods and legal frameworks.”


Links for January ’22

I will try to get in the habit of collecting the most interesting studies, articles, posts, etc. related to human biodiversity in a monthly post, together with some commentary. The links are not necessarily to brand-new stuff; they are just what I happened to come across recently.

© 2022 Human Varieties