Author: Dalliard

Links for April ’22

IQ and psychometrics

  • On the Continued Misinterpretation of Stereotype Threat as Accounting for Black-White Differences on Cognitive Tests by Tomeh & Sackett. A common misconception about stereotype threat, and a major reason for the popularity of the idea, is that in the absence of threat in the testing situation, the black-white IQ gap is eliminated. This is of course not the case; rather, the experimental activation of stereotypes has (sometimes) been found to make the black-white gap larger than it normally is. In an analysis of early writings on stereotype threat, Sackett et al. (2004) reported that this misinterpretation appeared in the majority of journal articles, textbooks, and popular press articles discussing the effect. In the new article, Tomeh and Sackett find that more recent textbooks and journal articles are still about as likely to misinterpret stereotype threat in this way as to describe it correctly. I had hoped that the large multi-lab study of the effect would have put the whole idea to bed by now, but that study has unfortunately been delayed.
  • Invariance: What Does Measurement Invariance Allow us to Claim? by John Protzko. In this study, people were randomized to complete either a scale aiming to measure “search for meaning in life”, or an altered nonsense version of the same scale in which the words “meaning” and “purpose” had been replaced with the word “gavagai”. The respondents indicated their level of agreement or disagreement with statements such as “I am searching for meaning/gavagai in my life”. Both groups also completed an unaltered “free will” scale, and confirmatory factor models were estimated in which a single factor underlay the “meaning/gavagai” items and another factor underlay the “free will” items. The two groups showed not only configural but also metric and scalar invariance for these factors (the invariance hierarchy is spelled out after this list). Given the usual interpretation of factorial invariance in psychometrics, this would suggest that the mean difference observed between the two groups on the “meaning/gavagai” scale reflects a mean difference on a particular latent construct. The data were made available online, and I was able to replicate the finding of configural, metric, and scalar invariance under the ΔCFI/RMSEA criteria (strict invariance was not supported). The paradox appears to stem from the fact that individual differences on the “meaning in life” scale mostly reflect the wording and format of the items as well as response styles, rather than tapping into a specific latent attitude, which may not even exist given the vagueness of the “meaning in life” scale. I found that I could move from scalar invariance to a more constrained model in which all of the “meaning/gavagai” items had the same values for loadings and intercepts without worsening the model fit. So it seems that all the items were measuring the same thing (or things), but what that is cannot be determined from a surface analysis of the items. Jordan Lasker has written a long response to Protzko, taking issue with the idea that two scales can have the same meaning without strict invariance, as well as with the specific fit indices used. While I agree that strict invariance should always be pursued, Protzko’s discovery of scalar invariance under the conventional fit criteria is nevertheless interesting and requires an explanation. I think Lasker also makes a mistake in his analysis by fixing the variances of the “meaning in life/gavagai” factors both to 1, even though this constraint is not required for any level of factorial invariance. The extraneous constraint distorts his loading estimates.
  • Effort impacts IQ test scores in a minor way: A multi-study investigation with healthy adult volunteers by Bates & Gignac. In three experiments (total N = 1201), adult participants first took a short spatial ability test (like this one) and were randomly assigned either to a treatment group or to a control group. Both groups then completed another version of the same test, with the treatment group participants promised a monetary reward if they improved their score by at least 10%. The effect of the incentives on test scores was small, d = 0.166, corresponding to 2.5 points on a standard IQ scale. This suggests that the effect size of d = 0.64 (or 9.6 points) reported in the meta-analysis by Duckworth et al. is strongly upwardly biased, as has been suspected. A limitation of the study is that the incentives were small, £10 at most. However, the participants were recruited through a crowdsourcing website and paid £1.20 for their participation (excluding the incentive bonuses), so it is possible that the rewards were substantial to them. Nevertheless, I would have liked to see whether a genuinely large reward would have had a larger effect. Bates & Gignac also conducted a series of large observational studies (total N = 3007) in which the correlation between test performance and a self-report measure of test motivation was 0.28. However, this correlation is ambiguous because self-reported motivation may depend on how easy or hard the respondent finds the test.
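For readers unfamiliar with the invariance terminology in the Protzko item above, the hierarchy can be stated compactly for a single-factor model (this is the standard textbook formulation, not notation taken from Protzko’s paper). Person $i$’s response to item $j$ in group $g$ is modeled as

$$x_{ij}^{(g)} = \tau_j^{(g)} + \lambda_j^{(g)} \xi_i^{(g)} + \varepsilon_{ij}^{(g)}.$$

Configural invariance requires only that the same items load on the same factor in each group; metric invariance adds $\lambda_j^{(1)} = \lambda_j^{(2)}$ for every item; scalar invariance further adds $\tau_j^{(1)} = \tau_j^{(2)}$, which is what licenses comparing latent means across groups; and strict invariance additionally requires $\operatorname{Var}(\varepsilon_{ij}^{(1)}) = \operatorname{Var}(\varepsilon_{ij}^{(2)})$.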

Education

  • The Coin Flip by Spotted Toad. This is an illuminating commentary on the Tennessee Pre-K study (on which I commented here) and the difficulty of causal inference in long-term experiments.
  • Do Meta-Analyses Oversell the Longer-Term Effects of Programs? Part 1 & Part 2 by Bailey & Weiss. This analysis found that in a meta-analytic sample of postsecondary education RCTs seeking to improve student outcomes, trials that reported larger initial effects were more likely to have long-term follow-up data collected and published. While this could be innocuous, with more effective interventions being selected for further study, it could also simply mean that studies biased in the positive direction by sampling error were the ones selected. So when you see a study touting the long-term benefits of some educational intervention, keep in mind that the sample may have been followed up only because the initial results were more promising than in other samples subjected to the same or similar interventions.
  • An Anatomy of the Intergenerational Correlation of Educational Attainment – Learning from the Educational Attainments of Norwegian Twins and their Children by Baier et al. Using Norwegian register data on the educational attainment of twins and their children, this study finds that the intergenerational correlation for education is entirely genetically mediated in Norway. The heritability of education was about 60% in both parents and children, while the shared environmental variance was 16% in parents and only 2% in children. This indicates that the shared environment is much less important for educational attainment in Norway than elsewhere (cf. Silventoinen et al., 2020), although this is partly a function of how assortative mating is modeled.


Links for March ’22

Genetics

  • Polygenic prediction of educational attainment within and between families from genome-wide association analyses in 3 million individuals by Okbay et al. This is the newest iteration of the educational attainment GWAS by the SSGAC consortium, now with a sample of three million people. It was published today and I have only skimmed it. The number of SNPs identified is about 4,000 now, up from 1,300 in the previous GWAS, while the $R^2$ increased from 11–13% to 12–16%, depending on the validation sample. They also conclude that there are no common SNPs with substantial dominance effects for educational attainment, underlining the validity of the additive genetic model. The within-family effect sizes are about 56% of the population effect sizes for educational attainment, while the same ratio is 82% for IQ and more than 90% for height and BMI. The discrepancy between the within-family and population estimates is probably mostly due to indirect genetic effects (“genetic nurture”) and assortative mating. Replicating SNP effects from the previous, smaller education GWAS sample, they find that 99.7% of the SNP effects have matching signs in the new data, and that 93% are significant at the 1% level or lower, which fits theoretical predictions well (it seems that the GWAS enterprise has vindicated the much-derided null hypothesis significance testing paradigm).
  • Cross-trait assortative mating is widespread and inflates genetic correlation estimates by Border et al. The genetic correlation is a statistic measuring the extent to which genetic effects on two different traits are correlated. It is easy enough to calculate, but not easy to interpret, because while the simplest interpretation is pleiotropy, several other causal and non-causal explanations are possible. This paper suggests that many genetic correlations are non-causal and result from cross-trait assortative mating, e.g., smarter than average women preferring to have children with taller than average men, which produces a genetic correlation between IQ and height in the next generation even if the genes for height have no effect on IQ and the genes for IQ none on height (a toy simulation of this mechanism is sketched after this list). Among other findings, the paper suggests that the importance of the general factor of psychopathology has been overestimated due to a failure to consider cross-trait assortative mating.
  • Modeling assortative mating and genetic similarities between partners, siblings, and in-laws by Torvik et al. This is a nice example of using psychometric methods to infer latent genetic parameters.
  • Behavioral geneticist Lindon Eaves has died. He was one of the major creative forces behind the methodology of modern twin and family studies. I did not know that he was also an ordained Anglican priest. You do not see many men of the cloth in science these days (or at least their creed is rather different now). Eric Turkheimer says that he never heard Eaves utter an illiberal word, but I do notice some forbidden literature on his bookshelf at the first link.
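Returning to the Border et al. item above: the basic mechanism is easy to demonstrate with a toy simulation. The sketch below is my own illustration, not the paper’s model; it mates parents directly on standardized genetic values (ignoring phenotypes, incomplete heritability, and multi-generational equilibrium) just to show that a genetic correlation appears in the offspring generation despite zero pleiotropy.

```python
# Toy simulation (my own, not from Border et al.): cross-trait assortative
# mating induces a genetic correlation between two traits whose genetic
# values start out completely independent within each person.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000          # number of couples
r_mate = 0.4         # cross-trait mate correlation: mother's trait-A genes with father's trait-B genes

# Parental generation: within each person, genetic values for traits A and B are independent
mother_a, mother_b = rng.standard_normal(n), rng.standard_normal(n)
father_a = rng.standard_normal(n)
# Impose cross-assortment: father's trait-B genetic value correlates with mother's trait-A value
father_b = r_mate * mother_a + np.sqrt(1 - r_mate**2) * rng.standard_normal(n)

# Offspring genetic values: mid-parent plus Mendelian segregation noise
seg = np.sqrt(0.5)
child_a = 0.5 * (mother_a + father_a) + seg * rng.standard_normal(n)
child_b = 0.5 * (mother_b + father_b) + seg * rng.standard_normal(n)

print("parental within-person corr(A, B): ", round(np.corrcoef(mother_a, mother_b)[0, 1], 3))
print("offspring within-person corr(A, B):", round(np.corrcoef(child_a, child_b)[0, 1], 3))
# Expected offspring correlation is roughly r_mate / 4 = 0.1 here, despite zero pleiotropy.
```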

IQ and personality

Miscellaneous

  • More waves in longitudinal studies do not help against initial sampling error by Emil Kirkegaard. James Heckman has published yet another of his endless reanalyses of the Perry Preschool study. Emil has a fun takedown of this absurd enterprise.
  • Assortative Mating and the Industrial Revolution: England, 1754-2021 by Clark & Cummings. In another installment of Greg Clark’s studies of the persistence of social status across generations, he has apparently found a constant latent status correlation of 0.80 between spouses in England over the last few centuries. This suggests that grooms and brides matched tightly on underlying educational and occupational abilities even when higher education was rare and female participation in the labor market was limited. I have previously commented briefly on Clark’s work and the role of assortative mating in it here.


Links for February ’22

Genetics

  • The “Golden Age” of Behavior Genetics? by Evan Charney. The author is a political scientist best known for his anti-hereditarian screeds, of which this is the latest. He likes to discuss various random phenomena from molecular biology, describing them as inscrutably complex, a hopeless tangle in the face of which genetic analyses are futile. Unfortunately, his understanding of the statistical models designed to cut through that tangle is very limited. For example, he endorses Eric Turkheimer’s howler that genome-wide association studies are “p-hacking”, and makes a ridiculous argument about GWAS findings being non-replicable (p. 8); he does not appear to know, among other things, that statistical power increases with sample size (Ns in the studies he cites range from ~100k to ~1000k), that the p-value is a random variable, or that SNP “hits” are subject to the winner’s curse (he cites but evidently never read Okbay et al., 2016 and Lee et al., 2018, wherein it is shown that GWAS replication rates match theoretical expectations). He seeks to identify and amplify all possible sources of bias that could inflate genetic estimates, while ignoring biases in the opposite direction (e.g., the attenuating effect of assortative mating on within-sibship genomic estimates). Often the article is weirdly disjointed; e.g., Charney first discusses how sibling models have been used to control for population stratification, and then a couple of pages later says that it is impossible to know whether differences in religious affiliation are due to heritability or stratification. All in all, the article is a good example of what Thomas Bouchard has called pseudo-analysis.
  • Neither nature nor nurture: Using extended pedigree data to elucidate the origins of indirect genetic effects on offspring educational outcomes by Nivard et al. Contra naysayers like Charney, we are in the midst of a genuine golden age in behavior genetics. The underlying reason is the abundance of genomic data, which has spurred the development of so many new methods that it is hard to keep up with them. This preprint is the latest salvo in the debate about indirect genetic effects. Previous research has found indirect parental genetic effects in models where child phenotypes are regressed on child and parent polygenic scores. This study refines the design by regressing on adjusted parental polygenic scores that capture the deviations of the parents’ scores from the mean scores of the parents and their own siblings (a schematic of this kind of regression is given after this list). The refined design finds scant evidence for indirect parental genetic effects on children’s test scores, suggesting instead that apparent indirect effects are grandparental or “dynastic” effects of some sort. I think assortative mating is the most likely culprit. A limitation of this study is that even with a big sibling sample, the power to discriminate between different models is not high. Moreover, the study does not actually test the difference between the βs of the sibship and personal polygenic scores, and instead reasons from differences in significance, which is bad form.
  • The genetics of specific cognitive abilities by Procopio et al. This impressively large meta-analysis finds the heritability of specific cognitive abilities to be similar to that of g. That may be the case, although most measures of non-g abilities in the analysis are confounded with g. The authors can formally separate g and non-g only in the TEDS cohort, which has psychometrically rather weak measures of g.
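To make the contrast in the Nivard et al. item concrete, here is one way to write it down. The notation is my own schematic reading of the design as described above, not the paper’s actual specification. The older design regresses a child outcome on child and parental polygenic scores,

$$y_{\text{child}} = \beta_0 + \beta_1 \mathrm{PGS}_{\text{child}} + \beta_2 \mathrm{PGS}_{\text{parent}} + \varepsilon,$$

and reads a nonzero $\beta_2$ as an indirect parental genetic effect. The refinement decomposes each parental score as

$$\mathrm{PGS}_{\text{parent}} = \overline{\mathrm{PGS}}_{\text{parental sibship}} + \left(\mathrm{PGS}_{\text{parent}} - \overline{\mathrm{PGS}}_{\text{parental sibship}}\right)$$

and enters the two components separately: a genuine parental indirect effect should show up on the parent-specific deviation term, whereas dynastic effects and assortative mating load on the sibship mean, which is shared with the child’s aunts and uncles.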

IQ

  • Ian Deary and Robert Sternberg answer five self-inflicted questions about human intelligence. The two interlocutors in this discussion are mismatched: Deary is the most important intelligence researcher of his generation, known for his careful, wide-ranging empirical work, while Sternberg is one of the greatest blowhards and empty suits in psychology, known for generating mountains of repetitive, grandiose verbiage and for his disdain for anything but the most perfunctory tests of the theoretical entities that proliferate in his writings. Sternberg’s entries provide little insight, but there is some comedy in first reading his bloviations and then Deary’s courteous but often quietly savage responses. Deary emphasizes the value of establishing empirical regularities before or even instead of formulating psychological theories; notes the ubiquity of the jangle fallacy in cognitive research; and argues that cognitive psychological approaches have not generated any reductionist traction in explaining intelligence. According to Deary, a hard problem in intelligence research is one of public relations, that is, getting “across all the empirical regularities known about intelligence test scores”, the establishment of which has been “a success without equal in psychology.”
  • More articles by Stephen Breuning that need retraction by Russell Warne. Stephen Breuning is an erstwhile academic psychologist who was caught fabricating data on a large scale. He received the first US criminal conviction for research fraud in 1988. Nevertheless, many of his publications have not been retracted and continue to be cited, e.g., in the influential meta-analysis of the effects of test motivation on IQ by Duckworth et al. (2011). Warne reviews four of Breuning’s unretracted studies and identifies a number of inconsistencies and implausibilities that point to data fraud. It might be useful to further analyze these studies with GRIM, SPRITE, and the like (a minimal GRIM check is sketched below).
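Since GRIM comes up above: the test itself takes only a few lines of code. The sketch below is a minimal version of Brown and Heathers’ idea, checking whether a mean reported to two decimals is achievable from integer-valued scores; the example numbers are made up, not figures from Breuning’s papers.

```python
# Minimal GRIM check: for integer-valued data, a mean reported to a given
# number of decimals is only possible if some integer sum, divided by the
# sample size, rounds to that reported mean.
def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """Return True if the reported mean is achievable with n integer scores."""
    target = round(reported_mean, decimals)
    approx_sum = reported_mean * n
    # Check the integer sums closest to reported_mean * n
    for total in (int(approx_sum) - 1, int(approx_sum), int(approx_sum) + 1):
        if round(total / n, decimals) == target:
            return True
    return False

# Hypothetical example values, not figures from any actual paper:
print(grim_consistent(3.48, 25))  # True: 87 / 25 = 3.48
print(grim_consistent(3.49, 25))  # False: no sum of 25 integer scores yields 3.49
```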

Sex and race

  • Sex differences in adolescents’ occupational aspirations: Variations across time and place by Stoet & Geary. More evidence for the gender equality paradox which postulates that sex differences are larger in wealthier and freer societies because heritable sex differences are suppressed in poorer societies where individuals have less choice.
  • Why Are Racial Problems in the United States So Intractable? by Joseph Heath. Most modern societies have dealt with ethnic and racial diversity either by trying to integrate minorities into the majority population, or by recognizing the separateness of minorities and devolving political power to them. Some countries have judged the success of these efforts through the lens of equal opportunity, while others have sought outcome equality. Heath argues that race problems involving African-Americans are so intractable and acrimonious because there is no agreement on whether integration or separatism should be pursued, nor on how success and failure in racial affairs are to be judged. He manages to squeeze a good deal of analytic juice from this simple model while avoiding “bad actor” explanations which attribute all racial problems either to white malevolence or black incompetence. Money quote: “[T]he best way to describe the current American approach to racial inclusion would be to say that it is attempting to achieve Singaporean outcomes using Canadian methods and legal frameworks.”


Links for January ’22

I will try to get in the habit of collecting the most interesting studies, articles, posts, etc. related to human biodiversity in a monthly post, together with some commentary. The links are not necessarily to brand-new stuff; they are just what I happened to come across recently.

The Persistence of Cognitive Inequality: Reflections on Arthur Jensen’s “Not Unreasonable Hypothesis” after Fifty Years

In 1969, Harvard Educational Review published a long, 122-page article under the title “How Much Can We Boost IQ and Scholastic Achievement?” It was authored by Arthur R. Jensen (1923–2012), a professor of educational psychology at the University of California, Berkeley. The article offered an overview of the measurement and determinants of cognitive ability and its relation to academic achievement, as well as a largely negative assessment of attempts to ameliorate intellectual and educational deficiencies through preschool and compensatory education programs. Jensen also made some suggestions on how to change educational systems to better accommodate students with disparate levels of ability.

While most of the article did not deal with race, Jensen did argue that it was “a not unreasonable hypothesis” that genetic differences between whites and blacks were an important cause of IQ and achievement gaps between the two races. This set off a huge academic controversy—Google Scholar says that the article was cited more than 1,200 times in the decade after its publication and almost 5,400 times by December 2019. The dispute about the article centered on the question of racial differences, which is understandable as Jensen’s thesis came out on the heels of the civil rights movement and its attendant controversies, such as school integration, busing of students, and affirmative action. Jensen questioned whether it is in fact possible to eliminate racial differences in socially valued outcomes through conventional policy measures, striking at the foundational assumption of liberal and radical racial politics. His floating of the racial-genetic hypothesis was what set his argument apart from the general tenor of the era’s scholarly and policy debate.

In this post, I will take a look at Jensen’s arguments and their development over time. The focus will be on the race question, but many related, more general topics will be discussed as well. The post has four parts. The first is a synopsis of Jensen’s argument as it was presented in the 1969 article. The second part offers an updated restatement of Jensen’s model of race and intelligence, while in the third part I argue, using the Bradford Hill criteria, that the model has many virtues as a causal explanation. In the fourth and concluding part I will make some more general remarks about the status and significance of racialist thinking about race and IQ.[Note]

Racial and Ethnic Differences in Cognitive Skills in Working-Age, Native-Born Americans

Given the central role that testing plays in the American educational system, most datasets that we have on racial and ethnic differences in cognitive ability include only children, adolescents, or young adults. Most of the economic and social effects of cognitive differences are, however, produced by the working-age population, so it would be useful to have test scores from older adults as well. The PIAAC survey of adult skills conducted by the OECD provides excellent data for this purpose.

Measurement Error, Regression to the Mean, and Group Differences

Regression to the mean, RTM for short, is a statistical phenomenon which occurs when a variable that is in some sense unreliable or unstable is measured on two different occasions. Another way to put it is that RTM is to be expected whenever there is a less than perfect correlation between two measurements of the same thing. The most conspicuous consequence of RTM is that individuals who are far from the mean value of the distribution on first measurement tend to be noticeably closer to the mean on second measurement. As most variables aren’t perfectly stable over time, RTM is a more or less universal phenomenon.
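As a quick illustration of the point about extreme scorers, here is a toy simulation of my own (not data from the post): two noisy measurements of the same stable trait, with a group selected for extreme scores on the first measurement.

```python
# Toy illustration: regression to the mean with two unreliable measurements
# of the same underlying trait (reliability = 0.7 here, an arbitrary choice).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
reliability = 0.7  # proportion of observed-score variance due to the stable trait

true_score = rng.standard_normal(n)
noise_sd = np.sqrt(1 / reliability - 1)  # error SD that yields the chosen reliability
test1 = true_score + noise_sd * rng.standard_normal(n)
test2 = true_score + noise_sd * rng.standard_normal(n)

# Select people who scored at least 2 SD above the mean on the first test
high = test1 > 2 * test1.std()
print("mean z on test 1:", round((test1[high] / test1.std()).mean(), 2))
print("mean z on test 2:", round((test2[high] / test2.std()).mean(), 2))
# The second-test mean sits closer to 0: roughly reliability * (first-test mean),
# since E[z2 | z1] = r * z1 when the test-retest correlation is r.
```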

In this post, I will attempt to explain why regression to the mean happens. I will also try to clear up some common misconceptions about it, such as the idea that RTM makes people more average over time. Much of the post is devoted to demonstrating how RTM complicates group comparisons and what can be done about it. My approach is didactic and I will repeat myself a lot, but I think that’s warranted given how often people are misled by this phenomenon.

Top Ten Human Varieties Posts

In the more than three years of its existence, about 110 posts have been published on this blog. While blogging has unfortunately been light around here recently, the upside of the data- and analysis-heavy format of our posts is that they rarely lose their relevance with time, which makes perusing our old posts well worth the effort.

To help readers search through our archives, below is a list of what I consider to be some of the best content we’ve published. They’re not necessarily our most popular posts, but I think they offer a good dive into human biodiversity, in particular our perennial favorite topic of IQ differences between groups. The list is in the order of original publication.

Equal Environments Assumption and Sex Differences

In the classic twin study design, identical (MZ) twin pairs are compared to fraternal (DZ) twin pairs so as to estimate the relative contributions of heredity and environment to individual differences. The classic twin design depends on the equal environments assumption (EEA), according to which the trait-relevant environments shared by MZ twins are no more similar than those shared by DZ twins.
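For reference, the expectations the design rests on can be written in the standard ACE notation (textbook formulas, not anything specific to this post), where $a^2$, $c^2$, and $e^2$ are the additive genetic, shared environmental, and nonshared environmental variance proportions:

$$r_{MZ} = a^2 + c^2, \qquad r_{DZ} = \tfrac{1}{2}a^2 + c^2,$$

so that under the EEA, $a^2 = 2(r_{MZ} - r_{DZ})$ (Falconer’s formula) and $c^2 = 2r_{DZ} - r_{MZ}$. If MZ pairs in fact share more trait-relevant environment than DZ pairs, part of $c^2$ is misattributed to $a^2$.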

The claim that the EEA is unrealistic and routinely violated is perhaps the most common criticism of the classic twin design. Violations of the EEA generally bias estimates of the effect of heredity upwards and those of the environment downwards. For this reason, there have been a number of studies where the assumption has been put to the test, with research questions such as:

  • Are twin pairs who are misinformed about their actual zygosity as similar as pairs who know their real zygosity?
  • Are twin pairs with objectively more similar environments more similar phenotypically?
  • Are the results of twin studies consistent with the results of other kinds of behavioral genetic designs, such as adoption studies?

This research has indicated that the EEA is generally valid and that even when it’s violated, the effect on parameter estimates is small (Barnes et al., 2014; Felson, 2014).

I think sex differences offer an underappreciated way of further evaluating the EEA. Half of DZ pairs are same-sex (male-male or female-female) and half are opposite-sex (male-female), whereas MZ pairs are, of course, all same-sex. Differences in twin correlations across these sex categories are informative about the EEA, because if the shared environment differs by zygosity, you would expect it to differ by sex composition, too.
