Regression to the mean (RTM for short) is a statistical phenomenon that occurs when a variable that is in some sense unreliable or unstable is measured on two different occasions. Another way to put it is that RTM is to be expected whenever there is a less-than-perfect correlation between two measurements of the same thing. The most conspicuous consequence of RTM is that individuals who are far from the mean of the distribution on the first measurement tend to be noticeably closer to the mean on the second. As few variables are perfectly stable over time, RTM is a more or less universal phenomenon.
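A short simulation makes the selection effect concrete. (This sketch is my own illustration, not from the post; the score scale and the true-score and noise standard deviations are arbitrary assumptions.)

```python
import random
import statistics

random.seed(42)

# Each observed score is a stable true score plus occasion-specific noise,
# so the correlation between the two measurements is less than perfect.
N = 10_000
true_scores = [random.gauss(100, 12) for _ in range(N)]
time1 = [t + random.gauss(0, 8) for t in true_scores]
time2 = [t + random.gauss(0, 8) for t in true_scores]

# Select the individuals who scored far above the mean on the first occasion.
high = [i for i in range(N) if time1[i] >= 120]
mean_t1 = statistics.mean(time1[i] for i in high)
mean_t2 = statistics.mean(time2[i] for i in high)

# On average they sit closer to the mean the second time, even though
# nothing about them changed: the favorable noise simply did not repeat.
print(f"Time-1 mean of high scorers: {mean_t1:.1f}")
print(f"Time-2 mean of the same people: {mean_t2:.1f}")

# The overall spread is unchanged, so the population as a whole does not
# become more average over time.
sd1 = statistics.stdev(time1)
sd2 = statistics.stdev(time2)
print(f"SD at time 1: {sd1:.1f}, SD at time 2: {sd2:.1f}")
```

Note that only the extreme group drifts toward the mean; the two occasions have the same standard deviation, because other individuals drift outward to take the extremes' place.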
In this post, I will attempt to explain why regression to the mean happens. I will also try to clarify certain common misconceptions about it, such as why RTM does not make people more average over time. Much of the post is devoted to demonstrating how RTM complicates group comparisons, and what can be done about it. My approach is didactic and I will repeat myself a lot, but I think that’s warranted given how often people are misled by this phenomenon.
Wang, M., Fuerst, J., & Ren, J. (2016). Evidence of dysgenic fertility in China. Intelligence, 57, 15–24.
From the discussion: “We’ve seen, in Table 4, that urban populations in China exhibited a relatively high dysgenic fertility trend in the 1951–1970 birth cohort. For this same cohort, the trend was much smaller in the rural populations. It suggests that dysgenic selection is related to urbanity. This supports Pan’s (1923) observation that ‘modern urbanization has had so many dysgenic effects upon the race.’”
Michael Rönnlund and colleagues have a very nice paper out in Intelligence. They show that the individual differences in general intelligence that exist at age 18 are almost perfectly preserved to age 60, after which this stability starts to slowly break down.
A few years ago James Heckman, together with some other economists, published a study arguing that “achievement tests” and “IQ tests” are different beasts: the former, they claim, are better predictors of criterion outcomes (such as grade point averages) and are more strongly influenced by personality differences than the latter. Like most of Heckman’s forays into psychometrics (he has been obsessed with trying to shoot down Bell Curve-type arguments ever since the book was released), the study leaves much to be desired. David Salkever has published a nifty reanalysis of Heckman and colleagues’ study, showing that their results stem from faulty imputation and a failure to take age effects into account.
I’ve been catching up on recent research on psychometrics, behavioral genetics, race differences, and so on. I’ll be posting some comments on papers I found particularly interesting. The first is Frisby and Beaujean’s study of Spearman’s hypothesis.
Or nearly so. I was planning to publish this blog article on December 31, 2014. As you can see, I failed in this task and did not finish in time. Anyway, I wrote this article mainly because I am bothered that when people cite The Bell Curve, the typical opponent responds with a link to Wikipedia, specifically the section on the “controversy” over The Bell Curve. It goes without saying that these people have not read the books written in response to The Bell Curve. In fact, they have almost certainly read none of them. It is ridiculous to cite a book you have not read, but apparently that does not bother many people, as far as I can see.
For the book’s 20th anniversary, I found it appropriate to write a defense of it, or more precisely, a critical comment on the critics. I decided to read carefully one of the books I have access to, and from what I have read here and there, it is probably the best book ever written against The Bell Curve. I know that Richard Lynn (1999) already wrote a review, but I wanted to go into the details. The title of the book I am reviewing is:
Devlin, B., Fienberg, S. E., Resnick, D. P., & Roeder, K. (Eds.). (1997). Intelligence, Genes, and Success: Scientists Respond to The Bell Curve. Springer.
In fact, I read that book some time ago, but did not feel the need to read everything in detail, and I was unwilling to write a lengthy review. But I have changed my mind because of some nasty cowards.
Earlier, I reviewed Braden’s (1994) book, Deafness, Deprivation, and IQ. A considerable number of studies have been conducted since then. The focus is on the validity of measures of intelligence in the deaf population: reliability, predictive validity, and the measurement properties of the tests.
Jeffery P. Braden. (1994). Deafness, deprivation, and IQ. Springer.
The book is a compilation of studies on deaf people, and it concludes that cultural deprivation due to deafness lowers verbal IQ but not nonverbal IQ. Braden sought to prove Arthur Jensen wrong about his conclusions on the genetic component of racial differences in IQ. In the end, his research culminated in a trauma well known in the history of science: his perfectly good theory was ruined by his data. Being born deaf does not affect g, and genetic theories remain the most powerful account of the pattern in the data.
Philosopher Jonathan Kaplan recently published an article called “Race, IQ, and the search for statistical signals associated with so-called ‘X’-factors: environments, racism, and the ‘hereditarian hypothesis’”, which can be downloaded here. His thesis is that the black-white IQ gap could plausibly be due to racism and what he calls racialized environments. He presents simulations in support of this argument. He also argues that “given the actual state of the world there is no way to generate any reasonably strong evidence in favor of the hereditarian hypothesis.”
I have written a detailed critique of his claims. In short, he is wrong. Here’s the abstract of my article:
Jonathan Michael Kaplan recently published a challenge to the hereditarian account of the IQ gap between whites and blacks in the United States (Kaplan, 2014). He argues that racism and “racialized environments” constitute race-specific “X-factors” that could plausibly cause the gap, using simulations to support this contention. I show that Kaplan’s model suffers from vagueness and implausibilities that render it an unpromising approach to explaining the gap, while his simulations are misspecified and provide no support for his model. I describe the proper methodology for testing for X-factors, and conclude that Kaplan’s X-factors would almost certainly already have been discovered if they did in fact exist. I also argue that the hereditarian position is well-supported, and, importantly, is amenable to a definitive empirical test.
The PDF is available at Open Differential Psychology. You can also read the article below the cut.
It goes without saying that multiple regression is one of the most popular and most widely applied statistical methods. It would therefore be surprising if most of the scientists and researchers who use it misunderstood and misapplied it. And yet this provocative conclusion seems most likely.
Because a simple bivariate correlation does not disentangle confounding effects, multiple regression is said to be preferable. The technique attempts to evaluate the strength of an independent (predictor) variable in the prediction of an outcome (dependent) variable while controlling for, i.e., holding constant, every other variable entered into the regression model as an independent variable, either progressively step by step or all at once. The rationale is to isolate the effect that belongs to a given independent variable alone. But this is a fallacy.
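A small simulation illustrates one way this rationale breaks down (a sketch I am adding for illustration; the setup, in which two predictors are noisy proxies of the same latent cause, is an assumption of mine, not an example from the post):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# One latent cause drives the outcome; x1 and x2 are two noisy
# measurements of that same cause, so they are strongly correlated.
latent = rng.normal(0, 1, n)
x1 = latent + rng.normal(0, 0.5, n)
x2 = latent + rng.normal(0, 0.5, n)
y = latent + rng.normal(0, 1, n)

def ols(y, *xs):
    """Return OLS coefficients [intercept, b1, b2, ...]."""
    X = np.column_stack([np.ones_like(y), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Simple regression: x1 captures most of the latent cause's effect.
b_simple = ols(y, x1)[1]

# "Controlling for" x2 roughly halves x1's coefficient, even though x2
# has no causal role of its own: the shared variance is split between
# the two proxies rather than assigned to either one.
b_multiple = ols(y, x1, x2)[1]

print(f"x1 alone:          b1 = {b_simple:.2f}")
print(f"x1 controlling x2: b1 = {b_multiple:.2f}")
```

Neither coefficient in the multiple regression reflects an effect that "belongs" to x1 or x2 alone; holding the other proxy constant merely redistributes their shared variance and does not recover the underlying causal structure.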