Author: Meng Hu (Page 1 of 6)

Sometimes Biased, But Not Systematically: Twin Study Assumptions with A Focus on the Equal Environment

The Classical Twin Design (CTD) has long been criticized as overly simplistic and as consistently overestimating heritability by failing to account for GxE, GxG, rGE, and violations of the equal environments assumption. It is almost never mentioned that this bias is not systematic. The criticism greatly exaggerates the flaws of the CTD, often misleadingly so, apparently cherry-picking evidence whenever large discrepancies in heritability are reported after ignoring key assumptions inherent to the twin design. This article will show why the CTD and its extensions are robust methods, with a strong focus on the Equal Environment Assumption (EEA).

The classical twin design has withstood past criticisms, due to the large variety of methods employed to test its key assumptions (Plomin & Bergeman, 1991; Andrew et al., 2001; Johnson et al., 2002; Christensen et al., 2006; Kendler & Prescott, 2006, ch. 6; Segal & Johnson, 2009; Plomin et al., 2013, ch. 6, 12 & 17), and it will likely withstand current ones (Tarnoki et al., 2022), with recent developments (e.g., Sunde et al., 2024) further improving the standard ACE models used to decompose genetic (h²), shared environmental (c²), and non-shared environmental (e²) variances.
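As a minimal sketch of the variance decomposition mentioned above, Falconer's formulas recover rough ACE estimates from MZ and DZ twin correlations (the correlation values below are purely illustrative, not from any cited study; real analyses fit structural equation models):

```python
def ace_falconer(r_mz: float, r_dz: float) -> dict:
    """Rough ACE variance components from twin correlations
    (Falconer's formulas; SEM-based ACE models refine this)."""
    a2 = 2 * (r_mz - r_dz)  # additive genetic variance (h²)
    c2 = 2 * r_dz - r_mz    # shared environmental variance (c²)
    e2 = 1 - r_mz           # non-shared environment + error (e²)
    return {"h2": a2, "c2": c2, "e2": e2}

# Illustrative twin correlations:
print(ace_falconer(r_mz=0.8, r_dz=0.5))
```

Note that the three components sum to 1 by construction, which is exactly why unmodeled effects (GxE, rGE) get absorbed into one of them rather than biasing all three in the same direction.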

National IQ papers must be retracted: Why Kevin Bird and Rebecca Sear don’t get it

A recent article by Samorodnitsky, co-authored by two renowned censorship champions, namely Kevin Bird and Rebecca Sear, requests that all published papers that have ever used Lynn & Vanhanen’s (2006) national IQs (or subsequent revised versions, up to Becker & Lynn, 2019) be retracted. They ignore a plethora of evidence suggesting that the L&V and B&L IQ data are robust, despite the questionable data quality for lower-IQ countries. This is a case of non-classical measurement error. Many other variables commonly used in economics or psychology display a similar form of non-classical measurement error, sometimes with quite dramatic biases in one or both tails of the distribution due to misreporting. The right question is how the biases can be corrected, not whether the research and its authors should be cancelled. Econometricians have proposed a wide variety of techniques for dealing with non-classical measurement error, and national IQ researchers have in fact already employed some robustness analyses. This article will dispel the logical fallacies used to negate IQ research.

SECTIONS

  1. Non-random error and systematic bias: Nothing new.
  2. “Poor” quality data is the rule, not the exception.
  3. Robustness checks have been used in National IQ studies.
  4. Do national assessments reflect cognitive ability?


Controversy over the predictive validity of IQ on job performance

Sackett et al. (2022) recently questioned the meta-analytic conclusions about the high validity of IQ that have stood since the studies by Schmidt & Hunter decades ago. The crux of the issue is complex, and while the debate over range restriction corrections is not (at least not completely) resolved yet, one thing is certain: the validity depends on the measure of job performance used in the meta-analysis. The importance of IQ may have declined in recent years, but the debate is not settled.

(updated: August 2nd 2024)

CONTENT
1. The groundbreaking study
2. The controversy
3. The choice of criterion (dependent) variable matters
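For context, the range restriction correction at issue inflates a correlation observed in a restricted sample (e.g., hired workers) toward its value in the full applicant pool. A minimal sketch of the classic Thorndike Case II correction for direct range restriction (the numbers are illustrative, not Sackett et al.'s estimates):

```python
import math

def correct_range_restriction(r: float, u: float) -> float:
    """Thorndike Case II correction for direct range restriction.
    r: observed correlation in the restricted sample
    u: ratio of unrestricted to restricted predictor SD (>= 1)"""
    return r * u / math.sqrt(1 + r * r * (u * u - 1))

# Illustrative numbers: observed r = .30, applicant-pool SD 1.5x the incumbent SD.
print(correct_range_restriction(0.3, 1.5))
```

The dispute in the literature is largely about how large u really is in typical validation samples; the corrected value is quite sensitive to that ratio.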


How IQ became less important than personality: A critical examination of Borghans et al. (2016)

Borghans et al. (2016) analyzed 4 datasets with diverse measures of IQ and, shockingly, concluded that the impact of IQ on social outcomes is weak compared to personality measures, despite what earlier reviews and meta-analyses showed (Gottfredson, 1997; Poropat, 2009; Schmidt & Hunter, 2004). Indeed, as reviewed previously, most studies found that personality measures generally have a weak relationship with outcomes once IQ is accounted for. Yet their work has not been subjected to critical examination, only various uninteresting comments (Ganzach & Zisman, 2022; Golsteyn et al., 2022; Stankov, 2023) and replication failures (Zisman & Ganzach, 2022).


Wealth, Poverty and Politics: A must read for understanding group differences

Thomas Sowell’s book, Wealth, Poverty and Politics, provides a thorough explanation of why nations and groups of people have developed at different rates, and how and why they rise or fall as groups or empires. There are only a few sections I do not find convincing, such as his arguments on group differences in IQ and his complete rejection of the genetic hypothesis.

To summarize the ideas of the book, Sowell shows that 1) population differences emerged because geography has never been egalitarian, 2) cultural and geographical isolation are great impediments to development, 3) equal opportunity will not create equal outcomes between groups, 4) education is not human capital and has sometimes caused negative outcomes, 5) exploitation of the poor through either slavery or imperialism does not explain prosperity, 6) poverty and inequality are so ill-defined that comparisons are meaningless, and 7) governments seek to please the masses through dubious tactics at the expense of economic performance.


The Structure of Well Designed Online IQ Tests

There are convenient ways for researchers to collect IQ scores and correlate them with measures of self-reported health, socio-economic attainment, personality, or political views. On platforms such as Prolific or MTurk, participants make money in their spare time by completing tasks. Designing a test that loads highly on the general factor of intelligence while avoiding measurement bias and poor-quality data from online participants is quite a challenging task.

(update: July 19th 2024)

CONTENT

  1. Introduction page content
  2. Item’s pass rate and g-loading
  3. Lazy and dishonest test takers
  4. Short versus long test
  5. Scrolling dilemma
  6. Item type “write-in”
  7. Instruction and rules
  8. Cultural content and cultural bias
  9. Computerized Ability Test

The issues related to online testing are illustrated based on the numerous IQ tests Jurij Fedorov devised, with my assistance, using Alchemer’s professional software.


Gender wage gap: Why the discrimination theory (likely) fails

Probably the most rehearsed explanation of the gender pay gap is discrimination. After accounting for traditional labor market factors, a large residual gap, also called the unexplained gap, remains. Researchers often commit the fallacy of equating this unexplained effect with a discrimination effect rather than omitted variable bias; in fact, most wage decomposition models are probably contaminated by such bias. This article will explain that much of the residual gap is likely due to other causes, in particular time flexibility, and that the evidence for a discrimination effect is often ambiguous.


Affirmative action failed: An extensive and complicated literature review

Despite the US Supreme Court recently overturning affirmative action (AA), many scholars believe that AA bans in higher education hurt minorities’ opportunities because affirmative action actually delivered on its promises. Yet, although the findings lack consistency, the impact of AA on the outcomes of under-represented minorities (URM) is generally either ambiguous or slightly negative, and the bans exert a less negative effect in more competitive fields such as STEM. The picture is even worse when one considers that AA comes at the cost of lowering the chances of other, more capable minorities, such as Asians, and does not greatly benefit its intended targets, i.e., the impoverished families among URMs.


Canadian Race/Ethnic Differences on the LSAT (2019-2023)

This article reports racial gaps in LSAT scores for Canada (2019-2023). Using the threshold method designed by La Griffe du Lion (2007), standardized effect sizes are computed from the proportions of members of each group who attain specific score ranges. Results are compared with U.S. gaps in Law School Admission Test (LSAT) and Medical College Admission Test (MCAT) scores. Details of the analysis are available here.
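The threshold method referenced above backs out a standardized gap from the share of each group clearing a score cutoff, assuming both distributions are normal with equal SDs. A minimal sketch (the proportions are illustrative, not the Canadian LSAT figures):

```python
from statistics import NormalDist

def threshold_gap(p_a: float, p_b: float) -> float:
    """Standardized mean difference (d) between groups A and B, given
    the proportion of each group at or above the same score threshold.
    Assumes normal score distributions with equal standard deviations."""
    nd = NormalDist()
    return nd.inv_cdf(p_a) - nd.inv_cdf(p_b)

# Illustrative proportions clearing a cutoff: 50% of group A vs 16% of group B.
print(threshold_gap(0.50, 0.16))  # ≈ 0.99 SD gap
```

The appeal of the method is that it needs only published pass rates, not individual-level scores, which is why it works on admission-council summary tables.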


Spearman’s g Explains Black-White but not Sex Differences in Cognitive Abilities in the Project Talent

In this analysis of the Project Talent data, the g factor model as represented by Spearman’s Hypothesis (SH) was confirmed for the black-white cognitive difference but not for the sex difference. Results from MGCFA and MCV corroborate each other: MGCFA detected small-to-modest bias with respect to the race difference but strong bias with respect to the sex difference. Full results are available at OSF.



© 2024 Human Varieties
