I took a gander at the 2010, 2011, and 2012 GED total scores by race and nation (from “GED Testing Statistical Reports”). The sample sizes were small. Unfortunately, the earlier reports, which go back to the ’80s, didn’t provide scores for Bermuda, the Virgin Islands, and Jamaica/Cayman/St. Martin; since these scores were what I was particularly curious about, I didn’t include results from earlier years. The scores aren’t representative, but they nonetheless provide a bit of information on, for example, (self-identified) ethnic differences in Bermuda. The scores were averaged across the three years mentioned. The d-values presented at the bottom are inter-national; those presented on the right are intra-national. The differences are roughly consistent with Richard Lynn’s Global Bell Curve position.
MH’s (02/11/2014) Excel File Here.
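For readers unfamiliar with d-values: they are standardized mean differences (Cohen’s d), computed by dividing the difference between two group means by the pooled standard deviation. A minimal sketch in Python, using made-up scores rather than the actual report data:

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Standardized mean difference using the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * stdev(group_a) ** 2
                  + (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

# Hypothetical GED total scores for two groups (not the report data)
a = [2600, 2700, 2650, 2750, 2550]
b = [2500, 2450, 2600, 2400, 2550]
print(round(cohens_d(a, b), 2))  # → 1.9
```

The same formula was applied within nations (intra-national) and between them (inter-national).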
Previously, we looked at the association between L&V’s (2012) National IQs, GMAT scores, and English Proficiency scores. We extend that analysis here by including 2010-2012 GRE (quantitative, verbal, and total) scores, 2010 and 2012 TOEFL scores, 2003-2009 migrant PISA scores, and national numeracy rates from the 19th and early 20th centuries.
The GMAT is a graduate entrance test used by more than 5,900 business programs offered by more than 2,100 universities worldwide. While the test is given in English, it is designed to be as minimally English-dependent as possible while still predicting successful completion of business programs taught in English. Further, the test is carefully scrutinized for item bias. Rudner (2012) explains:
Yes, the GMAT test is administered in English and is designed for programs that teach in English. But the required English skill level is much less than what students will need in the classroom. The exam requires just enough English to allow us to adequately and comprehensively assess Verbal reasoning, Quantitative reasoning and Integrated Reasoning skills….
We carefully review our questions using criteria defining good item construction. We also compute statistics to assess whether our questions are appropriate across culture groups. We constantly update guidelines for our item writers, including a master list of terms and phrases to avoid in order to assure cultural fairness. By using carefully defined and thorough item development and review processes, along with statistical analyses to flag questions with possible cultural bias, we have developed a test that minimizes the impact of culture and language. The GMAT exam is the best objective measure of the likelihood of success in management programs across the globe.
Despite the claimed lack of bias and apparent predictive validity of the test, there is substantial global variance in scores. Rudner (2012) attributes this variance largely to differences in native language spoken and to differences in self-selection.
We decided to explore to what extent global differences could be accounted for by differences in national IQ. To do this, we examined the relationship between measures of national cognitive ability, English language proficiency, English language usage, and GMAT scores by reported citizenship. We also sought to determine to what extent GMAT scores could be used to index the national IQs of poorly investigated regions such as North Korea, Rwanda, and St. Kitts.
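In its simplest form, using GMAT scores to index national IQs amounts to fitting a regression line on the countries that have both measures and projecting it onto countries that have only GMAT data. A rough sketch with invented numbers (not the actual L&V or GMAC figures):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Hypothetical (mean GMAT, national IQ) pairs for countries with both
gmat = [580, 520, 560, 480, 540]
iq   = [105,  95, 100,  88,  98]
a, b = fit_line(gmat, iq)

# Project an IQ estimate for a country that has GMAT data but no IQ estimate
print(round(a + b * 500, 1))
```

Real projections would, of course, need to correct for the self-selection of test-takers that Rudner emphasizes.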
The present analysis, using the NLSY97, attempts to model the structural relationships among the latent second-order g factor extracted from the 12 ASVAB subtests, a parental SES latent factor from 3 indicators of parental SES, and a GPA latent factor from 5 domains of grade point averages. A structural equation modeling (SEM) bootstrapping approach combined with Predictive Mean Matching (PMM) multiple imputation is employed. The structural path from parental SES to GPA, independent of g, appears to be trivial in the black, hispanic, and white populations. The analysis is repeated for the 3 ACT subtests, yielding an ACT-g latent factor, with the same conclusion. Most of the effect of SES on GPA appears to be mediated by g. Adding a grade variable substantially increases the contribution of parental SES to the achievement factor, which was partially mediated by g. Missing data are handled with PMM multiple imputation. Univariate and multivariate normality tests are carried out in SPSS and AMOS, and through bootstrapping. Full results are provided in an Excel file at the end of the article.
In digit span tests, the respondents are asked to repeat a string of digits. There are two variants of the test, forward digit span (FDS) and backward digit span (BDS). In FDS, the digits are repeated in the order of their presentation, while in BDS they must be repeated in the reverse order. The largest number of digits that a person can repeat without error is his or her forward or backward digit span.
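The scoring rule described above can be illustrated with a short sketch. The trial lists and stopping rule here are simplified assumptions; real administrations typically give two trials per length:

```python
def digit_span(trials):
    """Return the longest string length repeated without error.
    `trials` are (target, response) pairs in increasing length;
    testing stops at the first error (a simplified rule)."""
    span = 0
    for target, response in trials:
        if response != target:
            break
        span = len(target)
    return span

# Hypothetical forward-span trials: the subject errs at five digits
forward = [("382", "382"), ("4729", "4729"), ("61485", "61584")]
print(digit_span(forward))  # forward digit span of 4

# For BDS, the response is scored against the reversed string
backward = [("47", "74"), ("293", "392")]
print(digit_span([(p[::-1], r) for p, r in backward]))  # backward span of 3
```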
It is well-established that the black-white gap is substantially larger on BDS than on FDS (see references in The g Factor by Jensen, p. 405, Note 22; see also my recent analysis of the DAS-II). However, replication is always good, so I analyzed black-white differences in the CNLSY sample, which contains FDS and BDS scores for relatively large samples of black and white children. Additionally, I compared the digit span performance of Hispanic American children to that of blacks and whites.
According to Spearman’s hypothesis, the magnitude of the black-white gap on a given cognitive ability test is primarily determined by the test’s g loading. Tests that are better measures of g are associated with larger gaps.
The Differential Ability Scales, Second Edition (DAS-II) is an IQ test for assessing children and adolescents. It comprises 21 subtests in total, although only 13 are used in the present analysis because not all subtests are administered to all age groups. I will use the method of correlated vectors (MCV) to test whether g loadings are correlated with mean racial differences on the DAS-II subtests. In addition to the black-white gap, I will also investigate whether the test performance of Asians and Hispanics is predicted by g loadings.
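At its core, MCV is just a correlation between two vectors: the subtests’ g loadings and the corresponding standardized group differences. A minimal illustration with hypothetical loadings and d values (not the DAS-II figures):

```python
def pearson_r(xs, ys):
    """Pearson correlation between two equal-length vectors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical g loadings and black-white d values for five subtests
g_loadings = [0.55, 0.62, 0.70, 0.78, 0.85]
d_values   = [0.70, 0.60, 0.85, 0.90, 1.10]
print(round(pearson_r(g_loadings, d_values), 2))  # → 0.91
```

A strongly positive vector correlation of this kind is what Spearman’s hypothesis predicts; in practice the vectors should also be corrected for subtest reliability.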
In the present article, I demonstrate that processing speed (measured by the ASVAB speeded subtests) has modest predictive validity over and above the g factor extracted from the non-speeded ASVAB subtests in predicting overall GPA in the NLSY97, within the black, hispanic, and white samples. Next, I investigate the extent to which speed mediates the black-white difference in IQ (g). In both analyses, processing speed accounts for a modest portion of these associations. Nonetheless, some issues related to such ‘psychometric speed’ measures need to be clarified.
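The mediation logic can be sketched with ordinary regression: the indirect effect of group on g through speed is the product of the group→speed slope and the speed→g slope controlling for group, and the proportion mediated is that product divided by the total group effect. A toy example with fabricated data (not the NLSY97):

```python
def slope(x, y):
    """Simple-regression slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

def ols2(x1, x2, y):
    """Slopes from a two-predictor OLS regression (intercept removed by centering)."""
    n = len(y)
    def centered(v):
        m = sum(v) / n
        return [vi - m for vi in v]
    x1c, x2c, yc = centered(x1), centered(x2), centered(y)
    s11 = sum(a * a for a in x1c)
    s22 = sum(a * a for a in x2c)
    s12 = sum(a * b for a, b in zip(x1c, x2c))
    s1y = sum(a * b for a, b in zip(x1c, yc))
    s2y = sum(a * b for a, b in zip(x2c, yc))
    det = s11 * s22 - s12 * s12
    return (s22 * s1y - s12 * s2y) / det, (s11 * s2y - s12 * s1y) / det

# Fabricated data: group code (0/1), processing-speed score, g score
group = [0, 0, 0, 0, 1, 1, 1, 1]
speed = [100, 105, 95, 110, 90, 95, 85, 100]
g     = [102, 108, 98, 112, 88, 94, 84, 98]

a = slope(group, speed)            # group -> speed
b, direct = ols2(speed, group, g)  # speed -> g, controlling for group
total = slope(group, g)            # total group effect on g
print(round(a * b / total, 2))     # proportion of the gap mediated by speed
```

The article’s actual analysis uses latent variables in SEM rather than observed scores, but the indirect-over-total decomposition is the same idea.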