Deviation IQ

[Image: A 2019 Oxford Reference definition of deviation IQ. [2]]
In genius studies, deviation IQ refers to an IQ (see: IQ key) based on the standard deviation of test results, with IQs "normed", or assigned, to each deviation from the mean.

Overview
In c.1939, David Wechsler (1896-1981) introduced the deviation IQ method, owing to the inadequacies of the ratio IQ method when applied to adults. [1]

The gist of the method is the premise that the norm, or normal scores, at the center of the bell curve will be made by people with average IQs (100), and that those scoring to the right of the curve will have higher IQs relative to the norm; something along the lines of the following:

[Image: Deviation IQ 1 - bell curve with IQs assigned per standard deviation]
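
The mapping from a raw test score to a deviation IQ is a simple linear rescaling of the z-score. The following is a minimal Python sketch, assuming the conventional mean of 100 and 15 IQ points per standard deviation (the Wechsler scale); the test mean and SD values in the example are hypothetical:

    def deviation_iq(raw_score, test_mean, test_sd, iq_sd=15):
        # Express the raw score as a z-score, i.e. the number of
        # standard deviations above or below the norming sample's mean.
        z = (raw_score - test_mean) / test_sd
        # Rescale so the mean maps to IQ 100 and each standard
        # deviation adds (or subtracts) iq_sd IQ points.
        return 100 + iq_sd * z

    # Example: a score two standard deviations above the mean of a
    # hypothetical norming sample (mean 50, SD 10) yields IQ 130.
    print(deviation_iq(70, test_mean=50, test_sd=10))  # 130.0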

In the years following, various people began what is called "norming" various tests, using essentially personally invented means, so as to assign higher IQs, far into the genius range (140+), to various tests, e.g. the Wechsler IQ test, the Stanford-Binet, etc.; such as follows:

[Image: Deviation IQ 2 - example deviation IQ assignments for various normed tests]

Meaning, according to the so-called "deviation IQ" method, that if one scores perfectly, or in the top 1 to 2 percent of scores, one has a deviation IQ, or simply IQ, of 130+ to 160+, as shown in the example above. The problem here is that geniuses tend not to take tests that have answers, but rather to tackle problems that are unsolved or unsurmounted. In other words, scoring perfectly on the Stanford-Binet, the Wechsler IQ test, the Mensa IQ test, or the Mega IQ test, etc., does not, by default, make one a genius.
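
The correspondence between top percentiles and deviation IQs follows from the normal model. As a rough check, under the assumption of a mean of 100 and a standard deviation of 15, Python's standard library gives approximately these values:

    from statistics import NormalDist

    iq = NormalDist(mu=100, sigma=15)

    # IQ at the 98th percentile, i.e. the top 2 percent of scorers.
    print(round(iq.inv_cdf(0.98), 1))  # 130.8

    # Fraction of the population above IQ 160 (four deviations up);
    # cdf gives the fraction below, so subtract from 1.
    print(1 - iq.cdf(160))  # ~3.2e-05, roughly 1 in 31,600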

References
1. Colangelo, Nicholas and Davis, Gary A. (1991). Handbook of Gifted Education (pg. 92). Allyn and Bacon.
2. Deviation IQ – Oxford Reference.
