An intelligence quotient (IQ) is a score derived from one of several standardized tests designed to assess human intelligence.

The abbreviation “IQ” was coined by the psychologist William Stern for the German term Intelligenzquotient, his term for a scoring method for intelligence tests he advocated in a 1912 book. When current IQ tests are developed, the median raw score of the norming sample is defined as IQ 100, and each standard deviation (SD) above or below the median corresponds to 15 IQ points higher or lower, although this was not always so historically. By this definition, approximately two-thirds of the population scores between IQ 85 and IQ 115, and about 5 percent of the population scores above 125.
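A minimal sketch of this deviation-score convention, using Python's standard-library statistics.NormalDist; the raw-score mean and SD used for the conversion are hypothetical norming values chosen purely for illustration:

```python
from statistics import NormalDist

# The deviation-IQ convention: the norming sample's median raw score maps
# to IQ 100, and each raw-score SD maps to 15 IQ points.
def raw_to_iq(raw_score: float, norm_mean: float, norm_sd: float) -> float:
    """Convert a raw test score to a deviation IQ (mean 100, SD 15)."""
    z = (raw_score - norm_mean) / norm_sd
    return 100 + 15 * z

# Hypothetical norming-sample statistics, for illustration only.
iq = raw_to_iq(raw_score=42, norm_mean=36, norm_sd=8)
print(f"IQ = {iq:.1f}")  # 42 is 0.75 SD above the mean -> IQ ~111

# Population shares implied by a normal distribution with mean 100, SD 15.
iq_dist = NormalDist(mu=100, sigma=15)
share_85_115 = iq_dist.cdf(115) - iq_dist.cdf(85)
share_above_125 = 1 - iq_dist.cdf(125)
print(f"Between 85 and 115: {share_85_115:.1%}")  # ~68.3%, i.e. "two-thirds"
print(f"Above 125: {share_above_125:.1%}")        # ~4.8%, i.e. "about 5 percent"
```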
IQ scores have been shown to be associated with such factors as morbidity and mortality, parental social status, and, to a substantial degree, biological parental IQ. While the heritability of IQ has been investigated for nearly a century, there is still debate about the significance of heritability estimates and the mechanisms of inheritance.
IQ scores are used for educational placement, assessment of intellectual disability, and evaluating job applicants. In research contexts, they have been studied as predictors of job performance and income. They are also used to study distributions of psychometric intelligence in populations and the correlations between it and other variables. Raw scores on IQ tests for many populations have been rising at an average rate that scales to three IQ points per decade since the early 20th century, a phenomenon called the Flynn effect.
Investigation of different patterns of increases in subtest scores can also inform current research on human intelligence.
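As a rough illustration of the rate quoted above, the sketch below applies a linear Flynn-effect adjustment to a score obtained on outdated test norms. The function name, the linearity assumption, and the default rate of three points per decade are illustrative simplifications, not a standard clinical procedure:

```python
def flynn_adjusted_iq(measured_iq: float, years_since_norming: float,
                      points_per_decade: float = 3.0) -> float:
    """Approximate a current-norms IQ from a score on outdated norms.

    Because average raw performance keeps rising, scores measured against
    old norms come out inflated; this sketch assumes the drift is linear
    at the quoted rate and simply subtracts it.
    """
    return measured_iq - points_per_decade * (years_since_norming / 10)

# A score of 100 on norms that are 20 years old corresponds to roughly
# 94 against freshly collected norms, under the linear-drift assumption.
print(flynn_adjusted_iq(measured_iq=100, years_since_norming=20))  # 94.0
```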