Traditional IQ tests, their subtests and game-based assessments
Written by Julie Skorepova

What exactly do the GMA tests measure?

General Mental Ability (GMA) tests, also known as traditional IQ tests, are the most predictive tests for future job performance (Schmidt & Hunter, 1998, 2004; Schmidt et al., 2016; Kuncel et al., 2004, 2014).

GMA can be defined as a construct that consists of different cognitive abilities. Other terms, such as intelligence and the g factor, are often used interchangeably. The state-of-the-art method for classifying human cognitive abilities (based on Cattell-Horn-Carroll theory; Schneider & McGrew, 2012) uses a three-stratum hierarchical structure. The idea is that the g factor sits at the top of the hierarchy (Stratum III) and can be broken down into lower strata: broad abilities at Stratum II, such as fluid reasoning, visual processing, and processing speed, and narrow abilities at Stratum I.

In other words, GMA influences performance on all cognitive tests and represents a wide range of cognitive abilities. In this sense, intelligence tests are designed to measure some of the cognitive factors that contribute to general intelligence, and the result can be expressed as an IQ score. For instance, the Wechsler Adult Intelligence Scale—Fourth Edition (WAIS-IV) is the standard intelligence test for adults aged 16 to 90 (Climie & Rostad, 2011). It measures four cognitive factors: verbal comprehension, perceptual reasoning, working memory, and processing speed.
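
To make the three-stratum idea concrete, here is a small sketch (in Python) of how the WAIS-IV rolls narrow subtests up into broad factors and then into a single general score. The subtest-to-factor mapping follows the commonly published core battery, and the scores are made up for illustration; this is not how the test is actually scored or normed.

```python
# Illustrative sketch of the three-stratum structure described above,
# using the WAIS-IV core battery as an example. The mapping and scores
# are illustrative only, not an official scoring procedure.
from statistics import mean

wais_iv = {                      # Stratum III: general ability (g / Full Scale IQ)
    "Verbal Comprehension": ["Similarities", "Vocabulary", "Information"],
    "Perceptual Reasoning": ["Block Design", "Matrix Reasoning", "Visual Puzzles"],
    "Working Memory":       ["Digit Span", "Arithmetic"],
    "Processing Speed":     ["Symbol Search", "Coding"],
}                                # Stratum II: four broad factors
                                 # Stratum I:  ten narrow subtests

# Hypothetical scaled scores for one candidate.
scores = {"Similarities": 11, "Vocabulary": 12, "Information": 10,
          "Block Design": 9, "Matrix Reasoning": 13, "Visual Puzzles": 11,
          "Digit Span": 10, "Arithmetic": 8,
          "Symbol Search": 12, "Coding": 11}

# Roll subtests up into factor-level scores, then into a single composite.
factor_scores = {factor: mean(scores[s] for s in subtests)
                 for factor, subtests in wais_iv.items()}
general_score = mean(factor_scores.values())
print(factor_scores, general_score)
```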

Should we look at scores from subtests separately?

A statistical method called factor analysis enables researchers to combine a number of different test items into measures of GMA: a series of analyses groups subtests into broader cognitive factors. For example, the WAIS-IV consists of 10 subtests (Stratum I components) that together form 4 cognitive factors (Stratum II components), and the scores on those factors combine into the full IQ score (Stratum III). A review by McGill and colleagues (2018) found that a four-factor structure, like that of the WAIS-IV, best fits the g factor.
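
As a toy illustration of this grouping step, the sketch below simulates subtest scores that all depend on a single latent general factor and then fits a factor-analysis model to recover the loadings. It uses scikit-learn and invented data, so treat it as a conceptual demo rather than the modelling used for any real IQ battery.

```python
# Toy factor-analysis example: simulate 10 subtests that all load on one
# general factor, then recover the loadings with scikit-learn.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_people, n_subtests = 500, 10

g = rng.normal(size=(n_people, 1))                  # latent general ability
loadings = rng.uniform(0.5, 0.9, size=(1, n_subtests))
noise = rng.normal(scale=0.6, size=(n_people, n_subtests))
subtest_scores = g @ loadings + noise               # observed subtest scores

fa = FactorAnalysis(n_components=1, random_state=0)
fa.fit(subtest_scores)

# Each subtest loads positively on the single recovered factor,
# mirroring how subtests are grouped into a broader cognitive factor.
print(fa.components_.round(2))
```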

Many have questioned what we can learn from individual subtest results. Research emphasises that subtest scores are less reliable than the overall score when making judgements about general intelligence; looking at a single subtest score may lead to over- or under-interpretation of general mental ability.
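
One standard way to see why a composite is more dependable than any single subtest is the Spearman-Brown prophecy formula (not discussed in this article itself, but a classic psychometric result): combining k comparable subtests raises the reliability of the total score. The numbers below are purely illustrative.

```python
# Spearman-Brown prophecy formula: reliability of a composite built from
# k comparable parts, each with reliability r. Values are illustrative only.
def spearman_brown(r: float, k: int) -> float:
    return k * r / (1 + (k - 1) * r)

single_subtest_reliability = 0.70          # hypothetical reliability of one subtest
composite_reliability = spearman_brown(single_subtest_reliability, k=10)
print(round(composite_reliability, 2))     # ~0.96: the overall score is far more stable
```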

What can GBAs bring to the table?

Game-based assessments (GBAs) or gamified assessments modify existing assessments by adding game elements. Much like traditional psychometric and cognitive assessments, GBAs can be created using an underlying psychological theory to construct a test of behavioral traits or cognitive ability.

Even when they aim to measure the same construct, different companies take different approaches to developing GBAs. As a result, their reliability and quality vary considerably. Given the growing use of GBAs in the assessment marketplace, it is important to take the quality of a GBA into account before implementing it. Our suggestion is to look carefully at the validation process in relation to prior related theories and assessments.

Develop a theory-driven g-GBA

You might be wondering how well GBAs work for assessing intelligence. First, the cognitive construct of intelligence laid out by Cattell-Horn-Carroll theory provides specific, measurable details, enabling targeted game design. In addition, a framework has been proposed to capture cognitive ability more effectively by incorporating cognitive theories into the creation of ability assessments (Embretson, 1994). In summary, this framework suggests that (1) the test items or behaviours to be assessed are connected to underlying cognitive processes specified by theory, and (2) a relationship is established between test results and other measurements governed by the same or a different cognitive theory (Landers et al., 2021).

Additionally, developers should consider whether design elements in a game context affect cognitive processes differently than they would in a traditional test format. For instance, if the amount of information presented in a game is overwhelming, the test result could be compromised. Applying these principles provides a strong conceptual framework for creating GBAs that measure intelligence (g-GBAs).

An example: validating our targeted construct — problem-solving ability

At Equalture, we employ the most widely used approach to validating a g-GBA scientifically: comparing our games with an existing assessment that is grounded in well-established cognitive theory. Here, we briefly outline how we developed the game Ferry, using the principles above, to assess problem-solving skills.

In cognitive theory, problem-solving ability has been related to executive functioning, and it is also known to be correlated with fluid intelligence. With an understanding of the theoretical foundation of problem-solving ability, we target the components of problem-solving ability in our game design. Then, in order to validate our g-GBA, we compare the score with the Tower of London test (ToL; Shallice, 1982), which is widely used in cognition research and clinical practice for assessing problem-solving skills. If the scores from the game and the ToL test are correlated, we can assume that both tests are measuring the same underlying cognitive construct.
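
A minimal sketch of this convergent-validity check, assuming you already have paired scores from the game and the ToL for the same participants (the data and variable names below are hypothetical and do not reflect Equalture's actual analysis pipeline):

```python
# Convergent validity check: correlate game scores with Tower of London scores
# for the same participants. Data and values are hypothetical.
from scipy.stats import pearsonr

ferry_scores = [12, 15, 9, 18, 14, 11, 16, 13, 10, 17]   # game-based scores
tol_scores   = [10, 14, 8, 17, 13, 10, 15, 12, 9, 16]    # Tower of London scores

r, p_value = pearsonr(ferry_scores, tol_scores)
print(f"r = {r:.2f}, p = {p_value:.3f}")

# A sizeable, statistically significant correlation supports the claim that
# both instruments tap the same underlying problem-solving construct; in
# practice, the sample size and the magnitude of r matter as much as p.
```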

More information about our game Ferry's validation report can be found here. If you have any questions about the validation, you can reach out to the Equalture Science team.

References

Climie, E. A., & Rostad, K. (2011). Test review: Wechsler adult intelligence scale. https://doi.org/10.1177/0734282911408707

Embretson, S. (1994). Applications of cognitive design systems to test development. In Cognitive assessment (pp. 107-135). Springer, Boston, MA. https://doi.org/10.1007/978-1-4757-9730-5_6

Kuncel, N. R., Rose, M., Ejiogu, K., & Yang, Z. (2014). Cognitive ability and socio-economic status relations with job performance. Intelligence, 46, 203-208. https://doi.org/10.1016/j.intell.2014.06.003

Kuncel, N. R., Hezlett, S. A., & Ones, D. S. (2004). Academic performance, career potential, creativity, and job performance: Can one construct predict them all? Journal of personality and social psychology, 86, 148. https://doi.org/10.1037/0022-3514.86.1.148

Landers, R. N., Armstrong, M. B., Collmus, A. B., Mujcic, S., & Blaik, J. (2021). Theory-driven game-based assessment of general cognitive ability: Design theory, measurement, prediction of performance, and test fairness. Journal of Applied Psychology. https://doi.org/10.1037/apl0000954

McGill, R. J., Dombrowski, S. C., & Canivez, G. L. (2018). Cognitive profile analysis in school psychology: History, issues, and continued concerns. Journal of school psychology, 71, 108-121. https://doi.org/10.1016/j.jsp.2018.10.007

Shallice, T. (1982). Specific impairments of planning. Philosophical Transactions Of The Royal Society Of London. B, Biological Sciences, 298, 199-209. https://doi.org/10.1098/rstb.1982.0082

Schmidt, F. L., Oh, I. S., & Shaffer, J. A. (2016). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 100 years of research findings. Fox School of Business Research Paper, 1-74.

Schmidt, F. L., & Hunter, J. (2004). General mental ability in the world of work: occupational attainment and job performance. Journal of personality and social psychology, 86, 162. https://doi.org/10.1037/0022-3514.86.1.162

Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological bulletin, 124, 262. https://doi.org/10.1037/0033-2909.124.2.262

Schneider, W. J., & McGrew, K. S. (2012). The Cattell-Horn-Carroll model of intelligence. In D. P. Flanagan & P. L. Harrison (Eds.), Contemporary intellectual assessment: Theories, tests, and issues (3rd ed.). Guilford Press.
