July 6th 2011

The myth of the effectiveness of intuition in hiring and evaluating personnel

Intuition vs. Objectivity

Although major differences of opinion persist concerning the nature, operationalization and objectives of talent management, nearly all authors agree that the selection and assessment of personnel are decisive factors for success. This goes without saying: there can be no “management” of talent if that talent cannot be identified. And yet, a large number of managers and human resource management specialists believe that intuition and other subjective methods of assessment (such as non-structured interviews) are more effective at predicting future performance than objective methods (structured interviews, psychometric tests, etc.). Furthermore, the most popular and most commonly used selection tool of the past 100 years is none other than the non-structured interview (Buckley, Norris & Wiese, 2000). However, nearly 70 years of research has consistently shown that a single psychometric test exceeds the non-structured interview in predictive validity[1]. Why does this myth of the effectiveness of intuition persist?

Highhouse (2008) offers several explanations[2]. First, this myth may stem in part from an even more widespread, erroneous conception of staff selection and assessment: that it is possible to correct the shortcomings of psychometric tests by adding “something else”. This amounts to saying that the use of a psychometric test merely fills the “performance prediction glass” halfway. Thus, specialists and managers in search of a full glass attempt to identify an additional, intuitive component to offset these shortcomings. This idea rests on two assumptions: a) that it is possible to predict employees’ future performance almost completely (filling the glass to the top), and b) that intuitive expertise at predicting future performance exists (and can fill the glass beyond the level obtained through objective measurements). Both assumptions are incorrect, so the whole approach rests on a false premise.

In reality, evaluation and hiring processes involve a large fraction of unexplained variance; Highhouse (2008), for example, suggests that up to 70% of the variance in “success” remains unexplained. It is therefore inaccurate to think that hiring mistakes stem solely from errors in the processes or tools themselves. Many factors that determine job performance cannot be assessed, or even identified, at the time of hiring. As a result, it is impossible to fill the glass entirely, regardless of the assessment tools used.

In addition, the fact that objective analysis surpasses intuition in predicting an individual’s behaviour is one of the best-established findings in the behavioural sciences (Grove & Meehl, 1996; Grove, Zald, Lebow, Snitz & Nelson, 2000). Furthermore, a number of industrial psychologists have demonstrated that the results of a single psychometric test predict future behaviour better than the results of the same test with an intuitive component added to “enhance” the prediction (Borneman, Cooper, Klieger & Kuncel, 2007; Huse, 1962; Meyer, 1956). As early as 1926, Freyd warned specialists that “allowing selection to be influenced by personal interpretations with their unavoidable prejudices instead of relying upon objective measures gives even less consideration to the well-being and interest of the individual worker” (p. 354). The first study on the subject, published by Sarbin (1943), compared two methods of predicting academic success at university: the student’s high school class rank combined with the results of a college aptitude test, versus those same two measures combined with the intuitive judgement of high school counselors. The first method correlated with academic success at r = 0.45; with intuition added, the correlation fell to r = 0.35. Not only did intuition fail to contribute to predictive validity; it actually reduced the predictive value of the objective tools! Returning to the analogy of the glass, using an intuitive measurement amounts to pouring a large portion of the delicious performance-prediction concoction onto the floor while insisting that the glass is filling up!
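
To make the size of that drop concrete, here is a short back-of-the-envelope calculation (my own illustration, not part of Sarbin’s study): squaring each correlation gives the share of variance in academic success that each method explains.

\[
r^2_{\text{objective}} = 0.45^2 \approx 0.20
\qquad \text{versus} \qquad
r^2_{\text{objective + intuition}} = 0.35^2 \approx 0.12
\]

In other words, adding the counselors’ intuitive judgement cut the explained variance from roughly 20% to roughly 12%, discarding about 40% of the predictive power the objective measures provided on their own.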

Human resource specialists and managers are not the only ones who adhere to this myth of expertise. Resistance to objective measurements and mechanical equations is very widespread. For example, Arkes, Shaffer & Medow (2007) demonstrated that medical specialists who made a diagnosis (in this case, of ankle injuries) using a computerized aid were perceived by patients as less competent and less professional than specialists who diagnosed without such assistance. Similarly, it is more socially acceptable to place one’s trust in human expertise than in test results (Hastie & Dawes, 2001). It is therefore not surprising that HRM specialists and managers hesitate to undermine their status by using psychometric tests or structured interviews, even though numerous studies have shown that experience has no significant impact on the quality of judgement (in terms of behavioural predictions) of clinical psychologists, social workers, judges, hiring panels, marketing specialists, organizational planning specialists and more…

To summarize, while an individual’s future behaviour is indeed difficult to predict, objective tools can nonetheless help in the decision-making process. Of course, these tools are not perfect. However, if I can reduce uncertainty concerning an individual’s future performance by 25%[3], why would I not take advantage of this? It is now possible to use psychometric tools that have been studied in depth and that possess highly desirable attributes: validity, reliability, utility and fairness. In short, the challenge facing HRM specialists today is no longer the creation of effective tools; the challenge is now twofold: 1) accepting the well-established fact that objective measurements outperform subjective techniques, which no longer have a place in hiring and evaluation processes; and 2) sharing this information in such a way that it is understood and accepted by managers and decision-makers. In this respect, HRM still has a long way to go.

Philippe Longpré, PhD Cdt.


[1] Validity, the crucial property of any measurement, indicates the extent to which a process or tool measures what it is meant to measure or predicts what it is intended to predict (Pettersen, 2000).

[2] The author acknowledges that resistance to objective measurements may be the result of other factors like organizational policy, culture, the legal framework, etc. Nonetheless, some explanations appear to be more universal than contextual, and therefore merit further study.

[3] For example, using a tool with predictive validity of 0.50, like the structured interview (Pettersen, 2000), can explain 25% of the variance in the criterion – in this case, performance.
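
As a quick illustration of the arithmetic behind this footnote: the proportion of variance explained is simply the square of the predictive validity coefficient,

\[
r^2 = 0.50^2 = 0.25 .
\]

The remaining 75% of the variance in performance reflects factors the tool does not capture, consistent with the large unexplained portion discussed in the text.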

References

Arkes, H., Shaffer, V. A. & Medow, M. A. (2007). Patients derogate physicians who use a computer assisted diagnostic aid. Medical Decision Making, 27, 189-202.

Borneman, M. J., Cooper, S. R., Klieger, D. M. & Kuncel, N. R. (2007, April). The efficacy of the admissions interview: A meta-analysis. In N. R. Kuncel (Chair), Alternative predictors of academic performance: The glass is half empty. Symposium conducted at the Annual Meeting of the National Council on Measurement in Education, Chicago, IL.

Buckley, M. R., Norris, A. C. & Wiese, D. S. (2000). A brief history of the selection interview: May the next 100 years be more fruitful. Journal of Management History, 6, 113-126.

Freyd, M. (1926). The statistical viewpoint in vocational selection. Journal of Applied Psychology, 10, 349-356. Cited in Highhouse, S. (2008). Stubborn reliance on intuition and subjectivity in employee selection. Industrial and Organizational Psychology, 1, 333-342.

Grove, W. M. & Meehl, P. E. (1996). Comparative efficiency of informal (subjective, impressionistic) and formal (mechanical, algorithmic) prediction procedures: The clinical-statistical controversy. Psychology, Public Policy, and Law, 2, 293-323.

Grove, W. M., Zald, D. H., Lebow, B. S., Snitz, B. E. & Nelson, C. (2000). Clinical versus mechanical prediction. Psychological Assessment, 12, 19-30.

Hastie, R. & Dawes, R. M. (2001). Rational choice in an uncertain world. Thousand Oaks, CA: Sage.

Highhouse, S. (2008). Stubborn reliance on intuition and subjectivity in employee selection. Industrial and Organizational Psychology, 1, 333-342.

Huse, E. F. (1962). Assessments of higher level personnel IV: The validity of assessment techniques based on systematically varied information. Personnel Psychology, 15, 195-205.

Meyer, H. H. (1956). An evaluation of a supervisory selection program. Personnel Psychology, 9, 499-513.

Pettersen, N. (2000). Évaluation du potentiel humain dans les organisations. Quebec: Quebec UP.

Sarbin, T. R. (1943). A contribution to the study of actuarial and individual methods of prediction. American Journal of Sociology, 48, 598-602.