For the most part, this work indicates ways that human judgement of subjective probability is inconsistent with probability laws and definitions (Clemen, 1999). This situation is exacerbated in organisational decision-making, since many judgements are generated by groups of experts (Clemen, 1999).
Myers and Lamm (1975) report evidence that face-to-face interaction in groups working on probability judgements may lead to social pressures that are unrelated to group members’ knowledge and abilities. Gustafson et al. (1973), Fischer (1975), Gough (1975) and Seaver (1978) all found in their experiments that interaction of any kind among experts led to increased overconfidence and, hence, worse calibration of group probability judgements. More recently, Argote, Seabright and Dyer (1986) found that groups use certain types of heuristics more than individuals, presumably leading to more biases (Clemen, 1999).

The situation outlined above is aggravated by the observation that, whilst most people find it easiest to express probabilities qualitatively, using words and phrases such as “credible”, “likely” or “extremely improbable”, there is evidence that different people associate markedly different numerical probabilities with these phrases (for example, Budescu and Wallsten, 1995). It also appears that, for each person, the probability associated with a given word or phrase varies with the semantic context in which it is used (Morgan and Henrion, 1990), and that verbal and numerical expressions of identical uncertainties, and indeed different numerical formats, are processed differently (Gigerenzer, 1991; Zimmer, 1983). Hence, in most cases such words and phrases are unreliable as a response mode for probability assessment (Clemen, 1999).

Given this, many writers have proposed encoding techniques. However, the results of the considerable number of empirical comparisons of various encoding techniques do not show great consistency, and the articles reviewed provide little consensus about which to recommend (Clemen, 1999). As Meehl (1978, p. 831) succinctly comments: “…there are many areas of both practical and theoretical inference in which nobody knows how to calculate a numerical probability value.”

The most unequivocal result of experimental studies of probability encoding has been that most assessors are poorly calibrated; in most cases they are overconfident, assigning probabilities that are nearer certainty than is warranted by their revealed knowledge (Morgan and Henrion, 1990). Such probability judgements, Lichtenstein, Fischhoff and Phillips (1982) found, are not likely to be close to the actual long-run frequency of outcomes.

Some researchers have investigated whether using specific procedures can improve probability judgements. Stael von Holstein (1971a, 1971b) and Schaefer and Borcherding (1973) provide evidence that short and simple training procedures can increase the accuracy (calibration) of assessed probabilities, although their empirical results do not indicate an overwhelming improvement in performance. Fisch...
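To make the notion of calibration concrete, the sketch below (not drawn from the studies cited above; the judgements, outcomes and bin width are invented purely for illustration) groups probability assessments into bins and compares the mean assessed probability in each bin with the observed relative frequency of the events judged. A well-calibrated assessor shows observed frequencies close to the assessed probabilities; an overconfident assessor shows observed frequencies that fall short of the near-certain probabilities assessed.

from collections import defaultdict

def calibration_table(assessed, outcomes, bin_width=0.1):
    # Group (assessed probability, binary outcome) pairs into bins of width
    # bin_width and report, per bin, the mean assessed probability, the
    # observed relative frequency, and the number of judgements.
    bins = defaultdict(list)
    n_bins = int(round(1 / bin_width))
    for p, occurred in zip(assessed, outcomes):
        idx = min(int(p / bin_width), n_bins - 1)  # probabilities of 1.0 fall in the top bin
        bins[idx].append((p, occurred))
    rows = []
    for idx in sorted(bins):
        pairs = bins[idx]
        mean_p = sum(p for p, _ in pairs) / len(pairs)
        freq = sum(o for _, o in pairs) / len(pairs)
        rows.append((mean_p, freq, len(pairs)))
    return rows

# Invented data: events judged near-certain occur less often than assessed,
# the classic signature of overconfidence.
assessed = [0.95, 0.90, 0.92, 0.90, 0.60, 0.65, 0.30, 0.95, 0.90, 0.88]
outcomes = [1,    1,    0,    1,    1,    0,    0,    1,    0,    1]

for mean_p, freq, n in calibration_table(assessed, outcomes):
    print(f"mean assessed {mean_p:.2f}  observed frequency {freq:.2f}  (n={n})")

On this invented data the top bin has a mean assessed probability of about 0.91 but an observed frequency of about 0.71, the kind of gap that the calibration studies cited above report for overconfident assessors.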