realise that it has hit the point where there is no alternative to change; often, that point comes too late’ (Kent, 2004). Of course, there are many reasons for this conservatism, and there are also growing accounts of how to make aid agencies more aware of and open to their failures. But there is a paradox here worth noting. Many of the mistakes made by aid agencies are well documented and repeated, on both the development and the humanitarian sides of the system. These mistakes often arise from the application of standard procedures with insufficient attention to the immediate context, to local capacity or to the historical roots of a given issue. And these mistakes come up time and time again in aid evaluations, to the extent that these key accountability mechanisms for the sector are almost unanimously seen as ‘telling us nothing new’. All too often, aid agencies echo the Peter Cook quip: ‘I have studied my mistakes carefully and at length, and I am pretty certain I can repeat them exactly’. So the issue is not ‘how do we embrace failure?’, but rather: how can we get aid agencies to abandon their existing – known, acceptable, safe – forms of failure? How can we get them to risk new and different, more innovative and unpredictable kinds of failure? To paraphrase Winston Churchill, the secret to success is to move from failure to failure with energy, enthusiasm, honesty and creativity. If this is true, then we can at least say that aid agencies are halfway there.

APPLES AND ORANGES— We do NOT reject all predictions, but their decontextualized chain of linear causes is ANTI-KNOWLEDGE, especially for systems in flux.

Gurri 2011 [Martin Gurri, Director of National Intelligence Open Source, Sept 6, “Analyzing Events”]

I won’t dwell on the reasons produced by Watts to explain this delusion. In brief, we love to stretch common sense and Newtonian (or billiard-ball) causation beyond the breaking point.
When we fail, we take it for granted it was because of insufficient information. This too is a failure of understanding. It’s not that we lack enough information, it’s that no amount of information can ever be enough. ¶ Human events unfold within complex systems governed by weird, nonlinear dynamics. Prediction by means of billiard-ball mechanics is impossible, in principle. Because each complex system develops in unique ways, events are also rarely susceptible to probabilistic analysis. Rightly considered, a question like “Who will win the 2012 presidential elections?” refers to a single token. There have been no previous 2012 presidential elections to average out with this one. ¶ Of course, analysts persist in making predictions. They are addicted to prophecy. Tetlock proved that they guess right about as often as the flip of a coin, but it doesn’t matter. This is what analysts do: who they are. The robe of the magus fits strangely on the scientist’s lab coat, but the point is clear. These are the people who see into the future. Unfortunately, to keep up the pose – to validate their expertise – they must insist that the future resemble the past.