The Failure of Risk Management by Douglas Hubbard

In his book, The Failure of Risk Management, Douglas Hubbard starts out by defining risk management, explains why most risk assessment methods don't work, and then develops a functional risk management approach.

Defining risk

To define risk management, the author begins by defining risk itself. Risk involves assessing the probabilities of various outcomes and, for each outcome, measuring its magnitude in terms of loss of human life, capital, or anything else that may cause damage. In other words, risk is the likelihood and magnitude of an undesirable event; in a scientific or mathematical context, risk always describes the probability and magnitude of an undesired effect. Looking at probabilities and magnitudes helps anticipate various scenarios, and that is the purpose of risk management. In this respect, managing risk means effectively using resources to decrease danger. One of the most common definitions of the word management is "the planning, organization, coordination and direction of resources towards defined objectives." In other words, using what you have to get what you need. Managing risk, then, is an effort to use limited resources, like money and time, effectively to complete a task.
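This "probability times magnitude" view of risk can be sketched in a few lines of code. The scenario names and numbers below are hypothetical illustrations, not examples from the book:

```python
# A minimal sketch of risk as probability times magnitude.
# Scenario names and figures are hypothetical illustrations.
scenarios = {
    # outcome: (probability, loss in dollars)
    "server outage": (0.10, 50_000),
    "data breach":   (0.02, 2_000_000),
    "minor bug":     (0.50, 1_000),
}

def expected_loss(scenarios):
    """Expected loss = sum of probability * magnitude over all outcomes."""
    return sum(p * loss for p, loss in scenarios.values())

print(expected_loss(scenarios))  # 5000 + 40000 + 500 = 45500.0
```

Ranking scenarios this way makes it obvious that a low-probability, high-magnitude event (the breach) can dominate the total risk.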

How risk management came into being

The author also recounts how risk management developed in the first place. Risk management is ancient: a king building a city wall to prevent foreign attacks was managing risk. Its modern, quantitative form took shape during World War II, when quantitative experts assessed the enemy's production capabilities and modeled various military outcomes.

Why most risk assessment methods don't work

However, most risk assessment methods don't work, for multiple reasons. Some rely too much on qualitative estimates. For example, in judging whether a particular outcome will occur, some experts will qualify it as "likely," "extremely likely," "unlikely," or "extremely unlikely." The problem with these qualitative labels is that a term like "extremely likely" may mean different things to different people. In other words, there is no common language to describe probability.

Also, many risk assessment methods don't take into account the relationships between risks. For example, an airplane designer may secure hydraulic capability by installing three redundant hydraulic systems: if the first fails, the second takes over, and if the second fails as well, the third comes into action. But sometimes these hydraulic lines all sit next to one another. Such proximity makes them highly prone to total failure from a single event, like shrapnel from a broken propeller severing all three. This is called common mode risk: the potential for one event to destroy all redundant systems at the same time.
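A small simulation shows why common mode risk matters so much. The failure probabilities below are hypothetical; the point is that the common-mode term dwarfs the "all three fail independently" term, so treating the systems as independent badly understates the risk:

```python
import random

def p_system_failure(n_trials=100_000, p_independent=0.01, p_common=0.001, seed=0):
    """Estimate failure probability of three redundant hydraulic systems.

    Each system fails independently with p_independent, but a single
    common-mode event (e.g. shrapnel severing all three lines) takes out
    everything at once with p_common.  All numbers are hypothetical.
    """
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_trials):
        common_event = rng.random() < p_common
        all_three_fail = all(rng.random() < p_independent for _ in range(3))
        if common_event or all_three_fail:
            failures += 1
    return failures / n_trials
```

With these numbers, independent triple failure alone has probability 0.01³ = one in a million, but the simulated total failure rate comes out near p_common, about one in a thousand: the common-mode event dominates.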

In addition, many risk assessment methods are based on expert opinion, which often turns out to be biased. First of all, people tend to overestimate their own capabilities: 87% of Stanford MBA students, for example, rated their academic performance as being in the top half of their class. Cornell psychologists Kruger and Dunning published a paper called "Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments," which showed that approximately two thirds of the population considered themselves exceedingly reasonable and humorous.

Finally, Israeli psychologist Daniel Kahneman has shown that human beings tend to assign higher probability to experiences they have lived through recently. For example, someone planning a picnic checks the weather forecast, sees that it will be a nice day, and then finds once outside that it is raining; afterwards, many will assign an inflated probability to rain on the basis of that one recent experience. The author relates this to the peak-end rule.

Now, how to correct this? 

The author points to calibration training to give people an accurate picture of their uncertainties, such as testing for ranges. For example, in assessing whether or not to fund a production unit, one must have an idea of how the product's price may vary over time. So the author recommends that the expert provide both a low-end and a high-end price estimate. This gives at least some assurance that the price should not fall below the lowest estimate or rise above the highest, thereby reducing pricing risk. Another calibration exercise is the premortem analysis: the participant is asked to estimate the probability of a disaster by assuming it has already occurred and is then asked why it happened. This method has proven more effective at producing complete and creative ideas about potential risks than brainstorming alone.
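Calibration itself can be scored. If an expert gives 90% confidence ranges, roughly 90% of the realized values should land inside them; overconfident experts capture far fewer. A minimal sketch, with hypothetical ranges and outcomes:

```python
def calibration_score(intervals, actuals):
    """Fraction of actual values that fall inside the expert's stated ranges.

    A well-calibrated expert giving 90% confidence intervals should
    capture roughly 90% of the actual values; overconfident experts
    (narrow ranges) capture far fewer.
    """
    hits = sum(low <= actual <= high
               for (low, high), actual in zip(intervals, actuals))
    return hits / len(actuals)

# Hypothetical expert 90% confidence ranges vs. realized values.
ranges  = [(10, 20), (5, 50), (100, 120), (0, 3), (40, 45)]
actuals = [15, 60, 110, 2, 70]
print(calibration_score(ranges, actuals))  # 3 of 5 inside -> 0.6
```

A score of 0.6 on supposed 90% intervals is a strong signal that the expert's ranges are too narrow and should be widened in training.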

The author also details a functional risk management approach based on Monte Carlo simulation. A Monte Carlo model lays out all the factors that influence the probability and magnitude of a risk and then uses them to run thousands of random scenarios, which together reveal the real probability of particular outcomes. First, one must define the variables and set realistic ranges for each of them. Many critics object: where does one get the data? The author recommends that risk managers look at how the nuclear industry dealt with this question. In calculating nuclear power plant risks, risk managers start by deconstructing the plant into its components, then calculate the failure risk of each component, and finally work out the relationships between the various risks, assigning probabilities as well as magnitudes.
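The mechanics can be sketched in a few lines. The example below is a hypothetical product launch, not one from the book: each variable gets a realistic range, thousands of random scenarios are drawn, and the share of loss-making draws estimates the probability of losing money:

```python
import random

def simulate_profit(n_trials=10_000, seed=42):
    """Monte Carlo sketch for a hypothetical product launch.

    Each uncertain variable gets a low/high range; thousands of random
    scenarios are drawn, and the fraction of loss-making outcomes
    estimates the probability of losing money.  All ranges are invented
    for illustration.
    """
    rng = random.Random(seed)
    losses = 0
    total_profit = 0.0
    for _ in range(n_trials):
        price = rng.uniform(8.0, 12.0)        # expert's low/high price estimate
        unit_cost = rng.uniform(5.0, 9.0)     # expert's low/high cost estimate
        units = rng.uniform(10_000, 50_000)   # demand range
        fixed_costs = 80_000                  # assumed known
        profit = (price - unit_cost) * units - fixed_costs
        total_profit += profit
        if profit < 0:
            losses += 1
    return losses / n_trials, total_profit / n_trials
```

Running `p_loss, mean_profit = simulate_profit()` shows why the point estimates alone mislead: the average scenario is profitable, yet a substantial fraction of scenarios lose money, and that fraction is the number a risk manager actually needs.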

At this point, risk managers should have a functioning model, and the author suggests that they compare the model's estimates with facts on the ground; testing the model against reality will uncover flaws. Some wonder whether all this work is worth it, which leads to calculating the value of additional information. It really depends on the expected opportunity loss. If, for example, the potential opportunity loss is $10 million, then it makes sense to invest an additional $50,000 to get the needed information. Basically, one should evaluate the probability of losing money, multiply it by the amount at stake, and compare the result with the cost of the information.
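The comparison is simple arithmetic. The sketch below follows the text's figures ($10M potential loss, $50k study); the 20% loss probability is an assumption added for illustration:

```python
def value_of_information(p_loss, loss, info_cost):
    """Crude sketch of the value-of-information comparison.

    Expected opportunity loss (EOL) = probability of the bad outcome
    times its cost.  Buying information makes sense when its cost is
    well below the EOL it could help avoid.  The 20% probability used
    below is an assumed figure; the $10M loss and $50k study cost
    follow the example in the text.
    """
    eol = p_loss * loss
    return eol, info_cost < eol

eol, worth_it = value_of_information(p_loss=0.2, loss=10_000_000, info_cost=50_000)
print(eol, worth_it)  # 2000000.0 True
```

Even with only a 20% chance of the $10M loss, the expected opportunity loss is $2M, forty times the cost of the study, so paying for better information clearly makes sense.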

On a broader level, the author suggests that companies adopt a comprehensive organizational strategy to manage risk efficiently. This may involve building a cross-functional team in charge of assessing risk throughout the company's value chain. This team would also gain much experience by building a scenario library cataloguing all potential scenarios along with their probabilities of occurrence.

In Short

In his book, The Failure of Risk Management, Douglas Hubbard starts out by defining risk management, explains why most risk assessment methods don't work, and then develops a functional risk management approach. The approach involves building a model based on a large number of simulations and then testing that model against facts on the ground, making adjustments where necessary. The author also suggests that organizations create a dedicated risk management team to assess any risk that may occur throughout the value chain.

What I find particularly interesting in the book is the author's focus on the idea that experts' opinions are sometimes overvalued and that people have a natural tendency to overestimate their own capabilities. A lot of risk management therefore involves assessing what we don't know, and the mistakes we may make, in order to prepare for their eventual occurrence.
