Can one predict risk in financial markets?
“We believe that the boat is unsinkable” is the famous piece of hubris attributed to Philip Franklin, speaking of the Titanic in 1912. Franklin was Vice-President of the White Star Line, a shipping company which hadn’t lost a ship at sea since 1893. In that time, the company had undertaken tens of thousands of voyages. None of the White Star Line’s ships up until that point had been as statistically safe as the RMS Titanic, and many had travelled the same route that it would take. It isn’t difficult to see how Franklin could be led to make such a remark, even with the benefit of hindsight.
[Image: The Titanic: Unsinkable]
But hindsight can be dangerous and misleading when making forecasts about the future. One of the fundamental weaknesses of experts, commentators, risk models and quantitative teams at financial institutions, among others, is that they draw on past data. Data, by its nature, cannot be taken from the future. If a country with an average annual rainfall of 750mm has accumulated only 300mm by the last day of a given year, it doesn’t mean that the last day will bring 450mm of rain. Yet a quantitative model built purely on the long-run average might suggest a range centred around 450mm, which illustrates the danger of relying on such models.
Similarly, the financial system is set up in a way that encourages a lack of transparency. Despite the best attempts of governments to uncover the workings of the shadow banking industry and establish increased controls on tax havens, the financial industry remains murky, and it’s unlikely to become discernibly clearer any time soon. This lack of transparency in turn makes important data unavailable to forecasting models. As a result, we’re unable to assess the interconnectedness of financial institutions and players (which came back to haunt us in 2008, when the liquidity crisis struck).
Despite these inherent drawbacks, the desire to measure risk is still commendable. Not to do so would be reckless at best. Even when making small investments, the standard practice is to use the Capital Asset Pricing Model (CAPM), a model developed over 50 years ago, which tells us how much return we should expect for a given level of risk. If anything, this model is a good place to show that risk isn’t a bad thing in itself: more risk can mean more returns. Therefore, we reach an important distinction, which is often overlooked: the management of risk versus the measurement of risk.
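For the curious, the CAPM formula itself is short enough to fit in a few lines of code. The sketch below uses entirely hypothetical numbers for the risk-free rate, beta and expected market return; it illustrates the formula, nothing more.

```python
def capm_expected_return(risk_free_rate, beta, expected_market_return):
    """CAPM: expected return = risk-free rate + beta * (market return - risk-free rate)."""
    return risk_free_rate + beta * (expected_market_return - risk_free_rate)

# Hypothetical inputs: 2% risk-free rate, beta of 1.3, 8% expected market return.
print(capm_expected_return(0.02, 1.3, 0.08))  # -> 0.098, i.e. ~9.8% expected return
# A higher beta (more risk) pushes the expected return up - more risk, more return.
```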
Risk Management
The key word here is “exposure.” It comes down to an investor not exposing themselves to losing more than they can afford to lose. Warren Buffett uses the analogy of somebody betting their house on a sure thing. Why would anyone expose themselves to that kind of risk, even if it were a sure thing? Hence the need to legislate for minimum capital requirements at financial institutions – the idea that, no matter what a bank invests in, it has the cushion of a certain amount of capital (usually not enough, but this isn’t the place to discuss that).
Risk management is also now de rigueur in all annual reports, and its scope is vast: from hedging currency exposures (McDonald’s loses $100 million for every 1 cent fall in the euro against the dollar) to purchasing oil several months in advance to insure against price hikes that could adversely affect revenues (airlines couldn’t survive without this form of energy trading – most don’t survive even with it). There are many more examples. In fact, this financial risk is measurable precisely because it has been managed. Measuring financial risk outside the parameters of what is fully managed, and thus predictable, is quite a different story.
Risk Measurement
Companies use a range of neat and simplistic ratios to measure their financial risk, ranging from debt to equity and debt to capital to the capital expenditure ratio and times interest earned. With so many companies going bankrupt every year, one might suspect that the ratios are in fact a little too neat and simplistic. What is happening with companies’ risk measurement procedures that causes so many to go bankrupt?
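The ratios themselves really are that neat. A quick sketch, with made-up balance sheet and income statement figures, shows how little arithmetic is involved:

```python
# Hypothetical figures (in millions) for a single company.
total_debt = 400.0
shareholders_equity = 250.0
ebit = 60.0               # earnings before interest and taxes
interest_expense = 25.0

debt_to_equity = total_debt / shareholders_equity                    # 1.6
debt_to_capital = total_debt / (total_debt + shareholders_equity)    # ~0.62
times_interest_earned = ebit / interest_expense                      # 2.4

print(debt_to_equity, debt_to_capital, times_interest_earned)
```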
Well, on one side, there’s human error and moral hazard; human error being incompetent CFOs who don’t see the flashing red lights on a company’s financial statements, and the CEOs who hire them. John Coates, the author of The Hour Between Dog and Wolf: Risk-Taking, Gut Feelings and the Biology of Boom and Bust, says, “this is actually a biologically-driven phenomenon,” letting incompetent risk-takers everywhere off the hook. In addition to the human element, there’s moral hazard. Take limited liability companies: setting up a limited company allows its founders to limit their own losses (but not the losses of others), no doubt increasing the founders’ financial risk-taking (risky loans and so on) at the expense of everyone else, and increasing financial risk in the system.
But these, for all our purposes, are known risks. On the other side of the equation are “unknown risks.” In the aftermath of the 2008 financial crisis, Nassim Nicholas Taleb’s book The Black Swan: The Impact of the Highly Improbable (first published in 2007) attracted renewed acclaim. In it, a three-fold definition is given: “A Black Swan is a highly-improbable event with three principal characteristics: it is unpredictable; it carries a massive impact; and after the fact, we concoct an explanation that makes it appear less random, and more predictable, than it was;” all in all, a good description of the financial crisis that followed shortly after its publication.
Maybe the name “black swan” isn’t even appropriate for the kind of financial risk Taleb was trying to capture with the analogy. An actual, physical black swan is easy to picture in the mind’s eye; it is foreseeable in some sense. The point is that the kind of financial risk Taleb was alluding to was unforeseeable. And in 2008, it was not just financial risk – but financial risk aligned with liquidity risk – and then both combining with human nature (which we’re beginning to think may have been the missing element in risk models all along) to create a financial crisis of devastating proportions. The basis of it all is the normal curve (see below).
The normal curve has been assigned the task of telling the finance industry where financial returns (and risks) occur. The curve - and those who use it - basically tells us that its logic “stands to reason until it doesn’t stand to reason.” That is, 95-99% of the time, results will fall within a certain defined range of the average result (the average we believe in based on past data). Whatever happens in the space outside that 99% is known as an outlier. Outliers are extreme results and, in layman’s terms, can be defined as “everything else.” Everything else can be an extremely good financial result (in which case somebody will more than likely take the credit for it), or it can be an extremely detrimental financial result (in which case, the models will be blamed).
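Where does the “95-99%” come from? Straight from the curve. A short sketch using SciPy’s normal distribution shows how much probability sits within one, two and three standard deviations of the mean:

```python
from scipy.stats import norm

# Probability mass within k standard deviations of the mean for a normal distribution.
for k in (1, 2, 3):
    coverage = norm.cdf(k) - norm.cdf(-k)
    print(f"within {k} sigma: {coverage:.4f}")
# within 1 sigma: 0.6827
# within 2 sigma: 0.9545
# within 3 sigma: 0.9973
```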
The drawback here, of course, is that it’s impossible to list everything else. To begin with, there’s tail risk: the not unreasonable assumption that the tails of the real distribution are fatter than the normal curve suggests, thus severely increasing the risk on the downside (see below).
Now, as the curve illustrates, the chances of a financial result falling outside 3 standard deviations from the mean are significantly higher. On the downside, this means more severe losses, more often. If you’re wondering what kind of events might fall under the fat left tail, there have been a multitude in the past 20 years: the European debt crisis, the Japanese earthquake and tsunami, Enron, the Argentinian default, LTCM, the fall of the Russian rouble, the collapse of the Asian “tiger” economies, and the list goes on. How many did the man on the street predict?
Therefore, one can see the problems inherent in using the standard bell curve to measure financial risk. But this is standard practice in the financial industry. “Rank beliefs not according to their plausibility, but by the harm they may cause,” says Taleb, which seems to fit the normal curve.[i] Incidentally, the normal curve is used in the industry-standard model for risk measurement - Value at Risk (VaR). This is a tool which measures the probability of a loss, its size in dollar terms and the time-frame over which it might occur. So, a VaR model can tell you, for example, that there is a 5% chance of losing at least $10 million over a given 3-month period. However, VaR models failed to forecast the collapse of the US housing market and the ensuing crisis, catching the banks that relied on them off guard and landing them with huge, unexpected losses.[ii]
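To make that concrete, here is a minimal sketch of the variance-covariance (parametric) flavour of VaR, the one that leans directly on the normal curve. The portfolio value, expected return and volatility are made up purely for illustration:

```python
from scipy.stats import norm

def parametric_var(portfolio_value, mu, sigma, confidence=0.95):
    """Variance-covariance VaR: the loss that should only be exceeded
    (1 - confidence) of the time, assuming normally distributed returns."""
    worst_return = norm.ppf(1 - confidence, loc=mu, scale=sigma)
    return -portfolio_value * worst_return

# Hypothetical: $200m portfolio, 1% expected quarterly return, 8% quarterly volatility.
var_95 = parametric_var(200_000_000, mu=0.01, sigma=0.08, confidence=0.95)
print(f"95% 3-month VaR: ${var_95:,.0f}")
# -> roughly $24m: a 5% chance of losing at least that much in a quarter,
#    *if* returns really are normal - which is exactly the questionable assumption.
```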
The reliance on the normal curve (sometimes referred to as the “Gaussian curve” or “bell curve”) is one of the main drawbacks of the model. In 2008, for example, Goldman Sachs’ profit and loss distribution looked a lot more like an elongated letter “U” than the bell shape predicted by VaR.[iii] As one mathematician puts it in Scientific American: “If the things that fluctuate are not correlated at all with one another, then it’s demonstrable that a Gaussian function is the correct histogram. The catch is: in a financial market, everything is correlated.”[iv] The article continues: “No model can consistently predict the future. It can’t possibly be.”
Financial industry practitioners have recognized (some of) the imperfections of VaR but will continue to rely on similar models. Newer measures such as stress tests and expected shortfall have arisen since the failure of VaR. Both address some of the inadequacies of VaR but still fail to capture a once-in-a-lifetime (or, as it happens, a once-every-fifteen-years) event, the kind now commonly referred to as a “black swan.” The models still start with basic assumptions – reasonable in themselves – but the assumptions don’t always hold, and when they don’t, they fail spectacularly.
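For completeness, here is a sketch of expected shortfall alongside historical-simulation VaR, computed on a made-up series of returns. Expected shortfall at least asks “how bad is it once we’re past the VaR threshold?”, but it is still only as good as the history fed into it:

```python
import numpy as np

def historical_var_and_es(returns, confidence=0.95):
    """Historical-simulation VaR and expected shortfall (ES), as fractions of
    portfolio value. VaR is the loss at the (1 - confidence) quantile; ES is
    the average loss beyond that point."""
    losses = -np.asarray(returns)
    var = np.quantile(losses, confidence)
    es = losses[losses >= var].mean()
    return var, es

# Made-up daily returns: mostly well-behaved, with a handful of crash days mixed in.
rng = np.random.default_rng(0)
returns = np.concatenate([rng.normal(0.0005, 0.01, 995),
                          [-0.08, -0.10, -0.12, -0.09, -0.11]])
var, es = historical_var_and_es(returns, confidence=0.99)
print(f"99% VaR: {var:.2%}, expected shortfall: {es:.2%}")
# The ES is much worse than the VaR precisely because of the few extreme days -
# and if those days aren't in your data, neither number will warn you.
```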
Conclusion: Can one predict financial risk?
Former US Secretary of Defense Donald Rumsfeld attracted the world’s mirth in February 2002 when he made a speech about “known knowns,” “known unknowns,” and “unknown unknowns.” Aside from the clumsy nature of the speech, Rumsfeld was actually communicating something extremely important – something which cost the world in dramatic fashion in the global financial crisis of 2008: we cannot yet predict financial risk as effectively as we would like to think.
Prediction, not narration, is the real test of our understanding of the world.[v] The financial risk models which have been developed thus far are imperfect in the extreme. What’s more, it’s difficult to see how a model will ever be developed which can truly predict financial risk. Think of it this way: people who talk about the roll of a die automatically assume that the die has six sides. That’s a one-in-six chance for each outcome! The paradigm changes when the number of sides changes. What about a die with seven sides? Or eight? The assumption is flawed in that it allows for only six outcomes. Existing financial risk models are similar.
Take another example: the price of oil. Given where so much of it is produced, the price of oil depends on social and political events in the Middle East. Does that sound predictable? For 2014, Goldman Sachs predicted an oil price of $105,[vi] JP Morgan predicted $122.50[vii] and Morgan Stanley predicted $103.[viii] Who to believe? Currently, Brent crude is trading at around $110 and is as volatile as ever, with the perpetual difficulties in the Middle East changing form on a daily basis.
Statistics are dangerous. If you don’t believe it, recall the example of the mathematician who drowned in a swimming pool with an average depth of two inches. One can predict financial risk in the same way that one can make weather forecasts: you can generally trust the forecasters to get it right. But sometimes they get it wrong and you get wet. And sometimes they get it really wrong and you get soaking wet.
Even an Economic Scenario Generator (ESG) such as a Monte-Carlo simulation can only begin to touch on the economic scenarios that may play out. What’s more, the variation in its outputs is so large that it ultimately tells us a lot and yet very little. These simulations also require lots of data as input, but ultimately, is it the right data? The answer, invariably, is “no.”
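To give a flavour of what such a simulation looks like, here is a toy Monte-Carlo sketch: one year of an equity index under geometric Brownian motion with made-up drift and volatility. A real ESG models many correlated economic variables at once; the point of this toy version is simply how wide the range of “plausible” outcomes already is, even under tame, normal-curve assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical parameters: 5% annual drift, 20% annual volatility, 252 trading days.
s0, mu, sigma, days, n_paths = 100.0, 0.05, 0.20, 252, 10_000
dt = 1.0 / days

# Geometric Brownian motion: simulate daily log-returns and compound them.
log_returns = ((mu - 0.5 * sigma**2) * dt
               + sigma * np.sqrt(dt) * rng.standard_normal((n_paths, days)))
end_values = s0 * np.exp(log_returns.sum(axis=1))

print(f"median outcome: {np.median(end_values):.1f}")
print(f"5th-95th percentile range: "
      f"{np.percentile(end_values, 5):.1f} to {np.percentile(end_values, 95):.1f}")
# A huge spread - and every path still assumes normal shocks and constant volatility,
# so none of them contains a genuine "black swan".
```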
We can predict financial risk in broad strokes: that losses will come is as inevitable as night following day. However, as yet, we just can’t predict it with the accuracy that we require.
[i] Nassim Nicholas Taleb, Fooled by Randomness (2001), p. 203.
[ii] http://www.ft.com/intl/cms/s/0/67d05d30-7e88-11e1-b7e7-00144feab49a.html#axzz36ppjuSoh
[iii] http://ftalphaville.ft.com/2009/06/24/58871/on-goldmans-fat-tail-risk/
[iv] http://www.scientificamerican.com/article/can-math-beat-financial-markets/
[v] Nassim Nicholas Taleb, The Black Swan (2007), p. 3.
[vi] http://stream.marketwatch.com/story/markets/SS-4-4/SS-4-17298/
[vii] http://www.investorvillage.com/smbd.asp?mb=5028&mn=28108&pt=msg&mid=12453217
[viii] http://www.oilngold.com/ong-focus/insights/ibank-focus-2014-crude-oil-price-outlook-2013122826944/