
MarketFox Investment Commentary

Are We Biased? Bringing Time Series into Behavioural Finance

Heuristics Aren’t Necessarily Dumb

Before becoming a professional investor, I was a psychology student at the University of Melbourne. The course took me on an interesting personal journey. In first year, we studied the classic research: Pavlov’s conditioning experiments with dogs, Asch’s conformity studies, Milgram’s obedience experiments and Zimbardo’s infamous Stanford prison experiment.

By the end of the first year, I had the impression that humans are fairly predictable. All I needed to predict behaviour were a bunch of simple cause and effect rules learned from these seminal studies.

Along came year two, which I describe as the ‘yes, but…’ year. In this year, I learned that there are no perfect causal relationships in psychology. A host of variables and circumstances affect results. In short, most of the simple cause and effect relationships that I learned in first year started to look tenuous.

I remember feeling as if I’d wasted the first year of my studies. It was at this point that I went to see my developmental psychology professor to express my concerns. She reassured me that many students experience similar feelings. In fact, she went further, explaining this was actually one of the aims of the course.

What the faculty wanted students to learn wasn’t a few simple causal relationships. They wanted us to experience deeper lessons. They wanted us to realise that human behaviour is hard to understand. That it’s almost impossible to conclude what is or isn’t normal. That there are always lots of variables to consider. And, that behaviour is context dependent.

Our conversation helped me to realise that, in some ways, it’s the questions that matter most. Of course, the answers are important. But when it comes to humans, it’s impossible to ever be 100 per cent sure of the answer. I had to learn to accept this.

Psychologists aren’t mind readers. They have to accept and learn to work with uncertainty. The willingness to tolerate uncertainty is probably one of the biggest differences between someone with a psychology background and someone with a mathematical or engineering background. Can you imagine an engineer working on problems with no clean, certain and incontrovertible (i.e. 1 + 1 = 2) answers?

Fast forward a few years and I’m working as an investment analyst. Everywhere I looked, the effects of human behaviour stuck out like a proverbial sore thumb. This prompted me to learn as much as I could about behavioural economics.

Why write about this? Because I’m now going through a similar experience with behavioural economics. I’ve become increasingly aware that the simple, neat cause-and-effect relationships offered up by behavioural economics might not give us the full picture. It’s not enough to conclude that humans are lazy, error-prone decision-makers who can’t reason probabilistically.

Instead, what’s needed is a framework to help investors figure out when simple decision-making rules or complicated models work best. This is the first post in a four-post series examining some of the criticisms of behavioural economics. We’ll consider three questions:

1. Are we biased?
2. Heuristics: are they lazy and dumb?
3. Does it help to think of system one and system two?

My last post will attempt to bring it all together by presenting the framework mentioned above.

Are We Biased?

Behavioural economics asserts that we’re biased because we fail to make decisions that maximise our expected utility. This assumes that we’re able to accurately measure the expected value of a given decision. Is this a valid assumption?

Critics present two problems with this view. Firstly, most behavioural economics research uses ensemble probabilities to test for irrationality and bias, whereas real-world situations actually involve time-series probabilities. Secondly, it’s impossible to figure out the expected value of any decision if total loss or failure is even a remote possibility.

In other words, what appears to be irrational or biased decision-making only looks that way because behavioural economics uses the wrong yardstick (ensemble instead of time probability) to measure investor behaviour.

What’s the difference between ensemble and time series probability? Nassim Taleb provides an example in his thought-provoking new book, Skin in the Game. Taleb recounts the example of 100 people going to the casino. Some punters win and some lose. We can figure out the expected value of gambling at the casino by counting the total money left to the gamblers at the end of the day and dividing it by 100.

Suppose that unlucky punter 28 goes bust. Does this affect the expected value calculation for gambler 29? No. The 1 per cent chance that unlucky punter 28 will go bust is already reflected in the expected value. This is ensemble probability.

What happens if, instead of 100 gamblers, we have a single punter who goes to the casino for 100 consecutive days? If they go bust on day 28, there is no day 29. In this case, a 1 per cent chance of going bust on any given day means the repeat player will, if they keep playing long enough, go bust with certainty; over 100 days alone the chance of ruin is already roughly 63 per cent. This is time probability.
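For readers who like to see the mechanics, here’s a minimal Python sketch contrasting the two calculations. The bust probability, daily gain and starting stake are my own made-up parameters, not anything from Taleb; the point is only the difference between 100 punters who each play once and one punter who plays 100 days in a row and stops for good if they go bust.

```python
import random

# Illustrative assumptions (mine, not Taleb's): each day at the casino the
# punter either grows their bankroll slightly or, with 1 per cent probability,
# loses everything.
BUST_PROB = 0.01    # 1 per cent chance of total ruin on any given day
DAILY_GAIN = 0.05   # assumed gain on the days the punter survives
STAKE = 100.0       # starting bankroll

def one_day(bankroll):
    """Return the bankroll after a single day at the casino."""
    if random.random() < BUST_PROB:
        return 0.0                      # the unlucky punter goes bust
    return bankroll * (1 + DAILY_GAIN)

def ensemble_average(n_gamblers=100):
    """100 different punters each play one day; average their outcomes."""
    return sum(one_day(STAKE) for _ in range(n_gamblers)) / n_gamblers

def single_path(n_days=100):
    """One punter plays 100 consecutive days; bust on day 28 means no day 29."""
    bankroll = STAKE
    for _ in range(n_days):
        bankroll = one_day(bankroll)
        if bankroll == 0.0:
            break                       # ruin is permanent: the path ends here
    return bankroll

random.seed(42)
print(f"Ensemble average after one day: {ensemble_average():.2f}")
print(f"One punter after 100 days:      {single_path():.2f}")
```

Averaged across many one-shot punters, the occasional wipe-out is just a small drag on the mean. For the repeat punter, the same 1 per cent daily risk compounds: roughly 63 per cent of 100-day paths end in ruin, and the longer the horizon, the closer that figure creeps to certainty.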

Taleb argues that most behavioural economics research confuses the two types of probability, treating real-world, repeated and path-dependent time risks as if they were one-off, ensemble risks. He explains:

Let us call the first set ensemble probability, and the second one time probability (since the first is concerned with a collection of people and the second with a single person through time). Now, when you read material by finance professors, finance gurus, or your local bank making investment recommendations based on the long-term returns of the market, beware. Even if these forecasts were true (they aren’t), no individual can get the same returns as the market unless he has infinite pockets and no uncle points (MF: emphasis added). This is conflating ensemble probability with time probability. If the investor has to eventually reduce his exposure because of losses, or because of retirement, or because he got divorced to marry his neighbour’s wife, or because he suddenly developed a heroin addiction after his hospitalization for appendicitis, or because he changed his mind about life, his returns will be divorced from the market, period.

Maybe we aren’t as biased or irrational as we’ve been led to believe. For example, the loss aversion (i.e. the preference for avoiding a loss over making an equivalent gain) displayed by equity investors doesn’t make sense if we use ensemble probability. But it might make sense for a long-term investor. Such an investor repeatedly samples from a distribution where -56.8 per cent (US, 2007-2009) or even -83 per cent (US, 1929-1932) returns are possible.
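To see why a repeat player might reasonably fear large losses more than they prize equivalent gains, compare the arithmetic (ensemble) average of a return series with the compound (time) growth an individual investor actually lives through. The sketch below uses a made-up return sequence, apart from the two drawdowns quoted above.

```python
# Illustrative only: the return sequence below is invented, apart from the two
# drawdowns cited above (-56.8% and -83%). It contrasts the arithmetic
# (ensemble) average with the compound (time) growth of the actual path.
returns = [0.25, 0.20, 0.30, -0.568, 0.22, 0.28, 0.25, -0.83, 0.30, 0.26]

arithmetic_mean = sum(returns) / len(returns)

wealth = 1.0
for r in returns:
    wealth *= 1 + r                     # the compounded path the investor experiences
geometric_mean = wealth ** (1 / len(returns)) - 1

print(f"Arithmetic (ensemble) mean return: {arithmetic_mean:.1%}")
print(f"Compound (time) growth per period: {geometric_mean:.1%}")
print(f"Terminal wealth per $1 invested:   ${wealth:.2f}")
# With these made-up numbers the arithmetic mean is positive (about +6.6%)
# while the compounded path loses more than half the starting capital.
```

The ensemble average can look perfectly acceptable while the compounded path destroys most of the investor’s capital, which is exactly why an aversion to large losses can be rational rather than a bias.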

Taleb explains in characteristically vivid terms why it’s impossible to make expected value calculations (which behavioural economics uses as the benchmark for rational decision-making) when there is a chance of total ruin.

Consider a more extreme example than the casino experiment. Assume a collection of people play Russian roulette a single time for a million dollars… About five out of six will make money. If someone used a standard cost-benefit analysis, he would have claimed that one has an 83.33 per cent chance of gains, for an ‘expected’ average return of $833,333. But if you keep playing Russian roulette, you will end up in the cemetery. Your expected return is… not computable.
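The arithmetic behind the quote, and the part the one-shot calculation ignores, takes only a few lines (a sketch of the reasoning, not anything from the book):

```python
# One-shot (ensemble) view: six players each pull the trigger once.
prize = 1_000_000
p_survive = 5 / 6

print(f"One-shot expected value: ${p_survive * prize:,.0f}")   # ~$833,333

# Repeat-play (time) view: one player keeps pulling the trigger.
for rounds in (1, 10, 50):
    print(f"Chance of surviving {rounds:>2} rounds: {p_survive ** rounds:.1%}")
# Survival probability decays towards zero, so the 'expected return' of the
# repeated game is meaningless: ruin is an absorbing state.
```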

In a Harvard Business Review interview, psychologist Gerd Gigerenzer summed the problem up this way:

The problem of the heuristics and biases people, including much of behavioural economics, is that they keep the standard models normative, and think whenever someone does something different, it must be a sign of cognitive limitations. That’s a big error. Because in a world of uncertainty, these models are not normative. I mean, everyone should be able to understand that.

Maybe we’ve been too hard on ourselves. The answer to the question “are investors rational?” depends both on the behaviours that we observe and on the standards that we use to measure what’s rational. For example, heuristics aren’t necessarily as dumb or as lazy as they’re often made out to be. There are lots of situations in which heuristics produce faster and more accurate decisions (a reasonable way to define rationality, don’t you think?). We’ll consider some of these situations in part two of this series.

__________

[i3] Insights is the official educational bulletin of the Investment Innovation Institute [i3]. It covers major trends and innovations in institutional investing, providing independent and thought-provoking content about pension funds, insurance companies and sovereign wealth funds across the globe.