Why data should replace expert advice

Fewer guesses, more rational decisions

Somi András | October 8, 2017

This summer we cancelled our holiday trip to Italy, just two days before departure, because the doctor advised my then few-weeks-pregnant wife to avoid flying. The reason was a blurry little something he saw on the ultrasound (I mean, besides our future child, who was also a blurry little something at that point). According to the same doctor, this was quite common and would most probably disappear without any noticeable effect (it did, indeed).

Obviously we felt that a short (and not quite exotic) holiday in the Mediterranean wasn’t worth the risk of any issues with the pregnancy. But what if we could have known the exact risk? Like if there had been some kind of problem in exactly 1 in a million similar cases? Or 1 in a billion? Would that have changed our minds? Is that really how it should work?

Quite a few common fallacies are packed into this little story, which is why it keeps bugging my analytical mind.

Inability with probability

One thing that came into play here is that we are genetically unprepared to grasp probabilistic outcomes. Maybe our ancestors did not want to assess the chances of being slaughtered by a sabertooth tiger by repeating the same experiment a thousand times, just to have a fair understanding of the probability distribution. Most of them simply ran away, and the remaining prehistoric statisticians fell victim to natural selection.

So the human brain is wired for loss aversion, which is clear evidence of how lousy we are with probabilities. It forces us to avoid scenarios where the perceived loss is high, even when the expected value (the sum of the possible outcomes, each weighted by its probability) is better than that of the alternatives. Like when we act to avoid losing our future child, even if the chances are about the same as being hit by an asteroid with a rainbow-colored unicorn named Jeff riding on it.

It works the other way around, too. People buy lottery tickets or invest in lottery-like securities for the promise of an extreme payout with an extremely low probability (with a clearly negative expected value, despite the apparently low cost).
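To make the expected-value point concrete, here is a minimal sketch with entirely made-up numbers (the ticket price, payouts, and probabilities are illustrative, not real lottery odds):

```python
# Expected value of a (hypothetical) lottery ticket: the sum of the
# possible payouts, each weighted by its probability, minus the price.
# All numbers below are made up for illustration, not real lottery odds.
outcomes = [
    (50_000_000, 1 / 300_000_000),             # jackpot
    (1_000, 1 / 50_000),                       # small prize
    (0, 1 - 1 / 300_000_000 - 1 / 50_000),     # nothing
]
ticket_price = 2.0

expected_value = sum(payout * prob for payout, prob in outcomes) - ticket_price

print(f"Expected value per ticket: {expected_value:.2f}")
# Roughly -1.81: clearly negative, yet the tiny chance of a huge payout
# is exactly what makes the ticket feel worth buying.
```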

So in general we are very bad at assessing tail risks and expected values, especially when many factors are involved. Our case was not a trivial dilemma, but people react similarly to smaller, everyday problems. And huge businesses are built upon these fallacies, usually with some kind of ‘experts’ and their advice in the middle.

Experts don’t know better

Most likely our doctor had no idea of the exact probabilities in a case like ours. What might happen in the body of a pregnant woman under special circumstances is too complex to predict in detail just by relying on medical folklore and the subjective memories of a single person (except when the symptoms are obvious, and it’s already too late).

But admitting this is not an option. Should something bad have occurred on a trip he approved, the doctor would not be able to explain that it was unrelated to travelling (he wouldn’t even know himself). So he tossed the risk back to us, playing on our loss aversion, making it our call to take the trip against his advice, even if the precaution was unnecessary (maybe it was, maybe not, I don’t know).

Or think of the predictions of stock market pundits. In that circus you have to say something specific and certain all the time. It’s like buying lottery tickets: maybe you get lucky and it makes you The Expert. The pay-off can be huge, while being wrong is quite okay if you wrapped your message in the proper amount of bullshit. That’s why stock market talking heads take as many long shots as they can and use clichés and professional jargon that might sound knowledgeable but is in fact pure gibberish.

We need data and experts to explain it

So experts are not free from the psychological pitfalls every human has to deal with, and they are burdened by some extra ones of their own. Still, that does not necessarily mean we should get rid of them. On the contrary.

Besides many other important things, doctors have always provided a great service by explaining the current condition of a patient; a good stock market analyst can give you a comprehensive summary of the current state of a company; a real-estate agent can guide you through the process of buying a new home; and so on. You just shouldn’t expect these people to make predictions or to see through complex problems in their entirety.

The ultimate solution is using data in place of anecdotal evidence. Our advice should have come in a form like

‘In XX thousand cases of pregnant women with a similar condition, flying caused minor issues in 0.00X% and major issues in 0.000X% of the cases, but in certain circumstances, with some specific actions, these rates fell by XX%,’

so that we could have made a more educated decision (though we would still have had to digest the probabilities ourselves, which is hard).
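As a rough illustration of how such numbers could come from data rather than anecdote, here is a minimal sketch that computes event rates from a hypothetical cohort of similar cases (the field names and the tiny cohort are invented for the example):

```python
# Hypothetical cohort: each record is one pregnancy with a similar
# condition, whether the woman flew, and whether any issues occurred.
cohort = [
    {"flew": True, "minor_issue": False, "major_issue": False},
    {"flew": True, "minor_issue": True, "major_issue": False},
    {"flew": False, "minor_issue": False, "major_issue": False},
    # ... in reality, tens of thousands of records
]

def rate(records, field):
    """Share of records in which `field` is True."""
    return sum(r[field] for r in records) / len(records) if records else 0.0

flew = [r for r in cohort if r["flew"]]

print(f"Similar cases who flew: {len(flew)}")
print(f"Minor issues among them: {rate(flew, 'minor_issue'):.4%}")
print(f"Major issues among them: {rate(flew, 'major_issue'):.4%}")
```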

Better yet, imagine guidance based on a real-time model that continuously evaluates hundreds of different external and internal factors about the mother’s body (obtained through wearable or even implanted devices) and compares them to a vast database of pregnant women in different circumstances all around the world. It’s not even sci-fi; it’s all existing technology.

These things will mean far fewer ‘expert’ guesses in the future, and maybe a little less irrationality in everyday decision-making. But for a long, long time we will still need human experts to build, fine-tune, operate, and explain the results of such artificial intelligence. So the best experts are not threatened by the looming AI revolution; they will benefit the most from transferring their expertise to this new platform.

And in the meantime we will have the coolest kid in March. That’s a fact.

Header photo by Chris Holgersson / Unsplash