Daniel Kahneman, Olivier Sibony and Cass Sunstein, three students of cognitive functioning who need no introduction, are the co-authors of the new book Noise: A Flaw in Human Judgment, published this year in English (soon to be published in French).
In this book, they explain in an accessible way how noise accounts for inconsistencies in our judgements, and they raise the ethical and credibility issues that follow. I propose to summarise here three key ideas: first, that cognitive biases produce statistical biases; second, that noise is unavoidable in our judgements; and third, the interesting and useful idea that it is possible to reduce noise.
Although they are fascinating, I will refrain from giving you yet another presentation on cognitive biases; I will only point out their systematic nature. Decades of research in the behavioural sciences have shown that, under certain conditions, this type of inconsistency in our reasoning is almost inevitable. Using the experimental method, these studies have demonstrated that we all exhibit the same cognitive deviations where rational reasoning would be expected. What does this mean statistically?
Daniel Kahneman, Olivier Sibony and Cass Sunstein illustrate this phenomenon in Figure C, where a statistical bias can be seen: the shots at the target in question (i) are not in the centre, (ii) but rather clustered in the lower right quarter. One could therefore assume that the gun used has a defect that deflects the aim in the same way for every shooter. In this context, the error (the miss, relative to the centre of the target) is shared, which makes the error (in the statistical sense) directional. Because of their systematic nature, biases are relatively predictable, and it is therefore possible to anticipate them in order to correct the error and thus gain in accuracy, for example using nudges.
Before we get to that, let me illustrate the concept of noise with an example that speaks to everyone. Every morning I plan to get up at 7:00 am to start my day. Sometimes I manage to get out of bed at 6:45 am, while on other days I stay in bed until 7:10 am. Lately it has been 6:57 am one day and 7:08 am the next. This variability is just noise: the number of minutes between the actual and the planned time of getting up does not follow any trend (-15 min, +10 min, -3 min, +8 min). It is an unpredictable fluctuation. Statistically, an error can still be noted relative to what is expected (getting up at 7:00 am). This error is represented schematically by the three co-authors in Figure B: the shots are scattered over the entire target, so it is not an error caused by a bias.
The tricky thing about statistical noise is that it is much less visible than bias. Indeed, if you average all my wake-up data, you might think that I am perfectly meeting my objective (since on average I get out of bed at 7:00 am). What you have to keep in mind is that in a data set you should expect to observe noise (or even noise AND bias, as shown in Figure D).
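This is easy to check with a few lines of code. The sketch below (my own illustration, not taken from the book) applies the usual statistical reading of bias and noise to the wake-up deviations above: bias is the mean of the errors, noise is their spread around that mean.

```python
# Bias vs noise, illustrated with the wake-up deviations from the text
# (minutes relative to the planned 7:00 am rise).
from statistics import mean, pstdev

deviations = [-15, 10, -3, 8]  # -15 min, +10 min, -3 min, +8 min

bias = mean(deviations)     # systematic, directional error
noise = pstdev(deviations)  # unpredictable scatter around the mean

print(f"bias:  {bias:.1f} min")   # 0.0 -> the average hides the problem
print(f"noise: {noise:.1f} min")  # about 10 min of variability remains
```

The average comes out at exactly zero, which is precisely why noise is so invisible: a biased shooter is caught by the mean, a noisy one is not.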
Nevertheless, noise can be problematic in our judgements. To judge is to make a measurement under uncertainty. In their book, Daniel Kahneman, Olivier Sibony and Cass Sunstein focus specifically on so-called "professional" judgements, where a certain consensus is expected (i.e. where variability is not desired). For example, we expect two doctors faced with the same case to make the same diagnosis; similarly, we expect a judge's verdict, for a given case, not to vary with the time of the decision. And yet...! The authors present various studies that underline the extent of noise, whatever the sector (medicine, justice, insurance, recruitment...). Although it is not very visible, noise is everywhere.
As humans, we judge a situation according to our singularity (taking into account - often unconsciously - our own history, our own experiences, and also our own cognitive biases at work at the time of judgement). This singularity creates noise. This is what the authors of Noise call "pattern noise", which can be seen as the signature of our uniqueness. In other words, we are unique, and this inevitably leads to noise in our judgements. The problem is that we often don't see it... Worse: several large-scale data analyses have revealed our tendency to minimise it (the disagreement between different judgements may in fact be five times greater than the reasonable disagreement we imagine). For this reason, the authors suggest that organisations carry out a "noise audit", an organisational assessment of the extent of noise in their judgements.
More generally, the authors propose that we practise decision hygiene: the idea that it is possible to make our judgements more robust, i.e. more precise, by creating conditions favourable to the decision. For example, breaking the mass of available information into independent pieces, to avoid the feeling of judging the whole at once (in reality, this is a skill we lack: we naturally tend to focus on information that goes our way and confirms our hypotheses - and that overall impression is ours alone, by definition, and therefore very sensitive to noise!). Aggregating several judgements collected independently (to avoid one person, or the group, influencing the judgement) is another method for preventing noise. For example, instead of polling your employees on a process during a meeting, ask for their opinions individually.
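Why does aggregating independent judgements help? A standard statistical argument (my own sketch, not the authors') is that averaging n independent estimates shrinks the noise by roughly a factor of the square root of n. The simulation below assumes hypothetical judges whose estimates scatter around a true value:

```python
# Sketch: averaging independent judgements reduces noise.
# TRUE_VALUE and NOISE_SD are illustrative assumptions, not data from the book.
import random
from statistics import pstdev

random.seed(42)
TRUE_VALUE = 100.0  # the quantity being judged
NOISE_SD = 15.0     # assumed spread of a single judge's estimate

def one_judgement():
    """One noisy, unbiased estimate of the true value."""
    return random.gauss(TRUE_VALUE, NOISE_SD)

def averaged_judgement(n):
    """Mean of n judgements collected independently."""
    return sum(one_judgement() for _ in range(n)) / n

singles = [one_judgement() for _ in range(10_000)]
panels = [averaged_judgement(9) for _ in range(10_000)]

print(f"noise of one judge:      {pstdev(singles):.1f}")  # close to 15
print(f"noise of a 9-judge mean: {pstdev(panels):.1f}")   # close to 5 (15 / sqrt(9))
```

Nine independent opinions are therefore about three times less noisy than one - provided they really are independent, which is exactly why the authors recommend collecting them before the meeting rather than during it.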
One thing is certain: it is impossible to remove all error from (professional) judgements. Yes, that is human nature. We are not machines that make their "judgements" by strictly following an algorithm, where error is technically impossible (we can cautiously assume that an Excel formula will never be wrong, so we expect zero variability from it). Let's accept that our singularity makes us poor judges, and prevent noise by adopting good decision hygiene!