How beliefs are created, and their consequences
Our beliefs are the core of our identity. They shape how we see the world and how we act, and they help us make decisions. Our beliefs influence the people we spend time with, how we use our time, and why we do what we do. But how do we form them? What is the process? Does it affect other parts of our lives? It turns out that it does, and what started 400 years ago as a philosophical battle has many implications for our daily lives.
If you live in the Western world, what follows might be an accurate description of how you think beliefs are created. First, we are presented with an idea and we comprehend it (“Rhinoceroses have six legs”). Then we assess it against what we know (“Could it be that rhinoceroses have six legs?”). Finally, we decide whether or not to believe it (“No, I don’t believe that rhinoceroses have six legs”). If this process seems obvious, it is thanks to one man who influenced a huge part of Western philosophy: René Descartes.
Descartes was the first to separate the comprehension of ideas from their assessment, partitioning the mind into two distinct domains. The first domain is the intellect, in charge of comprehension. The intellect is passive: its job is to receive ideas, effortlessly and automatically. The second domain is the will, the active part of the mind. The will is the sole decision-maker about the veracity of a statement. The intellect presents ideas, and the will decides their truthfulness. This concept was one of the axioms upon which Descartes built his thesis. In Principles of Philosophy he wrote:
That we have power to give or withhold our assent at will, is so evident that it must be counted among the first and most common notions that are innate in us.
Descartes’ description of belief formation became the gold standard of rationality in the West. In the French language, rational people are even said to have an esprit cartésien.
Back to belief formation. If we had to draw a workflow for Descartes’ framework, it would look like this:
Comprehension => Assessment => Acceptance OR Rejection
That is all well and good. Except that around the same time, another great philosopher, Baruch Spinoza, was suggesting a different process of belief formation. For Spinoza, in order to comprehend an idea, we need to implicitly accept it first. Therefore comprehending and accepting an assertion are one and the same operation. When meeting the idea “this elephant is pink”, the act of creating in our head the image of a pink elephant is the same as believing it. It is only after we have understood (and by the same token, accepted) the idea that we may assess its veracity, and unaccept it if needed: no, after further assessment, this elephant is desperately gray, like any other elephant.
A schema of Spinoza’s thesis would look like this:
Comprehension AND Acceptance => Certification OR Unacceptance
While reserving judgement about an idea is perfectly possible in Descartes’ theory, Spinoza argued that it cannot be done: the idea has to be believed first, and judged afterwards.
For the following centuries, most people sided with Descartes, as his theory seemed more intuitive and elegant.
Scarcity of resources
But why does it matter? At the end of the day, the most important thing is that we are able to judge and assess ideas and beliefs correctly. One might argue that the intermediate steps leading to the result should not make any difference…
That would be the case if our brain had infinite resources.
However, research has since shown that this is far from the case. Countless examples demonstrate how a cognitive load on one task can prevent the brain from performing other tasks. As a result, it is perfectly possible for the process of belief formation to be interrupted while our brain is busy doing something else: following two conversations at the same time, for instance, or being distracted by an attractive member of the opposite sex.
When the belief formation process is interrupted, the belief stays “as is” in our psyche. It will likely never be examined again: we believe we have already done the work of assessing it.
Scarcity of mental resources has different consequences depending on whether Descartes or Spinoza is right.
If Descartes’ thesis is correct, then cognitive load would interrupt the process during the assessment phase. A wrong assessment could then lead to either outcome:
- believing a false statement
- rejecting a true statement
On the other hand, if Spinoza is right, then the interruption happens during the second phase: updating our belief. We are already at a step where the idea is accepted. Therefore a faulty process would keep true statements accepted, but keep false statements accepted as well.
In other words, according to Spinoza, being cognitively busy increases the chances of believing everything you’re told.
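The contrast can be made concrete with a toy sketch. This is purely illustrative: the function names and the stand-in `assess` check are mine, not anything from Gilbert's work. It models the two pipelines and shows what an interruption leaves behind in each.

```python
def descartes(idea, interrupted=False):
    # 1. The intellect passively comprehends the idea.
    comprehended = idea
    # 2. The will actively assesses it; under cognitive load,
    #    this step may never run.
    if interrupted:
        return "suspended"  # judgement reserved, no belief formed
    return "accepted" if assess(comprehended) else "rejected"

def spinoza(idea, interrupted=False):
    # 1. Comprehending the idea IS accepting it.
    belief = "accepted"
    # 2. Only afterwards may we reassess; under cognitive load,
    #    this step may never run.
    if interrupted:
        return belief  # the idea stays believed, true or not
    return "certified" if assess(idea) else "unaccepted"

def assess(idea):
    # Stand-in for whatever checking the mind actually performs.
    return idea == "this elephant is gray"

for model in (descartes, spinoza):
    print(model.__name__, model("this elephant is pink", interrupted=True))
```

Interrupt Descartes' pipeline and judgement is merely suspended; interrupt Spinoza's and a false statement remains believed.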
Enter the psychologist Daniel Gilbert. Analyzing previous, unrelated experiments, Gilbert noticed that subjects tended to display excessive credulity when distracted. Aware of the dichotomy between Descartes and Spinoza, he set out to figure out how beliefs are created in our psyche.
In the paper You can’t not believe everything you read, Gilbert, Tafarodi and Malone describe the clever psychological experiment they designed.
- In the experiment, subjects had to read a pair of crime reports on a screen.
- The color of the text on the screen indicated whether a particular statement in the reports was true or false.
- The first report contained false statements exacerbating the gravity of the crime.
- The second report contained false statements decreasing the severity of the crime.
- Subjects were divided in two groups: the control group and the experimental group.
- Members of the control group had only to read the statements of the two crime reports.
- Members of the experimental group had to read all statements of the reports, while also performing a digit-search task.
At the end of the experiment, every subject had to perform the following tasks:
- recommend the length of the prison term for each criminal.
- rate the criminals, from 1 to 10, on their dangerousness and dislikeability.
- take a memory test, indicating whether statements contained in the reports were true or false.
The digit-search task’s sole purpose was to put the experimental group under cognitive load. The experiment would test not only whether people were more likely to misjudge false statements than true ones (assessed by the final memory test), but also whether they would act on these beliefs.
The results are remarkable.
Recommended years in prison:

| | Extenuating false statements | Exacerbating false statements | Difference |
|---|---|---|---|
| Control group | 6.03 | 7.03 | -1.00 |
| Experimental group | 5.83 | 11.15 | -5.32 |
Memory test - proportion of statements recognized:

True statements:

| | Control Group | Experimental Group | Difference |
|---|---|---|---|
| Recalled as true | 0.93 | 0.89 | 0.04 |
| Recalled as false | 0.03 | 0.06 | -0.03 |
| Recalled as foils (*) | 0.03 | 0.04 | -0.01 |

False statements:

| | Control Group | Experimental Group | Difference |
|---|---|---|---|
| Recalled as true | 0.23 | 0.44 | -0.21 |
| Recalled as false | 0.69 | 0.34 | 0.35 |
| Recalled as foils | 0.09 | 0.23 | -0.14 |

Foil statements:

| | Control Group | Experimental Group | Difference |
|---|---|---|---|
| Recalled as true | 0.05 | 0.09 | -0.04 |
| Recalled as false | 0.01 | 0.05 | -0.04 |
| Recalled as foils | 0.94 | 0.86 | 0.08 |

(*) Foil statements are statements that never appeared in the crime reports.
While true statements were in general not impacted, the experimental group made many mistakes, recognizing false statements as true. Not only that, but it influenced their behavior as well. At the extreme, when the false statements exacerbated the crime’s gravity, subjects in the experimental group were willing to recommend a prison term more than 50% longer than the control group’s (11.15 years vs 7.03).
The work of Gilbert, Tafarodi and Malone vindicated Spinoza’s theory 400 years after it was formulated. First we believe an idea; then, maybe, we reconsider it, and confirm or unaccept it.
Evolution of belief formation
But why is it so? Why would a belief-forming mechanism be built this way? The software developer in me would prefer a system obeying Descartes’ description. At least, with Descartes, every single part of the system has a single responsibility.
Descartes’ vision would make sense if we had to build such a system from scratch. However, from a biological point of view, this system surely evolved over time. In the early days of our species, our ancestors did not need a belief mechanism as complex as the one we currently have.
For most of our history, the signals our brain received did not need to be checked. We are mainly visual creatures, and the images coming through our eyes are very difficult to fake. In those times, having a doubting mechanism was even a competitive disadvantage. Imagine two persons, Bill and Bob, in the African savannah 100’000 years ago. Bill trusts everything coming to his brain, while Bob checks his beliefs. Now imagine that both see a lion coming towards them. Bill will immediately run away. Bob, on the other hand, will take some time to confirm that the lion is coming. From this point of view, it is not hard to see that, given enough generations, Bill’s offspring will outnumber Bob’s. Bob’s lineage will quickly exit the gene pool.
However, around 70’000 years ago, Homo sapiens underwent what Yuval Harari calls the cognitive revolution. Humans started to use more elaborate languages, and their brains became able to understand abstract concepts and discuss them. While the day-to-day physical world is hard to fake, the world of ideas is a completely different animal. In this new environment, someone who trusts everything he is told runs into many dangers. In order to survive, Homo sapiens had to evolve a doubting mechanism on top of its existing trusting one. The resulting belief-formation system closely resembles what Spinoza described in his Ethics.
The practical lessons we can draw from this are numerous. Independently of how our brain is wired, it should first strike you that doing any kind of knowledge work while already under cognitive load is an appalling idea. Avoid such situations and you will do yourself a great service.
- Avoid multi-tasking. This is the plague of knowledge workers.
- Avoid states of physical or emotional stress, and chronic fatigue.
- When making a decision, avoid rushing or acting under a sense of urgency.
- Be wary of information overload: being unable to discern what is relevant from what is not is a guaranteed way to make poor decisions.
Following the guidelines above will limit how often your mind tricks you, but it will not eliminate it. Knowing that in such cases you are bound to believe everything you’re told, be very wary of the following situations:
- Fact-check the news. In fact, avoid the news entirely if you can. The news business model is to substitute sensationalism and drama for relevance. But don’t take my word for it: Rolf Dobelli wrote a wonderful essay on the topic.
- If you still want to follow the news, don’t do it while distracted: listening to the radio while driving, or skimming newspaper headlines while doing groceries or riding the train.
- The success of tabloids and scandal magazines now makes a lot of sense: we rarely read them while focused, and we are bound to believe the whole emotional rollercoaster they contain. A few weeks later the tabloid may publish a retraction, but the damage is already done: people already believe the emotion-laden stories and will not reassess them.
- In the same vein, we can now understand why smear campaigns and defamation are so effective: by the time a court has condemned the defamatory side, public opinion has already been shaped by the false statements it received. People will still believe the lies long after the truth is revealed.
Our brain has limited horsepower. Knowing about its limitations is a necessary first step. But knowing when we are likely to be deceived, and steering clear of those situations, is the beginning of avoiding stupidity.