Simon Shim @simonshim
In "Socially Adaptive Belief" and "Motivated Ignorance, Rationality, and Democratic Politics", both published in 2020, philosopher Daniel Williams, a Research Fellow at Cambridge University and an Associate Fellow at the Leverhulme Centre for the Future of Intelligence, discusses how our social environment influences which beliefs we adopt. Here we talk with him about socially adaptive beliefs, signaling, motivated ignorance, and why it is often so difficult to debate with climate change-skeptics.
What are socially adaptive beliefs?
Roughly, they are beliefs that we form because of their effects on other people. Such beliefs therefore adapt us to our social environments. By contrast, most of our beliefs are what might be called “representational beliefs”: we form these beliefs in order to represent the world accurately – or at least those parts of the world relevant to our goals. Of course, distinguishing between beliefs in this way is an idealization. Any given belief will have multiple interacting causes. For example, a given person’s political beliefs will no doubt be formed as a consequence of myriad interacting factors: the evidence to which they’ve been exposed, the people they have learned to trust, and the way in which others – especially people in the valued communities to which they belong – respond to them. The concept of socially adaptive beliefs nevertheless helpfully draws attention to an influence on belief formation that is often neglected in philosophy, psychology, and ordinary life: namely, how other people respond to our beliefs.
There is an interesting objection that you discuss, which claims that it would be more advantageous for us to simply pretend to adopt certain beliefs, rather than to really adopt these beliefs, if what we mainly care about is whether others think that we believe something. Why do you find this objection ultimately unpersuasive?
Of course, in many circumstances it is more advantageous to simply pretend to hold socially adaptive beliefs. Such conscious deception brings its own costs, however. For example, maintaining a gulf between one’s private beliefs and one’s professed beliefs can require substantial energy, and it can result in punishment if it is discovered. Some respond to such considerations by arguing that humans are bad at detecting deception, concluding that these costs are not very high. In fact, I think we lack good evidence on how good people are at detecting deception in natural conditions, and in any case the question is not how frequently deception is detected, but the *expected costs* of deception, which can of course be high even if the frequency of detection is low. Given this, I think it is at least plausible that there are some contexts in which the costs of deceiving others about one’s beliefs outweigh the costs of genuine socially adaptive belief.
More importantly, I think that this objection overlooks another crucial feature of the relevant cost/benefit analysis. As I point out in the paper, there are some contexts in which there are very few – if any – benefits to holding accurate beliefs. In politics, for example, there are no practical costs to holding false beliefs. Because we (i.e. ordinary citizens) have a negligible impact on political decision-making, we can pretty much believe whatever we want without such false beliefs influencing political outcomes. In these conditions, we gain nothing by maintaining a gulf between private (but reasonable) beliefs and publicly professed beliefs.
Of course, this kind of cost/benefit analysis does not get you very far. Whether socially adaptive beliefs exist is ultimately an empirical question. I argue in ‘Socially Adaptive Belief’ that there are some compelling reasons for thinking that they do, but I also think that this topic warrants more empirical research in the future.
You have argued that there are certain characteristics of human social life that give us incentives to abandon epistemic rationality. What are these characteristics?
Different characteristics are relevant in different cases. For example, humans are status-seeking creatures constantly involved in evaluating and being evaluated by other agents. In this context, we benefit from being able to persuade other agents of our social value (i.e. our intelligence, virtue, skills, etc.). Given this, it can be in our interests to develop unfounded beliefs about our social value if this helps us persuade other agents of that value more effectively. In addition, human beings are a profoundly coalitional (or “tribal”) species. Coalitions (factions, cliques, religions, political groups, etc.) place great value on commitment and loyalty, and very often the incentives to advertise one’s loyalty to an in-group come into conflict with sound reasoning and dispassionate judgment.
You have suggested that some of our beliefs might be fruitfully described as social signals. What does that mean?
The idea is not original to me. There is an excellent overview of this idea by Eric Funkhouser in his article ‘Beliefs as Signals’. Signals are objects with the function of communicating information. To say that beliefs qualify as social signals is therefore to say that certain beliefs have the function of conveying information to other agents. To have such a function, it is not enough that the belief *in fact* conveys information to other agents; rather, the relevant individual must have formed the belief at least in part *because* it conveys information to other agents. One question is how this causal influence operates. In my paper ‘Socially Adaptive Belief’, I argue that it happens through motivated cognition, but this is more of a placeholder for an explanation than an explanation.
Another question is what this signalling perspective adds to our understanding of belief formation. One idea that I am pursuing at the moment is that one can understand certain irrational beliefs in terms of the concept of costly signalling, where the working hypothesis is that the extreme irrationality of certain beliefs is a cost that renders them credible signals of in-group loyalty. Although this hypothesis has been advanced several times in the literature, it has not yet been developed in a theoretically or philosophically satisfying way.
In their recent book, The Elephant in the Brain, Kevin Simler and Robin Hanson argue that signaling is a ubiquitous phenomenon. In fact, in an interview Hanson has stated that "[i]n a rich society like ours", the prevalence of signaling might be "well over 90 percent". Do you agree with this? Is signaling so fundamental, or are other mechanisms at least equally important?
I agree that social signalling is a fundamental and often overlooked aspect of social life. People care desperately about what other people think of them. Much of human behaviour is therefore concerned with influencing this impression. It is difficult to see how to evaluate a number like 90%, however. People have many goals that interact in complex ways. Consider much of what human life is taken up with: family, friends, work, artistic appreciation, and so on. There is no doubt that social signalling features heavily in many of these activities, but so do a thousand other goals (bonding, care, empathy, pleasure, etc.). Given this, I think it is probably more accurate to say that social signalling is relevant to our understanding of (at least) 90% of human life than that 90% of human life is just social signalling.
What is motivated ignorance, and how is it different from what you call "acquisitional ignorance"?
Acquisitional ignorance is ignorance driven by the motivation to avoid acquiring a piece of knowledge because the anticipated costs of acquiring it outweigh its anticipated benefits. For example, I would like to learn theoretical physics, but the relatively meagre benefits I would attain from this knowledge – impressing people with my understanding of quantum field theory – are outweighed by the massive investment of time and energy it would take to acquire it. Motivated ignorance, by contrast, is ignorance driven by the anticipated costs of possessing a piece of knowledge. This can range from the trivial – for example, avoiding spoilers because you want to save a film or novel for a future time – to more serious cases – for example, avoiding a medical test because you are afraid of discovering a positive result.
Superficially, motivated ignorance sounds like a form of irrationality. However, you argue that this needn't be the case. When might such an attitude be rational?
Motivated ignorance is a form of epistemic irrationality, or at least it often is. This is consistent with its practical rationality, however. In my view, motivated ignorance is practically rational if one has good reason to believe that the costs of possessing a body of knowledge will in fact outweigh the benefits of possessing it. For example, a union leader might avoid polling a group of workers in order to improve her bargaining position. Although this means forgoing (potentially valuable) knowledge that she could obtain, it might also be practically rational if she has good reason to believe this ignorance will advance her strategic goals.
Does motivated ignorance require some kind of second-order awareness of the reasons why we might want to ignore certain information? For example, can I be motivated to ignore evidence for natural selection without knowing, for instance, that the truth of natural selection might undermine some of my religious beliefs?
This is a great question. One issue concerns the difference between conscious and unconscious knowledge. For example, it is common to say something like, “Deep down, Bob knows that his wife is having an affair.” In this case the individual might very well have a second-order awareness – albeit an unconscious one – of the costs involved in conscious awareness of this fact. In general, though, I think that the motivation to avoid a body of knowledge in motivated ignorance can be guided by sensitivity to a range of cues that never rises to the level of explicit second-order awareness (either conscious or unconscious) of this sort. Nevertheless, clarifying this phenomenon, and distinguishing it from cases in which people are merely inadvertently ignorant, is a topic for future research.
In contemporary democracies voters appear to ignore relevant facts about which experts and scientists have reached a certain consensus. It is often argued that this ignorance can be explained by appealing to the high costs of acquiring the relevant information. Why do you think this explanation is insufficient?
Although acquisitional ignorance clearly explains some voter ignorance (e.g. widespread ignorance of the difference between fiscal and monetary policy), it is inconsistent with many features of voter ignorance in other contexts – for example, the fact that voters are often systematically misinformed rather than merely uninformed, the fact that misinformed views are often held with substantial confidence and emotional conviction, and the fact that voters often expend an enormous amount of time and energy acquiring political information. The problem is therefore not that they do not invest time and energy in acquiring political information, but that much of this time and energy is invested in finding information that rationalizes beliefs that they are motivated to form for non-epistemic reasons. (This is a bit like trying to learn theoretical physics by spending lots of time studying Deepak Chopra). In addition to all of this, my ‘Motivated Ignorance’ paper also draws attention to another mysterious feature of voter ignorance in the political domain: when it comes to ignorance of, say, climate change, there is almost no correlation between beliefs about climate change and understanding of the science behind it. That is, many people who believe that climate change is a risk lack even rudimentary knowledge concerning how it works, and many people with extensive knowledge of the science surrounding climate change are nevertheless climate sceptics. This phenomenon is difficult to understand without appealing to motivational considerations.
What is identity-protective cognition, and why does it make it difficult for climate-change skeptics to change their minds?
At the most abstract level, identity-protective cognition is just treating information in ways that are intended to protect one’s identity rather than to arrive at the truth. Of course, this generates lots of questions, the most obvious of which are: What is identity? And why would identity come into conflict with truth? I think that we do not have satisfactory answers to these questions in psychology, philosophy, or social science. For example, much of the work on identity in social psychology (e.g. in “social identity theory”) just ends up redefining identity in jargon-heavy ways and then calling such redescriptions a theory. For this reason, one of my central aims in the coming months and years is to develop a theoretically and philosophically satisfying answer to these questions.