Rational Deference and the Epistemic Shortcomings of "Thinking for Yourself"

Alex Pinheiro | April 2021

Rodin's "The Thinker" in a vintage postcard image of the Legion of Honor Museum in San Francisco, CA

On a scale from 0% to 100%, how strongly do you believe each of the following sentences?

(a) All matter is made of tiny particles called atoms.

(b) Earth is roughly 4.5 billion years old.

(c) Speciation occurs by Darwinian evolution.

(d) Diseases are caused by microorganisms known as germs.

I’m very sure of all of them – I have at least 97% credence in each – and I’m in good company. But despite my high confidence, it’s not clear I have strong evidence for any of (a) through (d). I’m a layperson: the body of experimental evidence for those claims is unknown to me, too technical, or simply impossible for me to obtain. I don’t own any of the lab equipment. If I did, I couldn’t operate it. Even if I remembered the (incomplete) evidence high school taught me, I’d be in no position to critically evaluate it. In almost all cases, that evaluation requires specialized expert knowledge and technical training (like completing a PhD).

After honestly considering my reasons for believing claims like (a) through (d), it’s clear I don’t understand those reasons, or why they’re strong evidence. Indeed, everyone holds many more beliefs than those they can be expected to justify themselves. But I nonetheless go on believing (a) through (d). Is my confidence level impermissible given the apparent weakness of my reasons for belief? In other words, are those beliefs irrational?

 

Is Deference Rational?

The natural response is “Okay, maybe I don’t have strong enough reasons to justify my confidence in (a) through (d). What I do have are reasons to believe that someone else – someone whom I recognize as an expert – has strong enough reasons to do so. And on that basis my belief is rational”.

Philosopher John Hardwig thinks that’s the right answer. But it’s not obvious that reasoning this way is allowed, epistemically speaking (“epistemic” and “epistemological” are fancy words philosophers use -- they just mean relating to knowledge and its justification). Consider a more abstract case. Imagine two people, Hardwig says, Person A and Person B. Person A has reasons to believe it’s raining outside (a proposition hereafter represented by the letter P), but Person B does not. What B does know is that A possesses reasons to believe P (maybe A told B this, and B has known A to be a trustworthy, proficient reasoner). B herself has no clue what A’s reasons are. But on the basis of her mere knowledge that A possesses good reasons, B concludes she is justified in believing P – that it’s raining. Can B justify her belief vicariously this way?

 

Hardwig says yes, but for reasons which are bafflingly incomplete. His argument is as follows (I’m paraphrasing but, astonishingly, this isn’t a strawman):

  1. Everyone possesses more beliefs than they can adequately justify themselves -- that is, without appealing to anyone’s testimony.

  2. So if B’s type of evidence (namely, that there exists someone who has reason to believe P) didn’t provide reason for B to believe P, then a large fraction of our beliefs would be unjustified.

  3. But it’s not the case that a large fraction of our beliefs are unjustified (by Hardwig’s intuition).

  4. Furthermore, if B’s type of evidence didn’t provide reason for B to believe P, then as society’s knowledge grows through scientific progress, the number of unjustified beliefs would grow rather than shrink.

  5. But that’s not true: as society’s knowledge grows, it couldn’t be the case that the people who compose that society become more irrational (by Hardwig’s intuition).

  6. Conclusion: B’s evidence -- that there exists someone who has reason to believe P -- does provide justification for believing P.

Personally, I find nothing counterintuitive in the idea that a significant fraction of our beliefs are irrational (humans are notoriously poor reasoners). If we want to say that appealing to experts can rationally justify our beliefs, a more fruitful way to do it is to invoke Bayesian reasoning (the strategy of philosopher Alvin Goldman and others). Bayesianism says that evidence provides probabilistic support for a hypothesis. More formally put:

 

  1. B’s ‘evidence’ – the fact that some person A possesses reasons for believing P – is more likely in a world where P is true than in a world where P is false.

  2. Therefore, when B learns the evidence, she should update her beliefs by increasing her personal confidence in P (by an amount mathematically specified by how much more likely the evidence is if the hypothesis is true).
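
The parenthetical in point 2 is just the standard odds form of Bayes’ theorem. Writing E for B’s evidence, the posterior odds on P equal the prior odds multiplied by the likelihood ratio of E:

$$
\frac{\Pr(P \mid E)}{\Pr(\neg P \mid E)} \;=\; \frac{\Pr(E \mid P)}{\Pr(E \mid \neg P)} \times \frac{\Pr(P)}{\Pr(\neg P)}
$$

Whenever the likelihood ratio exceeds 1 – that is, whenever E is more probable if P is true than if P is false, which is what point 1 asserts – the posterior odds on P exceed the prior odds. That is exactly the update point 2 demands, and the size of the likelihood ratio fixes the size of the update.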

* * * * 

That’s all well and good. But the kind of evidence for P that B relies on above exhibits an irregular property: the knowledge that A has reasons to believe P is not itself a fact that counts in favor of the truth of P.

To see why, consider some facts that would provide evidence for P (again, that it’s raining right now).

 

  1. The weather forecast predicted rain today.

  2. It’s dark out but it’s only 4:00 pm.

  3. Your roommate just walked into your living room carrying an umbrella.

  4. His clothes are wet.

  5. You hear a distinct pitter-patter noise on your roof.

These five facts share an important property. Knowing each of (1) through (5) independently counts towards the truth of P. In other words, your confidence in P should grow with each new fact you learn. If you only knew (1) and (2), for example, you would be less confident in P than if you knew all five. 
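
To see the accumulation numerically, suppose – purely for illustration; these numbers are invented, not drawn from Hardwig or from any data – that your prior odds on P are 1:1 and that each of facts (1) through (5), written $F_1, \dots, F_5$, is three times more likely if P is true than if P is false. Treating the facts as independent pieces of evidence, each one multiplies your odds by 3:

$$
\frac{\Pr(P \mid F_1, \dots, F_5)}{\Pr(\neg P \mid F_1, \dots, F_5)} \;=\; 1 \times 3^5 \;=\; 243
$$

Odds of 243:1 correspond to roughly 99.6% confidence, whereas updating on (1) and (2) alone gives odds of 9:1, or 90% – confidence grows with each new fact, just as the intuition says. (The calculation assumes the facts bear on P independently, which is an idealization.)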

So, what about this one?

 

  6. There exists some person, A, who knows facts (1) – (5).
     

This is the knowledge possessed by B in the thought experiment. (6) differs from (1) through (5) in two related ways:

  1. If you already knew the first five facts, learning this 6th fact wouldn’t make you more confident in P than you already were.
     

  2. Imagine you’re a lawyer, trying to persuade a hypothetical judge of P. You want to construct the strongest possible case, so you gather all the evidence in the world for P. What you have before you now is a list of facts, all of which provide reasons to believe P. Intuitively, as the number of unique facts in the world which provide reason to believe P increases, the maximum possible strength of your case for P should also increase. But imagine a world where facts (1) – (5) obtained but (6) did not. That would be a world in which you could construct just as strong a case for P as in the world where all six obtained.

 

To my knowledge, Hardwig neglects to address these irregularities. But the Bayesian approach (that was the probabilistic argument for the rational permissibility of B’s reasoning which I mentioned as an alternative to Hardwig’s) defines evidence in a way that more-or-less sidesteps this oddity. For a Bayesian, ‘evidence’ for P is a subjective thing. It’s defined as ‘anything which, given someone’s current knowledge, should increase their personal confidence in P after the evidence is learned’. That permits (6) to be evidence for P only in cases where (1) through (5) are not known by the reasoner.    
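
This ‘screening off’ can be put in symbols (a restatement of the subjective definition just given, not anything Hardwig or Goldman spells out). Write $F_1, \dots, F_6$ for the six facts above. For a reasoner who already knows (1) through (5):

$$
\Pr(P \mid F_1, \dots, F_5, F_6) \;\approx\; \Pr(P \mid F_1, \dots, F_5)
$$

Learning (6) produces no update, so it is not evidence for that reasoner. But for someone like B, who knows none of (1) through (5), $\Pr(P \mid F_6) > \Pr(P)$: the very same fact raises her confidence and therefore does count as evidence for her.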

The Irrationality of “Thinking for Yourself” 

So deferring to the authority of experts is part of the rationality playbook. When and how often should we do it? Hardwig convincingly argues that it’s rationally mandatory for laypersons in all cases (except those in which the layperson is themself an expert), maintaining that ‘thinking for oneself’ is less rational than uncritically deferring. 

Simply put, laypersons are epistemically helpless against experts. Consider an imaginary astronomer who informs a layperson that ‘there’s a supermassive black hole at the center of the Milky Way’ (this proposition hereafter represented by the letter Q). The astronomer then utters something-or-other about Sagittarius A*, elliptical orbits, vector calculus, and pericenters -- whatever all that means.

The layperson is in no position to understand the astronomer’s reasons for believing Q, or why they are good reasons. Nor are they in any position to independently verify Q – that would require expensive, hard-to-use telescopic equipment and dense astronomical math. The layperson could check an astronomy textbook, but that would be to consult another expert. Suppose the textbook disagreed with our imaginary astronomer – what then? Checking the claim of one expert against the testimony of other experts doesn’t restore the layperson’s epistemic autonomy. On this view, it only grows the set of people to whom they are rationally required to defer.

The upshot is that the layperson is ill-equipped to mount rational disagreement against the expert’s testimony. That is, it’s rationally impermissible for you to challenge expert authority on topics in which you possess zero (or lesser) expertise. If the layperson believes the expert really is an expert with regard to astronomy, it follows that the only epistemically rational move is to defer unquestioningly to the expert’s beliefs. The layperson is, unfortunately, in no position to do anything else.

* * * * 

Maybe the layperson could lob ad hominem attacks against the expert. “You’re a lousy astronomer – why should I listen to you?” Maybe. But how lousy? He’s probably still a less lousy astronomer than the layperson is. And how could someone with layperson-level expertise in astronomy ever justifiably know that? By deferring to other experts? Ad hominems are only arguments about who we should count as experts. Indeed, there are interesting philosophical problems about how to identify experts given limited lay knowledge. I’m not going to address those questions any further. The fact remains: we recognize at least some people as experts, so the question of the rationally optimal epistemic relationship to those people persists.

Hardwig continues. Could anything other than ad hominem arguments restore the layperson’s epistemic autonomy? Maybe, if he tries really, really hard, he could obtain expertise that rivals the astronomer’s. That would take years of education and significant specialized training. Should he succeed, he might regain epistemic autonomy regarding whether there’s a black hole at the center of our galaxy. But victory would be short-lived, since there remains an impossibly large set of topics on which the layperson is incompetent (economics, biology, philosophy, physics, sociology, to name a few). At best, laypersons may become epistemically independent of a small subset of experts – by becoming one themselves. But deference on almost all topics would remain rationally mandatory.

Alternatively, if the layperson renounced his epistemic dependence on experts entirely, the result would be an enormous number of unjustified beliefs. Pursuing “epistemic autonomy across the board”, as Hardwig puts it, results in “holding relatively uninformed, unreliable, crude, untested, and therefore irrational beliefs.”

* * * * 

From this it follows, for Hardwig, that ‘thinking for yourself’ at the expense of deferring to expertise is irrational in all cases except those in which you yourself are an expert. (“Expert” being broadly construed: you are the world’s foremost expert on how good your breakfast tasted this morning.) Call this conclusion the Authoritarian Theory of Knowledge, or ATK.

This is a radical view. To illustrate how it differs from our common-sense notion of justification it’s helpful to work through an example. Take a confident belief of yours (say, in the truth of Darwinian evolution). In common discourse, if someone asked why you hold this belief, you might reply:

“I believe evolution for a few reasons – first, the fossil record shows evolutionary descent with modification. Second, modern genetic science confirms the existence of random mutations. Third, it’s been observed in labs in microorganisms…” and so on.

According to ATK, that sentence reflects errant thinking. Taken alone, the reasons you gave can justify only a minuscule portion of what rationally permits your high level of confidence in evolution. What you meant is that you believe in evolution because there is a trustworthy consensus of experts who themselves have reasons to be confident in evolution. The reasons they tend to give when publicly communicating their beliefs involve fossils, modern genetics, and lab experiments, sure, but they also involve much more. They’ve also, crucially, encountered (and rejected) evidence against evolution about which you probably know nothing. The fact that expertise unanimously endorses evolution is where your confidence comes from: what you know about these things called fossils and genes, stripped of the force of unanimous expert testimony, is evidence far too weak to justify the strength of your belief in evolution. Truthfully, you barely even understand that stuff.

If you want to investigate some topic in which you’re not an expert, it’s rational to place utmost importance on finding what the experts believe. You can pay almost no attention to why. Only read about that if you’re interested. And definitely don’t allow that reading to sway you from the conclusion you initially drew. That would be thinking for yourself. 

The Consequences

The ATK has some counterintuitive implications. First, common sense tells us that rationality requires thinking, reasoning, and reflecting for oneself. We generally think being uncritical is an epistemic vice – not something that’s rationally required in almost all cases. Is it true that we ought to do a lot less thinking for ourselves if we want to be more rational? Apparently so. Appealing to authority even has a fallacy named after it. Honestly, that was a mistake from the outset. Reliable testimony from an authority can provide strong inductive evidence. But ATK implies something stronger: the fallacy should be flipped. We ought instead to teach our middle school English students to avoid the Thinking for Yourself Fallacy.

Second, it implies that a core feature of deliberative democracy is misguided. In theory, citizens are civically obligated to seek knowledge, debate the merits of policies with their peers, and come to a consensus on how the government should act. The democratic process is designed to reflect people’s beliefs in addition to their values. But few of us are experts with respect to government policy. What do we know about foreign affairs, healthcare, taxes, monetary policy, and immigration? The government, and the world, are really complicated. Not even the professionals who spend their whole careers studying those topics reach consensus. How can we expect a lay citizenry to fruitfully communicate their preferences by voting on topics so epistemically inaccessible to them? It appears that what’s epistemically impermissible is civically mandatory, dooming deliberative democracy to rule by the irrational.

Maybe that conclusion is too extreme. ATK dictates that the lay citizenry must uncritically defer to experts on almost all political controversies, but maybe there’s still some utility in a population deliberating over expert belief. If one candidate supports a $15 minimum wage and the other opposes it, the population is rationally obligated to find out, uncritically, what the experts think about its effects and then decide how to vote using their values (relevant ones in this case might be fairness for the poor and a healthy economy).

Yet even this implies that almost all political discourse, as currently practiced, is irrational, since very few voters form political beliefs exclusively by inquiring into what the experts believe. It suggests further that the process of citizen deliberation is a clumsy middleman in the democratic process -- one whose primary function is to interpret the opinions of a tiny expert class -- rather than a core feature of a political system meant to reflect the views of the population. It reduces the process of civic deliberation to an annoying inefficiency.

But the fact that we might not like those consequences is hardly a reason to think Hardwig is wrong.

And ATK has some palatable consequences too. For example, a rational requirement to defer captures our intuition that we should (usually) dismiss conspiracy theories. Most of us aren’t informed enough to refute every argument a well-read anti-vaxxer could throw at us. Instead, we defer to the experts – who unanimously disagree with them.

A Balanced Rationality

What does this mean for us as laypersons? Are we to unquestioningly accept the pronouncements of “experts” in a political climate where the very location of expertise is constantly disputed? Perhaps the ATK risks entrenching an “upper echelon” of technocratic social planners, inviting neglect of our constitutional emphasis on individual liberties. There’s a danger to these ideas. Permissively interpreted, does Hardwig’s view imply we should restrict voting rights to the “expert class”? Must we grant access to our political institutions only to the “qualified”, or to those with an IQ above a certain minimum?

There’s a line to draw between mindless deference to expertise and thorough consideration of that expertise’s motivations and financial interests. This is roughly the view of philosopher Steve Fuller, who famously disagreed with Hardwig. Laypersons must rationally think for themselves, he says, both to identify experts and to determine whether expert testimony is relevant to their own goals. Too often, outside the closed room of theoretical epistemology and in the open air of reality, the problem of identifying experts is muddied by a crisscrossing network of political and financial interests whose goals might diverge from conveying the truth, especially in social science research.

And perhaps pure rationality and technocratic efficiency are not the only objectives of our political and legal system. Democracies work not because each individual believes rationally, but because their institutions produce good results for the citizens they are intended to serve. Maybe the application of abstract epistemological findings is simply incompatible with the democratic legal order.

And maybe that’s okay.


Alex Pinheiro is a sophomore at the University of Michigan studying Philosophy and Cognitive Science. His interests include rationality, existential risk, minds, and more, which he may choose to study in graduate school. Outside of MC he obsesses over learning and playing online poker.