Put yourself in the shoes of a student, someone who wants to learn, and imagine a perfect world. In this reality, writers and scientists take care to summarize alternative theories accurately, so when you compare and contrast the alternatives you are guaranteed a faithful representation of each. One of the great benefits of such a world is the time saved: students wouldn't need to read multiple sources to judge the veracity of each one individually. Unfortunately, such a world doesn't exist. In our reality, most people know their opponents' positions much less well than their own, and we cannot count on a faithful representation of alternative theories. If we aren't skeptical, we fall into the trap of rejecting one or more theories because an inaccurate representation misled us when judging between the alternatives.
We know that we don't live in a perfect world, because there are constant complaints of misrepresentation and misinterpretation. Consider, for instance, the recent controversy over Paul Krugman's post on unemployment insurance and demand-side recessions. Suppose that you, the student, never read The Conscience of a Liberal, because you sympathize with the Austrians (or whoever) and find reading Krugman a waste of time (your time is better spent reading other blogs). Instead, you read Bob Murphy, Russ Roberts, and David Henderson. Do they give you a good representation of Krugman's argument? Not according to Scott Sumner, Daniel Kuehn, and Chris Dillow. My intention isn't to weigh in on who's right. Rather, I just want to point out that there are diverging interpretations of Krugman's post; and, if the student were to read only one blog (that's not Krugman's), they run the risk (in this example, 50 percent probability, right?) of misinforming themselves about Krugman's argument.
These considerations (in general, not the example above) are what prompted Bryan Caplan to propose an ideological Turing test (ITT). Presumably, failing an ITT leads to a loss in reputation. The ITT is a screening device, meant to weed out economists who do not do their due diligence when researching and representing their intellectual opponents. The problems with the ITT are: (a) it can be easy to pass if those being tested can deliver canned responses; and (b) an objective panel of judges can be difficult to construct.
Another problem is that it may not be rational to want to pass an ideological Turing test. Assume that the ultimate end of research is to showcase a persuasive explanation of some phenomenon x; in other words, the benefits of your research are a function only of how persuasive your theory is. Suppose there are two cost curves that look something like this (excuse my [lack of] drawing skills):
The more time you spend on other economists' research, especially research you disagree with, the more accurate your representation of their arguments, and the lower the costs associated with misrepresentation. This is because, if your readers catch you misrepresenting the opposition, they have reason to doubt your conclusions in general. However, it's also true that the more time you spend on others' research, the less time you spend on your own, and the weaker your own argument will be. The combined cost curve looks something like this (warning, this graph is even more poorly drawn than the last two):

Because the researcher has to balance these costs, it's reasonable to expect that she will not necessarily take the time required to master another person's theory. Instead, she will find a point where the benefit, on the margin, of focusing on her own research is just equal to the cost associated with misinterpreting the other argument. In terms of the combined cost curve illustrated above, the researcher will choose a distribution of time where total costs are minimized (roughly, where the dotted line is). Any other distribution of time will lead to sub-optimal research, and that is definitely not good for the student. In other words, students and readers shouldn't expect scientists to pass an ideological Turing test.
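The tradeoff above can be put in toy numerical terms. In this sketch, the functional forms, the ten-hour time budget, and the grid search are all my own illustrative assumptions, not anything from the post: misrepresentation costs fall with time spent studying others' work, own-research costs rise as that time crowds out your own, and the researcher picks the split that minimizes the sum.

```python
# Toy model of the time-allocation tradeoff. All functional forms and the
# total_time budget are illustrative assumptions, not the author's curves.

TOTAL_TIME = 10.0  # hypothetical hours to split between tasks

def misrepresentation_cost(t_others):
    """Cost of getting opponents' arguments wrong; falls as you study them more."""
    return 10.0 / (1.0 + t_others)

def weak_argument_cost(t_others):
    """Cost of a weaker own argument; rises as time for your own work shrinks."""
    t_own = TOTAL_TIME - t_others
    return 10.0 / (1.0 + t_own)

def combined_cost(t_others):
    """The combined cost curve: the researcher minimizes this sum."""
    return misrepresentation_cost(t_others) + weak_argument_cost(t_others)

# Search a grid of time splits for the cost-minimizing allocation
# (the "dotted line" in the post's combined-cost graph).
grid = [i / 10.0 for i in range(101)]
best_split = min(grid, key=combined_cost)
```

With these symmetric toy curves the minimum lands at an even split, but the point of the model is only that the optimum is interior: some misrepresentation risk is rationally accepted.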
The costs of being misled can still be very high for the student, though, and there is an interest in reducing them. But if we can't expect to keep researchers/economists/scientists/bloggers "honest" (if we expect them to be rationally ignorant), we have to develop alternative means of monitoring the quality of their critiques.
The best method, in my opinion, is to be a skeptical reader. Knowing that the amount of time people spend on learning the other side's argument will probably be inadequate, we should always assume that the accuracy of the representation is less than one. We can take a Bayesian approach and assign each blogger/scientist/researcher a probability of accuracy (all of this in the context of getting the other side right). Based on this probability, we choose whether to corroborate their representation by actually reading the other side's description of its own argument. If we find that the argument was misrepresented or misinterpreted, we adjust that assigned probability downwards; if we find that the argument was well represented, we adjust it upwards. Time is scarce, so the higher the probability of accuracy, the less impulse you will have to corroborate (although initially, when you've looked at zero evidence, you should probably corroborate no matter what your gut tells you, even if you really, really like Paul Krugman's stuff).
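The updating rule described above can be sketched with simple Beta-distribution bookkeeping. The uniform prior, the 0.8 trust threshold, and the minimum number of checks are my own illustrative assumptions; the post only specifies "adjust up on accurate, down on inaccurate."

```python
# A minimal sketch of the Bayesian bookkeeping described above.
# The Beta(1, 1) prior and the thresholds are illustrative assumptions.

class SourceTracker:
    """Track an estimated probability that a source represents opponents accurately."""

    def __init__(self, prior_accurate=1, prior_inaccurate=1):
        # Uniform Beta(1, 1) prior: with zero evidence, the estimate starts at 0.5.
        self.accurate = prior_accurate
        self.inaccurate = prior_inaccurate

    def record(self, was_accurate):
        """After corroborating a post against the original, update the counts."""
        if was_accurate:
            self.accurate += 1
        else:
            self.inaccurate += 1

    def accuracy(self):
        """Posterior mean of the source's probability of accuracy."""
        return self.accurate / (self.accurate + self.inaccurate)

    def should_corroborate(self, threshold=0.8, min_checks=3):
        """Keep checking until the track record is both long enough and strong enough."""
        observations = self.accurate + self.inaccurate - 2  # evidence beyond the prior
        return observations < min_checks or self.accuracy() < threshold

# Hypothetical track record: three accurate representations, one misrepresentation.
blogger = SourceTracker()
for outcome in [True, True, False, True]:
    blogger.record(outcome)
```

The design mirrors the argument: with zero evidence you corroborate no matter what, and the impulse to check only fades once a source has earned a consistently high accuracy estimate.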
What I'm saying amounts to this: the reader should invest in precaution when reading what's out there. If it doesn't make sense to expect the people we're reading to invest the time necessary to study the arguments they're criticizing (or trying to replace), because that would decrease the quality of their output, then the reader should take responsibility. The reader should be the judge in the ideological Turing test. There are ways to make this easier on the reader (blogs that specialize in "smackdowns," for example), but their value stems directly from the reader's interest in corroborating what they read. And the various means of keeping others honest are subject to their own biases and misinterpretations. The reader must know to be a skeptic.
Instead of spending time arguing about how much effort bloggers should put into being "honest," we could spend the same amount of time persuading readers to be skeptical (and to keep an open mind). It should always be assumed that the accuracy of an argument is less than one; that is, there is some probability greater than zero that the argument being read is wrong. It behooves the reader to check this possibility. We are all interested in finding "the truth," but "the truth" is largely something we have to find for ourselves. We don't necessarily need to do the research ourselves, but we do have to compare and contrast different arguments, evidence, and conclusions. Not being skeptical, and not seeking to corroborate what you read, will undermine your progress as a student (as someone looking to learn). Of course, sellers in the market for ideas should always be honest, but honesty doesn't always cut it; there are other sources of ignorance and misinterpretation, and it doesn't always pay to eliminate them entirely. The reader should always make precautionary investments, which means being a skeptic.