This is the first time in 18 years that Atlético Madrid leads La Liga.
Put yourself in the shoes of a student — someone who wants to learn — and imagine a perfect world. In that world, writers and scientists take care to sum up alternative theories accurately, so when you compare and contrast the alternatives you are guaranteed a faithful representation of each. One of the great benefits of such a world is the time saved: students wouldn’t need to read various sources and judge the veracity of each individually. Unfortunately, such a world doesn’t exist. In ours, most people know their opponents’ positions far less well than their own, and we cannot guarantee a faithful representation of alternative theories. If we aren’t skeptical, we fall into the trap of rejecting one or more theories because their inaccurate representation misleads us when we judge between alternatives.
We know that we don’t live in a perfect world, because there are constant complaints of misrepresentation and misinterpretation. Consider, for instance, the recent controversy over Paul Krugman’s post on unemployment insurance and demand-side recessions. Suppose that you, the student, never read The Conscience of a Liberal, because you sympathize with the Austrians (or whoever) and find reading Krugman a waste of time (your time is better spent reading other blogs). Instead, you read Bob Murphy, Russ Roberts, and David Henderson. Do they give you a good representation of Krugman’s argument? Not according to Scott Sumner, Daniel Kuehn, and Chris Dillow. My intention isn’t to weigh in on who’s right. Rather, I just want to point out that there are diverging interpretations of Krugman’s post; and, if the student were to read only one blog (that’s not Krugman’s), they run the risk (in this example, 50 percent probability, right?) of misinforming themselves about Krugman’s argument.
These considerations (in general, not the example above) are what prompted Bryan Caplan to propose an ideological Turing test (ITT). Presumably, failing an ITT leads to a loss in reputation. The ITT is a screening device, meant to help weed out economists who do not do their due diligence when researching and representing their intellectual opponents. The problems with the ITT are: (a) it can be easy to pass, if those being tested can deliver canned responses; and (b) an objective panel of judges can be difficult to construct.
Another problem is that it may not be rational to want to pass an ideological Turing test. Assume that the ultimate end of research is to showcase a persuasive explanation of some phenomenon x; in other words, the benefits of your research are only a function of how persuasive your theory is. Suppose there are two cost curves that look something like this (excuse my [lack of] drawing skills),
The more time you spend on other economists’ research, especially those you disagree with, the more accurate your representation of their arguments, and the lower the costs associated with misrepresentation. This is because, if your readers catch you misrepresenting the opposition, it will give them reason to doubt your conclusions in general. However, it’s also true that the more time you spend on others’ research, the less time you spend on your own, and the weaker your argument will be. The combined cost curve looks something like this (warning, this graph is even more poorly drawn than the last two),

Because the researcher has to balance these costs, it’s reasonable to expect that she will not necessarily take the time required to master another person’s theory. Instead, she will find a point where the benefit, on the margin, of focusing on her own research is just equal to the cost associated with misinterpreting the other argument. In terms of the combined cost curve illustrated above, the researcher will choose a distribution of time where costs are minimized (roughly, where the dotted line is). Any other distribution of time would lead to sub-optimal research, and that is definitely not good for the student. In other words, students/readers shouldn’t expect scientists to pass an ideological Turing test.
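The trade-off can be sketched numerically. The functional forms below are my own illustrative assumptions (a falling misrepresentation cost, a rising own-research cost), not anything from the post; the point is just that their sum has an interior minimum, the "dotted line" on the graph.

```python
# Toy sketch of the time-allocation trade-off: t is the share of time
# spent studying opponents' arguments. Functional forms are assumptions.

def misrepresentation_cost(t):
    """Falls as more time goes into understanding the other side."""
    return 1.0 / (t + 0.1)

def own_research_cost(t):
    """Rises as time on opponents crowds out your own research."""
    return 2.0 * t

def combined_cost(t):
    return misrepresentation_cost(t) + own_research_cost(t)

# Grid search for the cost-minimizing share of time.
shares = [i / 1000 for i in range(1, 1000)]
optimal_share = min(shares, key=combined_cost)
```

Under these assumed curves the minimum lands at an interior share well short of full mastery of the other side, which is the post's point: some misrepresentation risk is rationally tolerated.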
But, the costs of being misled can still be very high for the student, and there is an interest in reducing this cost. But, if we can’t expect to keep researchers/economists/scientists/bloggers “honest” (if we expect them to be rationally ignorant), we have to develop alternative means of monitoring the quality of their critiques.
The best method, in my opinion, is to be a skeptical reader. Knowing that the amount of time people spend learning the other side’s argument will probably be inadequate, we should always assume that the accuracy of the representation will be less than one. We can take a Bayesian approach and assign each blogger/scientist/researcher a probability of accuracy (all of this in the context of getting the other side right). Based on this probability, we choose whether to corroborate their representation by actually reading the other side’s description of their own argument. If we find that the argument was misrepresented or misinterpreted, we adjust the assigned probability downwards; if we find that the argument was well represented, we adjust it upwards. Time is scarce, so the higher the probability of accuracy, the less of an impulse you will have to corroborate (although, initially — when you’ve looked at zero evidence — you should probably corroborate no matter what your gut tells you [e.g. even if you really, really like Paul Krugman’s stuff]).
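The scoring scheme above can be made concrete with a minimal Beta-Bernoulli sketch. The function names and the uniform prior are my own illustrative assumptions; each corroboration check either confirms or disconfirms the blogger's representation, and the posterior mean tracks their accuracy.

```python
# Minimal Bayesian sketch of the reader's scoring scheme. A uniform
# Beta(1, 1) prior is assumed; names are illustrative only.

def record_check(successes, failures, faithful):
    """Log one corroboration check against the other side's own words."""
    return (successes + 1, failures) if faithful else (successes, failures + 1)

def accuracy_estimate(successes, failures, prior_a=1, prior_b=1):
    """Posterior mean of the Beta prior over the blogger's accuracy."""
    return (prior_a + successes) / (prior_a + prior_b + successes + failures)

# With zero evidence the estimate is 0.5, so corroborate regardless.
s, f = 0, 0
for faithful in [True, True, False]:  # two fair summaries, one misfire
    s, f = record_check(s, f, faithful)
# accuracy_estimate(s, f) is now (1 + 2) / (2 + 3) = 0.6
```

The higher the estimate climbs, the less often the reader needs to spend scarce time corroborating, which mirrors the trade-off in the text.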
What I’m saying amounts to this: the reader should invest in precaution when reading what’s out there. If it doesn’t make sense to expect the people we’re reading to invest the necessary amount of time to study the arguments they’re criticizing (or trying to replace), because that would amount to a decrease in the quality of their output, then the reader should take responsibility. The reader should be the judge in the ideological Turing test. There are ways to make this easier on the reader — blogs that specialize in “smackdowns,” for example — but their value stems directly from the reader’s interest in corroborating what they read. And the various means of keeping others honest are subject to their own biases and misinterpretations. The reader must know to be a skeptic.
Instead of spending time arguing about how much effort bloggers should put into being “honest,” we could spend the same amount of time persuading readers to be skeptical (and to open their minds). It should always be assumed that the probability that an argument is accurate is less than one. That means there is some probability greater than zero that the argument being read is wrong. It behooves the reader to check this possibility. We are all interested in finding “the truth,” but “the truth” is largely something we have to find for ourselves. We don’t necessarily need to do the research ourselves, but we do have to compare and contrast different arguments, evidence, and conclusions — not being skeptical, and not seeking to corroborate what you read, will undermine your progress as a student (as someone looking to learn). Of course, sellers in the market for ideas should always be honest, but honesty doesn’t always cut it; there are other sources of ignorance and misinterpretation, and it doesn’t always pay to eliminate them. The reader should always make precautionary investments, which means being a skeptic.
The economics page on Reddit is always good for keeping you up to date, and today you can find the following near the top,
What would you ask Jeffrey Sachs?
I’ve never participated in an “Ask Me Anything” session, but I have read a few (including, recently, one with Jerry Seinfeld). They seem to answer most questions, as long as the question is an educated one. So, if you have something you’d like to ask Sachs, I recommend logging on to Reddit tomorrow at noon EST.
TIME asks whether it can predict your political preferences. My score is 17% conservative, 73% liberal. I don’t necessarily think that’s wrong, but the questions that scored me as a conservative were,
My guess is that the basis for those questions is spurious correlation. Just like the “facts” that having a messy desk and liking fusion cuisine make me a liberal.
Originally, I wanted to wait to announce changes to the website until I had some concrete idea of when these changes were going to be made. Months later, I still have no idea. But, I’m making my goal for 2014 to get this website to where I want it to be.
Economicthought.net has been empty for as long as this blog has existed, which is just over two years now. When I first registered this domain and started the website, I wasn’t sure what I wanted to do with the main page. Then John S., who frequently comments on this blog, had an idea for a new website, and I asked him if I could use it. The idea is a Cato Unbound-type debate platform, but with some subtle differences (namely, that readers can vote for who they think made the strongest case, and there is some small monetary cost to voting).
My girlfriend is a graphic designer with a lot of experience in designing websites. So, I had her put something together for me (some of the links won’t be there in the final product, like “resume” — the main page isn’t about me, that’s what my blog is for),
It looks awesome, and I immediately went about looking for someone to code it. That’s where the endeavor has stalled so far, but at some point this year the website will be put together and uploaded. The main obstacle has been monetary, but hopefully I can overcome that problem within the next couple of months.
I use this blog as a platform to educate myself. I comment on what I read, share what I read, and develop my thoughts. I’m lucky that there are a number of people out there who have expressed interest in these things and are willing to read what I write and comment. I really do appreciate it; I am very fortunate. I love to discuss economics with people. I like to be involved in the marketplace for ideas. I want this website to represent that sentiment, and this debate platform should do just that.
Like I said, I’m not sure when all of this is going to come together. But, my plan is to get it done by June 2014, at the latest.
This year’s top 10 most read posts,
The 11th post is: “The Case Against Anarchy.”
Yesterday’s “smackdown” of Wenzel was not my most polite post (although I don’t think it’s overly rude, either), but I defended it by invoking Krugman’s recent defense of snark on the blogosphere. I’ve gotten some criticism that I should have left the snark out. Maybe the critics are right, but let me offer a defense of my tactics (in the spirit of Krugman’s argument).
Generally, we’re interested in being able to distinguish between valuable contributors and not-so-valuable contributors. One way to do this is to create signals, such as being published in high-end, scholarly journals. These journals, in turn, create their own institutions that help maintain the quality and accuracy of the signal, such as peer review. Another signal is the quality of the academic’s department. Or, you’ve followed a particular person for some time, and you are familiar with the quality of his or her output.
Many of these institutions don’t apply to the blogosphere, so we need to create alternatives. Krugman proposes snark. Snark acts, in a way, as peer review. It’s a direct, clear way of making your opinion on the quality of another person’s output known. There’s nothing more direct than “[this person] shouldn’t be taken seriously (on these matters).”
Maybe it comes off as rude. But, it’s not any more rude than using the term “Keynesian” as a way of distinguishing economists you should and shouldn’t read (I’m referring to the tendency of denoting anything that Rothbard didn’t explicitly approve of as “Keynesian” or “non-Austrian”). In fact, it’s hard to think of something worse than condemning an idea to what amounts to a wastebasket category, just because one or two economists you follow disagreed with it (or don’t explicitly agree with it — which oftentimes is interpreted as disagreement). Yet, this kind of activity doesn’t get criticized as much as straight-out questioning the quality of a blog’s output.
I do sympathize with the idea that it sucks to be called out like that. I was embarrassed when David Glasner smacked me down, and directly called some of the ideas I was advocating fallacies. But, Glasner was ultimately right. And, if the ideas I was spouting were completely wrong, he was within his rights to call me out on it. And if I were to have a history of disseminating bad theory on this blog, I’d expect people to call that out too. It should serve as an incentive to make sure I know what I’m talking about (I don’t always follow this rule as much as I should, but I’m trying!).
Maybe there are better ways of signaling quality. The blogosphere is still relatively young. Maybe a few years down the road the way we interact through this medium will change. But, for the time being, snark — but, more generally, being direct when gauging the quality of others’ output — is a good alternative to institutions that work in the academic world, but not so much on the blogosphere.