…I see anarchists forwarding essentially the same argument again and again: If it would be immoral for you to do X as a private citizen, it doesn’t suddenly become moral when the state does it.
The thrust of the argument is that social roles should have no effect on our evaluation of an action… which is obviously complete nonsense. Otherwise, if it is immoral for me to break into your house, then it would be immoral for you to do so as well. If it is immoral for me to shut a stranger in a room, then it must be immoral for a parent to send a child to his room as well. If it is immoral for a student to alter another student’s grade, then it must be immoral for the teacher to do so as well.
Commenting on the ongoing debate regarding Reinhart and Rogoff’s spreadsheet and weighting errors, Robert Wenzel writes,
Austrian economics reject empirical data as a method to prove economic theory, for Austrians it is all about logical deductions. Thus, there is not much for Austrians to do, relative to the current Reinhart-Rogoff destruction at the hands of a U Mass graduate student, other than to grab some popcorn and watch with bemusement from the sidelines.
At the risk of being curt, these are probably the worst sentences I’ve read on the whole ordeal. I was a bit disappointed to find Wenzel’s comment quoted at Circle Bastiat, the Mises Institute blog. I’ve already commented there, but there are a lot of people who take what Wenzel writes, and what the Mises Institute buttresses, very seriously. I’m going to reiterate my point. While, at face value, I disagree very little with the sentiment, I find this to be an extremely unimaginative take on the empirical debate over austerity. While it may be true that empirical data cannot disprove theory, this doesn’t mean that the data is irrelevant.
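To make the weighting issue in that debate concrete, here is a toy calculation. The numbers are entirely made up (they are not the actual Reinhart-Rogoff data); the point is only that averaging within each country first can give a single bad year in one country the same weight as decades of data from another, flipping the sign of the result.

```python
# Toy illustration of the weighting dispute (made-up numbers,
# not the actual Reinhart-Rogoff dataset).
# Country A: one high-debt year with a deep contraction.
# Country B: nineteen high-debt years of ordinary growth.
growth_a = [-7.0]
growth_b = [2.5] * 19

# Country-weighted mean: average each country first, then average the averages.
country_weighted = (sum(growth_a) / len(growth_a)
                    + sum(growth_b) / len(growth_b)) / 2

# Year-weighted mean: pool all country-years equally.
pooled = growth_a + growth_b
year_weighted = sum(pooled) / len(pooled)

print(country_weighted)  # -2.25: the single bad year dominates
print(year_weighted)     # 2.025: same data, opposite sign
```

Neither weighting is self-evidently "the" correct one; which is appropriate depends on the question being asked, which is itself a theoretical judgment.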
Why can’t data disprove theory? All data must be interpreted. Without theory, history would be largely unexplainable, because the historian (maybe just ‘economist,’ or ‘scientist,’ would be better) wouldn’t be able to draw causal inferences. This creates a dilemma: how do we build accurate theory that can explain empirical phenomena? Most economists, more-or-less, adopt some method of falsificationism, where a model is built and is then tested against the data. The problem is that empirical causal proofs are hard, or even impossible, to discern from complex phenomena (Mises , p. 69; Caldwell  suggests Hayek came to a similar conclusion). Ludwig von Mises’ solution is an a prioristic method, where all economic theory is deduced from a specific axiom: that all human behavior relevant to economic theory is purposeful action. This is close to Lionel Robbins’ definition of economics as a study of human means and ends (Robbins ).
This has led some people, I think erroneously, to deny any role at all to empiricism. The problem is that, ultimately, theory can’t be divorced from reality. The purpose of theory is to interpret the real world, implying that if you avoid empirical research then you’re rendering theory useless. This is especially true when studying complex phenomena, because there are so many conditions present in the real world that we don’t know a priori which theory is relevant and which is not. It can be the case that a theory that our intuition tells us is useful is actually not.
For example, suppose we’re seeking to explain the circumstances of the Depression of 2112. Our intuition tells us that it was caused by malinvestment induced by intertemporal discoordination. In order to know this to be true with any certainty we have to test it against the data: not to disprove it, but to test its applicability. It may turn out that the theory our priors suggested was right can’t actually explain the data we’re looking at. Or, it may be that we find that intertemporal discoordination only explains 30 percent of the data we’re looking at. Maybe we also need to invoke compatible theories to explain other facets of the phenomena in question, such as a monetary glut or banking problems. All of this requires the tools of history, including econometrics — tools that many aren’t comfortable with, because they’ve erroneously abandoned empiricism with the justification of a priorism.
Another, softer case against a priorism comes from the premise that we are fallible beings, prone to errors in our reasoning. There is a case for being skeptical of the ability of the data to disprove theory, especially as the phenomenon becomes more complex and the data more abundant. There is also reason to be skeptical of our theory if the data seemingly contradicts it. The claim here isn’t that data can disprove theory; rather, it’s that there’s no harm in approaching theory critically if we feel that the data clearly contradicts it. This may be the case, for instance, if you find that relatively unregulated and undistorted economies suffer more during depressions under austerity measures than those which benefit from macroeconomic stabilization programs: fiscal and monetary stimulus. If we are error-prone, which means that our theory is likely to be, in part, wrong, then rejecting any path of enlightenment seems counterproductive. This is why I adopt a form of methodological pluralism, or what Caldwell  calls critical pluralism.
A better response to the recent empirical research on austerity and growth is to tackle these interpretations head on (for example, see “Dealing with the Evidence”). I think that many of these studies suffer from omitted-variable bias. This is partly the fault of austerity advocates, who have failed to add sufficient nuance to their arguments (although this isn’t always true). Slow growth isn’t caused only by high government spending, but also by capital consumption, regulation (e.g. inflexible labor markets), and bad governance (e.g. an income distribution that wouldn’t occur in a free market). It could be that a government responds to a crisis by cutting 90 percent of spending, and there may still be slow growth. Does this disprove the theory behind austerity? Well, maybe. It depends on other factors. Again, we are studying complex phenomena. To fully understand these types of events we are going to have to apply a relatively large swath of theory — intertemporal discoordination, inflexible labor markets, government spending, et cetera, on their own aren’t going to explain very much. Austrians should engage the debate by offering multi-causal empirical applications of their causal explanations; otherwise we unnecessarily condemn ourselves to the sidelines.
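The omitted-variable concern can be sketched with a small simulation. The variable names and coefficients below are my own illustrative assumptions, not estimates of anything: if regulation depresses growth and is correlated with government spending, then regressing growth on spending alone attributes part of regulation's effect to spending.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000

# Hypothetical data-generating process (purely illustrative):
# regulation hurts growth and is correlated with spending.
regulation = rng.normal(size=n)
spending = 0.8 * regulation + rng.normal(size=n)
growth = -1.0 * spending - 2.0 * regulation + rng.normal(size=n)

def ols(y, *regressors):
    """Least-squares coefficients; intercept is the last element."""
    X = np.column_stack(regressors + (np.ones(len(y)),))
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coefs

b_short = ols(growth, spending)              # regulation omitted
b_long = ols(growth, spending, regulation)   # regulation controlled for

# The short regression overstates spending's effect because spending
# proxies for the omitted regulation variable.
print(b_short[0], b_long[0])
```

In this setup the short regression's spending coefficient lands near -2 rather than the true -1, since the bias equals the omitted variable's coefficient times the slope of regulation on spending. An austerity study that omits relevant controls is making exactly this kind of error, in whichever direction the correlations run.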
Another point that isn’t made enough is that we shouldn’t be afraid of changing our minds. It’s always disconcerting to be proven wrong, but, rather than taking it personally, we should accept it for what it is: a process of improving our understanding of the world. Who cares if someone likes you less because you abandoned some key principle of some belief system? We aren’t in a beauty contest. We’re interested in seeking accurate explanations of real world phenomena: ideology and reputation are completely irrelevant.
Gene Callahan writes on the methods of history, arguing that it’s a mistake to think that we can only understand unique historical episodes by cataloging them with the help of “some general law.” His analogy illustrates what he means,
Consider: I say to you, “Here is my group of events: Jackie Robinson breaking the color barrier in baseball, Caesar crossing the Rubicon, the assassination of Abraham Lincoln, the fall of Constantinople, and the 9/11 terrorist attacks. OK, now let us draw up what theory connects them.” You will naturally tell me I am talking rubbish.
According to Gene, it doesn’t make sense to pool unique historical episodes and then try to find common features. Any common features he finds amongst those things he lists are probably going to be arbitrary and meaningless. He argues that we know that this approach is wrong, because we first approach each event individually, understand it, and then catalog it. He takes the opportunity to criticize Mises, implying that Mises’ method followed what he criticizes in his post and calling the approach “absurd.”
I interpret Mises differently. Mises (and before him, Weber) argued that to understand history we first need to develop theory, because theory is the only way we can make sense out of data, which otherwise would seem random to us. Our brains need some way of establishing causality between the data. We can create theoretical categories because of the internal logic of our theory. This doesn’t guarantee that these theories are empirically valid, but one purpose of history is to test the applicability of theory.
Just to make clear the difference between what Mises actually believed and what Gene thinks Mises believed, Mises never (at least, on my reading) argued that we look at a group of historical events and draw general inferences to categorize them. Maybe Mises did this in his early work (before his Theory of Money and Credit), when he was a practicing historicist, but this isn’t the Mises of 1920–…, when he began his serious work into epistemology and methodology. Instead, he built ideal types which were internally coherent, and then looked to see if individual events were characterized by the preconditions, assumptions, and outcomes that the theory calls for.
Gene is right that it is absurd to pool n individual events (say a random selection of historical events) and then try to categorize them by looking at them as a whole, rather than individually. I don’t know who approaches history like this, although who knows, but Mises certainly isn’t one of these people.
The following sentence really stuck with me,
Contrary to the rules of philosophers of science, who advise testing hypotheses by trying to refute them, people (and scientists, quite often) seek data that are likely to be compatible with the beliefs they currently hold.
— Daniel Kahneman, Thinking, Fast and Slow (New York: Farrar, Straus and Giroux, 2011), p. 81.
It stuck with me so much that by the time I finished the book I really thought that Kahneman had written more directly about the philosophy of science. We all know that the human mind tends to look for confirming data, and the claim that evidence without theory is of little use is something we read a lot — especially among Austrians. What I’ve heard less often is the emphasis on falsification.
Most people do know that in science a theory is never really proven. Instead, we accept it as useful until somebody comes and falsifies it. But the way I — and, I’m sure, others — have interpreted this method is simply as a general rule, not one that the individual necessarily needs to follow. In other words, I’ve never thought of a good scientist as someone who actively looks to disprove her own understanding. I consider it important, and I’ve made similar points on this blog, but I’ve never framed it as something of fundamental importance.
A great tragedy is that non-academic media, out of necessity, has to sacrifice some element of science. Not too long ago, I read a short 2003 paper on beauty, productivity, and discrimination in the classroom by Daniel Hamermesh and Amy Parker, who at the time was one of his undergraduate students. The results were generalized and re-published in the New York Times by Hal R. Varian, whom we all recognize as one of the most well-known microeconomists of our time. Something that stands out is that while Hamermesh and Parker are very careful about drawing conclusions, and they delineate where some of their results are tentative and where there still exists uncertainty, Varian’s column is much more certain. Varian understands what it means to be a scientist more than most other people, and he wasn’t trying to mislead anybody. What happened is that stories about uncertain results are not popular, because they make the reader wonder why the news is even relevant. There is little room for the pedantic objectivity that complex scientific questions call for.
You often see the same thing in the blogosphere. How many times has Krugman written that the evidence is on his side? How many times have I posted graphs of very general data suggesting it validates, at least within some limit, my beliefs? The answer is very often. To some degree, it’s justifiable. I like to draw attention to things I think others might miss. Krugman is interested in convincing people of points he thinks are important. Moreover, Krugman is not completely averse to falsification — there are plenty of examples of him changing his views. Finally, that some body of evidence is not necessarily at odds with our world view is worthy of consideration. But, in outlets read by people who don’t always operate with the understanding that we should challenge our beliefs — and that no evidence is really ever final —, I feel that focusing too much on evidence of us being right can be misleading.
Scientists with wide readership should always make the point that there are limits to our understanding, and that the probability of being wrong is very high — whether these mistakes are major or minor. Inculcating methods of dealing with our cognitive limitations is an important step in making the world a more educated place. I’d say that it’s more important than strict schooling, because even strict schooling is a lost cause for those who aren’t interested in exploring the limits of their knowledge.
… and may be unclear to everyone else?
The lesson of figure 5 is that predictable illusions inevitably occur if a judgment is based on an impression of cognitive ease or strain. Anything that makes it easier for the associative machine to run smoothly will also bias beliefs. A reliable way to make people believe in falsehoods is frequent repetition, because familiarity is not easily distinguished from truth.
— Daniel Kahneman, Thinking, Fast and Slow (New York: Farrar, Straus and Giroux, 2011), p. 62.
The lesson — and it’s one as much for me as for anybody else — is that the fact that our ideas are clear and plainly true to us doesn’t make them so obvious to anybody else. This may also be why people tend to become entrenched in their views. This doesn’t imply any bad faith or dishonesty; it’s just how our brains work. The more you work on your idea, the more familiar you become with it, and the truer it will seem to you over time.
Much of the book is spent distinguishing what Kahneman calls “System 1” and “System 2” thinking. System 1 is an intuitive, automatic process that associates different words, muscle movements, emotions, et cetera, to allow your brain to unconsciously make sense out of events. This process isn’t causal or step-by-step; instead, it’s associative, and you can make many associations simultaneously. Most of the time, you’re unaware of the associations made, or even why these associations were made. For instance, an experiment found that subjects exposed to words which connote old age tended to walk more slowly after the experiment. System 2 has to be catalyzed by System 1, often when the latter is strained. Skepticism also triggers System 2 — it causes your brain to double-check its first, intuitive response.
My System 1 made an association between this distinction between systems and the debate between a priorists and fallibilists. Sometimes the latter are accused of being nihilists, of denying the validity of immutable law. I don’t think a priori methodology is inconsistent with fallibility (indeed, our logic could be wrong), but this discussion of systems and the illusions our brain can play on us does suggest we ought to place a premium on admitting the possibility of being wrong. Because not only can we be wrong, but we can be legitimately convinced of being right all along.
In Misunderstanding Financial Crises, Gary Gorton provides a short overview of the U.S. banking experience between, roughly, the 1830s and 1913. Some of it builds toward a justification of bank bailouts as a means of maintaining liquidity. His examples of bank bailouts prior to the Federal Reserve and activist treasury policy were clearinghouse associations pooling member banks’ assets and using these to back the clearinghouse’s own (temporarily issued) notes, which were used to satisfy interbank adverse clearings. This shored up the member banks’ position towards those who held their debt, meaning that the liquidity of their assets was preserved. Another example of a market solution to temporary illiquidity was temporary suspension of redemption, which usually carried along with it — by terms of contract — the promise to pay additional interest throughout the period of suspension.
Despite these examples, many libertarians — some of them quite erudite — will reject the premises behind certain forms of interventionism. Another example concerns monetary disequilibrium: it’s not unusual for a libertarian, although maybe more accurate to say Austrian here, to totally reject any possible credible foundations to an advocacy of Fed countercyclical policy. In fact, to seal the example, many Austrians go as far as to argue that the “correct” “market” solution to an increase in the demand for money is a downward movement in the price level. I recognize that there’s an element of hubris in this discussion so far; I acknowledge that in both examples I could be wrong in implying that the market would offer different solutions. But, take these examples for what they are: illustrations of my broader point.
My argument boils down to the idea that because of certain predispositions that austro-libertarians hold and because of the limited cognitive ability of the human mind, we oftentimes fail to recognize the merits of opposing positions. But, there is evidence that these rejections can be premature. While sometimes it’s difficult to know for sure, some cases (such as the two above) suggest that we ought to treat with greater nuance theories we disagree with. (As a disclaimer, my arguments here have been influenced by my recent reading of Bruce Caldwell’s Hayek’s Challenge — but the idea is mine, and all errors are my own.)
Human society is complex, and it’s complex to several degrees. Its complexity makes it difficult to understand, and what understanding we do have is mostly superficial. Oftentimes what we do understand is from experience, and we don’t understand it well enough to make predictions on how institutions and organizations will develop, or evolve, over time. As a result, there are handicaps to everyone’s ability to know how the market, or privately developed institutions and organizations, would react to different adversities. To make matters worse, we’re oftentimes disallowed from experiencing specific manifestations of market solutions, because market institutions and organizations are oftentimes replaced by public, or quasi-public, features of the same nature. It’s convenient that both of my above examples have to do with banking, because it’s also the perfect example of how private changes have been all-too-often superseded by government alternatives (e.g. frequent prohibition of branch banking, the 10 percent tax on privately issued banknotes after the U.S. Civil War, the Federal Reserve system, deposit insurance, et cetera).
For the sake of argument, let’s assume that public alternatives to private institutions and organizations are inferior in the task they’re trying to accomplish. In banking, for example, public changes in the structure of the industry, and the rules that guide it, have not done a very good job at smoothing cyclical fluctuations. One might even argue that they have made the industry worse, and have exacerbated these fluctuations. This, I think, is a more-or-less universal theme in the libertarian literature. Consequently, there is a developing culture that stimulates an impulse to reject all things classified as interventionism. Some not only oppose the specific interventions, but even the arguments and premises behind them. But, as the clearinghouses and suspension examples above illustrate, this is not always the correct approach.
I’d argue that a good deal of the ideas behind certain theories we may reject, paradoxically, actually share a lot of common ground with ideas we hold. Given that human society, including economics and economic relationships, can be incredibly complex, it’s true that none of us have a complete, or specific, explanation for various phenomena. However, I think that oftentimes everyone operates from a similar base — that is, everyone has a substantially broad, or general, idea of how things work —, and that it’s only a movement from that base that produces major differences between thinkers (influenced by a variety of factors, including ideologies). Furthermore, I’d argue that many of the general premises behind interventionist policy, or theories that might suggest interventionist policy recommendations, are true, because all people tend to have some part of the general picture right, even if they may take it in the wrong direction. Finally, social complexities and the fallibility of the human mind apply both to non-libertarians and libertarians, and a failure of some libertarians to see merit in others’ arguments is a product of ubiquitous ignorance (it works in the other direction too, of course).
It may be that an interventionist recommends a certain policy because he’s unable to fathom how the market, through piecemeal-planned and/or spontaneous order, could accomplish a similar result in a superior way. That being said, the same is absolutely true the other way around. My point is that we should be wary of the possibility that our complete rejection of ideas we disagree with may be a result of the fact that we can’t begin to picture how a free society might ultimately commit to similar solutions to social problems. In other words, the critiques we oftentimes apply to “planners” are equally applicable to us. This doesn’t mean we should accept interventionism; I’m mostly writing out of interest in preserving some kind of intellectual march towards scientific improvement. It does mean that we should be careful not to throw the baby out with the bathwater, because we are fallible human beings and we can reject things we still don’t (and may never) fully understand (which, by the way, was more-or-less my initial reaction to Gorton’s discussion of “private” bank bailouts).
If you’re not comfortable with my examples, I’m sure there are many others to choose from. Also, in no way am I suggesting that this is the only cause of disagreement. Neither am I rejecting the possibility that much disagreement is justified. I’m only asking people to remember that a more refined approach to weighing theses that may not adhere to your current world view is preferable, because there can be things you’ve missed — we all have a propensity to be wrong, as we often are.
★ ★ ★
“Keynesian theory is actually too hard for most libertarians to understand”
I responded as follows,
I disagree; as a libertarian, I think there is something of a culture of not making the effort to understand it.
I don’t mean this to be an opportunistic attack on libertarianism (and I think that first response to Noah Smith was borderline ignorant, in the worst possible sense) — after all, I’m a libertarian (and I pick on libertarianism only because, as a libertarian, I want the best for our ideas) —, rather it’s meant to be constructive. Given what I wrote above on the fallibility of the human mind, it doesn’t make sense to me to promote an insular approach to science. Unfortunately, from my experience, these methods are too often advocated. In my opinion, this only increases the chances of being wrong. Think of intellectual progress through the metaphor of profit and loss. We abandon ideas that we find out are wrong and we keep those we think are correct. Ideas have to be tested on a trial-and-error basis. By narrowing your exposure to potential trials you’re also limiting your intellectual growth. Those who push for what is tantamount to insularity are doing their readers, and those they influence, a disservice.
This is a perfect opportunity to close by linking to a post by professor Callahan that builds on my discussion of Mises’ treatment of Keynes in yesterday’s post. He points out that by the time The General Theory was published and Mises decided to write on it, he might have been beyond the point of learning new things. He just wasn’t very good at including this as a caveat when he discussed things he might not have known enough about. In a comment to that post, I also point out (and Callahan reinforces in a response) that Mises was notorious for his poor reactions to criticism. Knowing that many of Keynes’ doctrines were misaligned with his own, Mises may have been confident enough in his own knowledge to essentially dismiss whatever Keynes had to say entirely (he doesn’t just dismiss Keynes’ arguments; he fails to actually engage them).
Two economists come to mind that, in this sense, were almost complete opposites of Mises: Hayek and Lachmann. Both were economists who, it could almost be said, thrived on criticism. Hayek’s work was positively influenced by his critics throughout his life (and we are better off because of it). I think Lachmann oftentimes took wrong turns in older age (e.g. his apparent acceptance of Hicksian “fixed price” theory), but you also see a drastic evolution in his thought throughout his life’s work. I don’t intend to disparage Mises — he was a brilliant thinker who contributed positively in more ways than most people can appreciate —, but when it comes to acknowledging your own fallibility, economists like Hayek and Lachmann are truly role models.
Yesterday, Robert Murphy posted an unconvincing excerpt, written by G.K. Chesterton, on moral relativism. Chesterton’s argument seems to be that if two moral systems can really be different, why call them moral systems at all? He claims (correctly),
Of course, there is a permanent substance of morality, as much as there is a permanent substance of art; to say that is only to say that morality is morality and art is art.
I commented, pointing out — although not explicitly targeting the above — that we can see morality as a sort of category, where individual moral systems can have a diverse set of beliefs regarding what is “right” and “wrong.” As such, that we can describe a general category and ascribe to it some constant qualities doesn’t preclude some of the elements within said category from being different, and even mutually incompatible (e.g. “slavery is bad” versus “slavery is good” [I chose this example because it’s an example of real world moral relativism]).
P.S. Huff, in a separate comment, agrees, but rhetorically challenges people to find an example of two sets of moralities that are completely different,
Having said that, I would be shocked if anyone could produce an example of two societies with completely different moral codes. Significantly different, yes; radically different, maybe. But completely?
I agree; what’s more, I think it is a great example of the relevance of agency in sociology. There are varying degrees of agency, but essentially it’s a theory which argues that the individual forms an identity and ideology from within. This is contrasted with interpellation, which holds that the individual receives her ideology from outside influences. The truth is probably somewhere in between — and depending on the individual —, but there is some level at which agency is useful.
If there were no agency, similarities amongst moral systems would be the product of cultural export, or interaction between different societies. But I think P.S. Huff’s caveat holds even for the most separated cultures: there are always some similarities in moral beliefs across otherwise separate peoples (although this is probably more relevant for pre-modern societies, when there were various divisions of labor that weren’t associated with each other to a meaningful degree). Different people develop similar concepts of “right” and “wrong”: for example, murder is probably almost universally considered wrong (although “murder” can be defined in such a way that, for instance, “murder” can be wrong, but human sacrifice can be right).
These similarities, I think, are a product of a similar human condition. All humans are subject to the scarcity of economic goods, therefore most societies develop some kind of property rights (not necessarily the property rights that we know today in the United States or most of Europe; it could be of a more primitive kind), which then tends to impose a sentiment on right and wrong on matters of property. Similarly, as scarcity can lead to conflict, most societies develop rules on when it’s okay to enter into conflict and when it isn’t, also imposing parallel feelings of “right” and “wrong.”
This isn’t straightforward agency, because the individual is still subject to her environment (and it’s the environment which shapes the legal and moral code), but it helps explain how separate cultures can develop similar ethical constellations, despite having minimal interaction with each other. What’s interesting, though, is that the similarities tend to dissipate once we move on to the details, because the nuances aren’t as constrained by objective, physical limitations, or a society’s environment. Where there are fewer external factors pressuring somebody to adopt certain practices and habits, out of convenience or to solve problems, there is far more room to rationalize a broad set of varying ideologies.
Edit: Coincidentally, Douglass North talks about morality in the context of human consciousness, in Understanding the Process of Economic Change. He draws on Pascal Boyer, who argues that the “psychology of moral reasoning” is largely genetic — i.e. the product of biological evolution. This includes, as well, our propensity to cooperate with each other. In face of the evidence, it may be difficult to doubt the importance of genetics in determining moral beliefs, but clearly complex ethical systems are the product of both genetics and culture (experience, institutions, et cetera).