Category Archives: Epistemology

I Can’t Believe Economists Don’t Do X

You can exchange ‘economist’ for ‘scientist’ in the title, and the point I’m about to make remains the same.

I saw, on Facebook, a comment about the apparent lack of interest, on average, among economists in the philosophy of their science. The person making the comment suggested that one should become familiar with the philosophy of the science, then the history, and only then focus on “application of the discipline” (which, I assume, includes learning the discipline itself).

This approach should be met with suspicion, especially amongst economists. One way society becomes wealthier is through specialization, which is made possible by a growing division-of-labor. We focus on using a more specific set of skills, so that we can develop those skills and produce more. Further, by narrowing the skill set you need to produce, you can focus on your greatest skills and do away with those that hold you back. But, if we all continue to specialize, how does a single person still act in a way that jibes with the division-of-labor in general?

To re-state the analogy in terms specific to the economics profession: if economists continue to specialize (e.g. spend more time learning and practicing labor economics, implying less time to learn and practice monetary economics), how does their work fit in with the rest of the discipline? If Joe the Labor Economist is conducting research that must fit in with the remainder of economic theory, how is he to accomplish this if increased specialization comes at the cost of general knowledge of his field?

Communication. In a division-of-labor, humans have developed — spontaneously or otherwise — methods of communicating useful knowledge, without having to actually spend a lot of time searching for it. The common example is the pricing process. Prices communicate certain pieces of information. If a steel buyer requires so much of that input for some project, but the only steel manufacturer suffers a fire in its factory, cutting the flow of output in half, prices can communicate this knowledge to the buyer. Similarly, humans use language to communicate, and use of language gradually becomes simpler and more direct — people want to economize their use of language. Norms, or rules and heuristics, serve a similar function: to allow us to operate in a socially beneficial way, without having to really be aware of the specifics of other cogs in the ‘machine,’ so to speak.

The same is true within the sciences. It is true that specialization comes at the cost of general knowledge: of staying cognizant of the other parts of the science with which your own research has to be consistent. However, it’s also true that the less you specialize, the less you can focus on your specific research. The more knowledge the field produces, the more we have to specialize, because — given decreasing marginal returns — we have to increase our productivity to produce something of equal value. This creates a strain, however, between specialization and the relevance of general knowledge (e.g. philosophy of science, or the science outside your narrow sub-field). Institutions have to arise to help us economize on that general knowledge.

We might lament the perceived lack of general knowledge amongst scientists. We might want to criticize an economist for not knowing much about economic history, for example. But, rather than seeing this as a weakness within the discipline, it’s better to interpret it as general progress. Economists are becoming more specialized, because improvements in the communication of general knowledge have made greater specialization possible.

One test of whether I’m right is to think about how well the average piece of research fits into the overall puzzle of economic theory. If it’s true that the average economist does not know much about the philosophy, or history, of her science (I’m assuming it is), and it’s true that greater specialization should lead to a loss of cohesion, then we’d expect the state of economics to worsen over time. But, that’s not what we’ve seen. Instead, economics is at least as unified as it used to be, and perhaps even more so — consider, for example, the ‘requirement’ of microfoundations. This despite the fact that modern economics can provide a much richer understanding of the real world than the discipline could 60 years ago, because specialization has continued to progress.

That the average economist might not know much about the history and/or philosophy of economics is not a problem. Instead, it’s evidence of the progress of the discipline. Methods of communicating ‘general knowledge’ have improved, allowing economists to focus on narrowing sub-fields. This, like specialization in any division-of-labor, allows for productivity increases. What this means for science is that specialization is an important cause of growth in scientific knowledge.

The Fetish of Consistency

When somebody accuses another person of being inconsistent, not for an immediate lapse in logic, but for holding supposedly inconsistent beliefs in different situations, I think of Daniel Kahneman’s Thinking, Fast and Slow. On building coherent — internally consistent — stories, Kahneman writes,

The measure of success for System 1 is the coherence of the story it manages to create. The amount and quality of the data on which the story is based are largely irrelevant. When information is scarce, which is a common occurrence, System 1 operates as a machine for jumping to conclusions. Consider the following: “Will Mindik be a good leader? She is intelligent and strong…” An answer quickly came to your mind, and it was yes. You picked the best answer based on the very limited information available, but you jumped the gun. What if the next two adjectives were corrupt and cruel?

— p. 85.

The majority of us are probably not capable of building, in our head, a truly consistent and accurate model, or even a set of rules. Instead, we use abstractions and heuristics to help us associate different ideas. Using these heuristics, for example, we can decide whether some policy fits our worldview (e.g. whether a libertarian should support countercyclical monetary policy). But, these stories that we come up with to justify our decisions are necessarily based on limited information, so there is some probability that part of the story is wrong.

One problem is when we judge an idea on the basis of its consistency with an existing body of beliefs. Suppose that idea A is inconsistent with the rest of our body of beliefs. Should we reject idea A, on the basis of this inconsistency? It doesn’t make sense to me to keep the body fixed, but to vary the idea. I like to think about the process of critical analysis by imagining separate ideas as puzzle pieces. We’re interested in fitting them together, but none of the puzzle pieces are fixed — they are all allowed to vary. Thus, if I find that idea A is inconsistent with part of the rest of the puzzle, I don’t only vary idea A, I also vary the other pieces. I do this because I’m aware that, somewhere, I could have gone wrong when forming the rest of the puzzle, so the rest of it is as susceptible to the charge of inconsistency as idea A is.

Is consistency important? Sure. Should it be a standard by which to judge independent ideas? I don’t think so.

The Austrophysicists, Post Astrophysicists, and New Astrophysicists

Stephen Hawking poses a paradox for astrophysicists: the accepted story about event horizons may not be true. I’m not going to pretend that I know what all of this is about. I just wanted to make a quick observation that relates this to economics.

If astrophysics had the same weight in public policy as economics, there would be a significant chunk of scientists — and an army of amateurs — yelling about anti-scientism, about obvious errors that nobody but they are aware of, and about how clearly the facts are on their side. And their readers would be misled into believing that there are easy answers in astrophysics, even as the complexity of the problem grows. Anybody who disagreed with these easy answers would clearly be corrupted by politics, ideology, and money.

But, astrophysics, like economics, deals with complex problems, and complex problems have complex solutions. The problem and the solution are usually very difficult to grasp without abstracting from some aspect of it, and our research will, at best, offer incomplete answers. Scientists will come up with divergent theories, because of the way they interpret the problem and depending on what they abstract from. This is a reality we should come to accept; it’s a reality that the institutions of science try to grapple with. The more cynical scientists are typically those who don’t fully embrace this reality.

Machlup on the Scientific Method

…[I]t ought to be said that there exists no method-oriented definition of science under which all parts and sections of physics, chemistry, biology, geology, and other generally recognized natural sciences could qualify as “sciences.” Definitions of science which stress the theoretical system, the network of logically interrelated hypotheses using mental constructions of ideal exactness, undoubtedly exclude large parts of chemistry and biology. Definitions stressing repeatable experiments and verified predictions clearly exclude the parts of biology, geology, and cosmology which deal with the evolution of life, of the earth and of the universe. And even within physics — the discipline which is the science par excellence because most definitions of science were formulated with physics in mind as the model — the authorities are by no means agreed as to whether the deductive system or the inductive technique constitutes its scientific nature.

— Fritz Machlup, “The Inferiority Complex of the Social Sciences,” in Mary Sennholz and Vernelia Crawford (eds.), On Freedom and Free Enterprise: Essays in Honor of Ludwig von Mises (Princeton: D. Van Nostrand Company, 1956), p. 164.

Machlup’s essay, suggested to me recently, is a great introduction to the idea that a strict definition of the “scientific method” — the one most commentators seem to have in mind when they judge economics’ claim to being a science — would exclude much of what we do consider science. It also serves as a reminder that what may call for a different method in economics is not a result of differences between the natural and the social sciences, but a problem that stems from the study of complex phenomena (something that exists in the natural sciences, as well).

The A Priori and the Empirical

Every once in a while, there is a local eruption of discussion on the a priori method, with specific reference to Mises’ praxeology. Jason Brennan, at Bleeding Heart Libertarians, discusses a recent run-in with a young Austrian and their conversation on behavioral economics and Austrian economics. Brennan makes the point that if an a priori economics cannot account for the various realities of human behavior, then its usefulness is put into doubt. I sympathize with Brennan, but I think the point can be put in another way that makes the implications a bit clearer.

My background makes me very sympathetic towards the praxeological method. At one point, I thought it was the be-all and end-all. I have defended Mises’ method from its critics before. But, I have also been influenced by the anti-rationalism of Hayek, and my beliefs have moved towards a position that acknowledges the positive attributes of an aprioristic method, but also the positive (and necessary) characteristics of an empirical method. One problem is that young scholars want to take an either/or position, but this doesn’t have to be the case. Both methods, at the extreme, have their shortcomings; it only makes sense to apply them both. This makes me something of what Bruce Caldwell calls a “critical pluralist.” There is no one perfect method, so we have to do what helps reduce human error the most, and this may change on a case-by-case basis.

Coming back to Brennan’s invocation of behavioral economics, I want to mention a Hayek-influenced argument that puts the shortcomings of a purely a priori method into perspective. It took me a while to get this point, and I should thank Greg Ransom for hammering it into me.

There is a good reason why an Austrian would say that there is a “difference” between behavior and action. Behavioral economics deals with how humans actually respond to different incentives, situations, and environments. This speaks to “neoclassical” economics, because these sets of models usually try to get an idea of how an agent behaves (or maximizes its function) given various constraints (call them incentives). Praxeologists, on the other hand, don’t try to model human behavior, but rather purport to show the outcomes of different actions. Then, depending on how real world humans behave, the economist can apply the set of theory that is relevant to that action.

The question that arises is how praxeology can offer insight on complex human phenomena. These are events that form over a sequence of actions, by some number of individuals. That is, they are events where one agent is responding to another. To make sense of these events, we have to have some idea of how humans respond to … constraints, incentives, and their environment. One method is to build a priori, analytical models that help us to get an idea of how a human would behave in these cases — this is what “neoclassicals” do. But, since these models are just analytical devices — simplified, abstracted, et cetera —, there is an empirical element that they can’t capture.

This is Hayek’s point in “Economics and Knowledge,”

I have long felt that the concept of equilibrium itself and the methods which we employ in pure analysis have a clear meaning only when confined to the analysis of the action of a single person and that we are really passing into a different sphere and silently introducing a new element of altogether different character when we apply it to the explanation of the interactions of a number of different individuals…

When I first read Brennan’s post and I thought about the issue, what came to mind are institutions. Institutions, the formal and informal rules that constrain human activity, are crucial to understanding real world exchange. They determine how these exchanges take place and their outcomes. Their importance is almost universally recognized by economists of all stripes. But, institutions have an empirical element that is difficult to analyze in a purely logical model. It’s difficult to talk about institutions unless we also talk about the environment they exist in, which ultimately helps decide what these rules will look like (this is what economists have in mind when they discuss something like efficient, but constrained institutions). What are the technological constraints? What kind of behavior are the institutions meant to regulate? These are ultimately empirical questions.

The business cycle is another example. The business cycle is a complex phenomenon; it occurs as a result of the interaction of millions of agents. There are common factors that we can pinpoint, such as informational biases due to non-randomly distorted prices. But, even these common factors ultimately tell us very little about what an industrial fluctuation will actually look like. To be able to paint an accurate picture, we need to know the expectations of the various human agents and how they will respond to interactions caused by other agents. We could build a purely logical theory, but if the set of possible human responses (behaviors) is very large and we have to consider a very large set of agents, the range of alternative models is going to approach infinity. Thus, to be able to understand and explain real world business cycles, we have to embrace some element of empiricism.
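The scale problem in that last step can be made concrete with a toy calculation (the function and the numbers here are my own, purely illustrative): if each of n agents may respond in any of k ways, a purely logical theory that tried to enumerate every joint profile of responses would face k^n cases.

```python
def model_space(k_behaviors: int, n_agents: int) -> int:
    """Number of joint response profiles a deductive theory would
    have to enumerate if each of n_agents may behave in any of
    k_behaviors ways, independently of the others."""
    return k_behaviors ** n_agents

# Even a toy economy defeats enumeration:
print(model_space(3, 10))    # 59049 profiles for just 10 agents
print(model_space(3, 100))   # roughly 5 * 10**47 for 100 agents
```

With millions of agents and a rich set of possible behaviors, the space of candidate models is effectively unbounded, which is the sense in which some empirical input becomes unavoidable.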

This doesn’t mean that we should abandon praxeology (or “the pure logic of choice”). Hayek, throughout his life, would grapple with what each method has to offer. Later on, he would come to doubt some of the benefits of empirical economics in the study of complex phenomena. Specifically, he posited that falsification becomes increasingly more difficult as the event becomes more complex. (This shouldn’t be surprising; it seems related to the fact that the very reason we opt for analytical models is that it’s too difficult — sometimes impossible — to make sense of complex observations.) But, this seems to support the contention that, ultimately, what we want is to balance the merits of empirical and logical approaches to understanding the world. This balance is context dependent; it’s, ironically, an empirical problem.

Consensus Does Not Make a Science

1. I am not a big fan of Raj Chetty’s recent op-ed for the New York Times. It’s not that I think economics is not a science. It’s because I don’t think we should hold consensus as the ultimate standard by which to judge a science. This sounds bad, because there are people who adamantly stick to a belief, despite the mountain of evidence against it. I’m not one of these people (at least, I try not to be). But, I do recognize that the probability of getting everything right is very low, and that challenging established beliefs allows us to explore what we’ve missed.

2. In any case, Mark White has convinced me to care less about whether economics is really a science. His second point is gold,

If economists want to claim the status of Science in order to earn some badge of honor in objectivity, I’m afraid they’ll be disappointed. This is where the criticism of the Shiller/Fama Nobel is instructive: if I cared about whether economics is a science, I would take the dual nature of this award as a positive sign because it shows vibrant disagreement that science should emulate, not frown upon. (Nancy Folbre, writing today in The New York Times, disagrees.) Disagreement reflects criticism and skepticism and it spurs on further investigation. It shows that truth is never found but forever sought; we can approach it and approximate it, but we should never presume to “have” it, because when we think we’ve found truth, we stop questioning it and we grow too comfortable in our “knowledge.”

In his third point, he compares the social sciences to the natural sciences, suggesting that the former will never have the predictive ability of the latter. A lot of the topics that natural scientists look at are not heavily advertised in the media — they aren’t as glamorous as economics. But, natural scientists also study complex phenomena, and in these areas, just like with the social sciences, there’s very little predictive power. The objective becomes not to predict, but to understand. (This being said, there seems to be a layman/specialist miscommunication. Oftentimes, when economists use the term “predict,” they’re questioning whether the outcome illustrated by a model shows up in the data. But, the reason we care about it is not so that we can predict future events, but to know whether the model can explain reality.)

3. This Paul Krugman post on the topic is not very good,

  1. Policy-making is not economics. The task of the economist is not to make policy recommendations, it’s to explain some facet of the real world;
  2. That someone disagrees with some “established” result does not strip them of the title of scientist. I agree that there are people out there who never acknowledge the strength of opposing arguments, and that these people should lose credibility (or, at least, that the specific set of theory should lose credibility), but Krugman applies this rule of thumb much more loosely than he should. I, for example, don’t challenge Stiglitz’s scientific credentials because he continues to peddle the line that misaligned CEO compensation was one cause of the financial crisis, despite the fact that much of the empirical literature disagrees.

4. In his op-ed, Chetty talks about several things the empirical literature agrees on. Krugman also acknowledges some of these areas. They know the literature much better than I do, but my experience is very different from theirs. Usually, I find that results in economics are much more disputed. Empirical studies are often criticized for problems in their methods, authors’ interpretations of the evidence are often questioned, and alternative methods provide alternative results. So, when someone tells me that such and such has been, for all intents and purposes, empirically proven, I’m skeptical, because in my experience that is almost never the case.

The Methodenstreit is Over

In a discussion on method, namely on whether there should be an empirical approach to economics, it’s common to invoke the arguments of economists like Menger and Mises. The latter, especially, spent a lot of time developing his own method and criticizing positivism. But, these were the methodological debates of the late 19th century and the first two or three decades of the 20th. Pure theory (logic, deduction) won, and it clearly did so because the majority of well-known economists followed that route: Marshall, Keynes, Pigou, Fisher, et cetera. The empirical approach of today is not the same method that was then at issue.

Empirical economics today is mostly economic history, but it does go a little beyond this. Scientists say they can’t prove anything, but they can falsify their theories. This is an acceptance of human fallibility, and it assigns to any given theory some probability of being right — this probability is subjective, because two rivals may place two different probabilities on the same theory. What the empirical approach allows for is to look at and test the evidence, and based on this evidence we can revise that probability we attach to the theory. When economists want to falsify something, a downward revision in the probability of the theory being right is what they really have in mind.
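That revision can be stated precisely with Bayes’ rule. A minimal sketch (the function name and the numbers are my own, for illustration): the revised probability that a theory is right depends on the prior and on how likely the observed evidence would be if the theory were true versus false, which is also why two rivals holding different subjective priors draw different conclusions from the same data.

```python
def revise(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Bayes' rule: posterior probability that the theory is right,
    after observing a piece of evidence."""
    joint_true = p_evidence_if_true * prior
    joint_false = p_evidence_if_false * (1 - prior)
    return joint_true / (joint_true + joint_false)

# Same evidence, different (subjective) priors:
print(round(revise(0.5, 0.8, 0.2), 2))  # 0.8
print(round(revise(0.2, 0.8, 0.2), 2))  # 0.5
```

On this reading, “falsifying” a theory is just a sharp downward revision: evidence that is very unlikely under the theory drives the posterior toward zero without ever reaching it.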

There is no pure theory–empirical dichotomy. Most economists, except those on the fringe (is my guess), are theorists first, empiricists second. Empiricists test theories, they don’t develop them based on their tests (though a test can make them re-think a theory). Economists are theorists first for a good reason. The bulk of good economic theory has been developed deductively, from Richard Cantillon and Adam Smith down to Paul Krugman and more modern economists. But, deduction isn’t perfect, and there were debates (e.g. J.B. Clark v. Böhm-Bawerk; Keynes v. Hayek; etc.) with no obvious winner, even before statistical empiricism was a big thing. Deductive economics has not been able to achieve consensus on important topics, and the number of such topics grows as they become more complicated.

Empiricism provides a little bit of a safety net, providing us with another tool when there’s disagreement. It’s especially handy when we’re talking about complex phenomena, where the probability of logical error increases. The point isn’t to displace pure theory, but to reinforce it.

I think people who are skeptical of empiricism in economics do often implicitly acknowledge its strengths. There is an intuitive incentive to using empirical evidence to support your theory, not just as an illustration but as evidence. You want to persuade your rival that your theory is right and relevant, and empirical evidence does this. It works in reverse too. When we’re surprised by the data — it doesn’t look the way we thought it would —, most of us rethink our priors. This is empirical economics.

A Red Herring on Praxeology: A Reply to Lord Keynes

Before I critique the substantive portions of Lord Keynes’ article, I would like to applaud him for taking up the daunting task of attempting to make headway on the philosophy of economics. Although he rejects the method of praxeology, in my opinion erroneously, writings like his are nevertheless to be encouraged because they sharpen the mind and get to the heart of the issue.

As readers may know, I have some experience studying Kantian epistemology, and in particular, the status of the synthetic a priori. This is truly the starting point of praxeology and Austrian economics, and I am grateful for the attempt to render it more comprehensible. In the article, Lord Keynes attempts to force Misesian praxeology into one of two boxes: the analytic a priori or the synthetic a posteriori. Like the positivists before him, Lord Keynes refuses to acknowledge the possibility of the synthetic a priori.

Lord Keynes’s article begins by quoting Mises on praxeology and aprioristic reasoning in general. Immediately, LK makes the claim that Kant’s idea of the synthetic a priori is “untenable.” He cites the advance of non-Euclidean geometry as proof that Kant was mistaken to consider geometry synthetic a priori – and this mistake should, according to LK, cast serious doubt on the whole enterprise of the synthetic a priori in general.

Furthermore, he attempts to show, by linking to another problematic article, that even if one were to grant plausibility to the category of the synthetic a priori, the “action axiom” cannot be a worthy candidate for it because it incorporates synthetic a posteriori knowledge.

But this is all beside the point. The real meat of the article is to illustrate why Mises fails to understand “Philosophy of Mathematics 101” in his inability to separate “pure geometry” and “applied (physical) geometry.” To describe the distinction, he introduces Rudolf Carnap, a logical positivist. Even further in the article, Lord Keynes correctly notes the limited nature of Euclidean geometry in describing a “universal theory of space.” Euclidean geometry, as we know from the theories of Einstein, instead reflects a special case — a subset — of the larger, more general category of non-Euclidean geometry, which can account for advances made in 20th century physics. LK asserts that because Mises misunderstands the nature of geometry, we can safely disregard his musings on philosophy of economics, and on praxeology in particular. Nowhere in the article does LK refute or directly challenge praxeology as a methodology, because he has no doubt done that elsewhere.

My contribution to this riveting discussion is merely to point out a few errors Lord Keynes makes. I will begin as he began.

I.

In the first place, Lord Keynes begins with inappropriate question-begging. He writes, “Kant’s belief in the synthetic a priori is false, and we know this now given the empirical evidence in support of non-Euclidean geometry: this damns Kant’s claim that Euclidean geometry – the geometry of his day – was synthetic a priori” (Salmon 2010: 395).

Notice immediately that Lord Keynes tries to undermine Kant’s notion of the synthetic a priori by use of empirical evidence. This will not do. The synthetic a priori, as Kant formulates it, is a category of knowledge by which we come to understand synthetic claims (claims about the real world) by means of aprioristic reasoning (logical deduction). Pointing to an empirical event as falsifying or refuting a claim about a methodology misses the mark entirely. Kant’s ideas on the synthetic a priori may well be wrong or mistaken, but one must prove so by means of showing where the logical error lies. It is simply poor philosophy to argue that an empirical event can refute epistemic claims. This is a category mistake. Epistemic claims — claims of how we understand knowledge — are of an altogether different category than claims of knowledge themselves.

II.

In the second place, Kant’s musings on pre-Einsteinian geometry are fascinating, but hardly foundational for the synthetic a priori paradigm. As Mises says, men can make mistakes in their logical deductions. Just because Kant did not, or could not, imagine non-Euclidean geometric theorems does not invalidate his notions on the category of the synthetic a priori in general. For years, I too considered pure geometry to be analytic a priori, an edifice of logic that does not necessarily refer to real constructs. This excerpt from Hans Hoppe is worth considering, however, given the context we are discussing:

“[T]he old rationalist claims that geometry, that is, Euclidean geometry is a priori and yet incorporates empirical knowledge about space becomes supported, too, in view of our insight into the praxeological constraints on knowledge. Since the discovery of non-Euclidean geometries and in particular since Einstein’s relativistic theory of gravitation, the prevailing position regarding geometry is once again empiricist and formalist. It conceives of geometry as either being part of empirical, aposteriori physics, or as being empirically meaningless formalisms. Yet that geometry is either mere play, or forever subject to empirical testing seems to be irreconcilable with the fact that Euclidean geometry is the foundation of engineering and construction, and that nobody there ever thinks of such propositions as only hypothetically true.”1

He continues,

“Recognizing knowledge as praxeologically constrained explains why the empiricist-formalist view [of geometry] is incorrect and why the empirical success of Euclidean geometry is no mere accident. Spatial knowledge is also included in the meaning of action. Action is the employment of a physical body in space. Without acting there could be no knowledge of spatial relations, and no measurement. Measuring is relating something to a standard. Without standards, there is no measurement; and there is no measurement, then, which could ever falsify the standard. Evidently, the ultimate standard must be provided by the norms underlying the construction of bodily movements in space and the construction of measurement instruments by means of one’s body and in accordance with the principles of spatial constructions embodied in it. Euclidean geometry, as again Paul Lorenzen in particular has explained, is no more and no less than the reconstruction of the ideal norms underlying our construction of such homogeneous basic forms as points, lines, planes and distances, which are in a more or less perfect but always perfectible way incorporated or realized in even our most primitive instruments of spatial measurements such as a measuring rod. Naturally, these norms and normative implications cannot be falsified by the result of any empirical measurement. On the contrary, their cognitive validity is substantiated by the fact that it is they which make physical measurements in space possible. Any actual measurement must already presuppose the validity of the norms leading to the construction of one’s measurement standards. It is in this sense that geometry is an a priori science; and that it must simultaneously be regarded as an empirically meaningful discipline, because it is not only the very precondition for any empirical spatial description, it is also the precondition for any active orientation in space.”2

I will conclude before the conversation becomes unwieldy. To my understanding, LK makes two errors: one minor and one monumental. The minor error he makes is to give us a red herring; that is, he attempts to use Mises’ misunderstandings (or not – depending on whether Hoppe is correct) of the epistemic status of mathematics and geometry to entirely discount his contributions to the philosophy of economics, and praxeology in particular. His larger and more alarming error is in not recognizing the validity of the category of the synthetic a priori. In so doing, Lord Keynes forces his mind into considering knowledge in only two ways: either analytic a priori (empty formalisms, logic games of which no relation to reality can be made) or synthetic a posteriori (real-world empirical claims, of which continuous testing is done to falsify or confirm a hypothesis). He does not recognize that action necessarily renders us knowledgeable of its logical implications, and, because human action is a real-world phenomenon, does indeed give us knowledge of the real world through the use of the rationalist, deductive process. Hoppe writes elsewhere that action — the substitution of one state of affairs for another — necessarily implies that the actor understands and comprehends a teleological, means-ends framework, and the existence of “time-invariantly operating causes” (the category of causality). No actor could make a decision about whether to interfere or not without understanding that events are connected in a causal framework. Even making empirical observations requires that the observer understand a causal framework, simply in order to make sense of his observations. The understanding of causality is thus inherent and irrefutable within every action.
This renders causality to the status of the synthetic a priori and — that every event is interconnected with other events and causes — is both true logically, because every action demonstrates the actor must know this, and it also gives us usable and important information about the real world.

_________________________________

1. Hans-Hermann Hoppe, Economic Science and the Austrian Method, p. 30.

2. Ibid., p. 31.

Usefulness of Empiricism

Robert Murphy invokes Mises, who argues that history “does [not] permit [the historian to maintain] that an economic law was not valid in ancient Rome or in the empire of the Incas.” I agree that theory is immutable; what changes over time are the relevant conditions (i.e. which theory is empirically relevant may change over time). But, Mises may have been wrong to assert that, “[The historian’s understanding] must never contradict the theories developed by the nonhistorical sciences.”

Empiricism is useful in at least two ways:

  1. Let’s say that a phenomenon has many possible explanations. Take (price) inflation, for instance. For the sake of argument, assume that all the following explanations are internally consistent: (a) a larger supply of money; (b) a lower demand for money; (c) an output shock. Historical study will tell us which explanation is the relevant one. My interpretation of Mises is that this doesn’t fall under his methodological concern (theory is applied to historical interpretation, therefore we need to discriminate between theories and find the relevant one);
  2. What Misesian methodology may contest is the use of empiricism to falsify theory. But, say that through historical investigation we can determine that certain conditions existed at time t. We have a theory that predicts (i.e. explains) a specific outcome given specific conditions, or a → b. If we observe a, but then observe outcome c, where c ≠ b, doesn’t this hint at the possibility that our theory is wrong? It suggests that the theory may not be logically consistent, or that one of its premises is false.
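The second point can be sketched in a few lines of code (a hypothetical illustration; the theory, conditions, and outcome labels are invented, not a real economic model):

```python
# Represent a theory as a mapping from conditions to the outcome it predicts.
# Here "a" -> "b" stands in for "given conditions a, the theory predicts b".
theory = {"a": "b"}

def observation_consistent(theory, conditions, observed_outcome):
    """Return True if the observed outcome matches the theory's prediction
    (or if the theory makes no prediction for these conditions)."""
    predicted = theory.get(conditions)
    return predicted is None or predicted == observed_outcome

print(observation_consistent(theory, "a", "b"))  # True: outcome as predicted
print(observation_consistent(theory, "a", "c"))  # False: c != b, grounds for doubt
```

A `False` here doesn’t mechanically refute the theory, for the reasons discussed below; it flags a tension between the theory and the historical record that calls for closer scrutiny.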

An objection to the second point may be that this kind of observation is very difficult in the social sciences. That’s a valid objection, but I think that most people already accept it. Empirical studies tend to be scrutinized. Statistical methods are reviewed, and are oftentimes found wanting. But, I don’t see any reason why either of the above two uses of empiricism in economics is fundamentally (as opposed to practically) flawed.

Someone may argue that even if historical evidence can suggest a theoretical flaw, the task of falsification (proving a theory wrong) ultimately requires a logical refutation. But, humans are fallible beings, which is exactly why Austrians (especially Hayek) argue against hyper-rationalism. We may not always know how to logically disprove a theory, even if that theory is wrong. What empirical evidence can provide, then, is cause for reasonable doubt. If we assign some probability to the accuracy of a theory, evidence that suggests that the theory is wrong will lead us to revise that probability downwards. The stronger the evidence, the larger the downward revision.
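The probability revision described above is just Bayes’ rule. A minimal sketch (the prior and the likelihoods are made-up numbers, purely for illustration):

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability that a theory is correct, after observing
    evidence that is likelier if the theory is false."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1.0 - prior))

prior = 0.8  # initial confidence in the theory

# Weak contradicting evidence: only slightly likelier if the theory is false.
weak = bayes_update(prior, 0.4, 0.6)     # posterior ~0.73

# Strong contradicting evidence: far likelier if the theory is false.
strong = bayes_update(prior, 0.05, 0.9)  # posterior ~0.18

print(weak, strong)
```

Both posteriors sit below the prior, and the stronger evidence produces the larger downward revision, matching the argument in the text.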

Of course, that humans are fallible beings also means that our interpretations of the evidence have some likelihood of being wrong. That’s why, rather than strictly adhering to a single methodology, most economists are willing to embrace methodological pluralism (I’ve also seen it referred to as critical pluralism). Or, if they’re not, this would be a good starting point for a criticism of common methodology. To increase the probability of falsifying bad theory, we use various methodological tools — if one fails, we have others.

How Not to Approach Empirical Research

Commenting on the ongoing debate regarding Reinhart’s and Rogoff’s spreadsheet and weighting errors, Robert Wenzel writes,

Austrian economics reject empirical data as a method to prove economic theory, for Austrians it is all about logical deductions. Thus, there is not much for Austrians to do, relative to the current Reinhart-Rogoff destruction at the hands of a U Mass graduate student, other than to grab some popcorn and watch with bemusement from the sidelines.

At the risk of being curt, these are probably the worst sentences I’ve read on the whole ordeal. I was a bit disappointed to find Wenzel’s comment quoted at Circle Bastiat, the Mises Institute blog. I’ve already commented there, but there are a lot of people who take what Wenzel writes, and what the Mises Institute buttresses, very seriously. I’m going to reiterate my point. While, at face value, I disagree very little with the sentiment, I find this to be an extremely unimaginative take on the empirical debate over austerity. While it may be true that empirical data cannot disprove theory, this doesn’t mean that the data is irrelevant.

Why can’t data disprove theory? All data must be interpreted. Without theory, history would be largely unexplainable, because the historian (maybe just ‘economist,’ or ‘scientist,’ would be better) wouldn’t be able to draw causal inferences. This creates a dilemma: how do we build accurate theory that can explain empirical phenomena? Most economists, more-or-less, adopt some method of falsificationism, where a model is built and is then tested against the data. The problem is that empirical causal proofs are hard, or even impossible, to discern from complex phenomena (Mises [1998], p. 69; Caldwell [2004] suggests Hayek came to a similar conclusion). Ludwig von Mises’ solution is an a prioristic method, where all economic theory is deduced from a specific axiom: that all human behavior relevant to economic theory is purposeful action. This is close to Lionel Robbins’ definition of economics as a study of human means and ends (Robbins [1932]).

This has led some people, I think erroneously, to deny any role at all to empiricism. The problem is that, ultimately, theory can’t be divorced from reality. The purpose of theory is to interpret the real world, implying that if you avoid empirical research then you’re rendering theory useless. This is especially true when studying complex phenomena, because there are so many conditions present in the real world that we don’t know a priori which theory is relevant and which is not. It can be the case that a theory that our intuition tells us is useful is actually not.

For example, suppose we’re seeking to explain the circumstances of the Depression of 2112. Our intuition tells us that it was caused by malinvestment induced by intertemporal discoordination. In order to know this to be true with any certainty we have to test it against the data: not to disprove it, but to test its applicability. It may turn out that the theory our priors suggested was right can’t actually explain the data we’re looking at. Or, it may be that we find that intertemporal discoordination only explains 30 percent of the data we’re looking at. Maybe we also need to invoke compatible theories to explain other facets of the phenomena in question, such as a monetary glut or banking problems. All of this requires the tools of history, including econometrics — tools that many aren’t comfortable with, because they’ve erroneously abandoned empiricism with the justification of a priorism.

Another, softer case against a priorism comes from the premise that we are fallible beings, prone to errors in our reasoning. There is a case for being skeptical of the ability of the data to disprove theory, especially as the phenomenon becomes more complex and the data more abundant. There is also reason to be skeptical of our theory if the data seemingly contradicts it. The claim here isn’t that data can disprove theory; rather, it’s that there’s no harm in approaching theory critically if we feel that the data clearly contradicts it. This may be the case, for instance, if you find that relatively unregulated and undistorted economies suffer more during depressions under austerity measures than those which benefit from macroeconomic stabilization programs: fiscal and monetary stimulus. If we are error-prone, which means that our theory is likely to be, in part, wrong, then rejecting any path of enlightenment seems counterproductive. This is why I adopt a form of methodological pluralism, or what Caldwell [1989] calls critical pluralism.

A better response to the recent empirical research on austerity and growth is to tackle these interpretations head on (for example, see “Dealing with the Evidence“). I think that many of these studies suffer from omitted variable biases. This is partly the fault of austerity advocates, who have failed to add sufficient nuance to their arguments (although, this isn’t always true). Slow growth isn’t caused only by high government spending, but also by capital consumption, regulation (e.g. inflexible labor markets), and bad governance (e.g. an income distribution that wouldn’t occur in a free market). It could be that a government responds to a crisis by cutting 90 percent of spending, and there may still be slow growth. Does this disprove the theory behind austerity? Well, maybe. It depends on other factors. Again, we are studying complex phenomena. To fully understand these types of events we are going to have to apply a relatively large swath of theory — intertemporal discoordination, inflexible labor markets, government spending, et cetera, on their own aren’t going to explain very much. Austrians should engage the debate by offering multi-causal empirical applications of their causal explanations; otherwise we unnecessarily condemn ourselves to the sidelines.
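The omitted-variable worry can be demonstrated with a small simulation (a hypothetical sketch using NumPy; the variable names, true coefficients, and correlation structure are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Suppose growth depends on both government spending and labor-market
# rigidity, and the two regressors are correlated (made-up coefficients).
spending = rng.normal(size=n)
rigidity = 0.8 * spending + rng.normal(size=n)
growth = -0.5 * spending - 1.0 * rigidity + rng.normal(size=n)

def ols(regressors, y):
    """Least-squares coefficients, intercept first."""
    X = np.column_stack([np.ones(len(y))] + list(regressors))
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Both regressors included: estimates land near the true -0.5 and -1.0.
beta_full = ols([spending, rigidity], growth)

# Rigidity omitted: spending's coefficient absorbs its effect,
# drifting toward -0.5 + 0.8 * (-1.0) = -1.3.
beta_short = ols([spending], growth)

print(beta_full[1:], beta_short[1])
```

A regression of growth on spending alone thus badly overstates spending’s effect, which is the sense in which single-cause empirical work on austerity can mislead.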

Another point that isn’t made enough is that we shouldn’t be afraid of changing our minds. It’s always disconcerting to be proven wrong, but, rather than taking it personally, we should accept it for what it is: a process of improving our understanding of the world. Who cares if someone likes you less because you abandoned some key principle of some belief system? We aren’t in a beauty contest. We’re interested in seeking accurate explanations of real world phenomena: ideology and reputation are completely irrelevant.