Informational Limits to the Size of States

Sometimes the state is the least costly means of solving a public goods problem, but there’s good reason to believe that the size of effective states should be limited: not for normative reasons, but for positive reasons of efficiency. In The Calculus of Consent, Buchanan and Tullock introduce two cost curves. The first shows the relationship between the external costs of collective action and the proportion of the population that has to consent to a decision, with zero external costs achieved at unanimity (if everyone agrees to a decision, nobody loses). The second shows the costs of reaching a decision under a given decision-making rule: the closer a rule gets to unanimity, the costlier it is, because there are costs to bargaining with people, costs to learning information that can put you in a bargaining position, et cetera. One direct conclusion is that the optimal decision-making rule is the one that minimizes the sum of these two costs, which is oftentimes not unanimity (meaning some external costs to government are oftentimes worth bearing). Another, indirect conclusion, which the authors suggest later, is that the costs of decision-making can be lowered if the voting population is reduced (if the state is limited to smaller populations).
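The tradeoff can be sketched numerically. The functional forms below are entirely hypothetical, chosen only to reproduce the qualitative shapes Buchanan and Tullock describe: external costs fall to zero at unanimity, while decision-making costs rise sharply as unanimity is approached.

```python
# Hypothetical sketch of Buchanan and Tullock's two cost curves.
# The specific functions and coefficients are assumptions, not the
# authors' own; only the shapes matter for the argument.

def external_costs(k):
    """Expected costs imposed on dissenters when a fraction k must consent.

    Zero at k = 1 (unanimity): if everyone agrees, nobody loses."""
    return 100 * (1 - k) ** 2

def decision_costs(k):
    """Bargaining and information costs of assembling a consenting
    fraction k; these explode as k approaches unanimity."""
    return 10 * k / (1.001 - k)

# Search over decision rules for the one minimizing total costs.
rules = [i / 100 for i in range(101)]
best = min(rules, key=lambda k: external_costs(k) + decision_costs(k))
print(best)  # an interior optimum, strictly below unanimity
```

Under these assumed curves the cost-minimizing rule lands strictly between simple majority and unanimity, which is the "direct conclusion" above: some external costs of government are worth bearing.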

There are at least two other reasons I can think of why this last proposition may be true.

One reason we may want bigger states is to overcome economies-of-scale problems with certain public goods. Economies of scale is when marginal costs fall with growing output, implying high fixed costs. In private industry, large economies of scale may serve as a barrier to entry, since firms may need significant budgets to cover fixed costs and to weather the possibility of temporary losses until production is increased. However, there are costs to large states.
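A minimal numeric sketch, with assumed figures, of how high fixed costs create the scale advantage: spreading a large fixed cost over more units pulls the per-unit cost down, which is what a small entrant cannot match.

```python
# Hypothetical cost structure: the numbers are assumptions chosen
# only to illustrate the fixed-cost barrier to entry.

FIXED = 1000.0    # fixed cost, paid regardless of output (assumed)
MARGINAL = 2.0    # constant per-unit production cost (assumed)

def average_cost(q):
    """Per-unit cost at output q: fixed costs spread over all units."""
    return (FIXED + MARGINAL * q) / q

print(average_cost(10))    # 102.0 -- a small producer pays dearly per unit
print(average_cost(1000))  # 3.0   -- a large producer approaches marginal cost
```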

The larger the externality, the more information is necessary to solve it. As the number of people affected by the externality grows, so do the costs imposed on the state to extract information from each individual — in other words, the marginal cost curve to information collection is upward sloping. But it’s not only the information the state can gather that’s relevant; also important is the information that has a low probability of ever being known. We can call this radical ignorance. We know that we don’t know everything, and to some extent it’s worth paying to be informed on random knowledge. This is why we read the newspaper even if we have no idea what that newspaper will publish. But, until we discover it, the characteristics of these data are simply unknown to us; that is, while we recognize that there is information we don’t know, we also don’t know exactly what this information entails. The larger the scope of decision-making and the more people it affects, the higher the probability that we don’t know things that may actually be relevant, and therefore the higher the probability that our decision will be, in some way, wrong. This affects the state as much as it does the individual in the private sphere.
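The upward-sloping marginal cost claim can be sketched as follows; the cost function is a hypothetical assumption, chosen only to show that when each additional person is costlier to canvass than the last, total information costs grow faster than the population does.

```python
# Hypothetical, assumed cost of extracting information from the n-th
# individual: rising in n (coordination overhead grows with population).

def marginal_info_cost(n):
    return 1.0 + 0.01 * n  # assumed upward-sloping marginal cost

def total_info_cost(N):
    """Total cost of canvassing a population of size N."""
    return sum(marginal_info_cost(n) for n in range(1, N + 1))

# The cost *per person* rises with the size of the affected population:
print(total_info_cost(100) / 100)    # average cost per person at N = 100
print(total_info_cost(1000) / 1000)  # a higher average at N = 1000
```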

The implication is that informational costs — both the costs of acquisition and the costs of practically non-acquirable information — can impose theoretical size constraints on the state, despite whatever advantages economies of scale may offer. This means that there are some public goods problems that could, if we drop certain conditions, be efficiently solved through collective action, but that, because of informational costs, are no longer worth solving. It may be the case that it’s less costly to let them continue.

Related to this discussion, although only tangentially, is the fact that the larger the state grows, the less efficient it becomes at solving local, or smaller-scale, externalities. One reason this may be true is that, as the number of citizens grows, the government becomes less able to distinguish between relevant population groups. Informational costs also make it less likely that a larger state will allot the right amount of funding to each group affected by some externality. This is why some countries practice decentralization, or federalism, where local governments take the place of a central authority in local decision-making. Decentralization does have to do with local representation and with avoiding the problem of being ignored by a larger, centralized state, but these concerns are directly related to the economic problems just referred to.

This whole discussion is good evidence in favor of the proposition that smaller, localized governments are superior to larger, centralized governments. By local I don’t necessarily mean state-level governments; it could be city-level government, or something even smaller. This is ultimately an empirical question. What this may suggest is that “anarchy,” ultimately, may just be institutional evolution toward smaller states. Economies-of-scale problems, if they exist, can be solved by imposing stronger constraints on any larger governments that may arise: prohibiting them from interfering in local affairs, while providing them the ability to solve larger-scale public goods problems (e.g. national defense, if national defense really does fit the conditions). This is, in fact, what we saw in the United States during the transition from the English monarchy (although there were already local democratic governments in many of the colonies) to loose confederacy to Federalist state. Although, in this real-world example, the constraints aren’t strong enough.

13 thoughts on “Informational Limits to the Size of States”

  1. JL

    On this topic, I think that for every epoch and circumstance there exists an optimal size for the state. I think the optimum was large states in the past (less complexity, easier to defend, more powerful in war), and that this optimum has been declining (more complexity, a focus on voluntary relations rather than war, inter-communitarian networks of trust (markets, basically)) to the point that, for many governmental roles, the optimum could now be the city (as you say) or the neighborhood.
    Other roles may still require a larger state (national defense in conflictive geographical zones). This could be solved with a large state for defense and smaller governance structures for society.

    1. JCatalan

      I’m wary of calling any set of institutions optimal. You may find interesting a paper by Daron Acemoglu called “Why Not a Political Coase Theorem?” (if you Google the title and his last name, the first link should be a direct download). He puts emphasis on the distribution of power, and so the question becomes “optimal for whom.”

  2. A Dismal Economist

    This is a topic I’ve also been discussing over at my blog http://dismalecon.blogspot.ca. I’ve dubbed it the Too Big To Succeed problem. It seems that on average industries, corporations and governments continue to grow larger, concentrate market share or centralize decision making. The bigger something gets the further key decision makers are from all those relevant pieces of information as you mention. This increases the probability of making judgment errors. The rub is that the bigger something is, the more ties it has to everything else. Which means that an error has externalities that can cascade much further than it would from a smaller firm. The example I like to use is an airport. Hit Heathrow with some fog and you have a global flight epidemic. Had more flights gone out of a broader set of smaller regional airports the problem can be contained. National government policies are much more dangerous if they’re garbage than if they’re attempted at the state or municipal level first.

    What’s funny is that those in charge of these big organizations become overconfident as a result of their power and influence, compounding the probability of a major screw up.

    1. JCatalan

      I know Austrians have done some work on firm size and the calculation problem. I swear I’ve seen something similar recently in the non-Austrian literature, but I don’t remember where. And your point about cascading is a good one. A larger government may overinvest in some public good and underinvest in other public goods that would actually be better supplied by smaller, more local governments.

  3. Stadius

    Economies of scale is when marginal costs fall with growing output, implying high fixed costs.

    Falling marginal costs don’t imply high fixed costs; here’s an example:

    Total costs 1 = 100 + 2q - 0.25q^2
    Total costs 2 = 1000 + 2q - 0.25q^2

    => Marginal costs 1 = Marginal costs 2 = 2 - 0.5q

    So, I don’t know, maybe you meant

    Economies of scale is when average costs fall with growing output, implying high fixed costs.

    Though even in this case, high fixed costs are just one potential explanation, the other being falling marginal costs (or a combination of the two).

  4. Alex Salter

    Solid post. My only complaint is that efficiency, or at least its desirability as an end, is also normative. At the very least, the analytical egalitarianism on which efficiency rests represents an implicit value judgement about who gets counted in the welfare calculus.

  5. Dan

    It is impossible to demonstrate that even a single person values the benefit of the “public good” more than what he pays for it via taxes. Yet this doesn’t apparently bother you and the public goods theorists. You construct cost functions and theorize about the efficiency of “collective action.” This is, after all, nothing more than an attempt to overcome the calculation problem of socialism. You think that because some alleged “free marketeers” have constructed these cost functions, they are more credible, but really, it all comes down to the fact that you cannot even solve the main problem: before you talk about costs, benefits, and efficiency, you must have an exchange system where actors can actually demonstrate their preferences and values, rather than having a few people just assume them for everybody.

    1. JCatalan

      I’m not sure you’re really that familiar with what’s being discussed. I think you have a distorted idea of how economists have approached the topic. For example, you throw out the term “cost function,” but I don’t know what relationship you see between these functions and the quantitative definition of a public good. Whatever your answer is, the actual answer is: none. Nobody is pretending to be able to solve public goods problems theoretically by building random cost functions. Nobody is forgetting the socialist calculation problem. You may be interested in actually reading some of the material.

      1. Dan(DD5)

        I am familiar with the Calculus of Consent. I do not need to address it specifically because you have just read it (and I did so long before you probably even heard of it). The fallacies are right there in your article, and that is what I am addressing. People want to be coerced as long as everybody is coerced; or just forget about the coercion, and let’s talk about the government as just another agency providing a service… now let’s analyze the costs… All this talk about unanimity is pure nonsense and, frankly, quite dangerous.

        1. JCatalan

          What fallacies? I never said, anywhere, that we can make a priori quantitative measures of public good problems. Neither did I write, “[p]eople want to be coerced as long as everybody is coerced.” I don’t even know what your last sentence has to do with the preceding ones! (Unanimity is used as a benchmark in the book, not as a description of real world political systems.)

          1. Dan(DD5)

            The “unanimity benchmark” is what exactly if not “people agree to be coerced as long as everybody is coerced”?

            Really, you have to read more than what the authors literally feed you in their writing. You have to realize the logical implications of their assumptions. This alleged “benchmark” is no benchmark, in the scientific sense of the term.

          2. JCatalan

            Unanimity implies that nobody is being coerced; the relationship between the state and the members of that society is voluntary. If there is a mistake, it’s that where unanimity is possible, there’s a strong case that the same outcome can be accomplished privately — “the state” becomes some kind of arbitrator, or some venue by which people express their preferences.

            This alleged “benchmark” is no benchmark, in the scientific sense of the term.

            I’m not sure what you mean. Is your belief that it’s not scientific based only on your emphasis on coercion? If so, I don’t see your logic. Edit: What makes the unanimity rule useful is that it demonstrates that there are external costs to government — that’s how Buchanan and Tullock use it.

