Blogging Without Social Media

Anyone who has a successful blog knows that they have to market their content through social media, because otherwise potential readers aren’t going to find them. When I started Economic Thought, I didn’t expect people to find me through Google or some other search engine. Instead, I went out and advertised by linking to my posts on Facebook, Twitter, and what have you.

There are companies out there that want to tell you that blogging once a month, with no social media presence (and no other advertising), will keep your local search rankings up, simply because Google prizes “fresh content.”

I’ve been collecting data over these past few months, and my results show that Google couldn’t care less about these intermittent blogs with no social media outreach:

[Chart: “no blog effect” data]

Yet, there are plenty of business owners spending money on intermittent blogging.

Google, Hayek, and the Future of Search Engine Optimization

I’ve been working on a website’s SEO (search engine optimization) this past week, in an attempt to push it back up to the first page of a local Google search. Even though I’m only about fourteen percent of the way through the project, I did manage to get it on the first page of a Boston, MA search for “cosmetic dentist.” Cool story, right? Anyway, look at the image of the search below and notice which terms Google highlights:

Key terms are becoming progressively less important in SEO.

Google related “cosmetic dentist” to “implant dentistry.” It does this by grouping certain terms together into categories and using those categories to help the algorithm return more comprehensive, more precise search results. This is progress toward search engines’ ultimate objective of interpreting complex sentences, that is, natural language processing. Google, in particular, has developed such a complex algorithm that keyword optimization (at one point, SEO 101) is becoming less and less important. I’m sure many people have caught on to this, but many still haven’t. Based on my experience, I’m willing to bet that most content marketing firms are using antiquated SEO techniques. In part, that may be because it’s still not entirely clear what Google is doing. This is where Hayek comes in.
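To make the idea of term categories a bit more concrete, here is a toy sketch of distributional similarity: terms that tend to show up on the same kinds of pages get grouped together. To be clear, the mini-corpus and the co-occurrence-plus-cosine approach below are my own stand-ins for illustration; nobody outside Google knows exactly what its algorithm does.

```python
# Toy illustration: group search terms into rough "categories" by the
# documents they co-occur in. This is NOT Google's algorithm; it is just
# a sketch of the general idea of distributional similarity.
from math import sqrt

# A hypothetical mini-corpus of page snippets (made up for illustration).
docs = [
    "our cosmetic dentist offers implant dentistry and porcelain veneers",
    "implant dentistry from an experienced cosmetic dentist in boston",
    "teeth whitening and veneers by a local cosmetic dentist",
    "economic calculation and the use of knowledge in society",
    "hayek on prices as signals of dispersed knowledge",
]

terms = ["cosmetic dentist", "implant dentistry", "veneers", "knowledge", "prices"]

# Represent each term as a binary vector over documents (1 = term appears).
def occurrence_vector(term):
    return [1 if term in doc else 0 for doc in docs]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

vectors = {t: occurrence_vector(t) for t in terms}

# Terms that show up on the same kinds of pages end up "close" to each other,
# which is one crude way a search engine could treat them as one category.
for a in terms:
    for b in terms:
        if a < b:
            print(f"{a!r} vs {b!r}: {cosine(vectors[a], vectors[b]):.2f}")
```

In this toy setup, “cosmetic dentist” and “implant dentistry” come out close to each other and far from the Hayek-related terms, which is the flavor of what the highlighted search results suggest Google is doing at vastly greater scale.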

Quote of the Week

This paragraph is huge, so for the sake of readability I’m breaking it up (and the paragraph continues beyond the point where the excerpt below ends):

What we perceive are never unique properties of individual objects, but always only properties which the objects have in common with other objects. Perception is thus always an interpretation, the placing of something into one or several classes of objects. The characteristic attributes of sensory qualities, or the classes into which different events are placed in the process of perception, are not attributes which are possessed by these events and which are in some manner ‘communicated’ to the mind; they consist entirely in the ‘differentiating’ responses of the organism by which the qualitative classification or order of these events is created; and it is contended that this classification is based on the connexions created in the nervous system by past ‘linkages.’

The qualities which we attribute to the experienced objects are, strictly speaking, not properties of objects at all, but a set of relations by which our nervous system classifies them. To put it differently, all we know about the world is of the nature of theories and all ‘experience’ can do is to change these theories. All sensory perception is necessarily ‘abstract’ in that it always selects certain aspects or features of a given situation. Every sensation, even the ‘purest,’ must therefore be regarded as an interpretation of an event in the light of the past experience of the individual or the species.

— Heinrich Klüver, “Introduction,” in F.A. Hayek, The Sensory Order (University of Chicago Press: Chicago, 1976 [1952]), pp. xviii–xix.

Before You Hate, Read it For Yourself

Might seem excessive to blog about this, but it’s a meme that has to stop.

The Intercollegiate Review has a list of the 50 worst books of the 20th century. This was too predictable:

John Maynard Keynes, The General Theory of Employment, Interest, and Money (1936)

This book did for Big Government what Rachel Carson’s Silent Spring did for the tse-tse fly.

1. The blurb is stupid and false.

2. Just because a book says something you disagree with doesn’t make it a bad book. Whenever someone tells you not to read a book because its content is disagreeable, you should take it as a hint to do the exact opposite and read it.

Other entries include Karl Popper and John Rawls. Try telling an academic political philosopher that Rawls’ A Theory of Justice is one of the worst (non-fiction) books of the 20th century. It’s too bad people read and believe the misinformed trash these publications put out.

A Theory About Duplicate Content

This is more of a question for those who have more experience in search engine optimization. I’m asking because I haven’t yet tried to test this empirically and I’m wondering if it’s even worth the effort to do so — better to know I’m completely wrong before I invest, right?

It’s my understanding that Google categorizes different search terms to group those with a common meaning. Put another way, the search may use the category’s definition rather than the individual words’. I wonder if Google does the same thing when judging duplicate content. Suppose I have a website on Friedrich Hayek and the content sucks: one page gives a very superficial overview of his business cycle theory, and three other pages cover exactly the same points, just written differently. My theory is that Google would know the content is duplicate, because it can use term categories to figure out the general ideas being covered on each page.

A more concrete example of a site with duplicate content, but where each page has unique enough content to pass a simple duplication check (one that might test only for copying and pasting), is Teeth Tomorrow™. Check out the product description page, the “Why Teeth Tomorrow™” page, and the “Benefits” page. They all cover pretty much the same points, just written in slightly different ways. My theory is that Google is penalizing that site, which, although fifth when you search for “Teeth Tomorrow”, is ranked below the doctor’s main site and below an infosite on dental implants (also owned by the same doctor), both of which have only one page on the product rather than seven or more.
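If the theory is right, the relevant check isn’t literal copy-and-paste but topical overlap. Here is a rough sketch of the difference, using TF-IDF cosine similarity as a stand-in for whatever Google actually measures; the two snippets are invented paraphrases, not the site’s real copy, and scikit-learn is assumed.

```python
# Sketch: exact-match duplicate check vs. similarity-based check.
# The sample text and thresholds are made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

page_a = ("Our full-arch bridge replaces failing teeth with a permanent, "
          "non-removable solution placed on dental implants.")
page_b = ("Failing teeth can be replaced with a permanent bridge that is "
          "fixed on dental implants and never removed.")

# 1) Naive check: only flags literal copy-and-paste.
print("Exact duplicate:", page_a.strip().lower() == page_b.strip().lower())

# 2) Similarity check: compares what the pages are about, not their wording.
tfidf = TfidfVectorizer(stop_words="english").fit_transform([page_a, page_b])
score = cosine_similarity(tfidf[0], tfidf[1])[0, 0]
print(f"Cosine similarity: {score:.2f}")  # higher score = more topical overlap
```

A writer can pass the first check while failing the second, which is exactly the situation I’m describing on those product pages.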

Am I re-inventing the wheel? Am I wrong? Just wondering, because that should offer a clue on how content marketers should train their writers.

Quote of the Week

Today, two excerpts from the same section of the book. This should stir up some conversation — my comments are meant to provide some ideas of what could be discussed:

Tullock distinguishes, basically, between the relationship of exchange, which he calls the economic, and the relationship of slavery, which he calls the political. I use bold words here, but I do so deliberately. In its pure or ideal form, the superior-inferior relationship is that of the master and the slave. If the inferior has no alternative means of improving his own well-being other than through pleasing his superior, he is, in fact, a “slave,” pure and simple. This remains true quite independent of the particular institutional constraints that may or may not inhibit the behavior of the superior. It matters not whether the superior can capitalize the human personality of the inferior and market him as an asset. Interestingly enough, the common usage of the word “slavery” refers to an institutional structure in which exchange was a dominant relationship. In other words, to the social scientist at any rate, the mention of “slavery” calls to mind the exchange process, with the things exchanged being “slaves.” The word itself does not particularize the relationship between master and slave at all.

— James M. Buchanan, in the foreword to Gordon Tullock, Bureaucracy (Liberty Fund: Indianapolis, 2005), p. 7.

Keep in mind that the realm of the political, going on these definitions, extends to the organization of a firm. Buchanan gives meaning, most likely unintentionally, to the concept of “wage slavery.”

Tullock’s distinction here can also be useful in discussing an age-old philosophical dilemma. When is a man confronted with a free choice? The traveler’s choice between giving up his purse and death, as offered to him by a highwayman, is, in reality, no choice at all. Yet philosophers have found it difficult to define explicitly the line that divides situations into categories of free and unfree or coerced choices. One approach to a possible classification here lies in the extent to which individual response to an apparent choice situation might be predicted by an external observer. If, in fact, the specific action of the individual, confronted with an apparent choice, is predictable within narrow limits, no effective choosing or deciding process could take place. By comparison, if the individual response is not predictable with a high degree of probability, choice can be defined as being effectively free. By implication, Tullock’s analysis would suggest that individual action in a political relationship might be somewhat more predictable than individual action in the economic relationship because of the simple fact that, in the latter, there exists alternatives. If this implication is correctly drawn, the possibilities of developing a predictive “science” of “politics” would seem to be inherently greater than those of developing a science of economics.

— pp. 8–9.

Buchanan goes on to describe Tullock’s method, which is to appeal to the reader’s experience and intuition. My experience and intuition disagree with Tullock, though. I don’t share this view of modern bureaucracy, and I don’t think the term “slavery” (as used by Buchanan) captures how modern hierarchical organizations work. Bureaucracies come up against the same knowledge problem that affects markets, and so a good bureaucracy is one in which there is some process by which local agents have enough freedom of choice to exploit local knowledge, and by which this local knowledge is delivered, by proxy, to others who may need it.

As it turns out, bureaucracies often do work like this, both within firms and within larger hierarchical organizations, such as government bureaus. Take the high school teacher, for example; he or she is afforded considerable, although not absolute, liberty in determining how to teach the students. Similarly, political agents often enjoy quite a bit of autonomy when it comes to carrying out daily activities at the “workplace”; while not a perfect example, think of how political agents operate in the show House of Cards. There are constraints on choice, often based on “orders from above,” but bureaucratic agents are rarely “slaves” in the sense of having only two options: what your “master” wants you to do, or something comparable to death.

Buchanan mentions that the “science of politics” may enjoy more predictive value than the “science of economics,” based on the idea that a political agent’s domain of choice is severely constricted. But the explanatory power of public choice theory has been questioned: it can’t explain why people vote, it can’t always explain how politicians choose, et cetera. If the critics are right, that erodes the empirical meaning of Buchanan’s and Tullock’s exchange/slavery distinction between the economic and the political, respectively. (Ironically, in The Calculus of Consent, Tullock and Buchanan seem to frame the political in the context of exchange.)

Finally, and totally separate from what we’ve discussed so far, Buchanan here seems to give credence to the philosophical position that the choice between accepting the lowest market wage and starving to death is not really a choice at all, and that the laborer in that case truly is a wage slave. What are the normative implications, in the context of justice?

Fractional Reserve Banking Made Simple

I’m about to kick a dead horse, but every once in a while you see the horse’s ghost gallop about the internet. The notion that fractional reserve banking is “fraudulent” and “unstable” is a “brain worm” that deserves to be extinguished.

Part of what a bank does is intermediate between savers and borrowers. When you put your money into a savings account, the bank will lend it out. Fractional reserve banking works the same way, but with a relatively liquid type of deposit. There’s nothing fraudulent about it.
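A minimal balance-sheet sketch of that intermediation, with made-up figures: the deposit is a liability of the bank, the loan it funds is an asset, and both claims coexist on the ledger.

```python
# Made-up balance sheet for a bank that takes a $1,000 deposit, keeps 10%
# as reserves, and lends out the rest. The depositor's claim and the
# borrower's loan coexist on opposite sides of the ledger.
deposit = 1_000
reserve_ratio = 0.10

bank = {
    "assets":      {"reserves": deposit * reserve_ratio,        # 100
                    "loans":    deposit * (1 - reserve_ratio)},  # 900
    "liabilities": {"deposits": deposit},                        # 1000
}

assert sum(bank["assets"].values()) == sum(bank["liabilities"].values())
print(bank)
```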

I’m relatively young and I don’t make a big income, so I keep a good amount of money in a demand deposit. If I ever unexpectedly need it, it’s there. Most of the time, it just sits there. Anything not being presently consumed is being saved for future consumption, so those dollars are savings — just like a savings deposit, but (given regulatory rules) with no interest and greater liquidity. The bank will lend these savings out.

Are there two claims on the same money? In a sense, yes, but that’s true of just about any savings vehicle. The money you’re lending is yours; you’re just not currently spending it, so it can be lent out. You might argue that the problem with fractional reserves arises when depositors go to the bank to withdraw their money. This isn’t an issue unique to deposit banking. It’s called a maturity mismatch, and it can happen with any kind of asset. In fact, it’s inherent to banking: banks borrow short and lend long.

The “trick” is to manage these different assets and rely on the law of large numbers to make sure you always have sufficient liquidity to pay off short-term liabilities. That’s what successful banks accomplish. Without the ability to juggle assets of different term lengths, the intermediation industry is going to be very inefficient.
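Here is a minimal simulation of that point, with made-up parameters: when withdrawals are independent across many depositors, the daily total clusters tightly around its average, so a modest reserve covers redemptions on essentially every normal day.

```python
# Toy simulation: why the law of large numbers lets a bank hold fractional
# reserves. All parameters are invented for illustration only.
import numpy as np

rng = np.random.default_rng(42)

N_DEPOSITORS = 10_000
BALANCE = 1_000        # each depositor holds $1,000 in a demand deposit
P_WITHDRAW = 0.02      # on a normal day, each depositor withdraws with prob. 2%
RESERVE_RATIO = 0.05   # the bank keeps 5% of deposits as cash reserves
DAYS = 10_000

reserves = N_DEPOSITORS * BALANCE * RESERVE_RATIO

# Daily withdrawal counts are binomial, so they cluster tightly around 2%.
withdrawals = rng.binomial(N_DEPOSITORS, P_WITHDRAW, size=DAYS)
shortfall_days = int((withdrawals * BALANCE > reserves).sum())

print(f"Days the bank could not meet withdrawals from reserves: "
      f"{shortfall_days} / {DAYS}")
# Independent withdrawals are predictable in aggregate; a bank run, where
# withdrawals become correlated, is exactly the case this does not cover.
```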

What’s the relationship between fractional reserves and economic crises? Some observe that many financial crises are preceded by bank runs and conclude that the maturity mismatch must have been at fault. It’s strange, actually, that some Austrians would believe this, because they’re the ones always warning their peers against mistaking the symptom (the crash) for the cause (credit expansion). Business cycles are caused by excess supplies of money, which change the distribution of profits. When money supply growth begins to slow down, this distribution changes again, hence the sudden loss in profitability across large swaths of industry.

Just because too much sugar is bad for you doesn’t mean all sugar is bad. There’s nothing inherently destabilizing about fractional reserve banking as long as excess money is minimized. So what’s the difference between “excess money” and lending on fractional reserves?

As with any other economic good, there is a point at which the demand for and supply of money are equal. Unlike many other goods, though, money has to clear in multiple markets. When the demand for money increases (ignoring, for a minute, the ability to increase supply), the prices of the goods that exchange against money have to fall in order to clear against the higher relative value of the currency. If prices don’t adjust and exchange suffers, we call that a shortage of money. On the other hand, if there’s more money than people are willing to hold, that’s excess money. It will continue to circulate (the “hot potato” effect) until it returns for redemption or until the price level rises, the relative value of money falls, and demand and supply are again equal.
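To put rough numbers on that adjustment (my own gloss, using the textbook equation of exchange rather than anything above): holding velocity and real output fixed, money people are unwilling to hold keeps circulating until the price level has risen enough that desired balances again equal the money supply.

```python
# Back-of-the-envelope illustration of the "hot potato" adjustment using the
# equation of exchange M*V = P*Y. Velocity V and real output Y are held fixed;
# only the price level P adjusts. Numbers are arbitrary.
M0, V, Y = 1_000, 5, 5_000       # initial money supply, velocity, real output
P0 = M0 * V / Y                  # initial price level = 1.0

M1 = 1_200                       # money supply grows 20% with no change in demand
P1 = M1 * V / Y                  # excess balances circulate until P rises to 1.2

print(f"Price level: {P0:.2f} -> {P1:.2f}")
# Real money balances M/P return to their original level (1000/1.0 == 1200/1.2),
# which is the sense in which money demand and supply are "again equal."
```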

That a bank lends on fractional reserves doesn’t really tell you whether there is excess money. When the demand for money increases, the volume of deposits might swell (the amount of liabilities returning to the bank for redemption will fall), which allows the bank to issue more credit. In this case, the banking system is increasing the supply of money to meet the heightened demand. That’s why, if you’re worried about the business cycle, blaming fractional reserve banking is the wrong way to go. What you should really be worried about is excess money.

How do we limit banks’ ability to create liabilities without enforcing full reserves? Through coordinating monetary institutions (rules or constraints), which may include:

  • In a competitive banking system, banks holding other banks’ liabilities will send them in for redemption, draining the issuing bank’s reserves. If a bank over-issues money, it will suffer from illiquidity; if it under-issues, it forgoes revenue it could have earned by putting its assets to fuller use (see the sketch after this list).
  • If banks could pay competitive interest on demand deposits, they’d have to raise this rate to attract new deposits to fund their lending. But as the supply of loanable funds increases, the rate of interest on those loans will fall. If the latter rate (on loans) falls below the former (on deposits), the bank is making a loss.
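Here is the sketch of the first mechanism, adverse clearing, promised above; the balance-sheet numbers are invented and the setup is deliberately stylized.

```python
# Stylized adverse-clearing example with invented numbers: Bank A expands its
# note/deposit issue faster than Bank B, so more of A's liabilities end up in
# B's hands than vice versa, and the clearinghouse settles the difference out
# of A's reserves.
reserves = {"A": 100, "B": 100}

# Liabilities of each bank that ended up held by the *other* bank this period.
# Bank A over-issued, so B is holding a lot of A's paper.
claims_on = {"A": 60,   # B holds 60 of A's liabilities
             "B": 20}   # A holds 20 of B's liabilities

net_due_from_A = claims_on["A"] - claims_on["B"]   # A owes B the net 40
reserves["A"] -= net_due_from_A
reserves["B"] += net_due_from_A

print(reserves)   # {'A': 60, 'B': 140}: over-issue shows up as a reserve drain
# Repeat this for a few clearing periods and the over-issuing bank runs dry,
# which is what disciplines issuance in a competitive system.
```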

That’s why it pains me when I read Austrians cheering for recent IMF studies and old Chicago research papers supporting 100 percent reserves. They’re worrying too much about the symptom and they don’t realize that they’re supporting the cause: bad monetary institutions (after all, it’s not like these IMF and old Chicago School economists are advocating for free banking).