Category Archives: Miscellaneous

Should Employers Stop Drug Testing?

My first article for The Daily Confidential,

When the cost of illicit drug consumption is very high, the only people likely to pay that price are addicts: people who are hooked, and so must consume at any price. Think of cigarettes. Why do tax increases on cigarettes have only a marginal impact on cigarette use? Because tobacco, along with the other chemicals added to the product, is addicting, and addicts are willing to pay higher prices. When the price falls, a broader class of people is likely to join the market: non-addicts who have no acquired impulse to consume at whatever cost, but who use recreationally. Thus, the more the cost of drug use falls, the more the average illicit consumer resembles the average person.

Don’t be a Bully

I apologize for the sparse activity. My professional life has been taking up more time than usual. Hopefully, I will be able to return to my normal reading and blogging habits shortly.

I do, however, want to comment on something that builds on Daniel Kuehn’s recent sarcasm. He notes that a recent Mises.ca blog post on the minimum wage associates the movement to raise the minimum wage with Marxism. I think Thomas Salamanca’s piece is a little more nuanced than that, but it is true that it falls back on blaming support of the minimum wage on a misunderstanding of markets and general liberalism — the leftist kind of liberalism, that is. This is not the kind of approach I would take if I were to write a post on why the minimum wage is bad policy.

Salamanca’s article reminds me a little bit of the recent New York Times piece on Rand Paul. The authors clearly opted to attack the weakest parts of the libertarian movement (although the content was accurate), especially the radicalism that some of its proponents project. They received a good deal of criticism for their writing strategy, and I think rightly so. In my comment on the piece, I considered the strategy “cowardly.”

Writers who attack the weakest version of an argument are akin to bullies. Bullies prey on the weak because it makes them feel powerful. The writer’s version of the bully is similar: he attacks the weak to make his own argument as persuasive as possible. If the reader thinks the ideas being criticized are absolutely ludicrous, then the writer bolsters his own argument by making the alternative too ridiculous to agree with. It is a disservice to the reader, and it is disingenuous on the part of the author (and I apologize if I have ever done it).

Attacking the anti-market ideas associated with some of those who support the minimum wage is attacking the weakest exponents of the problem being tackled. The strongest arguments in favor of the minimum wage are the theoretical models that show that, under the right conditions, a minimum wage can bring about a net welfare increase, and the empirical studies that show, at worst, a neutral effect on employment. Salamanca offers one paragraph in reference to this literature, but quotes only from one (critical) study and dismisses the evidence by claiming that studies that do not show disemployment effects “are significantly outnumbered by studies demonstrating the opposite,” a claim I don’t think is necessarily true, and one that ultimately doesn’t address which side of the literature offers the best arguments and the best evidence.

There are many learned people who support both the minimum wage and markets. They know what voluntary association and free trade have done for society. They understand that more radical departures from capitalism, such as communism, are for the worse. These are the people who drive policy, and these are the people who will not be persuaded by articles like Salamanca’s. Neither should a skeptical reader be persuaded. The skeptical reader should always want the strongest argument in favor or against a position.

I feel that I’m being too hard on Salamanca. Honestly, he’s not my target, and I don’t think his piece is necessarily bad. He’s just unfortunate that I’m using his article as my example. I’m addressing a broad audience. My point is: when you write on an issue, don’t be a bully. Seek the strongest arguments against your position and attack those. Arguments that attack weak positions are typically weak themselves, because they don’t address the stronger ideas. If you are being a bully, be aware that you are publishing a weak argument yourself.

The Skeptical Reader

Put yourself in the shoes of a student — someone who wants to learn — and imagine the perfect world. In this reality, writers and scientists take care to sum up alternative theories accurately, so when you compare and contrast the alternatives you are guaranteed their faithful representation. One of the great benefits of such a world is the time saved, because students wouldn’t feel the need to read various sources to judge the veracity of each of them individually. Unfortunately, such a world doesn’t exist. In our reality, most people know the positions of their opponents much less well than their own, and we cannot guarantee a faithful representation of alternative theories. If we aren’t skeptical, we fall into the trap of rejecting one or more theories, because their inaccurate representation misleads us when judging between alternatives.

We know that we don’t live in a perfect world, because there are constant complaints of misrepresentation and misinterpretation. Consider, for instance, the recent controversy over Paul Krugman’s post on unemployment insurance and demand-side recessions. Suppose that you, the student, never read The Conscience of a Liberal, because you sympathize with the Austrians (or whoever) and you find reading Krugman a waste of time (your time is better spent reading other blogs). Instead, you read Bob Murphy, Russ Roberts, and David Henderson. Do they give you a good representation of Krugman’s argument? Not according to Scott Sumner, Daniel Kuehn, and Chris Dillow. My intention isn’t to weigh in on who’s right. Rather, I just want to point out that there are diverging interpretations of Krugman’s post; and, if the student were to read only one blog (that’s not Krugman’s), they run the risk (in this example, 50 percent probability, right?) of misinforming themselves on Krugman’s argument.

These considerations (in general, not the example above) are what prompted Bryan Caplan to propose an ideological Turing test (ITT). Presumably, failing an ITT leads to a loss in reputation. The ITT is a screening device, to help weed out economists who do not do their due diligence when researching and representing their intellectual opponents. The problems with the ITT are: (a) it can be easy to pass if those being tested can deliver canned responses; and (b) an objective panel of judges can be difficult to construct.

Another problem is that it may not be rational to want to pass an ideological Turing test. Assume that the ultimate end of research is to showcase a persuasive explanation of some phenomenon x; in other words, the benefits of your research are a function only of how persuasive your theory is. Suppose there are two cost curves that look something like this (excuse my [lack of] drawing skills),

Two Cost Curves of Research

The more time you spend on other economists’ research, especially research you disagree with, the more accurate your representation of their arguments, and the lower the costs associated with misrepresentation. This is because, if your readers catch you misrepresenting the opposition, this will give them reason to doubt your conclusions in general. However, it’s also true that the more time you spend on others’ research, the less time you spend on your own, and the weaker your argument will be. The combined cost curve looks something like this (warning, this graph is even more poorly drawn than the last two),

Combined Cost Curve

Because the researcher has to balance these costs, it’s reasonable to expect that she will not necessarily take the time required to master another person’s theory. Instead, she will find a point where the benefit, on the margin, of focusing on her own research is just equal to the cost associated with misinterpreting the other argument. In terms of the combined cost curve illustrated above, the researcher will choose a distribution of time where total costs are minimized (roughly, where the dotted line is). Any other distribution of time will lead to sub-optimal research, and that is definitely not good for the student. In other words, students/readers shouldn’t expect scientists to pass an ideological Turing test.
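The trade-off between the two cost curves can be sketched numerically. The functional forms below are my own stylized assumptions (they are not in the post, which only gestures at hand-drawn curves): misrepresentation cost shrinks as the share of time spent on rivals’ work grows, while the cost of neglecting one’s own research grows linearly, giving an interior minimum.

```python
# Stylized sketch of the combined cost curve: a researcher splits a fixed
# time budget between studying opponents' work (share t) and her own
# research (share 1 - t). These functional forms are illustrative
# assumptions, not derived from any real data.

def misrepresentation_cost(t):
    """Cost of getting the other side wrong; shrinks as study time t grows."""
    return 1.0 / (0.1 + t)

def own_research_cost(t):
    """Cost of weaker original work; grows as time is diverted away from it."""
    return 5.0 * t

def total_cost(t):
    return misrepresentation_cost(t) + own_research_cost(t)

# Grid search over the share of time spent on others' research.
grid = [i / 1000 for i in range(1, 1000)]
t_star = min(grid, key=total_cost)

print(f"cost-minimizing share of time on others' work: {t_star:.2f}")
print(f"total cost at that point: {total_cost(t_star):.2f}")
```

The point of the sketch is only that the minimum sits strictly between the extremes: mastering the opposition completely, like ignoring it completely, is more costly than a middle allocation.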

Still, the costs of being misled can be very high for the student, and there is an interest in reducing them. If we can’t expect to keep researchers/economists/scientists/bloggers “honest” (if we expect them to be rationally ignorant), we have to develop alternative means of monitoring the quality of their critiques.

The best method, in my opinion, is to be a skeptical reader. Knowing that the amount of time people spend on knowing the other side’s argument will probably be inadequate, we should always assume that the accuracy of the representation will be less than one. We can take a Bayesian approach and assign each blogger/scientist/researcher a probability of accuracy (all of this is in the context of getting the other side right). Based on this probability, we make a choice to corroborate their representation by actually reading the other side’s description of their argument. If we find that the argument was misrepresented or misinterpreted, we adjust that assigned probability downwards; if we find that the argument was well represented, we adjust that assigned probability upwards. Time is scarce, so the higher the probability of accuracy, the less impulse you will have to corroborate (although, initially, when you’ve looked at zero evidence, you should probably corroborate no matter what your gut tells you [e.g., even if you really, really like Paul Krugman’s stuff]).
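The Bayesian bookkeeping described above can be made concrete with a standard Beta-Bernoulli update. The prior and the class below are my own illustrative choices (the post doesn’t specify a mechanism): each writer gets a Beta(a, b) belief over their probability of representing opponents accurately, and each corroboration check updates it.

```python
# Minimal sketch of tracking a writer's "probability of accuracy" with a
# Beta-Bernoulli model. The uniform Beta(1, 1) prior encodes "zero
# evidence, so corroborate" -- an assumption of this sketch.

class WriterTrust:
    def __init__(self, a=1.0, b=1.0):
        self.a = a  # accurate representations observed (plus prior)
        self.b = b  # misrepresentations observed (plus prior)

    def update(self, accurate):
        """One corroboration check: did the writer get the other side right?"""
        if accurate:
            self.a += 1
        else:
            self.b += 1

    @property
    def p_accurate(self):
        """Posterior mean probability that the writer represents opponents fairly."""
        return self.a / (self.a + self.b)

blogger = WriterTrust()
for outcome in [True, True, False, True, True]:  # five corroboration checks
    blogger.update(outcome)

print(f"estimated accuracy: {blogger.p_accurate:.2f}")  # 5/7, about 0.71
```

As the estimated accuracy climbs, the reader can rationally corroborate less often; a single caught misrepresentation pulls the estimate back down, which matches the reputational logic of the post.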

What I’m saying amounts to this: the reader should invest in precaution when reading what’s out there. If it doesn’t make sense to expect the people we’re reading to invest the necessary time to study the arguments they’re criticizing (or trying to replace), because that would decrease the quality of their output, then the reader should take responsibility. The reader should be the judge in the ideological Turing test. There are ways to make it easier on the reader to do this (blogs that specialize in “smackdowns,” for example), but their value stems directly from the reader’s interest in corroborating what they read. And, the various means of keeping others honest are subject to their own biases and misinterpretations. The reader must know to be a skeptic.

Instead of spending time arguing about how much effort bloggers should put into being “honest,” we could spend the same amount of time persuading readers to be skeptical (and open their minds). It should always be assumed that the accuracy of an argument is less than one. That means there is some probability greater than zero that the argument being read is wrong. It behooves the reader to check this possibility out. We are all interested in finding “the truth,” but “the truth” is largely something we have to find for ourselves. We don’t necessarily need to do the research ourselves, but we do have to compare and contrast different arguments, evidence, and conclusions — not being skeptical, and not seeking to corroborate what you read, will undermine your progress as a student (as someone looking to learn about something). Of course, sellers in the market for ideas should always be honest, but honesty doesn’t always cut it; there are other sources of ignorance and misinterpretation, and it doesn’t pay to eliminate them. The reader should always make precautionary investments, which means being a skeptic.

Ask Jeffrey Sachs Anything

The economics page on Reddit is always good for keeping you up to date, and today you can find the following near the top,


What would you ask Jeffrey Sachs?

I’ve never participated in an “Ask Me Anything” session, but I have read a few (including, recently, one with Jerry Seinfeld). They seem to answer most questions, as long as the question is an educated one. So, if you have something you’d like to ask Sachs, I recommend logging on to Reddit tomorrow at noon EST.

Are You Liberal or Conservative?

TIME asks whether it can predict your political preferences. My score is 17% conservative, 73% liberal. I don’t necessarily think that’s wrong, but the questions that scored me as a conservative were,

  • You like dogs more than cats;
  • You prefer action movies to documentaries;
  • You’re proud of your country’s history.

My guess is that the basis for those questions is spurious correlation. Just like the “facts” that having a messy desk and liking fusion cuisine make me a liberal.

2014 New Years Resolution

Originally, I wanted to wait to announce changes to the website until I had some concrete idea of when these changes were going to be made. Months later, I still have no idea. But I’m making it my goal for 2014 to get this website to where I want it to be.

Economicthought.net has been empty for as long as this blog has existed, which is just over two years now. When I first registered this domain and started the website, I wasn’t sure what I wanted to do with the main page. Then, John S., who frequently comments on this blog, had an idea for a new website and I asked him if I could use it. The idea is a Cato Unbound–type debate platform, but with some subtle differences (namely, that readers can vote for who they think made the strongest case, and there is some small monetary cost to voting).

My girlfriend is a graphic designer with a lot of experience in designing websites. So, I had her put something together for me (some of the links won’t be there in the final product, like “resume” — the main page isn’t about me, that’s what my blog is for),

It looks awesome, and I immediately went about looking for someone to code it. That’s where the endeavor has stalled so far, but at some point this year the website is going to be put together and uploaded. The obstacle so far is mainly monetary, but hopefully I can overcome that problem within the next couple of months.

I use this blog as a platform to educate myself. I comment on what I read, share what I read, and develop my thoughts. I’m lucky that there are a number of people out there who have expressed interest in these things and are willing to read what I write and comment. I really do appreciate it; I am very fortunate. I love to discuss economics with people. I like to be involved in the marketplace for ideas. I want this website to represent that sentiment, and this debate platform should do just that.

Like I said, I’m not sure when all of this is going to come together. But, my plan is to get it done by June 2014, at the latest.

Economic Thought’s Top Ten for 2013

This year’s top 10 most read posts,

  1. Chapter notes for chapters 1 and 2 of J.M. Keynes’ The General Theory;
  2. My review of Joseph E. Stiglitz’ The Price of Inequality;
  3. The lecture I gave in Toronto, Canada;
  4. Keynes versus Hayek: a short post where I argue that a big difference between Hayek and Keynes is how they interpreted the relationship between consumption and investment;
  5. Notes to chapter 12 of The General Theory;
  6. My review of W.H. Hutt’s The Theory of Idle Resources;
  7. Notes to chapter 11 of The General Theory;
  8. Socialism defined;
  9. Keynes and Say’s Law;
  10. Mattheus von Guttenberg’s “My Experience at PorcFest X.”

The 11th post is: “The Case Against Anarchy.”

The Institutions of the Blogosphere

Yesterday’s “smackdown” of Wenzel was not my most polite post (although, I don’t think it’s overly rude, either), but I defended it by invoking Krugman’s recent defense of snark on the blogosphere. I’ve gotten some criticism that I should have left the snark out. Maybe the critics are right, but let me offer a defense of my tactics (in the spirit of Krugman’s argument).

Generally, we’re interested in being able to distinguish between valuable contributors and not-so-valuable contributors. One way to do this is to create signals, such as being published in high-end, scholarly journals. These journals, in turn, create their own institutions that help maintain the quality and accuracy of the signal, such as peer review. Another signal is the quality of the academic’s department. Or, you’ve followed a particular person for some time, and you are familiar with the quality of his or her output.

Many of these institutions don’t apply to the blogosphere. So, we need to create alternatives. Krugman proposes snark. Snark acts, in a way, as peer review. It’s a direct, and clear, way of making your opinion on the quality of another person’s output known. There’s nothing more direct than “[this person] shouldn’t be taken seriously (on these matters).”

Maybe it comes off as rude. But, it’s not any more rude than using the term “Keynesian” as a way of distinguishing economists you should and shouldn’t read (I’m referring to the tendency of denoting anything that Rothbard didn’t explicitly approve of as “Keynesian” or “non-Austrian”). In fact, it’s hard to think of something worse than condemning an idea to what amounts to a wastebasket category, just because one or two economists you follow disagreed with it (or don’t explicitly agree with it — which oftentimes is interpreted as disagreement). Yet, this kind of activity doesn’t get criticized as much as straight-out questioning the quality of a blog’s output.

I do sympathize with the idea that it sucks to be called out like that. I was embarrassed when David Glasner smacked me down, and directly called some of the ideas I was advocating fallacies. But, Glasner was ultimately right. And, if the ideas I was spouting were completely wrong, he was within his rights to call me out on it. And if I were to have a history of disseminating bad theory on this blog, I’d expect people to call that out too. It should serve as an incentive to make sure I know what I’m talking about (I don’t always follow this rule as much as I should, but I’m trying!).

Maybe there are better ways of signaling quality. The blogosphere is still relatively young. Maybe a few years down the road the way we interact through this medium will change. But, for the time being, snark — but, more generally, being direct when gauging the quality of others’ output — is a good alternative to institutions that work in the academic world, but not so much on the blogosphere.

Mean Reversion

I was surprised when Carlo Ancelotti, Real Madrid’s manager, suggested that Valencia’s managerial change made their team’s players more motivated. What I think he has in mind is the empirical regularity that teams which undergo managerial changes tend to win the debut match or the one after that. But, this empirical regularity might have nothing to do with the managerial change.

Typically, teams sack a manager after a string of poor results. We can think of this as the team performing beneath its mean — its average quality, so to speak. We should expect the team to revert to its mean, and it will do this by registering a better result in subsequent matches. The farther below the mean the team performs (the more extreme the event), the higher the probability that it will improve in the next match (the subsequent event is more likely to be less extreme). When a squad has suffered three very poor results, we’d expect its form to improve sometime soon thereafter. This is why a team’s overall performance in a league season is usually a better signal of its quality than its performance, say, in a knock-out competition. Over a period of 38 trials (there are 38 matches in a La Liga season), the number of points a team accumulates by the end of the 38th match is likely to better reflect the mean than its Champions League performance (which is why international comparisons of teams are so difficult).

A more obvious example is a fair coin. The probability of the coin landing on either side is 50 percent. We expect this probability to be reflected as the number of tosses grows large. Even if the first four tosses are all heads, as the sample size grows the share of heads should approach 50 percent. The outcomes will regress toward their mean. Another example, from Thinking, Fast and Slow, concerns the performance of Israeli air force pilots and the disciplining process. An Israeli officer had noticed that pilots who were rewarded for good performances tended to perform more poorly in subsequent flights, while pilots who were disciplined for poor performances tended to do better subsequently. Thus, he concluded that disciplining is more effective than rewarding. Daniel Kahneman corrected the officer: neither rewarding nor disciplining is likely responsible for those outcomes. Rather, a pilot who does really well in flight A, performing above his mean, is likely to perform less well during the next flight.
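The coin-toss point above is easy to verify with a quick simulation (the numbers below are my own choices, not from the post): start with a streak of four heads and watch the running share of heads drift back toward 50 percent as tosses accumulate. The streak isn’t “corrected” by a compensating run of tails; it’s simply diluted.

```python
import random

# Simulate a fair coin after an initial streak of four heads. The running
# share of heads converges toward 0.5 as the sample grows (mean reversion
# through dilution, not compensation).

random.seed(42)

heads = 4   # suppose the first four tosses were all heads
tosses = 4
for _ in range(100_000):
    heads += random.random() < 0.5  # True counts as 1
    tosses += 1

print(f"share of heads after {tosses} tosses: {heads / tosses:.3f}")
```

The same logic carries over to the pilots: an extreme observation is likely followed by a less extreme one purely because of where it sits relative to the mean, with no causal story needed.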

The reason why I’m surprised by Ancelotti’s remarks is that statistics is becoming an important part of all sports, including football (soccer). Ancelotti is one of the world’s most successful managers, and I have the feeling successful managers tend to have a good command over relevant statistics (check out Atlético Madrid’s style of play, and tell me that isn’t influenced by statistics). But, I guess this isn’t always true.

(Valencia ended up losing 2–3 to Real Madrid, but the former’s average quality is much lower than the latter’s.)