Candidate for Best Scene

I hadn’t seen Terminator 2 for probably close to half a decade, but in the past few months I’ve seen it three times. It’s definitely one of my favorite movies, and, honestly, I think it’s incredibly underrated. It’s also, in my opinion, James Cameron’s best film. The first time I saw the movie recently was after the Jets–Patriots game where the Pats scored 35 points in the second quarter. I was at a buddy’s house and they played the movie twice in a row, so we watched it twice in a row. I remember thinking that Terminator 2 is definitely one of the most action-packed movies I’ve ever seen (well, action-packed, but with substance in the narrative). I don’t know if it’s in my “top 10,” but it’s definitely close.


  1. The crazy part is that Arnold was actually very small in this movie, and that the knife wasn’t actually that long. Draw from that what you will.

  2. Great film. For action, I still don’t think it’s been topped (for sci-fi, Blade Runner is another favorite).

    This may seem off-the-wall, but do you think that future technological change gets way too little discussion by economists? Both the Terminator/Matrix and Kurzweil Singularity scenarios seem like unlikely extremes, but one thing I feel certain of is that future technologies will massively influence nearly all economic and social parameters. Yet aside from the potential effects of automation on labor, economists (and the public as a whole) have been somewhat silent on this issue.

    To me, this is a mistake. If we want to steer the future in a beneficial direction, we need to think about (and prepare for) all the potential ways it can unfold–both positive and negative.

    • The farther into the future you’re trying to look, and the more complex the phenomenon you’re trying to predict, the less accurate your prediction will be. We simply cannot know what the future will be like, because we cannot know what actions all the relevant agents will take between the present and the future.

      • Certainly, we can’t *know* what the future holds. But that doesn’t mean that we shouldn’t make our best guesses and plan for multiple scenarios in advance. Every plan of action we (as well as firms, consumers, and investors) undertake is unavoidably based on numerous implicit predictions.

        It seems that in every blog policy debate (e.g. “peak oil,” global warming, demographic trends, even the war on terror), one glaring factor that doesn’t seem to be even considered as a possibility is strong (i.e. superhuman) AI. Yet considering the fantastic development of technology in our own lifetimes (I’m sure you remember the pre-Internet days), I don’t think you have to be a Kurzweil fanboy to conclude that there is some nonzero probability of strong AI emerging this century.

        In short, while we can’t know exactly what the future holds, a review of the past 500 years (or even 50) makes it quite likely that the technological environment of 2050 will be radically advanced over ours. While the specific technologies can’t be predicted with complete certitude, the broad contours (i.e. Kurzweil’s technology map) do seem observable, and should at least be considered in making policy decisions today (which I feel is not the case now).

        • I get the draw of wanting to prepare for the future, but if there are infinite possibilities, then how can we justify any policy in particular? If the policy is wrong, we not only risk starting back at square one, but starting from a worse position than we’d otherwise attain. It doesn’t make sense to me to make decisions that require knowledge we don’t hold. Even in the case of something like global warming, where there’s a lot of evidence pointing in favor of enacting policies like pollutant rights, it could be argued that these rights constrain the possibility that firms will develop production technologies that reduce pollutant output on their own. Say a policy enacted in the 1950s had handicapped the invention of the internet: we could have lost one of the most ecologically beneficial technologies we’ve ever come across.

          • “It doesn’t make sense to me to make decisions that require knowledge we don’t hold.”

            I agree; I generally oppose most of the green tech/global warming proposals for restrictions, taxes, and subsidies. It seems very likely to me that clean energy/production tech will be feasible within several decades thanks in large part to strong AI. In fact, I believe we should free markets up as much as possible precisely to speed up our capacity to engineer solutions to world hunger, disease, and environmental problems.

            However, I do think that more dialogue (and potentially, govt action) might be required with respect to the issue of steering AI in a “friendly” direction. Kurzweil’s Singularity Institute is facilitating some dialogue on this front, but perhaps the free market might not be willing to undertake the required research burden. The Skynet scenario is somewhat akin to the proverbial “giant asteroid,” except the former is far more likely than the latter to occur within our lifetimes. Wouldn’t it pay to invest a bit extra now to make sure the transition to the strong AI era happens as smoothly as possible?