Category Archive: Justice

  1. DEBATE: Incarceration in America

    Why does America put so many people in jail? Is it because we have lots of guns? Lots of criminals? Or lots of laws turning nonviolent people into criminals? Watch this UNSAFE SPACE debate featuring Heather Mac Donald and Prof. Thaddeus Russell.

    UNSAFE SPACE is a live show and podcast where comedians do standup on controversial topics, then have a discussion with experts and the audience. See more at UnsafeSpaceShow.com. To view the debate in its entirety, see the full episode here.

  2. Reddit AMA with Professor Ilya Somin of George Mason University

    Ilya Somin is Professor of Law at George Mason University. His research focuses on constitutional law, property law, and the study of popular political participation and its implications for constitutional democracy.

    Professor Somin has written extensively on constitutional theory, federalism, political ignorance, property rights, immigration, and a wide range of other important policy issues. He is a prolific contributor to the Volokh Conspiracy blog hosted by the Washington Post, and his work has been featured in other major publications such as the Wall Street Journal, New York Times, Los Angeles Times, CNN, USA Today, and Forbes. He’s also the author of several well-received books, including Democracy and Political Ignorance: Why Smaller Government is Smarter (Stanford University Press, Second Edition, 2016) and The Grasping Hand: Kelo v. City of New London and the Limits of Eminent Domain (University of Chicago Press, 2015).

    Fans of Learn Liberty will recognize Professor Somin as the star of our popular video, I Can’t Breathe: How to Reduce Police Brutality, and as a regular contributor to our blog, where he has written about the politics of sci-fi and fantasy series such as Star Wars, Star Trek, and Game of Thrones.

    Mark your calendar for Tuesday, September 19th at 3:00pm ET and join us for a conversation at Reddit.com/r/Politics where you can ask him anything!


    Update: The AMA is now live!


  3. Breaking the wheel of Westeros: why heroes aren’t enough

    In a famous scene in Season 5 of Game of Thrones, Daenerys Targaryen compares the struggle for power in Westeros to a spinning wheel that elevates one great noble house and then another. She vows that she does not merely intend to turn the wheel in her own favor: “I’m not going to stop the wheel. I’m going to break the wheel.”

    In the world of the show, Daenerys’s statement resonates because the rulers of Westeros have made a terrible mess of the continent. Even those who are not sadistic (like King Joffrey), or venal (like many of the leaders of the great houses) do little to benefit the common people, and often end up making their lot even worse than before. Their conflicts have left Westeros devastated and poorly prepared to face the menace of the undead White Walkers, who are about to invade from the north. Even such seemingly idealistic leaders as Ned and Robb Stark and Stannis Baratheon end up exacerbating the carnage rather than improving things.

    Even in earlier, more peaceful times, the ruling class mostly preyed on the people rather than providing useful public goods. Both George R.R. Martin’s A Song of Ice and Fire book series and the HBO series based on it drive home the point that Westeros’s political system is dysfunctional and that its problems go beyond the flaws of any one ruler.

    Daenerys’s desire to “break the wheel” suggests the possibility of a better approach. But what, exactly, does breaking the wheel entail?

    Good Intentions and Flawed Execution

    Even in the late stages of the still-ongoing Season 7, Daenerys seems to have little notion of what breaking the wheel means beyond defeating her enemies and installing herself as Queen on Westeros’s Iron Throne. She recognizes that Westeros’s previous rulers — including her father, the “Mad King” Aerys — committed grave injustices. But it is not clear how she intends to avoid a repetition of them.

    Even if Daenerys herself can be trusted to rule justly and wisely as an absolute monarch, what will happen after she is gone? Recent occupants of the Iron Throne have had a short life expectancy. None of the last five have died a natural death. In a recent episode, Daenerys’s chief adviser, Tyrion Lannister, asked: “After you break the wheel, how do you make sure it stays broken?” Daenerys has no good answer to this important question.

    Unlike most of the other rulers we see in the series, Daenerys has at least some genuine interest in improving the lot of ordinary people. Before coming to Westeros, she and her army freed tens of thousands of slaves on the continent of Essos. She delayed her departure from Essos long enough to try to establish a new government in the liberated areas that would — hopefully — prevent backsliding into slavery.

    Nonetheless, it is not clear whether Daenerys has any plan to prevent future oppression and injustice other than to replace the current set of evil rulers with a better one: herself. The idea of “breaking the wheel” implies systemic institutional reform, not just replacing the person who has the dubious honor of planting his or her rear end on the Iron Throne in King’s Landing. If Daenerys has any such reforms in mind, it is hard to say what they are.

    Daenerys most recently restated her desire to break the wheel in episode 4 of season 7, when she announced it to a group of captured enemy soldiers. Immediately afterwards, she proceeded to execute two of the prisoners, Lord Randyll Tarly and his son Dickon, because they refused to swear allegiance to her, ordering one of her dragons to burn them to death.

    Lord Tarly is a far from sympathetic character, one who has committed significant injustices. Dickon was, arguably, complicit in some of them. Nonetheless, this is an example of Daenerys ordering a brutal execution of prisoners without any due process, primarily because they refused to “bend the knee” to her.

    It is not a massive injustice on the scale of those committed by her enemies and predecessors. But it also does little to reassure the people that the new regime will be fundamentally different from the old. Life and death are still decided by the word of the king or queen, with no institutional safeguard against the abuse of such arbitrary power.

    The King in the North

    Daenerys’s failure to give serious consideration to institutional problems is shared by the other great leader beloved by fans of the show: Jon Snow, the newly enthroned King in the North. Perhaps even more than Daenerys, Jon has a genuine concern for ordinary people. He at one point even sacrificed his life in an attempt to save them (he was later, of course, resurrected). Unlike Daenerys — to say nothing of the other contenders for the Iron Throne — Jon seems to have little in the way of lust for power. He clearly did not really want the northern lords to make him King in the North, and views the position as more a burden than a privilege.

    To an even greater extent than Daenerys, however, Jon does not have any real notion of institutional reform. Almost by default, he accepts traditional institutional forms, including the kingship of the North itself. In fairness, Jon has been preoccupied first with retaking the North from the villainous Ramsay Bolton, and later with preparing for the war against the White Walkers. But there is little evidence that he even perceives the need for institutional change, much less has a plan to effectuate it.

    Heroes and Villains vs. Institutions

    What kind of institutional reform can realistically be achieved in Westeros? It is difficult to say with certainty. Westeros is, after all, a fantasy world, and only its creators can really say what might be possible there.

    But in medieval Europe, on which Westeros is roughly based, parliaments, merchants’ guilds, autonomous cities, and other institutions eventually emerged to challenge and curb the power of kings and nobles. These developments gradually helped bring about the Renaissance, the Enlightenment, and the economic growth that made modern liberal democracy possible. Few if any such developments are in evidence in Westeros, which seems to have had thousands of years of economic, technological, and intellectual stagnation.

    The characters in the books and the TV show are not the only ones who largely ignore the need for institutional change. We the fans are often guilty of the same sin. Few fans watch the show with an eye to institutional questions.

    Rather, we are fascinated by the doings of the more prominent characters. Who will prevail in the struggle for power? Who will score an impressive victory in battle or single combat? Will Cersei ever completely alienate her increasingly disillusioned brother Jaime, with whom she has had a longstanding incestuous relationship? Will Daenerys and Jon finally develop the long-foreshadowed incestuous relationship of their own? Unbeknownst to either, she is likely his aunt.

    These are the kinds of questions that excite many fans. Relatively few wonder whether and when Westeros will get a parliament, secure property rights, or establish some semblance of the rule of law.

    All of this is entirely understandable. Most of us read fantasy literature and watch TV shows to be entertained, not to get a lesson in political theory. And it is much easier to develop an entertaining show focused on the need to replace a villainous evil ruler with a good, heroic, and virtuous one, than to produce an exciting story focused on institutional questions. Writers and showrunners tend to follow the former approach.

    The Star Wars series, one of the few sci-fi/fantasy franchises even more popular than Game of Thrones, is just one of many pop culture products that exemplify the same trend. Game of Thrones/Song of Ice and Fire is comparatively unusual in even raising the possibility that institutional reform is the real solution to its fictional world’s problems, and in making this idea one of the central themes of the story.

    The Real World Has a Dangerous Wheel of Its Own

    However understandable, the pop culture fixation on heroic leaders rather than institutions reinforces a dangerous tendency of real-world politics. The benighted people of Westeros are not the only ones who hope that their problems might go away if only we concentrate vast power in the hands of the right ruler. The same pathology has been exploited by dictators throughout history, both left and right.

    It is also evident, in less extreme form, in many democratic societies. Donald Trump won election by promising that he could solve the nation’s problems through his brilliant leadership if only we gave him enough power: “I alone can do it,” he famously avowed at the 2016 Republican National Convention. Before him, Barack Obama promised that he could transcend the ordinary limitations of politics and bring “change we can believe in.”

    More generally, voters are prone to support charismatic leaders who promise to change the flawed status quo, without giving much thought to the possibility that the new policies may be as bad as or worse than the old. They also rarely consider the likelihood that real improvements require institutional reform, not merely a new leader. The spinning wheel of Westeros has its counterpart in the wheel of American politics, where one set of dubious politicians replaces another, each promising that they are the only ones who can give us the “change” we crave.

    For all its serious flaws, our situation is not as bad as that of Westeros. But we too could benefit from more serious consideration of ways to break the wheel, as opposed to merely spinning it in another direction. And our popular culture could benefit from having more stories that highlight the value of institutions, as well as heroic leaders. However much we love Daenerys and Jon, they and their real-world counterparts are unlikely to give us a better wheel on their own.

  4. Expert Answers on the Drug War: Highlights from Prof. Jeff Miron’s AMA

    Last week, Professor Jeffrey Miron joined us on Reddit for an “Ask Me Anything” conversation as part of the Learn Liberty Reddit AMA Series.

    The conversation focused on Dr. Miron’s 30+ years of study on the effects of drug criminalization. Check out some of the highlights below.


    GPSBach

    While there seems to be an emerging consensus on legalization of marijuana in the US, pot-specific policies might not be completely applicable to other, harder drugs, especially in light of the ongoing opioid crisis. Do you have thoughts or opinions on the efficacy of blanket legalization and/or decriminalization vs. piecemeal changes?

    jeffreymiron

    My first choice is full legalization of all drugs: the negatives from prohibition relate mainly to the adverse incentives and effects caused by prohibition, not the specific effects of one drug versus another.

    That said, partial measures are generally better than nothing.


    Sulimonstrum

    Are there currently any countries in the world that have a decent drug policy in your mind? Can be in both directions I suppose, either a more successful ‘war on drugs’ or a sensible policy of tolerance.

    jeffreymiron

    Essentially all countries prohibit most / all drugs; but many enforce to a far lesser degree than the U.S. (e.g., Netherlands, Portugal, and to varying degrees, much of Europe and elsewhere). Changing the formal laws is important; but the harms from prohibition do decline as enforcement declines.


    hcwt

    Hi Dr. Miron, thanks for the AMA.

    Do you think there’s a good model the US can move towards? I’ve always thought of Portugal as a good example.

    How do you think the recent legalization of recreational marijuana will go in MA? My town decided to moronically pass up on the tax revenue, so I’m looking forward to seeing what happens in the rest of the state.

    jeffreymiron

    The best model is the U.S. before 1914: no prohibition of any drug or alcohol. As a second best, Portugal is quite good.

    MA’s legalization seems likely to be somewhat tortured; the public health community is trying hard to undo the ballot initiative.


    Sayter

    Prof, thoughts on Portugal’s drug decriminalization in 2001? (go Crimson)

    jeffreymiron

    A huge step in the right direction. Ideally it would go farther: full legalization. But current UN treaties make that awkward.


    Ghost_of_Trumps

    In your opinion, would legalizing drugs lead to fewer overdoses because purity is more manageable, or would we see more because of increased access?

    jeffreymiron

    Exactly. Most of the overdoses come from non-medical use that arises when people are cut off from medical or other legal supply (e.g., methadone maintenance). Use of almost anything is much riskier in a black market because quality control is worse.


    compacct27

    I live in San Francisco. What the hell is going on with people doing heroin? There are needles in the streets, and there doesn’t seem to be a way out for most addicts. I’ve seen 2 people shoot up heroin in public. That’s apparently a low number in this city.

    What are the current “ways out” of heroin addiction, and what programs could I donate to to help?

    jeffreymiron

    The way out of the heroin problem is heroin legalization: it would be cheaper, so far fewer people would inject; and people would know the dosage, so far fewer would OD.

    Realistically, the best path is to support Medication Assisted Treatment, i.e., methadone and buprenorphine.


    Ezzeia

    Dr. Miron, first off, thank you for doing an AMA. Looking forward to reading all the responses.

    Second, how do we draw the line today and in the future between ‘harmless’ drugs and ‘harmful’ drugs, especially when new variants or types of drugs pop up all the time? How do we create a robust system to differentiate drugs?

    jeffreymiron

    I don’t think we can differentiate in a meaningful way, because the main negatives come from prohibition, independent of the properties of the prohibited good. If we outlawed caffeine, we would have a violent black market with poor quality control in which people suffered far more adverse effects from caffeine than now.


    ClittyLitter 

    I heard someone talking about this on NPR a few days ago. He brought up an aspect I hadn’t thought of — that organized criminals in states with more medical/recreational cannabis have shifted their black market endeavors to things like identity theft, manufacture of counterfeit IDs, human trafficking, etc.

    He wasn’t making an argument for continued prohibition, rather that the underlying social issues of poverty cycles, gangs, low-education, and recidivism need to be addressed if we want to reduce criminal activity.

    Lifting prohibition isn’t a panacea. If drugs are legalized, what solutions do you propose to address those social issues that incubate and perpetuate criminal activity?

    The war on drugs has destroyed countless lives. Thank you for your time.

    jeffreymiron

    [I] agree that legalization is not a panacea. To some degree, the other policies we need to reduce crime are also reductions or eliminations of prohibitions, however. For example, manufacture of counterfeit IDs is a big deal because we restrict immigration; human trafficking is a problem in part because we outlaw [prostitution].

    Nevertheless, policies that improve education, for example, are also important.


    CassiopeiaStillLife

    Hi, Jeff! What’s your favorite song off of Lost in the Dream? “Red Eyes”? “An Ocean in Between the Waves”?

    I kid, of course — you’re asking about the other War on Drugs. Do you think that public opinion will shift to the point where opposing the War on Drugs isn’t a dealbreaker?

    jeffreymiron

    For marijuana, opinion has roughly shifted that much already. For other drugs, it’s going to take a while.


    empiregrille

    In your mind, what is the key difference between drug legalization and decriminalization?

    jeffreymiron

    Legalization brings the supply side above ground. That eliminates the violence and quality control problems, and allows normal taxation.


    DSSK-7

    Given known levels of drug use, demand, and price, approximately how much tax revenue would the U.S. stand to collect if we legalized and regulated all drug use, taxing it at a similar rate as alcohol?

    jeffreymiron

    ballpark $50 billion per year. google “miron waldock cato.”

    DSSK-7

    Thanks for the response! That is indeed a hell of a lot of money. Here’s the report for anyone else interested: https://www.cato.org/publications/white-paper/budgetary-impact-ending-drug-prohibition


    phillyguy667

    On a sort of opposite note from many questions already posted: Can you describe what some of the adverse economic effects stemming from overall legalization might be, and how they might be meaningfully addressed? I understand that the potential adverse effects from legalization of a drug like heroin may be different from legalization of a drug like marijuana, but are there any unifying characteristics?

    Thanks for stopping by!

    jeffreymiron

    The only real negative I can imagine is that a few people who do not currently use will perhaps try newly legalized drugs and, in some cases, have bad experiences. But evidence suggests that’s a modest number, and it of course has to be balanced against all the benefits of legalization.


    DeepBlueSeaz

    Hi Professor Miron. As a young, millennial, graduate student in government, I was wondering to what degree you feel drug policy is affected by the older generation as opposed to the younger. Further, in what ways do you expect anti-drug sentiment to shift as millennials begin to age and take more prominent roles in policy?

    Thanks!

    jeffreymiron

    Well, I am a lot older than a graduate student, and I grew up hearing that as the baby boom generation matured, legalization would occur. Happened a bit, but not to an overwhelming degree. I guess many people get more conservative, at least about drugs, as they age. So, we have to convince old folks too!


  5. Reddit AMA with Professor Jeffrey Miron of Harvard University

    The Learn Liberty Reddit AMA Series continues on Wednesday, August 9th, with renowned economist and professor Jeffrey Miron, senior lecturer and director of undergraduate studies in the Department of Economics at Harvard University.

    Dr. Miron has written over 100 op-eds for publications such as the New York Times, Washington Times, Boston Herald, CNN, Time, Huffington Post, The Daily Caller, and Newsweek. He has also written several books, including Drug War Crimes: The Consequences of Prohibition (2004) and Libertarianism: from A to Z (2010). You may recognize him as the star of one of Learn Liberty’s all-time fan-favorite videos: “Top Three Myths of Capitalism.”

    Mark your calendar and join us for the conversation on Reddit, Wednesday, August 9th at 3:00pm ET, where you’ll have the chance to ask him anything!


    UPDATE: The AMA is now live!


  6. What Charles Darwin owes Adam Smith

    The following is a lightly edited, slightly condensed transcript of the talk “Adam Darwin: Emergent Order in Biology and Economics,” presented by Matt Ridley at the Adam Smith Institute in 2012.


    I’ve called my lecture “Adam Darwin” to stress how congruent the philosophies of Adam Smith and Charles Darwin are. The common theme, of course, is emergence — the idea that order and complexity can be bottom-up phenomena; both economies and ecosystems emerge. But my purpose really is to explore not just the history and evolution of this shared idea but its future: to show that in the age of the Internet, Adam-Darwinism is the key to understanding how the world will change.

    The Common Ancestry of Evolution and Economics

    Darwin’s debt to the political economists is considerable. He spent formative years in Edinburgh among the ghosts of Hume, Hutcheson, Ferguson, and Smith. When he was at Cambridge in 1829, he wrote, “My studies consist in Adam Smith and Locke.” At his grandfather Josiah Wedgwood’s house in Staffordshire, Darwin often met the lawyer and laissez-faire politician Sir James Mackintosh, whose daughter married Charles’s brother-in-law (and had an affair with his brother).

    On the Beagle, he read the naturalist Henri Milne-Edwards, who took Adam Smith’s notion of the division of labor and applied it to the organs of the body. After seeing a Brazilian rainforest, Darwin promptly reapplied the same idea to the division of labor among specialized species in an ecosystem: “The advantage of diversification in the inhabitants of the same region is in fact the same as that of the physiological division of labor in the organs of the same individual body — subject so well elucidated by Milne-Edwards.”

    Back in England in the 1830s, through his brother Erasmus, Darwin fell in with the radical feminist and novelist Harriet Martineau, who had shot to fame because of her series of short fictional books called Illustrations of Political Economy. These were intended to educate people in the ideas of Adam Smith, “whose excellence,” she once said, “is marvelous.” I believe it was probably at Martineau’s suggestion that, in October 1838, Darwin came to reread Malthus (a person with whom Martineau was on very close terms) and to have his famous insight that death must be a non-random and therefore selective force.

    Parenthetically, it’s worth recalling the role of anti-slavery in bringing Martineau and Darwin together. Darwin’s grandfather Josiah Wedgwood was one of the leaders and organizers of the anti-slavery movement, a friend of Wilberforce, and the maker of the famous medallion “Am I not a man and a brother?” which was the emblem of the anti-slavery movement. Charles Darwin’s aunt Sara gave more money to the anti-slavery movement than any woman in Britain. Darwin had been horrified by what he called, “The heart-sickening atrocities of slavery in Brazil.” Abolition was almost the family business. Meanwhile, Harriet Martineau had just toured America speaking against slavery and had become so notorious that there were plans to lynch her in South Carolina.

    Today, to a bien pensant intellectual, it might seem surprising to find such a left-wing cause alongside such a right-wing enthusiasm for markets, but it should not be. So long is the shadow cast by the top-down determinism of Karl Marx, with his proposal that the state should be the source of reform and welfare, that it’s often forgotten how radical the economic liberalism of the political economists seemed in the 1830s. In those days, to be suspicious of a strong state was to be left-wing (and, if you’ll forgive the pun, quite right, too).

    Today, generally, Adam Smith is claimed by the right, Darwin by the left. In the American red states, where Smith’s emergent decentralized philosophy is all the rage, Darwin is often reviled for his contradiction of dirigiste creationism. In the average British university by contrast, you will find fervent believers in the emergent decentralized properties of genomes and ecosystems, who nonetheless demand dirigiste policy to bring order to the economy and society. Yet, if the market needs no central planner, why should life need an intelligent designer, or vice versa?

    Ideas evolved by descent and modification just as species do, and the idea of emergence is no exception. Darwin at least partly got the idea from the political economists, who got it from the empirical philosophers. To put it crudely, Locke and Newton begat Hume and Voltaire, who begat Hutcheson and Smith, who begat Malthus and Ricardo, who begat Darwin and Wallace. Darwin’s central proposition was that faithful reproduction, occasional random variation, and selective survival can be a surprisingly progressive and cumulative force. It can gradually build things of immense complexity. Indeed, it can make something far more complex than a conscious, deliberate designer ever could. With apologies to William Paley and Richard Dawkins, it can make a watchmaker.

    Each time a baby is conceived, 20,000 genes turn each other on and off, in a symphony of great precision, building a brain of 10 trillion synapses, each refined and remodeled by early and continuing experience. To posit an immense intelligence capable of comprehending such a scheme, rather than a historical emergent process, is merely to exacerbate the problem — who designed the designer?

    Likewise, as Leonard Read pointed out, each time a pencil is purchased, tens of thousands of different people collaborate to supply the wood, the graphite, the knowledge, and the energy, without any one of them knowing how to make a pencil. Says Smith, if you like: “This came about by bottom-up emergence, not top-down dirigisme.” In both cases, nobody’s in charge, and crucially, nobody needs to understand what’s being done.

    Why Innovation Happens

    So far, I’m treading a well-trodden path in the steps of Herbert Spencer, Friedrich Hayek, Karl Popper, and many others who’ve explored the parallels between evolutionary and economic theory. But the story has grown a lot more interesting in the last few years, I think, because of developments in the field of cultural and technological evolution. Thanks especially to the work of three anthropologists — Rob Boyd, Pete Richerson, and Joe Henrich — we are beginning now to understand the extraordinarily close parallels between how our bodies evolved and how our tools and rules evolve. Innovation is an evolutionary process. That’s not just a metaphor; it’s a precise description. I need you to re-examine a lot of your assumptions about how innovation happens — to disenthrall yourself of what you already know.

    First, innovation happens mainly by trial and error. It’s a tinkering process, and it usually starts with technology, not science, by the way, as Terence Kealey has shown. The trial and error may happen between firms, between designs, between people, but it happens. If you look at the tail planes of early airplanes, you see a lot of trial and error — a lot of different designs being tried until eventually one is settled on.

    Exchange is crucial to innovation, and innovation accelerates in societies that open themselves up to internal and external exchange through trade and communication — Ancient Greece, Song China, Renaissance Italy, 16th century Holland, 19th century Britain — whereas innovation falters in countries that close themselves off from trade — Ming China, Nehru’s India, Communist Albania, North Korea.

    Moreover, every innovation, as Brian Arthur has argued, is a combination of other innovations. As L.T.C. Rolt, the historian of engineering, put it, “The motorcar looks as if it was sired by the bicycle out of the horse carriage.” My favorite example of this phenomenon is the pill camera, which takes a picture of your insides on the way through. It came about after a conversation between a gastroenterologist and a guided missile designer.

    Adam Smith, in other words, has the answer to an evolutionary puzzle: what caused the sudden emergence of behaviorally modern human beings in Africa in the past hundred thousand years or so? In that surprisingly anthropological first chapter of The Wealth of Nations, Smith saw so clearly that what was special about human beings was that they exchanged and specialized.

    Neanderthals didn’t do this — they only ever used local materials. In this cave in Georgia, the Neanderthals used local stone for their tools; they never used stone from any great distance away, at any Neanderthal site. But when modern human beings moved into this very same area, you find stone from many miles away being used to make the tools, as well as local stone. That means that moderns had access to ideas, as well as materials, from far away. Just as sex gives a species access to innovations arising anywhere in its gene pool, so exchange gives you access to innovations arising anywhere in your species.

    When did it first happen? When was trade invented? At the moment, the oldest evidence is from about 120,000 years ago. That’s when obsidian axes in Ethiopia and snail-shell beads in Algeria start traveling long distances. These beads are made from marine shells, but they’re found a hundred miles inland. And we know from modern Aborigines in Australia that long-distance movement of man-made objects happens by trade, not migration. So it’s not that people were walking all the way to the Mediterranean, picking up shells, and walking all the way back again; they were getting them hand-to-hand by trade.

    Now that’s 120,000 years ago — ten times as old as agriculture — but I suspect it goes back further still. There’s a curious flowering of sophisticated tool kits in Africa around 160,000 years ago, in a seashore-dwelling population, as evidenced by excavations at a place called Pinnacle Point. It came and went, but careful modeling by some anthropologists at University College London suggests that this might be a demographic phenomenon: a rich food supply led to a dense population, which led to a rich toolkit. But that’s only going to be true if there is exchange going on, if the ideas are having sex — dense populations of rabbits don’t get better tools. Once exchange and specialization are happening, cultural evolution accelerates if population density rises, and decelerates if it falls.

    We can see this clearly from more recent archeology in a study by Michelle Kline and Rob Boyd. In the Pacific, in pre-Western-contact times, the sophistication of fishing tackle depended on the amount of trading contact between islands. Isolated islands, controlling for island size, had simpler fishing tackle than well-connected islands. And indeed, if you cut people off from exchange networks, human progress not only stalls, it can go backwards.

    The best example of this is Tasmania, which became an island ten thousand years ago when sea levels rose. Not only did the Tasmanians not get innovations that happened after this time, such as the boomerang, they actually dis-invented many of their existing tools. They gave up making bone tools altogether, for example. As Joe Henrich has argued, the reason for this is that their population was too small to sustain the specialization needed to collaborate in the making of some of these tools. Their collective brain was not big enough — nothing to do with their individual brains, it’s the collective intelligence that counts.

    As a control for this idea, notice that the same thing did not happen in Tierra del Fuego. The Fuegian Indians continued to progress technologically. The reason is that the Magellan Strait is narrower than the Bass Strait, so trade continued, and the Fuegians had access to a collective brain the size of South America, whereas the Tasmanians had access to a collective brain only the size of Tasmania.
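
    The logic of the collective brain lends itself to a toy simulation. What follows is only a rough sketch inspired by Henrich’s argument, not his actual model, and every parameter is made up: each generation, every learner imitates the most skilled individual; copying loses skill on average but varies randomly; and only a large population reliably contains someone whose lucky overshoot offsets the copying loss.

        import random

        # Toy "collective brain" sketch (loosely inspired by Henrich's Tasmania
        # argument; all parameters are arbitrary). Each generation, all learners
        # copy the most skilled individual; copying falls short on average (loss)
        # but varies randomly (noise). Larger populations are more likely to
        # contain a lucky over-performer, so skill accumulates instead of decaying.
        def simulate(pop_size, generations=200, loss=2.5, noise=1.0, seed=1):
            random.seed(seed)
            best = 10.0  # skill of the current most skilled individual
            for _ in range(generations):
                attempts = [best - loss + random.gauss(0, noise) for _ in range(pop_size)]
                best = max(0.0, max(attempts))  # skill cannot drop below zero
            return best

        print(simulate(pop_size=20))    # small, isolated population: skill collapses
        print(simulate(pop_size=1000))  # large, connected population: skill accumulates

    With these arbitrary numbers, the small population’s toolkit decays toward nothing while the large one’s keeps growing: Tasmania and Tierra del Fuego in miniature.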

    The Collectivism of Markets

    Now for me, one of the most fascinating implications of this understanding of the collective brain is just how touchy-feely liberal it is. I’m constantly being told that to believe in markets is to believe in selfishness and greed. Yet I think the very opposite is true. The more people are immersed in markets, the more they collaborate, the more they share, the more they work for each other. In a fascinating series of experiments, Joe Henrich and his colleagues showed that people play ultimatum games — games invented by economists to tease out selfishness and cooperation — more selfishly in isolated and self-sufficient hunter-gatherer societies, and less selfishly in more market-integrated societies.

    History shows that market-oriented, bottom-up societies are kinder, gentler, less likely to go to war, more likely to look after their poor, more likely to patronize the arts, and more likely to look after the environment than societies run by the state. Hong Kong versus Mao’s China, 16th century Holland versus Louis XIV’s France, 20th century America versus Stalin’s Russia, the ancient Greeks versus the ancient Egyptians, the Italian city-states versus the Italian Papal States, South Korea versus North Korea, even today’s America versus today’s France, and so on.

    As Voltaire said, “Go into the London stock exchange and you will see representatives of all nations gathered there for the service of mankind. There the Jew, the Mohammedan, and the Christian deal with each other as if they were of the same religion, and give the name of infidel only to those who go bankrupt.”

    As Deirdre McCloskey reminds us, we must not slip into apologizing for markets, for saying they are necessary despite their cruelties. We should embrace them precisely because they make people less selfish, and they make life more collective, less individualistic. The entire drift of human history has been to make us less self-sufficient and more dependent on others to provide what we consume and to consume what we provide. We’ve moved from consuming only as widely as we produce to being much more specialized as producers and much more diversified as consumers.

    That’s the very source of prosperity and innovation. It’s time to reclaim the word “collectivism” from the statists on the left. The whole point of the market is that it does indeed “collectivize” society, but from the bottom-up, not the top-down. We surely know by now after endless experiments that a powerful state encourages selfishness.

    Let me end with an optimistic note. If I’m right, that exchange is the source of innovation, then I believe that the invention of the Internet, with its capacity to enable ideas to have sex faster and more promiscuously than ever, must be raising the innovation rate. And since innovation creates prosperity by lowering the time it takes to fulfill needs, then the astonishingly rapid lifting of humanity out of poverty that has happened all over the world, particularly in the last 20 years, can surely only accelerate. Indeed, it is accelerating. Much of Africa is now enjoying Asian Tiger-style growth. Child mortality is plummeting at a rate of five percent a year in Africa. In Silicon Valley recently, Vivek Wadhwa showed me a $35 tablet computer that will shortly be selling in India. Think what will be invented when a billion Indians are online.

    In terms of human prosperity, therefore, we ain’t seen nothing yet. And because prosperity is an emergent property, an inevitable side effect of human exchange, we could not stop it even if we wanted to. All we could do is divert it elsewhere on the planet (which is what we in Europe seem intent on doing). “Adam Darwin” did not invent emergence: his was an idea that emerged when it was ripe. And like so many good ideas, it was already being applied long before it was even understood. And so I give you Adam-Darwinism as the key to the future.

  7. Freaking out over tiny risks: A case study from a moral panic

    Diandra Toyos claims that she and her children were nearly victims of human trafficking. In a Facebook post that quickly went viral, she wrote of a recent visit to her local Ikea with her three children:

    “I noticed a well dressed, middle aged man circling the area, getting closer to me and the kids. At one point he came right up to me and the boys, and instinctively I put myself between he and my mobile son. I had a bad feeling. He continued to circle the area, staring at the kids.”

    Continuing:

    “Something was off. We knew it in our gut. I am almost sure that we were the targets of human trafficking. This is happening all over. Including the United States. It’s in our backyards.”

    A back-of-the-envelope critical review reveals this claim to be nonsense on its face. We are to believe that this woman spent over half an hour in the store (by her own admission), all the while (apparently) thinking someone was attempting to kidnap her children.

    One can only assume Diandra’s inner monologue went something like this: “Gee, I really do want to avoid having my children kidnapped and sold into slavery, but I really need a sofa at an affordable price. Ours is rather old and lumpy. So I’m going to putter around the store for another 30 minutes. Ooooh! Is that a Taiga desk? I just love midcentury modern!”

    “Strains credulity” doesn’t even come close. An explanation more compatible with Occam’s Razor is that some woman thought a dude in Ikea was creepy and wrote about it on Facebook. Film at 11.

    And yet — as anyone who has a social media account and friends with children is no doubt aware — these stories are popping up everywhere. A Snopes article documents incidents reported from a Longview, Texas Target store, a Dillard’s in Denton, Texas, and a Kroger in Brownstown Township, Michigan. The full list is even longer, but you get the gist.

    Why are people falling for these urban legends when there is so much real danger we need to avoid?

    Moral panics vs. data

    We are naturally drawn to narratives that inspire panic. This is a phenomenon in American culture that goes back at least as far as the “white slavery epidemic” of the early 1900s. It resurfaced in the 1980s, as we became increasingly paranoid about satanic cults embedded in daycares selecting children for ritualistic abuse (no evidence of such a phenomenon ever materialized). Today, the term “sex trafficking” has made its way into our common discourse as if it were an identifiable phenomenon.

    But the data indicate otherwise. As Elizabeth Nolan Brown documents, a Department of Justice report indicates that such incidents of child abduction are exceedingly rare. Such “stereotypical” kidnappings (e.g., by a stranger or “slight acquaintance,” with the intent to detain the child indefinitely, etc.) occurred in the U.S. only 105 times in 2011 — the same number of occurrences as in 1997, so no, things are not getting worse. A whopping 92 percent of the time, an abducted child was recovered.

    Let’s put this in perspective: in terms of relative risk, compared to the nightmare Ikea scenario outlined above (where presumably a child would be abducted and never returned to his or her family), a child is far more likely, in any given year, to suffer any number of mundane harms — including being injured by a falling piece of furniture.

    (On that last point, I can’t help but note that the real danger in an Ikea store may be the furniture itself. One wonders if Ms. Toyos has safely secured her purchased goods to a wall, as Ikea recommends, or if she was too busy fanning the flames of moral panic on social media.)

    Cognitive errors and moral panics

    Cognitive psychologists have long recognized that people tend to overestimate the occurrence of certain types of rare events. We routinely rely on heuristics — mental shortcuts that let us perform “quick and dirty” calculations — to make decisions. Heuristics are essential, as it would be impossible to analyze every potential outcome of every decision we make in a disinterested, rational manner. However, heuristics are not without cost.

    One classic (and relevant) example is the availability heuristic: we tend to think that events we recall more readily are more likely to occur. A child being abducted from a public place and (presumably) sold into slavery is horrific, to be sure. As such, it occupies a disproportionate amount of space in our minds. We can easily recall that terrible story we read on Facebook about a child who was (supposedly) almost abducted from a grocery store, so we assume that it is a real danger.

    But just because a scenario is easy to imagine does not mean that it is likely to occur. Real dangers are far more mundane. More specifically, the vast majority of kidnappings do not fit the “stranger at Target” motif: they are typically committed by non-custodial parents or other family members.

    Heuristics and the real cost of moral panics

    This brings me to my final point: the scenario Toyos outlines is implausible, but the fear it creates is real. The energy we expend worrying about this kind of event could easily supplant efforts to avoid real dangers.

    For example: the vast majority of sexual abuse perpetrators are known to their victims. Logically, I should be far more concerned about my children’s babysitters, teachers, and ministers than a random patron in a grocery store. Such conclusions are counterintuitive. I like my children’s babysitters. It’s easy to think that evil lurks in every corner of my local Ikea store, but difficult to imagine that a respected member of my community might actually be dangerous.

    If we’re applying proper reasoning, however, we will apply greater scrutiny to people we know than strangers.

    This is but one example of how cognitive biases can adversely affect our lives. There are others. For example, the base rate fallacy refers to our routine failure to recognize the low underlying rate of a given phenomenon and to incorporate that fact into our assessments of risk. Yes, it may be true that one bad sunburn will increase your risk of developing skin cancer by 50 percent. Scary, no? But in fact, roughly 2.2 percent of the population will develop skin cancer in their lifetime — a substantial portion of whom have had at least one sunburn (so that risk is already baked into the base rate). This puts the risk of developing skin cancer — to say nothing of dying of it — as a result of a single sunburn at a fraction of what we would normally assume.
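
    To make the arithmetic concrete, here is a minimal sketch in Python that treats the article’s rough figures as assumptions: a 2.2 percent lifetime base rate and a 50 percent relative increase. (Because the 2.2 percent figure already includes many people who have been sunburned, using it as the no-sunburn baseline actually overstates the increase, which only strengthens the point.)

        # Relative vs. absolute risk, using the article's ballpark numbers
        # (illustrative assumptions, not epidemiological estimates).
        base_rate = 0.022      # assumed lifetime risk of skin cancer without a bad sunburn
        relative_risk = 1.5    # "one bad sunburn increases your risk by 50 percent"

        risk_with_sunburn = base_rate * relative_risk
        absolute_increase = risk_with_sunburn - base_rate

        print(f"without a sunburn:  {base_rate:.1%}")          # 2.2%
        print(f"with one sunburn:   {risk_with_sunburn:.1%}")  # 3.3%
        print(f"absolute increase:  {absolute_increase:.1%}")  # 1.1%

    A frightening-sounding 50 percent relative increase, in other words, moves the absolute lifetime risk by roughly one percentage point.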

    Simply being aware of how heuristics affect our judgments can be helpful. When you come across an article on social media that plays on your fears, ask yourself: how are my emotions being manipulated? What is the real risk? How are heuristics fooling me?

    The answer may surprise you.

  8. The Long History of Music Piracy

    When you’re a historian, people expect you to write history. So, twelve years ago, when I told people I was writing a dissertation about music piracy, the typical response was, “But… that’s not history.”

    I couldn’t blame them. The dirt was still fresh on Napster’s grave at the time, and challenges to online services such as Grokster, Limewire, and even YouTube were still wending their way through the courts. The days of using cassettes to make mixtapes or lurching to press the “record” button to capture songs from the radio were not far behind. If anything, a few older folks might have dim memories of shaggy-haired hippies swapping Grateful Dead bootlegs in the 1970s.

    All of this seemed too fresh to be “History” in the way of Adolf Hitler and the Peloponnesian War and the like. But, in fact, piracy has a history as long as sound recording — even as long as written music itself. Jazzheads swapped copies of shellac discs in the 1930s, and shady operators even copied music in the wax cylinder era of the 1910s. Sheet music was bootlegged in the nineteenth century, just as printed materials had been since Gutenberg unleashed the printing press four hundred years earlier.

    Music, though, has proven more vexing to regulate than other copyrighted works. A piece of sheet music is cheaper and easier to photocopy than an entire book. And anyone can play his own version of a song in a way that another writer cannot “play” The Grapes of Wrath. American copyright law did not even cover music until 1831 — originally, only books, maps, and charts were protected.

    As a matter of fact, I discovered that sound recordings were not protected in the United States until 1972. How could this be?

    The Enlightenment Origins of American Copyright

    Part of the reason lay in the United States Constitution itself. Our founding document is a notoriously succinct one, outlining the structure of government and spelling out a handful of basic responsibilities for federal authorities — one of which was copyright. The founders empowered Congress:

    To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.

    Children of the Enlightenment, the founders believed that the spread of knowledge contributed to the public good, and government ought to encourage it. (As Thomas Jefferson put it in 1813, “He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me.”) Thus, government should incentivize “Authors and Inventors” to create — but rights to their works should also be “limited,” so as not to strangle the free exchange of art and ideas.

    And when they said “limited,” they meant it. The first copyright term lasted a measly fourteen years, and Congress only reluctantly added new kinds of works — written music, photography, film — to the scope of copyright over the course of the nineteenth and early twentieth centuries. Seeing copyright as a monopoly, a sort of necessary evil, they were loath to expand its domain unless absolutely necessary.

    The Trouble with Music… and Sound

    John Philip Sousa

    The fate of sound recording shows how true this is. After Thomas Edison worked out the first truly effective method for inscribing and replaying sound waves in 1877, an era of freewheeling piracy ensued. By 1905, Congress was besieged by songwriters, music publishers, and “talking machine” companies with cries for help. Famous composers such as Victor Herbert and John Philip Sousa pled their case, citing the unfairness that a band of rogues profited off their works. (Of course, Sousa also slyly conceded, “I can compose better if I get a thousand dollars than I can for six hundred.”)

    A player piano roll

    But there was a rub. How exactly would copyright reform work? The songwriters were mad because the companies making disks, wax cylinders, and player piano rolls used their music without authorization or compensation; they demanded both. The talking machine companies wanted to record performances of the written music for free. If forced to pay royalties to composers, then they wanted to have a copyright for their own recordings too.

    Congressmen were perplexed. If Sousa owns the copyright for his written composition, then how could the talking machine company own a separate copyright for a recorded performance of it? Isn’t it the same music? What if two different companies recorded two different versions of the same song? Were there copyrights for each recording?

    The debate may seem technical, arcane, even alien to twenty-first-century ears, but the politicians were contemplating the question long before there were music videos or sampling or remixes. It seems obvious to us today that Frank Sinatra’s and Sid Vicious’s versions of “My Way” are two distinct works. Obviously, one work — a song — can exist in an almost infinite number of unique permutations.

    Muddling through in the Age of Jazz

    Congress decided to punt on the issue (as it does) — and it turned out to be a good deal for songwriters, record companies, and consumers. With the Copyright Act of 1909, lawmakers set up a system that let songwriters and music publishers earn a royalty when their songs were recorded — but the rate for each “use” (each disk or piano roll manufactured and sold) was a flat one, set by the government. And artists and labels were more or less free to record versions of songs as they pleased.

    What Congress did not decide to do was to provide copyright for sound recordings themselves. It was just too confusing, and in the Progressive Era, anti-monopoly sentiment remained strong in American society. Copyright still looked like too much of a monopoly.

    The curious result was that sound recordings seemed to lack copyright protection — and pirates noticed. For decades, bootleggers operated in the shadows of the US economy, recording live performances of operas, copying out-of-print jazz and blues records for connoisseurs, and sometimes simply making a quick buck. (The Mafia occasionally pirated pop hits, though many bootleggers were just enthusiasts of hard-to-find music.) Throughout, they could point to the law and say they were not violating it — because sound recordings weren’t protected under the Copyright Act.

    The arguments may seem flimsy, but both courts and lawmakers struggled from the 1930s to the 1960s to figure out how to square the circle. Sometimes judges ruled against bootleggers under the doctrine of “unfair competition,” arguing that the pirates freeloaded off the original label’s financial investment in producing and promoting a record. (By making a record or an artist popular, judges reasoned, the label generated “good will” with the public, which the pirate unfairly exploited.)

    The Rise of Stronger, Longer Copyright

    But the problem remained, since Congress was still reluctant to act on copyright reform. It took the outbreak of widespread bootlegging in the rock counterculture of the 1960s to push the issue to the front burner. Armed with cassette tapes, hippie bootleggers copied unreleased Bob Dylan recordings (“the basement tapes”) and captured Jimi Hendrix concerts for an eager youth audience.

    Finally, in 1971, Congress passed a law that provided record labels with protection for their products. And in 1973, the Supreme Court ruled that states could pass their own anti-piracy laws, even though copyright had traditionally been understood as a responsibility of the federal government, and state laws potentially allowed infinite protection for recordings — arguably violating the “limited times” provision of the Constitution.

    In a deindustrializing America of the 1970s, though, the cries of the record industry resonated — as did those of other “information” businesses. Makers of albums and movies and software argued that their firms needed protection more than ever in a post-industrial economy, where information was the currency of the age.

    The old anti-monopoly sentiments of the Progressive Era melted like butter. Beginning in 1976, Congress embarked on a program that lengthened the term of copyright from 56 years to the life of the author plus 50 years; increased penalties for infringement; and expanded the scope of what could be copyrighted and patented (for example, software and genetically modified organisms).  Congress even arbitrarily added 20 years to the length of copyright in 1998 — a law critics dubbed the “Mickey Mouse Protection Act,” since the beloved cartoon character’s copyright was about to lapse at the time.

    The Future of Piracy

    Where does this story leave us today — in a post-Napster world of YouTube, SoundCloud, and BitTorrent? A fan could illegally download Prince’s entire discography within minutes of the artist’s passing in 2016, but he or she could not stream his songs on Spotify because the Purple One had the legal right to keep them off all streaming platforms.

    Prince’s case illustrates the paradox: copyright is stronger and longer than it has ever been before, and yet it is arguably flouted more often than ever too. The US economy still generates a great deal of “information,” but information travels more or less freely. One could argue that the postindustrial economy thrives on the very fact that it is as easy as pressing ctrl-C to copy a word, image, or sound.

    America and the world could do with a bit more of the anti-monopoly spirit of old. I do not need the incentive of a lengthy copyright term to write. (If I live another 50 years, the copyright for this article would not lapse until 2137. Is that really necessary?) And the penalties for copyright infringement do not need to be so punitive that so-called “copyright trolls” can use the law to intimidate a lowly blog out of existence with extortionate demands for using a photo without permission.
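
    To spell out the arithmetic in that parenthetical (assuming, hypothetically, that this article was written in 2017, and using the term described above: the 1976 Act’s life plus 50 years, with the 1998 extension’s additional 20):

        # Hypothetical worked example of the current U.S. copyright term.
        year_written = 2017          # assumed year this article was written
        author_lives_another = 50    # "if I live another 50 years"
        term_after_death = 50 + 20   # life + 50 (1976 Act), extended by 20 (1998)

        lapse_year = year_written + author_lives_another + term_after_death
        print(lapse_year)  # 2137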

    Congress once actually had it right — as hard as those words are to type. Copyright ought to be a pragmatic bargain between artists, business, and consumers that promotes creativity, not a right of vast scope, consequence, and duration that stifles it. Hopefully lawmakers will realize that less state-enforced monopoly power, rather than more, would be good for both the economy of innovation and the public interest as a whole.

  9. How the Bureau of Prisons locked down “compassionate release.”

    I learned about Mr. Raymond the way I so often hear about such cases — from a family member in a phone call.[1] By “such cases” I mean elderly, sick, even dying federal prisoners trying to secure early release using the Bureau of Prisons’ (BOP) compassionate release program. The call from Mr. Raymond’s daughter was like hundreds I have taken over the years from family members: she was frustrated, frightened, and completely in the dark about the status of her father’s application for release.

    At 74 years old, Mr. Raymond had served more than 15 years of his 20-year sentence, and Corrections staff thought he had done enough time. He clearly met the BOP criteria for elderly compassionate release, so they helped him submit his request to the warden on December 22, 2015.

    Then the waiting began.

    Mr. Raymond was not dying, but he was aged and ailing. The BOP had funded his very expensive medical care; he had a heart condition and had received a pacemaker while incarcerated. As months passed without word about his request, stress-test results caused his prison doctors grave concern. Then his vision began to deteriorate. Staff sent updated medical reports to the BOP’s central office. Mr. Raymond considered seeking a transfer to a prison hospital in North Carolina. Finally, after a full year of silence, the BOP approved his request.

    Why, his family asked me, is this process so hard?

    The Rules of “Compassion”

    Once available only to prisoners on the verge of death — and even then very rarely — the compassionate release program was expanded in August 2013 in response to sharp scrutiny from the Inspector General of the Department of Justice and to criticism from advocates such as FAMM. The revised rules extended eligibility to elderly prisoners with or without medical conditions, among others. At first glance, the new rules are laudable for their apparent breadth. But scratch the surface and one finds a program notable only for its neglect. Compassionate release is the exception rather than the rule.

    Take, for example, the plight of elderly prisoners like Mr. Raymond. The BOP’s revised rules cover prisoners 65 or older who have served at least 50 percent of their sentence and who suffer from chronic or serious medical conditions. Healthy prisoners 65 or older who have served the greater of ten years or 75 percent of their sentence are also eligible. But only a scant handful are ever recommended for release.
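    To keep the two elderly tracks straight, here is a minimal sketch in Python of the criteria as just described. The function name, signature, and thresholds come only from this article’s summary of the rules; it is an illustrative assumption, not an actual BOP tool or the official rule text.

        # Illustrative sketch of the elderly compassionate release criteria
        # as summarized in this article. All names are hypothetical.
        def meets_elderly_criteria(age, years_served, sentence_years,
                                   has_serious_medical_condition):
            if age < 65:
                return False  # both tracks require age 65 or older
            if has_serious_medical_condition:
                # Medical track: at least 50 percent of the sentence served.
                return years_served >= 0.5 * sentence_years
            # Non-medical track: the greater of ten years
            # or 75 percent of the sentence served.
            return years_served >= max(10, 0.75 * sentence_years)

        # Mr. Raymond: 74 years old, 15 of 20 years served, chronic heart
        # condition -> meets the medical track.
        print(meets_elderly_criteria(74, 15, 20, True))  # True

    Meeting the criteria, of course, only makes a prisoner eligible for a recommendation; as the numbers below show, actual releases are another matter.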

    The DOJ’s Inspector General found that in the first full year after the expansion of compassionate release, 93 elderly prisoners applied under the non-medical provision, but only 2 were released. None of the 203 elderly prisoners with medical conditions who applied made it out. The numbers picked up in 2015, but in FY 2016 there were still only 5 such releases.

    How, you might ask, can a program with “compassion” in its name show so little of it?[2] The answer lies in who administers it. Modern-day compassionate release was set up by Congress in the Sentencing Reform Act of 1984. The SRA strictly limited the ability of federal courts to revisit finalized convictions. Parole was eliminated, and federal prisoners were expected to serve the sentence imposed, with a small credit for good time. With very few exceptions, “Do the crime, do the time” sentencing became the law, and courts lost jurisdiction to revisit sentences.

    Congress made an exception, however, for prisoners who develop “extraordinary and compelling” reasons justifying early release.[3] In 28 U.S.C. § 994(t), Congress directed the U.S. Sentencing Commission to identify criteria for what constitutes an extraordinary and compelling reason. The BOP was given the job of identifying prisoners who meet the criteria and of petitioning the court for their release. The U.S. Attorney represents the BOP in court and files the motion for a reduction in sentence. Finally, judges are responsible for deciding whether the prisoner meets the early release criteria and, based on his history, crime, and conduct in prison, whether he deserves to be released. If so, the court orders his release.

    Jailer and Judge

    Congress did not give prisoners the right to petition the court directly or to appeal an adverse BOP decision to the court; the BOP’s decision is final and unreviewable. The power to free a prisoner is thus placed in the hands of the prosecutor who worked hard to convict him and the jailer whose job it is to keep him locked up. Their reluctance to recommend release is hardly surprising. In practice, the BOP and the DOJ make decisions that Congress intended to leave to judges. Because Congress made the BOP the gatekeeper and gave prisoners no independent path to the courts, the BOP is both jailer and judge in every compassionate release case.

    The BOP’s stinginess has drawn fire even from the staid Sentencing Commission, which revisited and updated the compassionate release criteria in 2016. At a hearing on the subject, commissioners had sharp questions for the BOP about its sparing use of the program. The criteria the Commission adopted included a pointed directive saying, in essence, that the BOP should confine itself to determining whether a prisoner meets the criteria and, if so, bringing a motion for a reduction in sentence to the court, as Congress intended.

    Compassion aside, there are other sound reasons to make it a practice to release elderly prisoners who have served a significant portion of their sentence, many of whom are suffering from age-related illnesses or conditions that make their continued incarceration inhumane and expensive.

    Aging prisoners make up a growing share of the federal prison population. According to a report by the Urban Institute, there were slightly more than 5,000 prisoners aged 65 and older in federal prison in 2011 (3 percent of the BOP population), and their number is expected to triple by 2019. That growth is driven by punitive charging practices and sentencing policies, and the fiscal burden of housing aging prisoners threatens the BOP’s budget. The costs of incarcerating such prisoners are three to five times higher than those for younger prisoners; one study found that medical costs alone for prisoners 55 and over were five times those of younger prisoners.

    Moreover, recidivism declines with age. The Office of the Inspector General found that the 15 percent recidivism rate of prisoners 50 and older is far lower than the 41 percent re-arrest rate for federal prisoners overall. Prisoners released through the compassionate release program had the lowest recidivism rate of all: 3.5 percent.

    If the BOP is unable or unwilling to administer the compassionate release program as Congress intended, Congress should take steps to ensure that prisoners denied or neglected by the BOP nonetheless get their day in court. It can do so by giving prisoners the right to appeal a BOP denial to the court, or to petition the court directly in cases such as Mr. Raymond’s, in which the BOP’s decision is delayed for months or even years. Such a right of appeal would restore to the courts the authority that the BOP has usurped: to determine whether a prisoner meets the compassionate release criteria and, if so, whether he deserves to be released.

    A closing note: Mr. Raymond was one of the lucky few. Not so his prison mate, who, like Mr. Raymond, sought compassionate release but, unlike him, died while awaiting a decision.

    [1] I have used pseudonyms to protect the privacy of the family involved.

    [2] The Bureau of Prisons published a notice in the Federal Register last year, proposing to change the name of the program from “Compassionate Release” to “Reduction in Sentence in Extraordinary and Compelling Circumstances.”

    [3] See 18 U.S.C. § 3582(c)(1)(A)(i).

  10. Is Judicial Review Undemocratic?

    Comments Off on Is Judicial Review Undemocratic?

    America just got a civics lesson from a U.S. Senator on the role of the Supreme Court. In his opening statement during the nomination hearing of Neil Gorsuch, Senator Ben Sasse explained the proper (albeit uncommonly realized) role of a Supreme Court justice. According to Sasse, the Supreme Court, when it appropriately exercises the power of judicial review, defends the long-term will of the people. Sasse is right, and those who wish to defend limited government and the will of the people in the United States should be passionate both about defending judicial review and about keeping it within proper constitutional bounds.

    Defending Judicial Review

    Why isn’t judicial review undemocratic? Why is it all right for the elected representatives of the American people (i.e., Congress) to pass a law only to have it “struck down” by a panel of unelected, dour Ivy Leaguers in black robes (i.e., the Supreme Court)?

    Before we get to the answer, a brief refresher in American civics: American constitutionalism, as understood by the framers of the U.S. Constitution, holds that the “will of the people” resides not in any single law passed by Congress but in the fundamental law that is the U.S. Constitution. It is the Constitution that embodies the long-term will of the people.

    The Constitution established an essentially popular government, but the problem with all popular governments is the constant tendency of majorities to oppress minorities, particularly during temporary periods of political passion. The framers therefore institutionalized checks against the temporary ambition of the majority through such features as the bicameral legislature (Article I, Secs. 2-3) and the executive veto (Article I, Sec. 7). At the same time, they respected the popular foundation of American political authority: the Constitution was originally ratified by the people of the several states (Article VII), and the fundamental law can be revised by amendment whenever a supermajority agrees (Article V).

    So, the Constitution, taken as a whole, represents the will of the people, bound by certain constraints to prevent tyranny of the majority. Any action of a congressman, president, or Supreme Court justice at odds with the U.S. Constitution is therefore at odds with the will of the people. We have a word for that: unconstitutional.

    What shall we say then of judicial review? If Congress passes an unconstitutional law, that law cannot in any true sense represent the will of the people, especially if it were to represent only some temporary spasm of political desire on the part of a majority of the country. This, in any case, was Alexander Hamilton’s argument in Federalist 78. According to Hamilton, when Congress passes a law that it had no authority to pass, it effectively “enable[s] the representatives of the people to substitute their will to that of their constituents.” When this happens, the Supreme Court may lawfully act as “an intermediate body between the people and the legislature, in order, among other things, to keep the latter within the limits assigned to their authority.” In this way, Hamilton explains the essentially-democratic nature of the practice of judicial review: “If there should happen to be an irreconcilable variance” between the Constitution and a law of Congress, “that which has the superior obligation and validity ought of course to be preferred; or in other words, the constitution ought to be preferred to the statute, the intention of the people to the intention of their agents.”

    Rather than being undemocratic, judicial review, rightly understood and rightly exercised, defends the long-term will of the people. As Sasse explained during the Gorsuch hearing, “When Congress passes an unconstitutional law, it is in fact the Congress that is violating the long-term will of the people, for the judiciary is there to assert the will of the people as embodied in our shared Constitution over and against that unconstitutional but perhaps temporarily popular law.”

    The Limits of Judicial Review

    While judicial review rightly understood constitutes an essential feature of the American political system, unrestrained judicial review constitutes a dangerous deviation from democratic principles. Hamilton explains in Federalist 78 that judicial review does not “suppose a superiority of the judicial to the legislative power. It only supposes that the power of the people is superior to both.” When the U.S. Supreme Court strikes down laws of Congress that are not in “irreconcilable variance” with the U.S. Constitution, it effectively substitutes its own will, rather than the long-term will of the people embodied in the Constitution, as the final measure by which all laws are judged.[1]

    To be clear, this renders the American polity an oligarchy instead of a democratic republic, and it is no better than Congress passing laws that it has no authority to pass. Both constitute an attempt by our governors to substitute their own will for the long-term will of the people as embodied in the Constitution. In fact, Madison and Hamilton were clear in the Federalist Papers that although all three departments of government play a role in the interpretation of the U.S. Constitution (interpretations which receive institutional force in powers such as the legislative power of Congress and the executive power of the president), the “people themselves…can alone declare its true meaning and enforce its observance” through such things as elections and, of course, through amendments to the U.S. Constitution. The point is that the Supreme Court does not, any more than the president or Congress, provide a final interpretation of the Constitution for which there can never be an appeal. If that were the case, “the people will have ceased to be their own rulers, having to that extent practically resigned their Government into the hands of that eminent tribunal.”[2]

    So, is judicial review undemocratic? No! Rightly understood, judicial review is an essential bulwark of American liberty. But wrongly understood, it is an abuse of court power, an abuse made more dangerous by many Americans’ failure to recognize that the American people themselves, not the legislature, the courts, or the president’s legal counsel, are the final judge of the meaning of the Constitution, which is itself the will of the people.


    [1] The United States Supreme Court has, at various points, asserted, either implicitly or explicitly, that its constitutional interpretation, rather than the Constitution itself, is the supreme law of the land. One example of an explicit assertion to this effect occurred in Cooper v. Aaron (1958); for a discussion of this as a problem, see Edwin Meese, III, “The Law of the Constitution,” Tulane Law Review, Vol. 61: 979-990.

    [2] These were Lincoln’s words in his First Inaugural Address, responding to the Supreme Court’s decision in Dred Scott v. Sandford (1857), in which the Court held the Missouri Compromise of 1820 unconstitutional on the ground that it violated an alleged Fifth Amendment right to own other human beings as property.