Category Archive: Justice
What Charles Darwin owes Adam Smith
The following is a lightly edited, slightly condensed transcript of the talk “Adam Darwin: Emergent Order in Biology and Economics,” presented by Matt Ridley at the Adam Smith Institute in 2012.
I’ve called my lecture “Adam Darwin” to stress how congruent the philosophies of Adam Smith and Charles Darwin are. The common theme, of course, is emergence — the idea that order and complexity can be bottom-up phenomena; both economies and ecosystems emerge. But my purpose really is to explore not just the history and evolution of this shared idea but its future: to show that in the age of the Internet, Adam-Darwinism is the key to understanding how the world will change.
The Common Ancestry of Evolution and Economics
Darwin’s debt to the political economists is considerable. He spent formative years in Edinburgh among the ghosts of Hume, Hutcheson, Ferguson, and Smith. When he was at Cambridge in 1829, he wrote, “My studies consist in Adam Smith and Locke.” At his grandfather Josiah Wedgwood’s house in Staffordshire, Darwin often met the lawyer and laissez-faire politician Sir James Mackintosh, whose daughter married Charles’s brother-in-law (and had an affair with his brother).
On the Beagle, he read the naturalist Henri Milne-Edwards, who took Adam Smith’s notion of the division of labor and applied it to the organs of the body. After seeing a Brazilian rainforest, Darwin promptly reapplied the same idea to the division of labor among specialized species in an ecosystem: “The advantage of diversification in the inhabitants of the same region is in fact the same as that of the physiological division of labor in the organs of the same individual body — a subject so well elucidated by Milne-Edwards.”
Back in England in the 1830s, through his brother Erasmus, Darwin fell in with the radical feminist and novelist Harriet Martineau, who had shot to fame with her series of short fictional books called Illustrations of Political Economy. These were intended to educate people in the ideas of Adam Smith, “whose excellence,” she once said, “is marvelous.” I believe it was probably at Martineau’s suggestion that, in October 1838, Darwin came to reread Malthus (with whom Martineau was on very close terms) and to have his famous insight that death must be a non-random and therefore selective force.
Parenthetically, it’s worth recalling the role of anti-slavery in bringing Martineau and Darwin together. Darwin’s grandfather Josiah Wedgwood was one of the leaders and organizers of the anti-slavery movement, a friend of Wilberforce, and the maker of the famous medallion “Am I not a man and a brother?” which was the emblem of the anti-slavery movement. Charles Darwin’s aunt Sarah gave more money to the anti-slavery movement than any woman in Britain. Darwin had been horrified by what he called “the heart-sickening atrocities of slavery in Brazil.” Abolition was almost the family business. Meanwhile, Harriet Martineau had just toured America speaking against slavery and had become so notorious that there were plans to lynch her in South Carolina.
Today, to a bien pensant intellectual, it might seem surprising to find such a left-wing cause alongside such a right-wing enthusiasm for markets, but it should not be. So long is the shadow cast by the top-down determinism of Karl Marx, with his proposal that the state should be the source of reform and welfare, that it’s often forgotten how radical the economic liberalism of the political economists seemed in the 1830s. In those days, to be suspicious of a strong state was to be left-wing (and, if you’ll forgive the pun, quite right, too).
Today, generally, Adam Smith is claimed by the right, Darwin by the left. In the American red states, where Smith’s emergent decentralized philosophy is all the rage, Darwin is often reviled for his contradiction of dirigiste creationism. In the average British university by contrast, you will find fervent believers in the emergent decentralized properties of genomes and ecosystems, who nonetheless demand dirigiste policy to bring order to the economy and society. Yet, if the market needs no central planner, why should life need an intelligent designer, or vice versa?
Ideas evolve by descent with modification just as species do, and the idea of emergence is no exception. Darwin at least partly got the idea from the political economists, who got it from the empirical philosophers. To put it crudely, Locke and Newton begat Hume and Voltaire, who begat Hutcheson and Smith, who begat Malthus and Ricardo, who begat Darwin and Wallace. Darwin’s central proposition was that faithful reproduction, occasional random variation, and selective survival can be a surprisingly progressive and cumulative force. It can gradually build things of immense complexity. Indeed, it can make something far more complex than a conscious deliberate designer ever could. With apologies to William Paley and Richard Dawkins, it can make a watchmaker.
Each time a baby is conceived, 20,000 genes turn each other on and off, in a symphony of great precision, building a brain of 10 trillion synapses, each refined and remodeled by early and continuing experience. To posit an immense intelligence capable of comprehending such a scheme, rather than a historical emergent process, is merely to exacerbate the problem — who designed the designer?
Likewise, as Leonard Read pointed out, each time a pencil is purchased, tens of thousands of different people collaborate to supply the wood, the graphite, the knowledge, and the energy, without any one of them knowing how to make a pencil. This is Smith’s point, if you like: it came about by bottom-up emergence, not top-down dirigisme. In both cases, nobody’s in charge, and crucially, nobody needs to understand what’s being done.
Why Innovation Happens
So far, I’m treading a well-trodden path in the steps of Herbert Spencer, Friedrich Hayek, Karl Popper, and many others who’ve explored the parallels between evolutionary and economic theory. But the story has grown a lot more interesting in the last few years, I think, because of developments in the field of cultural and technological evolution. Thanks especially to the work of three anthropologists — Rob Boyd, Pete Richerson, and Joe Henrich — we are beginning now to understand the extraordinarily close parallels between how our bodies evolved and how our tools and rules evolve. Innovation is an evolutionary process. That’s not just a metaphor; it’s a precise description. I need you to re-examine a lot of your assumptions about how innovation happens, and to disenthrall yourself of what you already know.
First, innovation happens mainly by trial and error. It’s a tinkering process, and it usually starts with technology, not science, as Terence Kealey has shown. The trial and error may happen between firms, between designs, between people, but it happens. If you look at the tail planes of early airplanes, you see a lot of trial and error: many different designs were tried, and eventually one was settled on.
Exchange is crucial to innovation, and innovation accelerates in societies that open themselves up to internal and external exchange through trade and communication — Ancient Greece, Song China, Renaissance Italy, 16th century Holland, 19th century Britain — whereas innovation falters in countries that close themselves off from trade — Ming China, Nehru’s India, Communist Albania, North Korea.
Moreover, every innovation, as Brian Arthur has argued, is a combination of other innovations. As L.T.C. Rolt, the historian of engineering, put it, “The motorcar looks as if it was sired by the bicycle out of the horse carriage.” My favorite example of this phenomenon is the pill camera, which takes a picture of your insides on the way through. It came about after a conversation between a gastroenterologist and a guided missile designer.
Adam Smith, in other words, has the answer to an evolutionary puzzle: what caused the sudden emergence of behaviorally modern human beings in Africa in the past hundred thousand years or so? In that surprisingly anthropological first chapter of The Wealth of Nations, Smith saw so clearly that what was special about human beings was that they exchanged and specialized.
Neanderthals didn’t do this — they only ever used local materials. In this cave in Georgia, the Neanderthals used local stone for their tools; you never find stone from any great distance away at Neanderthal sites. But when modern human beings moved into this very same area, you find stone from many miles away being used to make the tools, as well as local stone. That means that moderns had access to ideas, as well as materials, from far away. Just as sex gives a species access to innovations anywhere in its species, so exchange gives you access to innovation anywhere in your species.
When did it first happen? When was trade invented? At the moment, the oldest evidence is from about 120,000 years ago. That’s when obsidian axes in Ethiopia and snail-shell beads in Algeria start traveling long distances. These beads are made from marine shells, but they’re found a hundred miles inland. And we know from modern Aborigines in Australia that long-distance movement of man-made objects happens by trade, not migration. So it’s not that people are walking all the way to the Mediterranean and picking up shells and walking all the way back again; they’re getting them hand-to-hand by trade.
Now that’s 120,000 years ago — ten times as old as agriculture — but I suspect it goes back further still. There’s a curious flowering of sophisticated tool kits in Africa around 160,000 years ago, in a seashore-dwelling population, as evidenced by excavations at a place called Pinnacle Point. It came and went, but careful modeling by some anthropologists at University College London suggests that this might be a demographic phenomenon: a rich food supply led to a dense population, which led to a rich toolkit. But that’s only going to be true if there is exchange going on, if the ideas are having sex — dense populations of rabbits don’t get better tools. Once exchange and specialization are happening, cultural evolution accelerates if population density rises, and decelerates if it falls.
We can see this clearly from more recent archeology in a study by Michelle Kline and Rob Boyd. In the Pacific, in pre-Western-contact times, the sophistication of fishing tackle depends on the amount of trading contact between islands. Isolated islands, controlling for island size, have simpler fishing tackle than well-connected islands. And indeed, if you cut people off from exchange networks, human progress not only stalls, it can go backwards.
The best example of this is Tasmania, which became an island ten thousand years ago when sea levels rose. Not only did the Tasmanians not get innovations that happened after this time, such as the boomerang, they actually dis-invented many of their existing tools. They gave up making bone tools altogether, for example. As Joe Henrich has argued, the reason for this is that their population was too small to sustain the specialization needed to collaborate in the making of some of these tools. Their collective brain was not big enough — nothing to do with their individual brains, it’s the collective intelligence that counts.
As a control for this idea, notice that the same thing did not happen in Tierra del Fuego. The Fuegian Indians continued to progress technologically. The reason is that the Magellan Strait is narrower than the Bass Strait, so trade continued, and the Fuegians had access to a collective brain the size of South America, whereas the Tasmanians had access to a collective brain only the size of Tasmania.
The Collectivism of Markets
Now for me one of the most fascinating implications of this understanding of the collective brain is just how touchy-feely liberal it is. I’m constantly being told that to believe in markets is to believe in selfishness and greed. Yet I think the very opposite is true. The more people are immersed in markets, the more they collaborate, the more they share, the more they work for each other. In a fascinating series of experiments, Joe Henrich and his colleagues showed that people who play ultimatum games — a game invented by economists to try and bring out selfishness and cooperation — play them more selfishly in more isolated and self-sufficient hunter-gatherer societies, and less so in more market-integrated societies.
History shows that market-oriented, bottom-up societies are kinder, gentler, less likely to go to war, more likely to look after their poor, more likely to patronize the arts, and more likely to look after the environment than societies run by the state. Hong Kong versus Mao’s China, 16th century Holland versus Louis XIV’s France, 20th century America versus Stalin’s Russia, the ancient Greeks versus the ancient Egyptians, the Italian city-states versus the Papal States, South Korea versus North Korea, even today’s America versus today’s France, and so on.
As Voltaire said, “Go into the London stock exchange and you will see representatives of all nations gathered there for the service of mankind. There the Jew, the Mohammedan, and the Christian deal with each other as if they were of the same religion, and give the name of infidel only to those who go bankrupt.”
As Deirdre McCloskey reminds us, we must not slip into apologizing for markets, for saying they are necessary despite their cruelties. We should embrace them precisely because they make people less selfish, and they make life more collective, less individualistic. The entire drift of human history has been to make us less self-sufficient and more dependent on others to provide what we consume and to consume what we provide. We’ve moved from consuming only as widely as we produce to being much more specialized as producers and much more diversified as consumers.
That’s the very source of prosperity and innovation. It’s time to reclaim the word “collectivism” from the statists on the left. The whole point of the market is that it does indeed “collectivize” society, but from the bottom-up, not the top-down. We surely know by now after endless experiments that a powerful state encourages selfishness.
Let me end with an optimistic note. If I’m right, that exchange is the source of innovation, then I believe that the invention of the Internet, with its capacity to enable ideas to have sex faster and more promiscuously than ever, must be raising the innovation rate. And since innovation creates prosperity by lowering the time it takes to fulfill needs, then the astonishingly rapid lifting of humanity out of poverty that has happened all over the world, particularly in the last 20 years, can surely only accelerate. Indeed, it is accelerating. Much of Africa is now enjoying Asian Tiger-style growth. Child mortality is plummeting at a rate of five percent a year in Africa. In Silicon Valley recently, Vivek Wadhwa showed me a $35 tablet computer that will shortly be selling in India. Think what will be invented when a billion Indians are online.
In terms of human prosperity, therefore, we ain’t seen nothing yet. And because prosperity is an emergent property, an inevitable side effect of human exchange, we could not stop it even if we wanted to. All we could do is divert it elsewhere on the planet (which is what we in Europe seem intent on doing). “Adam Darwin” did not invent emergence: his was an idea that emerged when it was ripe. And like so many good ideas, it was already being applied long before it was even understood. And so I give you Adam-Darwinism as the key to the future.
Freaking out over tiny risks: A case study from a moral panic
Diandra Toyos claims that she and her children were nearly victims of human trafficking. In a Facebook post that quickly went viral, she wrote of a recent visit to her local Ikea with her three children:
“I noticed a well dressed, middle aged man circling the area, getting closer to me and the kids. At one point he came right up to me and the boys, and instinctively I put myself between he and my mobile son. I had a bad feeling. He continued to circle the area, staring at the kids.”
“Something was off. We knew it in our gut. I am almost sure that we were the targets of human trafficking. This is happening all over. Including the United States. It’s in our backyards.”
A back-of-the-envelope critical review reveals this claim to be nonsense on its face. We are to believe that this woman spent over half an hour in the store (by her own admission), all the while (apparently) thinking someone was attempting to kidnap her children.
One can only assume Diandra’s inner monologue went something like this: “Gee, I really do want to avoid having my children kidnapped and sold into slavery, but I really need a sofa at an affordable price. Ours is rather old and lumpy. So I’m going to putter around the store for another 30 minutes. Ooooh! Is that a Taiga desk? I just love midcentury modern!”
“Strains credulity” doesn’t even come close. An explanation more compatible with Occam’s Razor is that some woman thought a dude in Ikea was creepy and wrote about it on Facebook. Film at 11.
And yet — as anyone who has a social media account and friends with children is no doubt aware — these stories are popping up everywhere. A Snopes article documents incidents reported from a Longview, Texas Target store, a Dillard’s in Denton, Texas, and a Kroger in Brownstown Township, Michigan. The full list is even longer, but you get the gist.
Why are people falling for these urban legends when there is so much real danger we need to avoid?
Moral panics vs. data
We are naturally drawn to narratives that inspire panic. This is a phenomenon in American culture that goes back at least as far as the “white slavery epidemic” of the early 1900s. It resurfaced in the 1980s, as we became increasingly paranoid about satanic cults embedded in daycares selecting children for ritualistic abuse (no evidence of such a phenomenon ever materialized). Today, the term “sex trafficking” has made its way into our common discourse as if it were an identifiable phenomenon.
But the data indicate otherwise. As Elizabeth Nolan Brown documents, a Department of Justice report indicates that such incidents of child abduction are exceedingly rare. Such “stereotypical” kidnappings (e.g., by a stranger or “slight acquaintance,” with the intent to detain the child indefinitely, etc.) occurred in the U.S. only 105 times in 2011 (the same number of occurrences as in 1997 — so no, things are not getting worse). A whopping 92 percent of the time, an abducted child was recovered.
Let’s put this in perspective: in terms of relative risk, compared to the nightmare Ikea scenario outlined above (where presumably a child would be abducted and never returned to his or her family), a child is more likely to experience the following on a yearly basis:
- Death due to an automobile accident (obviously)
- Death due to being struck by a thrown, projected or falling object
- Drowning in a bathtub
- Being bitten or struck by a dog
- Death due to contact with hot tap water
- Being crushed by falling furniture
(Regarding that last item, I can’t help but point out: the real danger in an Ikea store may be their furniture. One wonders if Ms. Toyos has safely secured her purchased goods to a wall, as Ikea recommends, or if she was too busy fanning the flames of moral panic on social media.)
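To put the headline risk on a scale, the DOJ count above can be turned into rough annual odds. Here is a minimal back-of-the-envelope sketch in Python; the US under-18 population of roughly 73 million is my outside approximation, not a figure from this article:

```python
# Rough annual odds that a given US child is the victim of a
# "stereotypical" stranger kidnapping, using the DOJ count cited above.
kidnappings_per_year = 105   # DOJ figure for 2011 (from the text)
us_children = 73_000_000     # approximate US under-18 population (assumption)

one_in = us_children / kidnappings_per_year
print(f"roughly 1 in {one_in:,.0f} children per year")
```

Even before accounting for the 92 percent recovery rate, the risk works out to the order of one in several hundred thousand per year.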
Cognitive errors and moral panics
Cognitive psychologists have long recognized that people tend to overestimate the occurrence of certain types of rare events. We routinely use heuristics — mental shortcuts that are easy ways of performing “quick and dirty” calculations — to make decisions. Heuristics are essential, as it would be impossible to analyze every potential outcome of every decision we make in a disinterested, rational manner. However, heuristics are not without cost.
One classic (and relevant) example is the availability heuristic: we tend to think that events we recall more readily are more likely to occur. A child being abducted from a public place and (presumably) sold into slavery is horrific, to be sure. As such, it occupies a disproportionate amount of space in our minds. We can easily recall that terrible story we read on Facebook about a child who was (supposedly) almost abducted from a grocery store, so we assume that it is a real danger.
But just because a scenario is easy to imagine does not mean that it is likely to occur. Real dangers are far more mundane. More specifically, the vast majority of kidnappings do not fit the “stranger at Target” motif: they are most often committed by non-custodial parents or other family members.
Heuristics and the real cost of moral panics
This brings me to my final point: the scenario Toyos outlines is implausible, but the fear it creates is real. The energy we expend worrying about this kind of event could easily supplant efforts to avoid real dangers.
For example: the vast majority of sexual abuse perpetrators are known to their victims. Logically, I should be far more concerned about my children’s babysitters, teachers, and ministers than a random patron in a grocery store. Such conclusions are counterintuitive. I like my children’s babysitters. It’s easy to think that evil lurks in every corner of my local Ikea store, but difficult to imagine that a respected member of my community might actually be dangerous.
If we’re applying proper reasoning, however, we will apply greater scrutiny to people we know than to strangers.
This is but one example of how cognitive biases can adversely affect our lives. There are others. For example, the base rate fallacy refers to the fact that we routinely fail to recognize the low underlying rates of a given phenomenon and incorporate that fact into our assessments of risk. Yes, it may be true that one bad sunburn will increase your risk of developing skin cancer by 50 percent. Scary, no? But in fact, roughly 2.2 percent of the population will develop skin cancer in their lifetime — a substantial portion of which have had at least one sunburn (so the risk is baked in). This puts the risk of developing skin cancer — to say nothing of dying of it — as a result of a single sunburn at a fraction of what we would normally assume.
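The sunburn arithmetic can be sketched the same way. Treat the 2.2 percent lifetime figure as a population-wide average and the “50 percent” as a relative risk; the share of people who have had at least one bad sunburn (here 50 percent) is my illustrative assumption, not a figure from the article:

```python
# Base-rate adjustment: how much lifetime risk does one bad sunburn add?
p_overall = 0.022   # lifetime skin-cancer risk, whole population (from the text)
rr = 1.5            # "increases your risk by 50 percent" as a relative risk
s = 0.5             # assumed share of people with at least one bad sunburn

# The overall rate is a mixture of the two groups:
#   p_overall = s * r_burn + (1 - s) * r_no_burn,  with  r_burn = rr * r_no_burn
r_no_burn = p_overall / (s * rr + (1 - s))
r_burn = rr * r_no_burn

print(f"risk without a sunburn: {r_no_burn:.2%}")
print(f"risk with a sunburn:    {r_burn:.2%}")
print(f"added risk:             {r_burn - r_no_burn:.2%}")
```

Under these assumptions the sunburn adds well under one percentage point of absolute lifetime risk, far less alarming than the headline “50 percent” suggests.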
Simply being aware of how heuristics affect our judgments can be helpful. When you come across an article over social media that plays on your fears, ask yourself: how are my emotions being manipulated? What is the real risk? How are heuristics fooling me?
The answer may surprise you.
Anarcho-Capitalism. Prof. Bryan Caplan admits it “sounds really crazy.” Could we actually privatize law, courts, and all of government? Full interview here
The Long History of Music Piracy
When you’re a historian, people expect you to write history. So, twelve years ago, when I told people I was writing a dissertation about music piracy, the typical response was, “But… that’s not history.”
I couldn’t blame them. The dirt was still fresh on Napster’s grave at the time, and challenges to online services such as Grokster, LimeWire, and even YouTube were still wending their way through the courts. The days of using cassettes to make mixtapes or lurching to press the “record” button to capture songs from the radio were not far behind. If anything, a few older folks might have dim memories of shaggy-haired hippies swapping Grateful Dead bootlegs in the 1970s.
All of this seemed too fresh to be “History” in the way of Adolf Hitler and the Peloponnesian War and the like. But, in fact, piracy has a history as long as sound recording — even as long as written music itself. Jazzheads swapped copies of shellac discs in the 1930s, and shady operators even copied music in the wax cylinder era of the 1910s. Sheet music was bootlegged in the nineteenth century, just as printed materials had been since Gutenberg unleashed the printing press four hundred years earlier.
Music, though, has proven more vexing to regulate than other copyrighted works. A piece of sheet music is cheaper and easier to photocopy than an entire book. And anyone can play his own version of a song in a way that another writer cannot “play” The Grapes of Wrath. American copyright law did not even cover music until 1831 — originally, only books, maps, and charts were protected.
As a matter of fact, I discovered that sound recordings were not protected in the United States until 1972. How could this be?
The Enlightenment Origins of American Copyright
Part of the reason lay in the United States Constitution itself. Our founding document is a notoriously succinct one, outlining the structure of government and spelling out a handful of basic responsibilities for federal authorities — one of which was copyright. The founders empowered Congress:
To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.
Children of the Enlightenment, the founders believed that the spread of knowledge contributed to the public good, and government ought to encourage it. (As Thomas Jefferson put it in 1813, “He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me.”) Thus, government should incentivize “Authors and Inventors” to create — but rights to their works should also be “limited,” so as not to strangle the free exchange of art and ideas.
And when they said “limited,” they meant it. The first copyright term lasted a measly fourteen years, and Congress only reluctantly added new kinds of works — written music, photography, film — to the scope of copyright over the course of the nineteenth and early twentieth centuries. Seeing copyright as a monopoly, a sort of necessary evil, they were loath to expand its domain unless absolutely necessary.
The Trouble with Music… and Sound
The fate of sound recording shows how true this is. After Thomas Edison worked out the first truly effective method for inscribing and replaying sound waves in 1877, an era of freewheeling piracy ensued. By 1905, Congress was besieged by songwriters, music publishers, and “talking machine” companies with cries for help. Famous composers such as Victor Herbert and John Philip Sousa pled their case, citing the unfairness that a band of rogues profited off their works. (Of course, Sousa also slyly conceded, “I can compose better if I get a thousand dollars than I can for six hundred.”)
But there was a rub. How exactly would copyright reform work? The songwriters were mad because the companies making disks, wax cylinders, and player piano rolls used their music without authorization or compensation; they demanded both. The talking machine companies wanted to record performances of the written music for free. If forced to pay royalties to composers, then they wanted to have a copyright for their own recordings too.
Congressmen were perplexed. If Sousa owns the copyright for his written composition, then how could the talking machine company own a separate copyright for a recorded performance of it? Isn’t it the same music? What if two different companies recorded two different versions of the same song? Were there copyrights for each recording?
The debate may seem technical, arcane, even alien to twenty-first-century ears, but the politicians contemplated the question long before there were music videos or sampling or remixes. It seems obvious to us today that Frank Sinatra’s and Sid Vicious’s versions of “My Way” are two distinct works. Obviously, one work — a song — can exist in an almost infinite number of unique permutations.
Muddling through in the Age of Jazz
Congress decided to punt on the issue (as it does) — and it turned out to be a good deal for songwriters, record companies, and consumers. With the Copyright Act of 1909, lawmakers set up a system that let songwriters and music publishers earn a royalty when their songs were recorded — but the rate for each “use” (each disk or piano roll manufactured and sold) was a flat one, set by the government. And artists and labels were more or less free to record versions of songs as they pleased.
What Congress did not decide to do was to provide copyright for sound recordings themselves. It was just too confusing, and in the Progressive Era, anti-monopoly sentiment remained strong in American society. Copyright still looked like too much of a monopoly.
The curious result was that sound recordings seemed to lack copyright protection — and pirates noticed. For decades, bootleggers operated in the shadows of the US economy, recording live performances of operas, copying out-of-print jazz and blues records for connoisseurs, and sometimes simply making a quick buck. (The Mafia occasionally pirated pop hits, though many bootleggers were just enthusiasts of hard-to-find music.) Throughout, they could point to the law and say they were not violating it — because sound recordings weren’t protected under the Copyright Act.
The arguments may seem flimsy, but both courts and lawmakers struggled from the 1930s to the 1960s to figure out how to square the circle. Sometimes judges ruled against bootleggers under the doctrine of “unfair competition,” arguing that the pirates freeloaded off the original label’s financial investment in producing and promoting a record. (By making a record or an artist popular, judges reasoned, the label generated “good will” with the public, which the pirate unfairly exploited.)
The Rise of Stronger, Longer Copyright
But the problem remained, since Congress was still reluctant to act on copyright reform. It took the outbreak of widespread bootlegging in the rock counterculture of the 1960s to push the issue to the front burner. Armed with cassette tapes, hippie bootleggers copied unreleased Bob Dylan recordings (“the basement tapes”) and captured Jimi Hendrix concerts for an eager youth audience.
Finally, in 1971, Congress passed a law that provided record labels with protection for their products. And in 1973, the Supreme Court ruled that states could pass their own anti-piracy laws, even though copyright had traditionally been understood as a responsibility of the federal government, and state laws potentially allowed infinite protection for recordings — arguably violating the “limited times” provision of the Constitution.
In a deindustrializing America of the 1970s, though, the cries of the record industry resonated — as did those of other “information” businesses. Makers of albums and movies and software argued that their firms needed protection more than ever in a post-industrial economy, where information was the currency of the age.
The old anti-monopoly sentiments of the Progressive Era melted like butter. Beginning in 1976, Congress embarked on a program that lengthened the term of copyright from 56 years to the life of the author plus 50 years; increased penalties for infringement; and expanded the scope of what could be copyrighted and patented (for example, software and genetically modified organisms). Congress even arbitrarily added 20 years to the length of copyright in 1998 — a law critics dubbed the “Mickey Mouse Protection Act,” since the beloved cartoon character’s copyright was about to lapse at the time.
The Future of Piracy
Where does this story leave us today — in a post-Napster world of YouTube, SoundCloud, and BitTorrent? A fan could illegally download Prince’s entire discography within minutes of the artist’s passing in 2016, but he or she could not stream his songs on Spotify because the Purple One had the legal right to keep them off all streaming platforms.
Prince’s case illustrates the paradox: copyright is stronger and longer than it has ever been before, and yet it is arguably flouted more often than ever too. The US economy still generates a great deal of “information,” but information travels more or less freely. One could argue that the postindustrial economy thrives on the very fact that it is as easy as pressing ctrl-C to copy a word, image, or sound.
America and the world could do with a bit more of the anti-monopoly spirit of old. I do not need the incentive of a lengthy copyright term to write. (If I live another 50 years, the copyright for this article would not lapse until 2137. Is that really necessary?) And the penalties for copyright infringement do not need to be so punitive that so-called “copyright trolls” can use the law to intimidate a lowly blog out of existence with extortionate demands for using a photo without permission.
Congress once actually had it right — as hard as those words are to type. Copyright ought to be a pragmatic bargain between artists, business, and consumers that promotes creativity, not a right of vast scope, consequence, and duration that stifles it. Hopefully lawmakers will realize that less state-enforced monopoly power, rather than more, would be good for both the economy of innovation and the public interest as a whole.
Comments Off on How the Bureau of Prisons locked down “compassionate release.”
I learned about Mr. Raymond the way I so often hear about such cases — from a family member in a phone call. By “such cases” I mean elderly, sick, even dying federal prisoners trying to secure early release using the Bureau of Prisons’ (BOP) compassionate release program. The call from Mr. Raymond’s daughter was like hundreds I have taken over the years from family members: she was frustrated, frightened, and completely in the dark about the status of her father’s application for release.
At 74 years old, Mr. Raymond had served more than 15 years of his 20-year sentence, and Corrections staff thought he had done enough time. He clearly met the BOP criteria for elderly compassionate release, so they helped him submit his request to the warden on December 22, 2015.
Then the waiting began.
Mr. Raymond was not dying, but he was aged and ailing. The BOP had funded his very expensive medical care; he had a heart condition and had received a pacemaker while incarcerated. As months passed without word about his request, stress-test results caused his prison doctors grave concern. Then his vision began to deteriorate. Staff sent updated medical reports to the BOP’s central office. Mr. Raymond considered seeking a transfer to a prison hospital in North Carolina. Finally, after a full year of silence, the BOP approved his request.
Why, his family asked me, is this process so hard?
The Rules of “Compassion”
Once available only to prisoners on the verge of death — and even then very rarely — the compassionate release program was expanded in August 2013 in response to sharp scrutiny from the Inspector General of the Department of Justice and to criticism from advocates such as FAMM. The revised rules extended eligibility to elderly prisoners with or without medical conditions, among others. At first glance, the new rules are laudable for their apparent breadth. But scratch the surface and one finds a program notable only for its neglect. Compassionate release is the exception rather than the rule.
Take, for example, the plight of elderly prisoners like Mr. Raymond. The BOP’s revised rules cover prisoners 65 or older who have served at least 50 percent of their sentence and who suffer from chronic or serious medical conditions. Healthy prisoners 65 or older who have served the greater of ten years or 75 percent of their sentence were also deemed eligible. But only a scant handful are ever recommended for release.
The DOJ’s Inspector General found that in the first full year after the expansion of compassionate release, 93 elderly prisoners applied under the non-medical provision, but only 2 were released. None of the 203 elderly prisoners with medical conditions who applied made it out. The numbers picked up somewhat in 2015, but in FY 2016 there were still only 5 such releases.
How, you might ask, can a program with “compassion” in its name be administered with so little of it? The answer lies in who administers it. Modern-day compassionate release was set up by Congress in the Sentencing Reform Act of 1984. The SRA strictly limited the ability of federal courts to revisit finalized convictions. Parole was eliminated, and federal prisoners were expected to serve the sentence imposed, with a small credit for good time. With very few exceptions, “Do the crime, do the time” sentencing became the law, and courts lost jurisdiction to revisit sentences.
Congress made an exception, however, if prisoners developed “extraordinary and compelling” reasons justifying early release. In 28 U.S.C. § 994(t), Congress directed the U.S. Sentencing Commission to identify criteria for what constituted extraordinary and compelling reasons. The BOP was given the job of identifying prisoners who met the criteria and of petitioning the court for their release. The U.S. Attorney represents the BOP in court and files the motion for a reduction in sentence. Finally, judges are responsible for deciding whether the prisoner meets the early release criteria and whether they deserve to be released, based on their history, crime, and conduct in prison. If so, the court orders their release.
Jailer and Judge
Congress did not give prisoners the right to petition the court directly or to appeal an adverse decision by the BOP. The BOP’s decision is final and unreviewable. This places the power to free a prisoner in the hands of the prosecutor who worked hard to convict him and the jailer whose job it is to keep him locked up. Their reluctance to promote release is hardly surprising. In practice, then, the BOP and the DOJ make decisions that Congress intended to leave to judges: the BOP is both jailer and judge in every compassionate release case.
The BOP’s stinginess has drawn fire even from the staid Sentencing Commission. That body revisited and updated compassionate release criteria in 2016. At a hearing on the subject, commissioners had sharp questions for the BOP about its sparing use of compassionate release. The criteria the Commission adopted included a pointed directive: the BOP should confine itself to determining whether a prisoner meets the criteria and, if so, bringing a motion for reduction in sentence to the court, as Congress intended.
Compassion aside, there are other sound reasons to make it a practice to release elderly prisoners who have served a significant portion of their sentence, many of whom are suffering from age-related illnesses or conditions that make their continued incarceration inhumane and expensive.
Aging prisoners are a growing share of the federal prison population. According to a report by the Urban Institute, there were slightly more than 5,000 prisoners 65 and older in federal prison in 2011 (3 percent of the BOP population), and their number is expected to triple by 2019. That growth is driven by punitive charging practices and sentencing policies. The fiscal burden of housing aging prisoners threatens the BOP’s budget: the costs of incarcerating them are three to five times higher than for younger prisoners, and one study found that medical costs alone for prisoners 55 and over were five times those of younger prisoners.
Moreover, recidivism declines with age. The Office of the Inspector General found that the 15 percent recidivism rate of prisoners 50 years and older is much lower than the 41 percent re-arrest rate for all federal prisoners. Prisoners released through the compassionate release program had the lowest recidivism rates of all, 3.5 percent.
If the BOP is unable or unwilling to administer the compassionate release program as Congress intended, Congress should take steps to ensure that prisoners denied or neglected by the BOP nonetheless get their day in court. It can do so by giving prisoners the right to appeal a BOP denial, or to ask a court to compel a decision from the BOP in cases such as Mr. Raymond’s, in which delays stretch out over months or even years. Such a right of appeal would restore to the courts the authority the BOP has usurped: to determine whether a prisoner meets the compassionate release criteria and, if so, whether he deserves to be released.
A closing note: Mr. Raymond was one of the lucky few. Not so his prison mate, who like Mr. Raymond sought compassionate release but unlike him died while awaiting a decision.
 I have used pseudonyms to protect the privacy of the family involved.
 The Bureau of Prisons published a notice in the Federal Register last year, proposing to change the name of the program from “Compassionate Release” to “Reduction in Sentence in Extraordinary and Compelling Circumstances.”
 See 18 U.S.C. § 3582(c)(1)(A)(i).
Comments Off on Is Judicial Review Undemocratic?
America just got a civics lesson from a U.S. Senator on the role of the Supreme Court. In his opening statement during the nomination hearing of Neil Gorsuch, Senator Ben Sasse explained the proper (albeit uncommonly realized) role of a Supreme Court justice. According to Sasse, the Supreme Court, when it appropriately exercises the power of judicial review, defends the long-term will of the people. Sasse is right, and those who wish to defend limited government and the will of the people in the United States should be passionate both about defending judicial review and about limiting it within proper constitutional bounds.
Defending Judicial Review
Why isn’t judicial review undemocratic? Why is it alright for the elected representatives of the American people (i.e. Congress) to pass a law only to have it “struck down” by a panel of unelected, dour ivy-leaguers in black robes (i.e. the Supreme Court)?
Before we get to the answer, a brief refresher in American civics: American constitutionalism as understood by the framers of the U.S. Constitution requires that the “will of the people” exists not in any single law passed by Congress but only in the fundamental law that is the U.S. Constitution. It is the Constitution that embodies the long-term will of the people.
The Constitution established an essentially popular government, but the problem with all popular governments is the constant tendency of majorities to oppress minorities, particularly during temporary periods of political passion. The framers therefore institutionalized certain checks against the temporary ambition of the majority through such features as the bicameral legislature (Article I, Sec. 2-3) and the executive veto (Article I, Sec. 7), while respecting the popular foundation of American political authority in the form of an original ratification of the U.S. Constitution in the people of the several states (Article VII) and of regular revisions to the fundamental law through amendments to the U.S. Constitution when a supermajority agrees to it (Article V).
So, the Constitution, taken as a whole, represents the will of the people bound by certain constraints to prevent tyranny of the majority. Any action of a congressman, president, or Supreme Court justice at odds with the U.S. Constitution therefore is at odds with the will of the people. We have a word for that: unconstitutional.
What shall we say then of judicial review? If Congress passes an unconstitutional law, that law cannot in any true sense represent the will of the people, especially if it were to represent only some temporary spasm of political desire on the part of a majority of the country. This, in any case, was Alexander Hamilton’s argument in Federalist 78. According to Hamilton, when Congress passes a law that it had no authority to pass, it effectively “enable[s] the representatives of the people to substitute their will to that of their constituents.” When this happens, the Supreme Court may lawfully act as “an intermediate body between the people and the legislature, in order, among other things, to keep the latter within the limits assigned to their authority.” In this way, Hamilton explains the essentially democratic nature of the practice of judicial review: “If there should happen to be an irreconcilable variance” between the Constitution and a law of Congress, “that which has the superior obligation and validity ought of course to be preferred; or in other words, the constitution ought to be preferred to the statute, the intention of the people to the intention of their agents.”
Rather than being undemocratic, judicial review, rightly understood and rightly exercised, defends the long-term will of the people. As Sasse explained during the Gorsuch hearing, “When Congress passes an unconstitutional law, it is in fact the Congress that is violating the long-term will of the people, for the judiciary is there to assert the will of the people as embodied in our shared Constitution over and against that unconstitutional but perhaps temporarily popular law.”
The Limits of Judicial Review
While judicial review rightly understood constitutes an essential feature of the American political system, unrestrained judicial review constitutes a dangerous deviation from democratic principles. Hamilton explains in Federalist 78 that judicial review does not “suppose a superiority of the judicial to the legislative power. It only supposes that the power of the people is superior to both.” When the U.S. Supreme Court strikes down laws of Congress that are not in “irreconcilable variance” with the U.S. Constitution, the Supreme Court effectively substitutes its own will for the long-term will of the people as embodied in the U.S. Constitution as the final measure according to which all laws are judged.
To be clear, this renders the American polity an oligarchy instead of a democratic republic, and it is no better than Congress passing laws that it has no authority to pass. Both constitute an attempt by our governors to substitute their own will for the long-term will of the people as embodied in the Constitution. In fact, Madison and Hamilton were clear in the Federalist Papers that although all three departments of government play a role in the interpretation of the U.S. Constitution (interpretations which receive institutional force in powers such as the legislative power of Congress and the executive power of the president), the “people themselves…can alone declare its true meaning and enforce its observance” through such things as elections and, of course, through amendments to the U.S. Constitution. The point is that the Supreme Court does not, any more than the president or Congress, provide a final interpretation of the Constitution for which there can never be an appeal. If that were the case, “the people will have ceased to be their own rulers, having to that extent practically resigned their Government into the hands of that eminent tribunal.”
So, is judicial review undemocratic? No! Rightly understood, judicial review is an essential bulwark of American liberty. But wrongly understood, judicial review is an abuse of court power, an abuse made more dangerous by many Americans’ lack of awareness of the importance of the American people – not the legislature, the court, or the president’s legal counsel – being the final judge of the meaning of the Constitution, which is itself the will of the people.
 These were Lincoln’s words in his First Inaugural Address when he was responding to the Supreme Court’s decision in Dred Scott v. Sandford (1857) in which the court held the Missouri Compromise of 1820 to be unconstitutional because it violated an alleged constitutional right of people to own other human beings as property that was protected under the 5th Amendment.
 The United States Supreme Court has, at various points, asserted, either implicitly or explicitly, that its constitutional interpretation, rather than the Constitution itself, is the supreme law of the land. One example of an explicit assertion to this effect occurred in Cooper v. Aaron (1958); for a discussion of this as a problem, see Edwin Meese, III, “The Law of the Constitution,” Tulane Law Review, Vol. 61: 979-990.
Comments Off on Markets work with altruism, too
As Adam Smith and many others have emphasized, a great virtue of the market is its ability to channel self-interest toward the public interest. For instance, Walmart sells a smartphone for under $30 that has specs comparable to the original iPhone, which sold for $499 in 2007 (about $600 in today’s dollars). Does Walmart produce functional, low-cost phones because it’s an altruistic public benefactor? Probably not; it sells phones for less than its competitors to win more business and earn more money. So the market channels Walmart’s self-interest toward the public interest. But the market channels altruism effectively, too — indeed, far more effectively than states.
To see why, think of a donation choice as a kind of consumption choice. Let’s go back to the phone example for a minute. If I’m trying to buy a phone for everyone in my family within a $200 budget, I’ll shop around to find the store that will sell me the most phones for $200. Since stores have to compete for my business, they have an incentive to be as efficient as possible so they can offer me the biggest bang for my buck. A store that sells 2 phones for $200 will lose business to the store that manages to sell 3 (comparable) phones for $200.
Similarly, if a philanthropist wants to do as much good as she can with $1 million, she has an incentive to shop around to find the charity that can help the most people for $1 million. Since charities have to compete for her donations, they have an incentive to be as efficient as possible. A charity that saves 400 lives for $1 million will lose her business to the charity that saves 500 lives for $1 million. The philanthropic “consumer” has an incentive to shop around for an efficient charity because she gets to decide which charity her donations will go to. If she discovers that a given charity isn’t doing much good, she can take her money elsewhere.
Contrast this situation with that of the philanthropic voter. Unlike the philanthropic consumer in a market for charity, the philanthropic voter can’t change anything if a state-run redistribution program is operating inefficiently. He might want to withdraw his tax dollars from it and send them somewhere that helps more people, but he’s not allowed to do that.
The most he can do is vote for better candidates who promise better programs, but his single vote isn’t going to change anything. So even those voters with sincere philanthropic motivations have comparatively little incentive to monitor the efficiency of public welfare spending. As a result, public welfare programs have comparatively little incentive to operate efficiently — that is, to do as much good as they could.
This hypothesis seems to be borne out in the data; consider economist Jeffrey Miron’s report on the inefficiency of the federal government’s anti-poverty programs: “If the $1.45 trillion in [the United States federal government’s] direct anti-poverty spending in 2007 had been simply divided up among the poorest 20% of the population, it would have provided an annual guaranteed income that year of more than $62,000 per poor household.”
Along similar lines, total government anti-poverty spending in 2011 amounted to $61,830 per poor family of three — a family for whom the poverty line was $18,530. (Of course, we wouldn’t want the government to directly distribute $62,000 to each family below the poverty line because that would create a perverse incentive to drop below the poverty line. The point is that the government’s anti-poverty spending doesn’t do a cost-effective job of targeting poverty.) By contrast, a top private charity, GiveDirectly, transfers 80-90 percent of its donations to extremely poor recipients in Kenya and Uganda.
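A quick back-of-the-envelope check of these figures, using only the dollar amounts cited above (the household count is derived from those numbers, not a reported statistic):

```python
# Back-of-the-envelope check of the anti-poverty figures cited in the text.
# Dollar amounts come from the article; the household count is an implication.

total_spending = 1.45e12      # direct anti-poverty spending, 2007
guaranteed_income = 62_000    # hypothetical guaranteed income per poor household

# Households implied if the total were simply divided up evenly
implied_households = total_spending / guaranteed_income
print(f"Implied poor households: {implied_households:,.0f}")  # roughly 23 million

# 2011 comparison: spending per poor family of three vs. the poverty line
spending_per_family = 61_830
poverty_line = 18_530
print(f"Spending per family is {spending_per_family / poverty_line:.1f}x the poverty line")
```

The point of the arithmetic is simply that per-family spending runs at more than three times the poverty threshold it is meant to address.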
Even philanthropists with causes other than maximizing welfare gains per dollar spent can do better in markets than politics. Suppose your child was born prematurely and you feel a strong obligation to support neonatal research. If the government fails to spend your tax dollars on this cause, there’s nothing you can do about it. But a market in charity allows you to fund your “niche” cause without needing the support of the electorate. Indeed, think of how well markets accommodate atypical preferences in general: you can buy soy Buffalo wings from vegan restaurants and tickets to atonal punk concerts even though most of your compatriots don’t share your values.
Socialist philosopher G. A. Cohen expresses the conventional view when he says it is “the genius of the market that it recruits low-grade motives to desirable ends.” He’s right that the market can route low-grade motives toward the common good. But he’s wrong that this is the genius of the market. It recruits high-grade motives to desirable ends, too.
Comments Off on Do government employees have a right to religious liberty?
What should happen if a government employee is asked to do something that violates her religious convictions? One possibility is to fire the employee if she won’t do the required task. No one has a right to work for the state, so if an employee can’t fulfill her job duties perhaps she should be replaced with someone who can.
This might be reasonable in some cases. If an employee is unwilling to fulfill a substantial or critical part of her job, she should be replaced. It makes little sense to permit a religious pacifist to be an infantry commander or a Jehovah’s Witness who objects to blood transfusions to serve as an emergency room doctor.
Yet in most real cases, as opposed to the imaginary case I discussed in my last post, government employees have raised religious liberty objections to only a few narrow duties, and often these cases involve new tasks brought about by job transfers or shifting public policy. In other words, these employees did not know they would be asked to violate their religious convictions when they accepted their positions.
For instance, after the Supreme Court struck down state bans on same-sex marriages in Obergefell v. Hodges in 2015, a few county clerks, magistrates, and judges raised religious objections to issuing licenses to same-sex couples or to participating in ceremonies. Several states passed laws protecting such employees, as long as other civic officials are available to provide requested services. Similarly, Congress has passed legislation protecting military chaplains from being required to perform marriages to which they have religious objections.
Such accommodations are commonplace in other policy areas. Consider the death penalty: since 1994, federal law has protected federal and state employees from being forced to participate in an execution “if such participation is contrary to the moral or religious convictions of the employee.” Surely a corrections officer should not be forced to choose between his job and his moral or religious convictions respecting the taking of human life.
Less dramatically, long before same-sex marriage became legal in Kentucky, the state permitted clerks to opt out of issuing licenses to which they objected. For instance, a clerk who is a member of People for the Ethical Treatment of Animals (PETA) can refuse to issue hunting licenses, provided that someone else is available to provide this service.
Finally, at a time when some loud voices are claiming that Islam is not compatible with American values, it seems evident that New York City’s decision to accommodate Muslim police officers who desire to wear a hijab is superior to Philadelphia’s decision to fire a woman for the same “offense.”
Religious liberty is a fundamental American value. The religious convictions of government employees should be accommodated whenever it is reasonable to do so.
Comments Off on Bryan Caplan: Is immigration a basic human right?
Editor’s Note: On March 16th, George Mason University Professor of Economics Bryan Caplan debated Washington University Professor of Philosophy Christopher Wellman on the topic, “Is Immigration a Basic Human Right?” Below is Professor Caplan’s opening statement.
There are many complaints about governments, but the harshest is, “This government grossly violates human rights.” The background assumption is that human beings have rights that everyone – including governments – is morally obliged to respect. When looking at the grossest violators – Nazi Germany, the Soviet Union, Maoist China – almost no one denies the validity of the idea of human rights. But then you have to wonder: Do the governments we know, accept, and even love have clean hands? Or do they violate human rights, too?
To answer, we normally apply a simple test: If an individual treated other people the same way the government does, would he clearly be a horrible criminal? If an individual deliberately kills innocent people, he’s a murderer; if an individual imprisons innocent people, he’s a kidnapper. A government that does the same violates basic human rights – and it can’t justify its actions by calling innocent people “criminals.” If someone is peacefully living his life, he’s innocent – whatever the government says.
What does this have to do with immigration? Lots. Since we’re in San Diego, we’ve seen illegal immigrants. What are the vast majority of them doing? Working for willing employers. Renting apartments from willing landlords. Buying stuff from willing merchants. Sending money home to their families. Maybe even sitting next to you in class. They sure look innocent – even admirable. But the U.S. government can and does forcibly arrest and exile them to the Third World. Why can’t they all just come legally? Because exile is the default; they’re all exiled unless the U.S. government makes a rare exception. This is far less bad than killing or imprisoning them, but it sure looks like a severe human rights violation. If the U.S. government forbade you to live and work here, wouldn’t that be a severe violation of your human rights?
You could reasonably object that human rights are not absolute. While there’s a strong moral presumption against killing, imprisoning, or exiling innocent people, it’s okay to do so if the overall consequences of respecting human rights are clearly awful. The main problem with this objection is that when social scientists measure the overall consequences of immigration, they’re not clearly awful. In fact, the overall consequences look totally awesome. Most notably, standard economic estimates say that letting all the world’s talent flow to wherever it’s most productive would roughly DOUBLE global prosperity. That’s $75 TRILLION of extra wealth per year. How is this possible? Because even the world’s lowest-skill workers produce far more in the First World than they do at home. Even if all other fears about immigration were bulletproof – which they aren’t – they’re dwarfed by this gargantuan economic gain. This isn’t trickle-down economics; it’s Niagara Falls economics.
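A quick sanity check of the scale of that claim, using only the two numbers in the paragraph (the $75 trillion figure and the claim that it roughly doubles world output):

```python
# If removing migration barriers roughly doubles global output, the estimated
# gain should be about the same size as current world output. Taking the
# article's $75 trillion/year figure at face value:
estimated_gain = 75e12              # extra wealth per year, from the text
implied_world_gdp = estimated_gain  # "roughly double" implies gain ≈ baseline
multiplier = (implied_world_gdp + estimated_gain) / implied_world_gdp
print(f"Implied baseline world output: ${implied_world_gdp / 1e12:.0f} trillion")
print(f"Output multiplier: {multiplier:.1f}x")
```

In other words, a gain equal to baseline output is exactly what "doubling" means, so the two figures in the paragraph are internally consistent.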
To effectively defend immigration restrictions, then, saying “Human rights are not absolute” is insufficient. You need to flatly deny that immigration is a human right – to say that while the illegal immigrants you meet on the street may look innocent, they’re actually guilty as hell. The most popular argument analogizes illegal immigrants to trespassers. No one has any right to be here without government permission; it’s our country, so we set the rules.
The obvious problem with this position is that it justifies a vast range of blatant human rights abuses. If it’s our country and we set the rules, why can’t we exile citizens, too? Why can’t we imprison people for saying the wrong thing, practicing the wrong religion, or having kids without government permission? Saying, “That won’t happen,” dodges the question: If the U.S. government did this to you, would it be violating your human rights or not?
Prof. Wellman offers a more sophisticated version of this story. He defends immigration restrictions for “legitimate states” only, on the grounds that immigration restrictions are vital for “freedom of association.” Unfortunately, we have two conflicting freedoms of association. I want to be free to associate with foreigners; lots of foreigners want to associate with me. Immigration restrictions deny us this freedom in the name of all the Americans who don’t want my associates breathing American air.
Who should prevail? In his work, Wellman concedes a crucial premise, freely admitting that the popular notion that we all consent to government is a “fiction,” and that “the coercion states invariably employ is nonconsensual and, as such, is extremely difficult to justify.” We don’t really face a choice between two freedoms of association, but between freedom for real associations we choose to join and freedom for fictional “associations” we’re forced to join. Unless the overall consequences are clearly awful, the fictional ones should lose. Freedom of association is only for free associations.
My critics often tease me, “Should everyone on Earth be free to immigrate into Bryan’s house?” Their point: Treating immigration as a human right is utopian nonsense. My reply: There are three competing moral positions on immigration.
- Foreigners should be free to live in my house even if I don’t consent – a view held by almost no one.
- Foreigners should be free to live in my house if I consent – my view.
- Foreigners shouldn’t be free to live in my house even if I do consent – the standard view I’m criticizing.
Far from being utopian, saying “Immigration is a human right” is just the moderate, common-sense position that when natives and foreigners voluntarily interact, strangers are morally obliged to leave them alone unless the overall consequences are clearly awful. Even if the stranger happens to be the government – and the government happens to be popular.
Comments Off on Emma Watson is right. Feminism is about choice.
As part of the publicity for her role in the new live-action version of Beauty and the Beast, Emma Watson was photographed for Vanity Fair in a Burberry bolero that left her mostly topless.
Although the reaction most people had to the spread was probably not very dramatic (maybe you thought, like I did, “cute jacket”), there were more than a few responses on social media that criticized her choice as being at odds with her feminist ideals.
In her response to the incident, Watson offered up a simple truism about feminism that is more powerful than it might sound:
Feminism is about giving women choice.
So, for Hermione’s sake, and in honor of women’s history month, I’d like to talk a little bit about the importance of choice to the expansions of women’s rights that have taken place over the past 200 years.
How property law restricted women’s choices
Some of the oldest and most significant restrictions on women’s choice in American history are those that restricted married women’s ability to own property separately from their husbands. The legal tradition underlying these restrictions was coverture, which declared “the very being or legal existence of the woman [to be] suspended during the marriage.”1 A favorite quip of historians of the subject is that these laws created a situation in which husband and wife were one within marriage, and that one was the husband.
Although specifics varied by state, this usually meant that married women could not own land or homes in their own names, sign enforceable contracts, stand for themselves in court, or create wills.
Further, divorce was extremely limited, making it nearly impossible to dissolve a marriage once entered. Limitations on divorce always hit those in the least happy marriages the hardest. So while many couples doubtless enjoyed happy marriages, those in less fortunate circumstances were legally trapped within them.
The upshot of all this is that 19th-century property law and legal practices made it difficult for married women to make some of the most basic decisions about how they wanted to lead their lives.
Fortunately for our mothers and grandmothers, married women became significantly more empowered with respect to these fundamental decisions over the course of the 19th century. Although old habits die hard, and men’s discretionary decision-making power within marriage likely continued as a cultural norm in some communities, nearly all married women in the United States had the legal right to own separate property and keep earnings acquired during marriage by 1920.
Cultivating equal rights on the factory floor
Prior to the birth of American industry, most women in the United States spent their lives performing some type of domestic labor in a rural farming community. Usually, a woman worked on land owned by her husband, father, or another male relative, with any proceeds beyond what the family required for survival accruing to that owner. Textile mills, the first large-scale American factories, offered young women an alternative unlike any they’d seen before.
In the early textile mills, women applied the skills they had developed weaving cotton into cloth at home to the large-scale water- or steam-powered industrial looms recently constructed in Waltham, Lowell, and other cities across the Northeastern U.S. In exchange, they earned wages they controlled themselves. They had this control because this new type of work let them do something uncommon at the time: move away from home without getting married.
The young women working in these mills attended lectures, wrote for publications edited by other “mill girls,” opened their own bank accounts, made large purchases like furniture and pianos, and lived in residences acquired in their own names. What freedom compared to life on the farm!
As the American economy grew over the course of the 19th century, opportunities for women to work outside the home continued to expand. This widening range of options opened new frontiers to women across the country, both in the practical decisions they were able to make and the ideas they were able to encounter. That broader range of experience proved fertile ground for the advances in women’s rights that would continue through the 20th century.
Susan B. Anthony, a pioneering activist for many women’s rights causes, including separate property ownership, access to professional careers, equal pay for equal work, and women’s suffrage, grew up surrounded by the adventurous young women working in the early American mills.
Her father, Daniel Anthony, was the owner of a water-powered textile mill in Pennsylvania that regularly employed female workers, some of whom boarded directly with the Anthony family. Between these experiences and her father’s encouragement of her education, she grew up with a strong conviction in the capability of women that would drive her to work towards expanding the choices available to women in other domains.
So, if you believe — like Emma Watson and me — that choice is important, take a moment to join me in recognizing how important economic opportunity is for all people to be able to make the most important decisions we face as human beings: who we want to be, and how we want to spend our days.
This women’s history month, let’s toast to economic opportunity and free choice.
1Blackstone, W. 1765. Commentaries on the laws of england: Book the first. Oxford, England: Clarendon Press, p. 430.
This piece is related to a paper that won the Gordon Tullock Prize for best paper published by a young scholar in Public Choice in 2016.
Comments Off on 6 women who should be on the $20 bill
In honor of Women’s History Month and the fight to get a woman on the $20 bill, we reached out to Learn Liberty professors for suggestions on great women whose achievements should earn them a place on US currency. So, in no particular order, here are five worthy women who should be on twenties:
Submitted by Prof. Sarah Skwire, Anne Hutchinson was an active religious leader and proponent of religious freedom in the American colonies.
As Prof. Skwire wrote:
I’d vote for putting Anne Hutchinson on the $20. Her home bible studies, her active preaching, and her theological disputes with established ministers put her into opposition to the Puritan leadership in the Massachusetts Bay Colony. She was convicted of heresy and of being an instrument of the devil, and she was banished from the colony.
After her banishment, she, her family, and her followers moved to the more religiously tolerant colony of Rhode Island.
That statue of her in front of the State House in Boston has a plaque that reads:
IN MEMORY OF
ANNE MARBURY HUTCHINSON
BAPTIZED AT ALFORD
20 JULY 1595 [sic]
KILLED BY THE INDIANS
AT EAST CHESTER NEW YORK 1643
COURAGEOUS EXPONENT
OF CIVIL LIBERTY
AND RELIGIOUS TOLERATION
A rebel, an annoyer of government officials, a fan of civil liberty, an agitator for religious freedom.… I can’t think of a better person to put on a bill.
Suggested by both Dr. Phil Magness and Prof. Aeon Skoble, Jeannette Rankin was a relentless antiwar activist and the first woman member of Congress.
Dr. Magness wrote,
She was the first woman elected to Congress, winning her seat almost four years before the extension of women’s suffrage at the national level (Montana extended the vote to women before the federal government). Rankin’s most famous political cause was her steadfast dedication to pacifism. Rankin voted against the United States’ entry into both world wars, and effectively gave up her seat in Congress twice as the price of opposing patriotic war fervor. The first time, she was redistricted out of her seat in 1918. After returning to politics in 1940, she similarly opposed American entry into World War II on the grounds that it would precipitate a draft and therefore forcibly commit people to fight in a war against their will. Rankin remained a harsh critic of the draft for the remainder of her life, staying active as an anti-draft campaigner into her 90s and organizing a march against Lyndon Johnson’s policies during the Vietnam War.
Rankin’s dedication to peace and individual rights would make her a wonderful candidate for the $20 bill.
Suggested by Prof. Aeon Skoble, Sally Ride was the first American woman to go to outer space. After her career at NASA, she went on to become a physics professor. She also co-wrote several books on space geared towards children with the goal of encouraging them to study science. In 2001, she co-founded Sally Ride Science, which encourages students, especially girls and minority students, to study STEM (science, technology, engineering, and math) subjects.
Dr. Sally Ride’s historic journeys to space and dedication to science education make her a great woman to feature on US twenties.
Mercy Otis Warren
Suggested by Prof. Aeon Skoble, Mercy Otis Warren wrote criticisms of royal authority during the American Revolution. She wrote pamphlets, poems, and plays in support of colonists’ rights, and after the war she was a strong anti-Federalist.
With her outspoken advocacy of colonists’ rights and skeptical attitude towards centralized government at the time of the American Revolution, Mercy Otis Warren would fit in well among the founding fathers currently featured on US bills.
As part of the Women On 20s campaign, over 600,000 people cast votes on women to replace Andrew Jackson on the $20 bill, and Harriet Tubman was the winner. The famed abolitionist and “conductor” of the Underground Railroad helped guide over 300 slaves to freedom. She was also a suffragist, speaking and promoting votes for women.
In an article entitled “Let Tubman on the Twenty”, Prof. Sarah Skwire wrote:
Her work, and the work of countless named and unnamed others like her, assured that it is no longer possible legally to exchange a stack of twenty dollar bills for the body and the life and the future of another human being. Her work, and their work, means that the American idea of what constitutes “property” no longer includes other humans.
It’s not clear when or if we’ll see a woman on the US $20 bill, but there’s no shortage of worthy women whose accomplishments warrant a place of honor on our currency.
Comments Off on Supposed FBI investigations into refugees shouldn’t scare you
This past Monday, President Trump released a new executive order shutting down the refugee program for 120 days and banning immigration from six majority-Muslim countries for 90 days. President Trump attempted to justify these changes by stating in part that:
The Attorney General has reported to me that more than 300 persons who entered the United States as refugees are currently the subjects of counterterrorism investigations by the Federal Bureau of Investigation.
The government has refused to provide any additional details about these cases, but an investigation should not be seen as implying guilt. The vast majority of FBI terrorism investigations do not end in a terrorism conviction. Indeed, the numbers predict that of these 300 refugee investigations, only 1 will turn into a terrorism conviction, and that conviction will not be for planning an attack against the United States. This claim about the FBI investigating refugees has turned out to be a groundless smear in the past, and history has shown that refugees have been less likely than others to commit acts of terrorism against the United States.
These 300 represent less than 0.009 percent of all refugees admitted since 1975. As the Cato Institute’s recent report found, only 20 refugees from 1975 to 2015 have attempted, planned, or carried out a terrorist attack inside the United States. Only 3 carried out a deadly terrorist attack, and all of those were before 1980. During the 40 years from 1975 to 2015, the annual risk of death by a refugee terrorist to a U.S. resident was 1 in 3.64 billion. This makes them about 1,000 times less likely to kill a U.S. resident in a terrorist attack than other foreign-born people.
Unfortunately, this type of baseless fearmongering about FBI investigations into refugees is not new. The FBI told ABC News in 2013 that it was investigating “dozens” of refugees as terrorists. In the 26 months after the FBI made the claim, the agency arrested and convicted 31 individuals for “terrorism-related” offenses. Of these, a majority were U.S.-born citizens, and another 4 convictions were not even for terrorism offenses. In the end, the Bureau arrested and convicted only 9 foreign-born residents for terrorism offenses, after claiming “dozens” of open cases against refugees specifically. None of these individuals were planning attacks inside the United States.
So how often do FBI national security investigations actually turn into convictions?
According to the New York Times in 2016, the Bureau has averaged “7,000 to 10,000 preliminary or full investigations involving international terrorism annually in recent years.” This appears to contrast with Reuters, which reported this week that the 300 refugee investigations were part of 1,000 “counterterrorism investigations” into persons tied to “Islamic State or individuals inspired by the militant group.” Similarly, FBI Director James Comey said in May 2016 that there were “north of a thousand cases” that they were investigating of U.S. residents radicalized by the Islamic State online.
The best explanation that I see for this difference is that the Comey/Reuters number refers to a narrower subset of investigations involving the Islamic State and, more importantly, only reflects a snapshot in time. At any particular moment, there may be 1,000 or so investigations open, but there are between 7,000 and 10,000 investigations for the entire year.
This means that very few FBI investigations end in a terrorism conviction. In the 5 years from 2010 to 2014, the entire United States government averaged just 27 terrorism convictions per year. Taking the middle of the 7,000 to 10,000 range for the number of new FBI investigations (8,500) would mean that only about 0.3 percent of all terrorism investigations end in terrorism convictions.*
If these individuals are involved with terrorism, it is very unlikely that they are attempting to harm the United States as opposed to supporting terror groups abroad. Fewer than 5 people per year were convicted of terrorism offenses targeting the United States in the five years from 2010 to 2014. This appears to be true today as well. Director Comey said in May 2016 that his main concern was people seeking to join the Islamic State overseas. This means that only 0.05 percent of all investigations end in the conviction of a person who was attempting terrorism in the United States.
Based on these percentages, we can predict that only about 1 of these 300 investigations will turn into a terrorism conviction, and that it will not involve a domestic terror plot.
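The arithmetic behind these percentages can be sketched in a few lines of Python (a back-of-the-envelope check, not part of the original analysis; the 8,500 figure is simply the midpoint of the 7,000-to-10,000 range cited above, and the 27 and 5 figures are the 2010-2014 conviction averages from the text):

```python
# Back-of-the-envelope check of the conviction-rate arithmetic above.
investigations_per_year = 8_500      # midpoint of the 7,000-10,000 range
terror_convictions_per_year = 27     # average annual terrorism convictions, 2010-2014
domestic_convictions_per_year = 5    # upper bound: "fewer than 5 per year" targeted the U.S.

# Share of investigations that end in a terrorism conviction at all,
# and share that end in a conviction for targeting the United States.
conviction_rate = terror_convictions_per_year / investigations_per_year
domestic_rate = domestic_convictions_per_year / investigations_per_year

print(f"overall conviction rate: {conviction_rate:.1%}")  # 0.3%
print(f"domestic-plot rate:      {domestic_rate:.2%}")    # 0.06% (rounded to 0.05% in the text)

# Expected terrorism convictions among the 300 refugee investigations:
print(round(300 * conviction_rate))  # 1
```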
The FBI should continue to investigate people who it has reason to believe are involved in terrorism, but it is incorrect to assume that an investigation means that the person is guilty of a crime or even likely to be guilty of a crime. It is even more incorrect to jump to the conclusion that they pose a threat to anyone in the United States. The fact remains that refugees are less likely than others to commit acts of terrorism, and these new investigations do not change that fact.
*In the less likely scenario where the FBI opens only 1,000 terrorism investigations annually, 2.7 percent would end in terrorism convictions and 0.5 percent would end with convictions for an offense targeting the U.S. These numbers would predict that of these 300 refugees, only 8 will be convicted of a terrorism offense. Of these, only 1 will have planned an attack targeting people inside the United States.