Category Archive: Philosophy
Is human blood a “public resource”? Prof. Peter Jaworski argues that your bodily fluids belong to you, and governments should let you sell them.
Harvard psychology professor Steven Pinker argues that political correctness actually breeds the very same extremist views it hopes to quash.
Excerpted from Spiked Magazine’s ‘Unsafe Space Tour’ panel discussion at Harvard University.
The politics of “The Last Jedi”
The Last Jedi — the latest installment in the Star Wars series — premiered to mixed reactions from critics and fans last weekend. The film has many impressive scenes and action sequences. But critics argue that the plot is flawed in various ways.
The movie’s treatment of political themes deserves similar mixed reviews. Unlike most previous Star Wars movies, this one at least implies that institutions matter, not just individual heroics. But it also perpetuates Star Wars’ longstanding confusion about what exactly the “good guys” are fighting for. The series may belatedly value institutions, but it still gives no indication what institutions are valuable.
As The Last Jedi begins, the villainous First Order has almost completely vanquished the New Republic. Only a small Resistance led by Princess (now General) Leia Organa still opposes it, and even that remnant is on the verge of being wiped out.
The situation is in many ways similar to that which existed at the start of the original Star Wars trilogy: a small band of rebels oppose an overwhelmingly powerful empire led by Dark Side Force users — with Supreme Leader Snoke and Kylo Ren seemingly assuming the roles formerly played by Emperor Palpatine and Darth Vader.
If anything, the Resistance may even be worse off than the original Rebellion was at the start of Episode IV. At that time, the rebels had a substantial fleet, and controlled a number of important star systems and planets. In The Last Jedi, they have been reduced to a much smaller force and suffer further attrition throughout the movie.
Institutions are more important than heroes.
Aaron Ross Powell of the Cato Institute describes this setup as a “betrayal” of the original trilogy, since all of the work of Luke, Leia, and Han Solo has effectively been undone. But it can also be seen as a lesson in the importance of institutions. Despite their courage and skill in overthrowing the Empire, our heroes failed to set up effective political institutions that could forestall the emergence of a similar menace in the future.
The New Republic seems just as dysfunctional as the old, and it allows the First Order to amass enormous power right under its nose, just as the Old Republic failed to address the threat of the Sith.
Despite his impressive mastery of the Force, Luke Skywalker failed to establish a new Jedi Order that could prevent powerful Force users like his nephew Ben Solo from going over to the Dark Side. His efforts to train Ben on his own end in dismal failure, as a result of which Ben defects to Snoke and becomes Kylo Ren — a development that parallels Anakin Skywalker’s becoming Darth Vader. Luke’s efforts to train Rey — the powerful new Force user introduced in The Force Awakens — are only modestly more successful.
No amount of individual ability and heroism is an adequate substitute for good institutional design. This message is further underscored by the chase scenes in which the remains of the Resistance fleet try to escape the First Order.
Leia and her second-in-command Admiral Amilyn Holdo repeatedly rebuke ace pilot Poe Dameron for his reckless “flyboy” ways, and his refusal to respect the chain of command. Holdo, by contrast, is praised for being “more interested in protecting the light than she was in seeming like a hero.”
As I have explained in previous writings on Star Wars, earlier films in the series tended to neglect institutional considerations, implicitly conveying the message that we should put our faith in heroic figures like Han, Luke, and Leia, and that concentrated power is only dangerous if placed in the wrong hands. The Last Jedi at least partly corrects that tendency. It suggests that heroes aren’t enough. The Galaxy will not have peace, happiness, or freedom without a functional republic, and perhaps also a new and better Jedi Order.
We still don’t know what the Resistance is fighting for.
But if Episode VIII offsets the flaws of its predecessors in some ways, it perpetuates them in others. Like the rebels who opposed the Empire, the Resistance has little notion of what it is fighting for, other than simply opposing the First Order.
One Resistance fighter says that the movement will win “not by fighting what we hate, but by saving what we love.” But what do they love, other than perhaps their friends and fellow fighters? Two lengthy movies into this new Star Wars trilogy, we still don’t know.
Is it the restoration of the feckless New Republic — the same one that failed so dismally? Is it some new type of political system? We do not know. Perhaps the Resistance members do not even know themselves.
Similarly, Luke Skywalker ultimately promises that he will not be the last Jedi. But what does that mean? Will Rey, or some other successor, establish a new Jedi Order? If so, how will it avoid the catastrophic errors of its predecessor?
The Resistance’s — and the filmmakers’ — neglect of such questions is paralleled by all too many real-world rebels, who sought to overthrow oppressive regimes without giving sufficient thought to what might come after — or to the possibility that it could be even worse than the current tyrants. Even in established liberal democracies, voters too often react to a flawed status quo by embracing “change” candidates without sufficiently considering whether their proposed changes are actually likely to improve the situation, rather than make it worse.
Both many real-world rebel movements and those of the Star Wars universe also do little to question their own behavior. The Last Jedi is yet another Star Wars movie that largely ignores the glaring hypocrisy inherent in the fact that the rebels (and now the Resistance) are simultaneously freedom fighters and slave owners. They seek liberty for themselves, yet treat droids as slaves, even though the latter are as intelligent as humans, and clearly capable of feeling emotions such as hope, pain, and fear.
Unlike George Washington and Thomas Jefferson, who at least recognized that their ownership of slaves was at odds with their professed principles, the Star Wars “good guys” seem oblivious to the issue — as, it seems, are all too many of the filmmakers and viewers.
I find your lack of continuity disturbing.
In addition to the renewed attention to institutional issues, The Last Jedi has a number of other interesting plot twists. Both Rey’s and Kylo Ren’s characters develop in surprising but effective ways. On the other hand, the sprawling plot has a number of turns that seem pointless, yet take up a lot of screen time.
In addition, there are some significant problems with world-building and continuity. For example, the first six Star Wars movies established that even highly talented Force users need extensive training to use their abilities effectively. But Rey demonstrates remarkable skill with the Force, despite having almost no training at all. Similarly, Rey, Kylo Ren, Luke Skywalker, and Snoke use the Force to communicate with each other over vast interstellar distances, far exceeding any such abilities we have previously seen.
There are also discontinuities in military and technological development. The bombers used by the Resistance in the opening battle are not only more primitive than the craft we see in the original trilogy (set decades earlier), but even seem slower and less sophisticated than World War II-era bombers were. Either the filmmakers laid an egg here, or they want to suggest that the galaxy has gone through a period of severe technological regression!
And, if the Resistance seems to have no coherent agenda, neither does the First Order. Despite his pivotal role in the plot, we learn virtually nothing about Supreme Leader Snoke, his goals, or how and why he came to lead the First Order. We cannot even rule out the popular fan theory that he is really Jar Jar Binks in disguise.
Such flaws may not bother casual viewers, but might well annoy more committed science fiction fans. They remind us that Star Wars is less committed to careful world-building than rivals such as Star Trek and Game of Thrones. This problem may be related to the failure to think carefully about what it is that the rebels are fighting for, and why it matters.
Despite notable flaws, The Last Jedi is still an entertaining and in some ways compelling movie. And, like much of the rest of the Star Wars franchise, it teaches us some useful lessons about what not to do.
Professors Laura Kipnis, Angus Johnston, and author Brendan O’Neill debate: Should We Limit Free Speech for Nazis? Excerpted from the Spiked Magazine Unsafe Space Tour panel discussion at New York Law School. Moderated by Tom Slater (of Spiked Magazine).
The real effects of good intentions
Good intentions do not always lead to good outcomes. In fact, many times, they lead to dire consequences.
One of the first principles students learn in a good economics class is that individuals respond predictably to incentives and that individuals subjectively determine their own benefits and costs. In other words, nobody knows the individual better than him or herself. Moreover, everyone looks at the world through his or her own glasses.
I use three main examples to illustrate this.
1. I’m being exploited.
On the first day of my economics classes, I tell my students that if they saw my paychecks, they would realize the school is exploiting me. After a few heads nod in agreement, I ask them if they think that I can weigh my own benefits and costs and choose what is best for me.
Most, if not all, nod in agreement. I inform them that, of course, I would love to get paid more (because I deserve it!) and that the school would like to pay me less, but the fact that I keep showing up to teach and they keep employing me proves that I and my employer are both, in a sense, winning.
I do my best to expose students to economists and other authors who come from a more free-market perspective on various issues because my students probably have not and will not receive this perspective again. Of course, one of the names my students get to know is that of my former professor Dr. Walter Williams. Eventually, I use him as an example of looking at the world through one’s own glasses and unintended consequences.
I tell my students that if Professor Williams saw my paycheck he would laugh — or, perhaps, cry — because he gets paid so much more than I do. So, what would happen if he flew out to California to meet with the president of my school (actually, I teach at two schools, but Williams could meet with both presidents) and made an appeal on my behalf to get my salary increased?
Williams could say, “If my former student Ninos Malek is teaching here, then he must be paid what I would be paid!” I ask my students how they think I would feel. Most say I would be extremely happy. Actually, I would not be happy! I would respectfully tell my favorite professor to stop caring about me and fly home. Why?
Because the presidents of my institutions would tell Dr. Williams, “Oh, Ninos Malek must be paid what you get paid? Well, Ninos Malek can instead be paid nothing, because we can cease employing him as a member of the faculty.” The point I am trying to make is that in this example Dr. Williams would be looking at the world through his own lenses — the reference frame of his own paychecks — not mine. His advocacy on my behalf could lead to an unintended consequence.
Even though I would love (really love) to get paid more, the fact I choose to come to work each day must logically mean that this job is my best option and that I am “winning” in the exchange.
2. Do wealthier parents love their children more?
As another example, I tell my students that my wife and I send our two-year-old daughter to preschool and that other parents in wealthy countries send their children to school, but that many parents in poor countries send their children off to work. So, obviously, wealthier parents love their children more than poor parents.
When I make this claim, many students strongly shake their heads in disagreement. I then argue that wealthier parents can make decisions for their children better than poor parents can. Again, the students strongly disagree.
But then, if I ask my students whether child labor should be outlawed — thus taking that choice away from parents — they say yes.
My point is to drive home the fact that many individuals from wealthier countries, including some of my students, look at the world through their privileged glasses.
Their good intention in trying to outlaw child labor can actually hurt those children. The reason those children are working is not that their parents don’t love them. Instead, the family actually needs the labor of their children to help provide enough income for bare necessities like food. To put it bluntly, before a child can get educated, they need to be alive.
And once again, the fact that families are choosing to send their children to work means that’s the best option they have. Outlawing child labor can actually hurt them by taking away that option and forcing them into starvation or riskier work (including prostitution).
3. Let’s make Ferraris mandatory
Finally, I use my Ferrari example. I ask my students how they would feel if some famous, kindly billionaire who wants us all to have sports cars got a law passed declaring that if you drive a car, it must be a Ferrari. Many of my students smile and think it’s a great idea. However, by this time, some of my students understand what that would mean — walking a lot more!
If the only cars you can drive are Ferraris, then most people won’t be able to afford to drive at all.
The problem is that this hypothetical billionaire with good intentions is looking at the world through a billionaire’s glasses. My life is better off driving my old Honda Civic than it would be walking. Once again, the fact that I drive my Civic shows that it is the best option I have (until I get that raise).
As another one of my former professors, Don Boudreaux, argues in a Learn Liberty video, intentions are not results. Looking at the world through your own glasses can lead to unintended consequences.
All people, rich or poor, should be free to make their own decisions for themselves and their families because they subjectively determine their benefits and costs. It is pure arrogance to impose on others the lenses through which we look at the world.
Looking at randomness but seeing conspiracies
We see patterns where none exist. It’s what humans do; in fact, it’s what animals do. Mark Twain noticed this, and had a pithy summary.
We should be careful to get out of an experience only the wisdom that is in it — and stop there; lest we be like the cat that sits down on a hot stove-lid. She will never sit down on a hot stove-lid again — and that is well; but also she will never sit down on a cold one any more.
The wary cat has a theory of the world: Stove burns you. Stay away from stove.
Of course, only hot stoves cause burns, so this is not a good theory. But consider two cats:
- Cat A believes (correctly) that the theory about stoves causing burns is incomplete, and is not sure what does cause burns.
- Cat B stays far away from all stoves, hot or cold, because they magically cause pain and injury in a way the cat doesn’t understand.
It’s pretty clear that Cat B is more likely to survive.
The Conspiracy-Theory Mindset
An interesting paper, forthcoming in the European Journal of Social Psychology, titled “Connecting the Dots: Illusory Pattern Perception Predicts Belief in Conspiracies and the Supernatural,” investigates this kind of thinking.
The central finding is that many people who believe in complex conspiracies and supernatural phenomena also see “patterns” in random data. The authors conclude that “illusory pattern perception is a central cognitive mechanism accounting for conspiracy theories and supernatural beliefs.”

The interesting part is that humans also add a moral element. We want someone to blame. In the case of the human analogue of stoves — perhaps droughts, floods, or natural disasters — primitive peoples think that angry gods or malevolent forces can be appeased by appropriate human actions. They make sacrifices, and if something bad happens to someone, they imagine that evil forces are at work, or else that the person somehow deserved the punishment for having acted badly.
I’m not saying this is a conscious, rational response. Quite the contrary. It’s the irrational part of this that makes it adaptive. The belief that I can understand, and take actions to prevent, terrible events is a key part of being happy, and even healthy.
As I pointed out in my book Analyzing Policy (2000), the belief that pork contained evil spirits, or that God had commanded that no one eat pork, proved adaptive in desert regions where deadly invisible parasites were likely to infest the meat. You didn’t have to understand the mechanism, as long as you happened upon a good rule and then imbued that rule with magical significance.
A Disaster without an Explanation
One of the most famous catastrophes in European history was the Lisbon earthquake of 1755, in Portugal. The earthquake itself caused fires immediately, as well as a tsunami that arrived in about 40 minutes. All told, between 10,000 and 100,000 people died.
The question asked throughout Enlightenment-era Europe was, “What did Lisbon do to deserve such punishment?” There were some theories — citing religion, sexual mores, and more fanciful causes — but it seemed hard to believe that these could explain why Lisbon would be almost completely destroyed while Berlin, Paris, Prague, Vienna, and other capitals of sin went untouched.
Unless … what if there was no explanation? No one had done anything, good or bad, to cause it. The earthquake just happened.
Now, humans usually don’t like answers like that; without explanations, we can’t plan, and we can’t identify patterns.
On the other hand, accepting that there may be no explanation, much less a blame-worthy action as cause, for events is a step toward thinking scientifically. In a way, the Lisbon earthquake was a great benefit to the cause of enlightenment. As John Hamer said,
One of modernity’s greatest achievements is the realization that natural disasters like earthquakes have nothing to do with us, that we need not see the wrath of Zeus in every thunderclap, the displeasure of Poseidon in every menacing wave. The origins of that realization are to be found in the smoldering ashes of Lisbon.
Backsliding on Scientific Modernity
It’s easy to slide back though. And maybe it’s just my imagination, but we seem to be sliding backward pretty fast. After Hurricane Katrina, a few religious leaders went all “wrath of God” on us, claiming that New Orleans was a modern Gomorrah. After Hurricane Harvey this summer, and again after Hurricane Maria, we were told that Houston, and Puerto Rico, must have done something to deserve this terrible punishment.
Sometimes the “sin” is promoting “the homosexual agenda,” of course. Folks on the left tend to think that this kind of religious view is ridiculous, but they are happy to trot out their own “you deserved it!” line of pattern recognition. For the Left, the sin is often the use of fossil fuels. In fact, after Harvey happened to stop in place for 48 hours, immobilized by an utterly random high pressure area to the north, several clear “causes” emerged. One was global warming. Another (I love this one) was “lack of zoning.” Houston, thou hast sinned in the eyes of the planners of urban sacredness, and thou wilt now suffer the consequences!
In fact, as a number of people have pointed out, the scientific basis for blaming the specific path or effects of one hurricane on any one factor is extremely tenuous. It’s unlikely that global warming or lack of zoning caused that specific event and its effects.
The rise in the temperature of the Gulf of Mexico at this point is less than 1 degree centigrade; the amount of rain dropped by Harvey was actually within normal bounds for a hurricane of that size.
And as for zoning, well, the amount of impermeable paved area in the city doubtless contributed to the severity of the flooding, but 50 inches of rain in two days would not have been “soaked up” by swamps or sloughs; once the ground is saturated, it’s covered with water. And water is impermeable, too.
The difficult thing, and I’d say this to all sides, is to discipline yourself to avoid two atavistic moralistic traps.
- Blame the victim: If you see something bad, you find something about the victim you don’t like. And then you say the bad thing happened because the victim did something bad. For example, Puerto Rico allowed promiscuity, or Houston did not have zoning rules to limit growth.
- Vengeful gods: If you see something bad, you find something that society as a whole is doing you don’t like. For example, we use fossil fuels, so hurricanes become stuck by high pressure areas, or people have stopped going to church, and so an example had to be made.
As Ron Bailey wrote in Reason, there is no overall trend toward worsening intensity, duration, or frequency of hurricanes hitting the United States. There are plausible theories that predict that this may happen, and there are sensible reasons to evaluate these claims seriously.
But it’s a violation of the basic scientific principles of the Enlightenment to blame the victims, or to invoke vengeful gods, as explanations. Sometimes things happen. Noticing patterns where none exist is part of being human. Overcoming that impulse is the most important part of science.
To make sense of Karl Marx or even Adam Smith, you need to see the way they looked at prices — through the labor theory of value.
Most social scientists can’t predict the future. But this philosopher did.
When social scientists predict the future, they almost always get it wrong. Human behavior and social phenomena are just too complex to be predictable. But Alexis de Tocqueville was, to some degree, an exception. Besides being a great political philosopher, he was also a political prophet.
Discussions of Tocqueville’s prophetic prowess usually begin with his remarkable prediction in Democracy in America, more than 100 years before the Cold War, that “there are two great peoples on the earth today who, starting from different points, seem to advance toward the same goal: these are the Russians and the Anglo-Americans.… each of them seems called by a secret design of Providence to hold the destinies of half the world in its hands one day.” Though the Cold War is over, continuing tensions between Russia and the United States show that Tocqueville’s prediction remains all-too-relevant.
What allowed Tocqueville to make such an impressive prediction, given that predicting social phenomena is notoriously difficult? An answer to this question may be in view if we consider a less well-known prediction that Tocqueville recorded in his Recollections.
The “Gloomy Prediction” of 1848
Tocqueville served as a legislator in France’s Chamber of Deputies prior to the Revolution of 1848. From this post, he observed an ominous decline in public morality and a corresponding increase in the number of fellow legislators who cared only to enjoy the emoluments of public office. From this, he concluded that “the time will come when the country will find itself once again divided between two great parties.”
He told his colleagues in the Chamber of Deputies that another revolution was brewing. What’s more, he said that they would be to blame if it happened. In an uncommonly courageous speech in the chamber on January 29, 1848, Tocqueville explained that the first, great French Revolution of 1789 happened fundamentally because France’s political leaders had become unworthy of holding power, and the leaders of 1848 were at risk of allowing another revolution to happen because they, too, were unworthy of their office.
On the one hand, the “public morality” of the French people had declined such that a severe bias against private property was now common. Later, when the revolution got underway, these embers of bias would be fanned into flames by socialist ideologues who took advantage of the growing envy of the masses. On the other hand, French politicians ignored this growing antipathy to private property and instead selfishly enjoyed the posh benefits of public office.
Tocqueville presented them with a clear warning: “My firm and profound conviction is this: that public morality is being degraded, and that the degradation of public morality will shortly, very shortly perhaps, bring down upon you new revolutions.… Will you allow it to take you by surprise?”
He called for the Chamber of Deputies to take action before a new revolution was upon them. But rather than taking preventative action, the legislators offered only derision: “These gloomy predictions were received with ironical cheers from the majority.… The truth is that no one as yet believed seriously in the danger which I was prophesying, although we were so near the catastrophe.” The assembly did nothing. One legislator in the assembly remarked privately after Tocqueville’s speech that he was “a nasty little man” for trying to frighten the assembly with his disrespectful rhetoric.
Tocqueville as Political Prophet
Tocqueville was, of course, correct in his prediction. The year 1848 brought revolution across Europe, and about a month after Tocqueville’s speech, revolution came to France. To Tocqueville’s reputation as a great writer was added a reputation for political prognostication.
What allowed him to be, as he called himself, a “political prophet”? The answer seems to lie in the most distinctive feature of Tocqueville’s political philosophy: his emphasis on the habits of the mind and heart of a culture. By observing the “morals and opinions” of the French people of 1848, he was able to sense the drift of the country’s political life. As he said to his colleagues in his speech of January 1848, even though there were no tangible signs of revolution or riots, the spirit of revolution had “entered deeply into men’s minds.” The French people were “gradually forming opinions and ideas which are destined not only to upset this or that law, ministry, or even form of government but society itself.”
An Invitation to Consider Tocqueville’s Thought
There is a habit of dismissing Tocqueville’s wisdom as a political philosopher and pooh-poohing his predictions as being unimpressive or wrong. But one wonders if this habit stems in part from a distaste for the gloominess of some of Tocqueville’s ideas, not unlike the distaste for Tocqueville’s gloomy prediction in the Chamber of Deputies.
The late 19th-century historian James Bryce, for example, asserted that Tocqueville’s “descriptions of democracy as displayed in America” were “no longer true” and in fact, in some respects, “they were never true.” Bryce regarded one of Tocqueville’s incorrect observations to be the threat of majority tyranny, which, he incorrectly said, “does not strike one as a serious evil in … America.” Theodore Roosevelt later cited Bryce approvingly on this topic, saying that Tocqueville’s warning about majority tyranny “may have been true then, although certainly not to the degree he insisted, but it is not true now.”
Tocqueville’s predictions should provoke us to consider his writing further. Yet the reader of his works needs to consider the possibility that Tocqueville may at times be right when we do not want him to be. Even Tocqueville himself seems to have had a hard time believing his own prediction of the Revolution of 1848, but eventually the evidence drove him to deliver his warning to the Chamber of Deputies.
If the dangers to democracy that Tocqueville writes about are true, the natural response ought not to be to ignore them but instead to study them and to be wiser for it.
 See, for example, Joseph Epstein, Tocqueville: Democracy’s Guide (Eminent Lives, 2006), 4–5.
 Alexis de Tocqueville, Democracy in America, trans. Harvey Mansfield and Delba Winthrop (University of Chicago Press, 2000), 395–396.
 Alexis de Tocqueville, The Recollections of Alexis de Tocqueville (The Harvill Press, 1948), 10–14; ibid., 33.
 Ibid., 10.
 For more historical context surrounding this prediction, see Epstein, Tocqueville, chap. 8, and Hugh Brogan, Alexis de Tocqueville: A Life (Yale University Press, 2006), chap. 17.
 Tocqueville, Recollections, 67–69, 79–85.
 Ibid., 14.
 Epstein, Democracy’s Guide, 125.
 Tocqueville, Recollections, 16.
 Ibid., 10.
 Ibid., 12.
 James Bryce, The Predictions of Hamilton and Tocqueville (Johns Hopkins University Press, 1887).
 Theodore Roosevelt, “Introduction,” in Majority Rule and the Judiciary: An Examination of Current Proposals for Constitutional Change Affecting the Relation of Courts to Legislation (Charles Scribner’s Sons, 1912), 21–22.
 Tocqueville, Recollections, 16.
Rebellion or stability: Which makes a healthier nation?
As the American Revolution began, Americans threw off the rule of a tyrannical king, but in their enthusiasm for their newfound freedom, they set up ineffective governments. For instance, they denied the federal government the power to tax, trusting the state legislatures to pay their share of the war costs.
Americans gave their state legislatures too much power and the governors too little. In turn, the people voted irresponsible legislators into office. The result: legislatures started gobbling up executive power, further concentrating it in their hands.
These imbalances made it difficult to fight the British and became even more problematic after the existential crisis of the war had passed. Many states suffered through economic stagnation. Legislatures enacted a litany of new regulations, only to change them soon after, creating chaos and confusion. The federal government could not pay its debts or its armed forces, leaving natives and the British in Canada free to accost settlers on the frontier. And states made conflicting treaties with European powers, straining already tense relationships within the newly formed union.
James Madison: Not so quick to ditch the British way
James Madison thought something radical had to change in order to save the fledgling nation. He saw the need to reach back to British roots and create institutions that strengthened the federal government and weakened the state governments. This reflected his Burkean understanding of constitutionalism: old laws have a power that constantly changing laws cannot.
While Madison knew the institutions had to be republican, that constraint did not stop him: he simply changed the definition of republicanism. Previously, many would have said that the people have to participate directly in legislating, as they did in ancient Athens or ancient Rome (what we now call direct democracy). Madison claimed that any representative government with any level of suffrage counted as a republic in the modern age.
Opposing Madison’s approach, Thomas Jefferson embraced the political turmoil of the early USA. He was a strict Lockean contractarian who thought that the people are the “only legitimate fountain of power,” so the people’s representatives should have a great deal of power to change laws — including the power to call a constitutional convention.
Madison wrote Federalist 49 in an attempt to convince Jeffersonians of the value of stability. For Madison, laws had to endure in order to have full effect:
It may be considered as an objection inherent in the principle that as every appeal to the people would carry an implication of some defect in the government, frequent appeals would, in great measure, deprive the government of that veneration which time bestows on everything, and without which perhaps the wisest and freest governments would not possess the requisite stability.
Madison endorsed an enduring Constitution over the volatility associated with a more purely contractarian form of democracy that would require a constant recurrence to the people. He followed Burke's rationale to a certain extent, seeing a need to maintain stability even if it meant forsaking some liberty.
Jefferson: Throwing shade at Shays
From Jefferson’s perspective, however, rebellions and tensions demonstrated the health of a nation. He scoffed at the alarm caused by a small uprising in Massachusetts, Shays’s Rebellion, claiming: “A little rebellion now and then is a good thing, and as necessary in the political world as storms in the physical.”
Jefferson took a more Lockean view, seeing the social contract as the only legitimate source of power for the government. He even went further than Locke, saying that “the dead have no rights” over the living. For that reason, every 20 years — which was a generation in the 1800s — the nation should have a constitutional convention allowing the “right to choose for itself the form of government it believes most promotive of its own happiness.”
We see in these radically differing opinions the two constitutional paths laid before the United States: one Burkean and one Lockean. The ultimate decision to take the Burkean path provided the United States with long-term stability, though it came at the cost of a more Lockean version of liberty. Considering the result of the French Revolution, America, thanks to Madison's steady hand, likely avoided what sometimes comes with more liberty: chaos.
Leave a Comment
From the beginning, classical liberalism has been a big tent with a wide diversity of ideas inside it. Watch the full interview with Dave Rubin and Brandon Turner here.
Comments Off on No, you can’t multitask. To succeed, you need to focus.
If you want a great career in the 21st century, you need to stop trying to multitask and start doing “deep work.”
That’s one of the big ideas from Georgetown University computer science professor Cal Newport. He urges us to be aware that “there are different types of work and some types have way bigger returns than others.”
In his book Deep Work: Rules for Focused Success in a Distracted World, Newport explains the difference between deep work and shallow work. You are doing deep work when your professional activities are “performed in a state of distraction-free concentration that push[es] your cognitive capacities to the limit. These efforts create new value, improve your skill, and are hard to replicate.”
In contrast, “shallow work describes activities that are more logistical in nature, that don’t require intense concentration.” Shallow work efforts, explains Newport, “tend to not create much new value in the world and are easy to replicate.” In other words, they’re the type of work efforts that make it easy for your employer to replace you.
Deep work, Newport explains, is rare.
A 2012 McKinsey Global Institute study found that more than a quarter of the average worker’s day is spent answering and reading emails. When you throw in other disruptions such as meetings, checking your phone (the average user spends over 2 hours a day in 76 interactions with their phone), and social media, it is easy to see why deep work is rare.
Yet, Newport argues, while deep work is becoming increasingly rare, “at exactly the same time it is becoming increasingly valuable in our economy. As a consequence, the few who cultivate this skill, and then make it the core of their working life, will thrive.”
It takes deep work to master hard things. To thrive in today’s rapidly changing economy requires a commitment to a never-ending process of deep work. Newport offers this example:
Intelligent machines are complicated and hard to master. To join the group of those who can work well with these machines, therefore, requires that you hone your ability to master hard things. And because these technologies change rapidly, this process of mastering hard things never ends: you must be able to do it quickly, again and again.
The importance of deep work is echoed by George Mason University economics professor Tyler Cowen in his book Average Is Over. What Cowen calls "quality labor with unique skills" will remain scarce in this highly competitive global economy. Cowen offers up some questions to help us see if we will remain competitive:
Are you good at working with intelligent machines or not? Are your skills a complement to the skills of the computer, or is the computer doing better without you? Worst of all, are you competing against the computer? Are computers helping people in China and India compete against you?
Take a hard look at your work day. Are you honing your ability to do deep work? Your position in the labor force is likely to deteriorate if you are only capable of shallow work.
If you’re only doing shallow work now, what can you do about it? You can cultivate your ability “to focus without distraction on a cognitively demanding task,” explains Newport. One of his suggestions for engaging in more deep work is simple: stop trying to multitask.
Are you multitasking?
I write “stop trying” because research shows that human beings can’t multitask; they can only switch-task. Each time we switch-task, we lose the possibility of entering a highly focused state, what psychologist Mihaly Csikszentmihalyi calls “flow.” Switch-tasking is so disruptive that it can reduce our productivity by up to 40%. You are literally working harder to produce less.
Echoing Csikszentmihalyi, Newport describes how good a state of flow in deep work feels compared to the stress of shallow work:
We know it’s satisfying to enter a state where you’re giving full, rapt attention to something that you’re good at…. [On the other hand,] someone who’s based mainly in shallow work, neurologically speaking, is going to eventually construct an understanding of their world that is stressful and fractured.
The late Stanford University communications professor Clifford Nass, along with his colleagues Eyal Ophir and Anthony Wagner, studied multitaskers expecting to uncover cognitive powers of focus that multitaskers possessed and others lacked. They found no such power.
Not only do chronic multitaskers lose time switch-tasking, but they also alter their brains in not-so-salutary ways. In an interview, Nass explains, “People who multitask all the time can’t filter out irrelevancy. They can’t manage a working memory. They’re chronically distracted.”
The multitaskers “actually think they’re more productive,” but they are deluded. Nass explains why: “They initiate much larger parts of their brain that are irrelevant to the task at hand.… They’re even terrible at multitasking. When we ask them to multitask, they’re actually worse at it [than nonmultitaskers]. So they’re pretty much mental wrecks.”
Multitaskers claim, “When I really have to concentrate, I turn off everything and I am laser-focused.” But according to Nass, the truth is that “they’ve developed habits of mind that make it impossible for them to be laser-focused. They’re suckers for irrelevancy. They just can’t keep on task.”
In other words, multitaskers have lost the ability to do deep work. The way back to deep work takes time and commitment. As Nass explains, “When we try to revert our brains back, our brains are plastic but they’re not elastic. They don’t just snap back into shape.”
In a Microsoft study on shrinking attention spans — “the amount of concentrated time on a task without becoming distracted” — Microsoft CEO Satya Nadella observed that an important trait for success was becoming rarer: “The true scarce commodity of the future will be human attention.”
Stop making excuses.
Are you blaming your circumstances — for example, a demanding boss — for your choice to not engage in deep work? Are you keeping your eye on a future prize — for example, a promotion or a salary increase — rather than making the day-to-day choice to engage in deep work? Are you reading this article and thinking, “Easy for Newport to say, but he doesn’t know my world”?
In her book Rapt: Attention and the Focused Life, Winifred Gallagher offers this guidance: “Who you are, what you think, feel, and do, what you love — is the sum of what you focus on.” What are you focused on today? How much of your day is spent on emails, meetings, or social media? How you choose to spend your time today may be crowding out the uninterrupted time necessary for you to do deep work.
To make deep work the core of your working life, Newport suggests keeping a scoreboard:
It seems like a simple thing, but without it, it’s so easy to go through a week and just say, “Well, I was busy and I think I did some deep work in there.” Once you start keeping score, you look at it and say, “I did one hour out of a 40-hour week? I’m embarrassed.” A compelling scoreboard drives you to action.
One caveat: a scoreboard only drives us into action if we stop blaming, take a hard look at the consequences of our choices, and decide there is a better way. If we can honestly say, “My choices have left something to be desired, and now I am ready to make different ones,” then we are at the bus stop for real change.
Comments Off on The right to refuse service? Business owners and gay marriage
Debating Religious Liberty and Discrimination, by John Corvino, Ryan T. Anderson, and Sherif Girgis, Oxford University Press, 352 pages, $21.95
Steve Tennes, an orchard owner in Michigan, recently refused to host a same-sex wedding on his property, instead referring the couple to another orchard.
Business owners have profound incentives to serve customers. It is a rare proprietor who will turn away a paying customer because of a religious conviction.
Yet over the past few years, several business owners like Tennes have done just that. These men and women believe their faith prohibits them from participating in same-sex wedding ceremonies.
Contrary to popular belief, what Tennes did is perfectly legal in Michigan. The Great Lakes State, like about half of the states, has no law prohibiting discrimination on the basis of sexual orientation.
But the Michigan town of East Lansing, where Tennes brings his produce to the farmers’ market, has a local ordinance prohibiting such conduct. Because Tennes will not host same-sex weddings at his orchard, the city banned him from selling fruit at its market. He responded by suing the town for violating his religious freedom. Litigation is ongoing.
Such cases are at the heart of Debating Religious Liberty and Discrimination, a new point-counterpoint book by John Corvino, Ryan T. Anderson, and Sherif Girgis. All three authors value religious liberty and oppose unjust discrimination. But as they point out in their joint introduction, “The devil is in the details.”
The Case for Limiting Religious Exemptions
Corvino begins the debate by providing a reasonable case for severely limiting religious exemptions. In good libertarian fashion, he contends that laws restrict liberty and so they shouldn’t be passed unless there are very good reasons to do so. If such reasons exist, all citizens should have to follow the laws regardless of their religious convictions.
So, for instance, his solution to the problem of Native Americans who feel compelled to use peyote in religious ceremonies is not to exempt them from laws banning its use but to eliminate the law altogether. Then anyone, religious or not, can use peyote for whatever reasons they desire.
Corvino doesn’t like religious exemptions, but he doesn’t reject them altogether. He concedes, for instance, that the state should not compel citizens to kill. If the nation is conscripting soldiers, pacifists should be offered an alternative to military service. Similarly, medical professionals should not be forced to participate in abortions or euthanasia. These accommodations should be available to religious and nonreligious citizens alike.
Other than in issues of life and death, most accommodations would disappear in Corvino’s ideal world. This is not to say he is entirely unsympathetic to florists, bakers, orchard owners, and others who believe they should not participate in same-sex wedding ceremonies. He suggests three different ways in which they could be protected without religious accommodations. His preferred method is to revise antidiscrimination laws to exclude small firms that offer expressive or wedding-related services.
The Case for Religion as a Basic Human Good
Anderson and Girgis, by way of contrast, make a robust but accessible philosophical argument for the importance of religious liberty. Drawing from the philosopher John Finnis’s work, they contend that religion is a “basic human good” and that the purpose of the state is to “protect the ability of people to pursue all the basic goods.”
Anderson and Girgis recognize that no right is absolute. If the state has a compelling reason to prevent a religiously motivated action, it may do so. With respect to discrimination, they propose that antidiscrimination laws should trump religiously motivated actions only when private treatment of a particular group imposes material and/or social harms that the law can best cure, and the particular proposed antidiscrimination provision is drawn narrowly enough to (1) suppress interactions that inflict those material and social harms, (2) avoid banning too many legitimate or harmless interactions, and (3) avoid treading too far onto other interests like conscience, religion, and speech.
Applying this test would protect the orchard owners, bakers, and florists who have been sued or prosecuted under antidiscrimination laws. But it would not exempt every religiously motivated action; for example, racial discrimination could still be prohibited.
There is much, much more to this book. Collectively, the essays provide an excellent overview of the main issues in cases involving religious liberty and antidiscrimination statutes. Both sides offer reasonable and well-articulated arguments to support their positions.
Far too often, debates about these matters degenerate quickly into impugning motives and calling names. A critically important contribution of the book is that Corvino, Anderson, and Girgis show that people with deeply held convictions can have a rational argument about controversial issues.