Category Archive: Government
“Political Martian” Dan Carlin says Americans need to educate themselves better about the meaning of liberty. Watch the full interview here.
Blame outdated rights for California’s water woes
Most water policies reflect some balance of rights and duties. Farmers have a right to use water but a duty to leave the remaining water intact for their neighbors. Urban dwellers have the right to receive drinking and wastewater services but a duty to pay for them.
In his 1960 article, “The Problem of Social Cost,” Ronald Coase discussed how the award of rights to one side of a polluter-pollutee relationship is a necessary first step towards finding the efficient level of pollution. His idea, sometimes called the “Coase Theorem,” was that a property right (to pollute or be free of pollution) could be more effective than a moral right to be free of pollution if one was focused on reaching socially optimal levels of pollution rather than on determining who should win or lose from pollution.
Coase’s observations brought him a well-deserved Nobel Prize, but they came with caveats. The first, which he emphasized, was that a rights-based regime will only work if the “transaction costs” of negotiating an agreement on pollution (or other externalities) are not too high.
The second, which he implied, was that property rights could deliver the same results as a regime based on rights and duties. While elegant in principle, this last claim does not hold in the presence of (surprise!) transaction costs.
These two caveats mean that the outcome in a rights-only regime (in which one’s right to pollute is set against another’s right to not be polluted) will match the outcome in a rights-duties regime (in which one’s right to pollute comes with a duty to restrain that pollution) if and when transaction costs are low. The equivalency fails when transaction costs are high, because high costs make it harder to reconcile conflicting rights and thus to achieve outcomes that remain possible in a rights-duties regime, where self-regulation is unaffected by transaction costs. (This discussion assumes that rights and duties are agreed, clear, and enforced. If they are not, then one must turn to third-party regulations, which are discussed below.)
We can see how these elements play out with a simple example of upstream farmers whose water use leaves downstream farmers with less water of worse quality. In a world of rights and duties, upstream farmers know that their rights to divert come with duties to not divert too much or discharge too many pollutants. A farmer in this Eden faces a moral, social, and perhaps legal duty to do no (noticeable) harm.
Returning to a world of Coasean property rights, this farmer would have a clear right to commit harm or, if lacking that right, a need to compensate downstream neighbors for any harm. In Coase’s world, the location of rights matters less than the potential for reaching an agreement that balances costs and benefits. Coase’s solution delivers greater efficiency than we’d see in a rights-and-duties world because it focuses on real harm rather than a duty to avoid an assumed harm, but it comes with greater transaction costs.
A further increase in transaction costs makes it difficult to use either of these solutions, which is why regulations are used to balance the interests of numerous polluters and pollutees.
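Coase’s low- versus high-transaction-cost cases can be sketched numerically. The gain, harm, and cost figures below are entirely hypothetical; the point is only that when bargaining is cheap, both rights assignments reach the same outcome, and when it is expensive, they diverge.

```python
# Toy sketch of Coasean bargaining (all numbers are hypothetical).
# An upstream farmer gains `gain` from polluting; a downstream farmer
# suffers `harm`. Striking a deal costs `transaction_cost`.

def pollution_occurs(gain, harm, transaction_cost, upstream_has_right):
    """Return True if pollution continues after any bargaining."""
    if upstream_has_right:
        # Downstream pays upstream to stop, but only if the harm avoided
        # exceeds upstream's forgone gain plus the cost of the deal itself.
        deal_pays = (harm - gain) > transaction_cost
        return not deal_pays  # pollution continues unless a deal is struck
    else:
        # Upstream pays downstream for permission to pollute.
        deal_pays = (gain - harm) > transaction_cost
        return deal_pays

# Harm (150) exceeds gain (100), so the efficient outcome is no pollution.
# With low transaction costs, both rights assignments reach it.
low = [pollution_occurs(100, 150, 5, r) for r in (True, False)]

# With high transaction costs, the outcome depends on who holds the right.
high = [pollution_occurs(100, 150, 80, r) for r in (True, False)]
```

With a cost of 5, `low` comes out `[False, False]`: no pollution either way. With a cost of 80, `high` comes out `[True, False]`: the bargain that would have stopped the polluting upstream farmer no longer pays, so the initial assignment of rights decides the result.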
A World With Non-trivial Transaction Costs Is a World With Both Rights and Duties
We can apply these ideas to water, where the right to use it coexists with the duty to avoid harm; and government plays a role in giving rights and enforcing duties. Although some people assume that “the government” will manage the process in the public interest, there are many ways that this process can go wrong.
Ignoring the most obvious problems with corruption and public choice, it’s also possible for the government to fail due to complexity. Rights and duties will conflict if different branches of government focus on different rights or duties, if underlying conditions change the nature of rights or duties, or when transactions costs change the appropriate balance between rights and duties.
How might that happen? I am pretty familiar with water policies in California (where I earned my PhD) and the Netherlands (where I have lived and worked for over six years). Although each place has entirely different water conditions, I’d say that their different water policies and outcomes reflect different political and social attitudes.
I’d say that California is running on the equivalent of a Windows operating system (OS) that’s backwards compatible with earlier versions of the OS — and thus just as confusing, complicated, and conservative as you would expect from a system dating back decades.
Turning to the Netherlands, I would say that their policies are like Apple’s OS, meaning that they are sensible, simple and secure because they have been entirely restructured quite often over the 800-plus years that the Dutch have managed water.
Each system will bring advantages to different groups, but California’s strong rights give an advantage to opponents of change, resulting in a tragedy of the anti-commons in which worsening underlying conditions are quickly turning into poor outcomes.
Let’s recall a few examples. Groundwater in the state is unevenly regulated, poorly monitored, and vulnerable to agricultural and industrial pollution. Water markets cannot get off the ground because rights reflect pre-1914 conditions of abundance, lack any provision for environmental water flows, and often reside with the water districts formed to help water users cooperate rather than the users themselves, which can lead to abuse of “minority users” who cannot sell their water and exit without facing massive losses. Rights and regulations are enforced according to different administrative or judicial rulings as previously separate jurisdictions and competencies begin to overlap in conflicting ways. (The ongoing “crisis” of the 1922 Colorado River Compact, for example, can be traced directly to its assumption that each state would receive its volumetric right under all conditions.)
California cities are caught in a mish-mash of water quality regulations, extraction rights, infrastructure access, and service tariffs that change with political borders, conflict at the regional level, and often fail to support sustainable service levels.
These examples are not inevitable. They result when a system founded on miners’ and farmers’ 19th-century rights to divert water is joined to a set of 20th-century duties to provide service, reduce pollution, and so on. California, in other words, is not suffering a series of water crises as much as an outdated model of water governance whose paralysis and dysfunction will only worsen if left alone.
What’s To Be Done?
The future of water in California is grim. On one side, you have farmers using every legal means to avoid restrictions on rights that date from a past in which abundant water supported both farmers and ecosystems. On the other, you have environmentalists and citizens pushing for regulations to protect the “natural flows” that keep ecosystems and communities from collapsing. Both sides have supporters, and contradictory rights and regulations allow each side to claim righteousness while blocking action. So it seems that neither side will win.
The Bottom Line
Californians — and other citizens of the American West — need to revisit the balance of power between rights and regulations and reformulate their policies to reflect current conditions. Will this process be easy? No. Everyone will claim that their rights should receive preferential treatment.
The only way ahead is to recognize the need for change and agree to promote the community over partisan tactical sabotage that delivers short-term gain at the expense of the long-term cooperation that protects ecosystems, delivers economic value, and supports communities.
What the gold standard is and why government killed it
The gold standard is a monetary regime that is both strongly advocated and vehemently opposed. Both positions, however, usually rely on misconceptions about what the gold standard actually is and why it failed. Below, I discuss (1) what the gold standard is, (2) what it is not, and (3) why it failed.
What the gold standard is
Under a gold standard, gold is money. This means that gold is (1) the most common means of exchange, (2) a good store of value, and (3) the unit of account. While we can picture gold coins being used for small transactions, larger transactions are carried out with a gold substitute, usually a banknote carrying a promise that the bearer can exchange it for gold. These banknotes are issued by central banks and are convertible to gold at par.
One feature of the gold standard is that the change in gold reserves signals to the central bank if it is issuing too many (or too few) convertible banknotes. If the central bank over-issues banknotes, meaning that banknotes increase more than the value individuals want to hold, then consumption at the aggregate or national level increases. Since individuals now see more banknotes than they want to hold in their pockets, they will spend the extra cash. This means that, unless production has increased, imports will also increase.
But in the exporting country the domestic banknotes do not circulate, so the importer has to pay for the imports with gold. If imports increase more than exports, the central bank sees its reserves decreasing. Today, when central banks issue fiat money (that is, banknotes not backed by gold or any other commodity), they need a substitute signal to tell whether they are issuing too many banknotes. That substitute is usually inflation: if inflation rises, central bankers reason that the money supply might be too loose.
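The reserve signal described above can be sketched with a toy calculation. All quantities are hypothetical, and the sketch assumes, for simplicity, that all excess spending leaks abroad as imports settled in gold:

```python
# Minimal sketch of the reserve signal under convertibility
# (all quantities are hypothetical units).

desired_holdings = 1000   # banknotes the public wants to hold
issued = 1100             # banknotes the central bank has put in circulation
reserves = 500            # central bank gold reserves

# The public spends the excess banknotes; with unchanged production, the
# extra spending leaks abroad as imports, which must be settled in gold.
excess = max(issued - desired_holdings, 0)
extra_imports = excess                    # simplifying assumption: all of it
reserves_after = reserves - extra_imports

# Falling reserves are the central bank's signal that it has over-issued.
over_issue_signal = reserves_after < reserves
```

Here the 100 excess banknotes drain 100 units of gold (reserves fall from 500 to 400), and the drop itself is the feedback that tells the bank to tighten.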
A common concern with the gold standard is that it is prone to unexpected and random discoveries of gold that could produce inflation and monetary imbalances. To be sure, no regime is perfect. But it would be unwise to judge the real-world shortcomings of the gold standard against, for instance, an idealized but unreal central banking regime that issues fiat money. The useful, and fair, comparison is between the real gold standard and a real modern central bank rather than an ideal one.
It is true that under gold standards of the past there were periods of inflation. But rather than assuming that these were problems with the gold standard itself, we can look closer at the events taking place at the time. When we do so, we find either that the inflation was not due to random gold discoveries or that the inflation rates were not actually very high.
Take the case of the United States. The inflation “peaks” of less than 2% between 1812 and 1816 and again between 1861 and 1866 correspond to the War of 1812 and the Civil War, respectively. In these cases, inflation is explained not by a shortcoming in the gold standard but by an increase in government spending due to armed conflicts.
Another example is that of the Price Revolution that took place in Western Europe between the second half of the 15th century and the first half of the 17th century. During this time period of approximately 150 years the price level increased six times. That translates to an average yearly rate of inflation of just 1 to 1.5 percent.
It seems that modern central banks, rather than the old gold standard, have the poorer track record with respect to keeping a lid on inflation. Since 1971 (when the last remnant of the gold standard was abandoned), inflation in the United States has averaged around 4% per year. This means that between 1971 and 2017, the price level increased six times.
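Both compounding claims are easy to check with back-of-the-envelope arithmetic:

```python
# Sanity-checking the two compounding claims in the text.

# Price Revolution: prices rose roughly 6x over roughly 150 years.
# Solve (1 + r)^150 = 6 for r.
annual_rate_price_rev = 6 ** (1 / 150) - 1   # about 0.012, i.e. ~1.2%/year

# Post-1971 US: ~4% average yearly inflation over 1971-2017 (46 years).
price_multiple_1971_2017 = 1.04 ** 46        # about 6.1, i.e. prices up ~6x
```

So a six-fold price rise spread over a century and a half really does work out to roughly 1 to 1.5 percent a year, while 4 percent a year compounds to the same six-fold rise in under five decades.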
What the gold standard is not
There are two important clarifications to make about what the gold standard is not. The first has to do with the claim that the gold standard pegs the price of gold, and the second has to do with the claim that the gold standard is an international regime of fixed exchange rates.
The gold standard does not fix the price of gold.
As mentioned above, under a gold standard gold is what functions as money; the convertible banknotes issued by central banks are money substitutes. Recall that what ultimately functions as the unit of account is gold. This means that the gold standard is not a policy that fixes the price of gold, as if central bank banknotes were money and gold just a commodity of reference. This is not just semantics.
When you deposit your dollars (or Euros, British Pounds, etc.) into a bank account you receive a checkbook that you can use to write checks that are “convertible” to dollars. This check is similar to the convertible banknotes that central banks issue. And just as your bank account balance goes down if you write too many checks, a central bank’s reserves go down if it issues too many convertible banknotes. And just as we do not say that we fix the price of the dollar in terms of our checks, we cannot argue that under a gold standard we are fixing the price of gold in terms of central bank convertible banknotes.
Today gold is no longer widely perceived or accepted as money, and the observed volatility of its price against a currency such as the US dollar is often raised as a concern about gold serving as the reference commodity under a gold standard. But if, hypothetically, a central bank were to go back to the gold standard, gold would function as money; it is not that the price of gold would be fixed, with the central bank expanding or contracting the money supply to stabilize that price (i.e., buying and selling gold at the given fixed price).
The gold standard is not a regime of international fixed exchange rates.
If gold is the commodity that functions as money, then the gold standard is an international monetary regime. Just as two individuals may write checks from different banks convertible into the same currency, under a gold standard different central banks issue banknotes convertible into the same commodity: gold.
If this is the case, then the gold standard cannot fix exchange rates, because there are no exchange rates to fix in the first place. An exchange rate is the price between two different currencies. Under a gold standard we have one currency for many countries, similar to how a group of European countries today share the Euro as their currency (the Eurozone).
If we had two metals, gold and silver, then we would see an exchange rate between gold and silver. But that would be a bimetallic system, not just a gold standard.
To argue that there is an exchange rate between two convertible banknotes issued by different central banks is like arguing that there is an exchange rate between two checks denominated in dollars but issued by different banks. If one check is for $50 and the second for $100, the relation is that we need two $50 checks to equal the value of one $100 check. This is a parity relationship, not an exchange rate. Once again, to argue that the gold standard is an international regime of fixed exchange rates is to confuse what is and what is not money under such a system. These checks, or convertible banknotes, are different denominations of the same good (pounds, ounces, etc.), just as miles and kilometers are different measures of the same thing. There is no price between miles and kilometers; there is a parity conversion.
Why the gold standard failed
A popular argument is that the gold standard failed due to flaws in its design. According to critics, the gold standard is in fact responsible for the Great Depression. According to this argument, when aggregate demand fell central banks had their hands tied by the gold standard and could not react by increasing the money supply. This made the Great Depression a worse crisis than it would otherwise have been. The institutional reforms that followed moved the US away from the gold standard into a more flexible and free system based on fiat money.
But something important happened before the Great Depression: World War I. Remember that the gold standard is an international monetary regime. That international character was broken during WWI, when (1) international shipments of gold were suspended or reduced and (2) major countries suspended their banknotes’ convertibility in order to “print” money to pay for the war.
WWI meant a de facto end to the gold standard, even if de jure no country “gave up” the gold standard. After WWI, important decisions had to be made. One of them was the United Kingdom’s return to the prewar convertibility of its banknotes without removing the excess banknotes from circulation.
This caused the British Pound to depreciate against the US dollar. The United States, in turn, increased its own money supply in order to support the British Pound. This eventually fueled the financial bubble that burst in 1929, marking the beginning of the Great Depression.
Once we take this sequence of events into account, we can see that the gold standard broke down because of WWI and never returned to its normal functioning. The gold standard cannot be responsible for the Great Depression for the simple reason that it had stopped working more than a decade before.
Now, there is a more subtle argument made by some economists that the gold standard was responsible for the Great Depression, not because of the gold standard regime but because of the gold standard mentality that constrained the central bankers of the time.
However, the behavior of UK and US policymakers of the time went against the gold standard mentality, especially in the US, where increasing the money supply without a commensurate increase in gold reserves, all in an effort to help the British Pound, was no part of the gold standard mentality.
The gold standard did not fail due to its own internal problems, but because of government-driven, calamitous events such as WWI and the looser monetary policy of post-WWI policymakers, made possible by the inconvertibility of the banknotes.
Here’s why the tax code is so huge
The United States tax code is 2600 pages long, but that’s only the tip of the iceberg. To make sense of these 2600 pages, lawyers, accountants, financial planners, and all the other people we pay to walk us through these pages also need to reference Treasury regulations, commentary, relevant court cases, and even legislative history. In total, around 70,000 pages of text come to define or clarify the tax rules the federal government imposes on us. And this ignores the bevy of state and local tax laws to which we are all subject.
There are no good reasons for the law to be this complicated, but plenty of reasons to simplify matters more than any politician ever seems to suggest. And this has been the case for the better part of the last 70 years. Going back to 1950, as federal tax rates on corporations, capital gains, “the rich,” inheritances, and any number of other categories have fluctuated wildly, a strange thing has become quite clear. No matter how the federal government manipulates the tax code, the total tax revenue the federal government collects has remained a pretty consistent 17% of the economy.
This suggests that our monumental tax code is wholly unnecessary. All the government need do is impose an across-the-board 17% tax on everyone’s incomes. If taxes can be that simple, and they clearly can be, why make them so complicated?
The answer to that question is pretty simple. Politicians and their enablers have an incentive to make taxes complicated, because the more convoluted the tax law becomes, the easier it is to hide who is paying and who is receiving. Amid thousands of pages of legalese, few will notice that a sentence here grants a tax credit to people who buy from favored businesses, or an exemption there reduces the tax burden to favored parties. But imagine if the tax code consisted of the single sentence, “Add up all your income and pay the IRS 17% of it.” Adding the sentence, “But if you are a farmer, pay 10%,” sticks out like a Vegas showgirl at a Quaker convention. Voters everywhere will see that someone is getting special treatment and exactly how much that special treatment is worth.
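To see how visible a carve-out becomes under a one-sentence code, here is a toy calculation. The 17% and 10% rates come from the text above; the income figure and the helper function are made up for illustration.

```python
# Hypothetical one-sentence tax code, plus the hypothetical farmer carve-out.

FLAT_RATE = 0.17     # "Add up all your income and pay the IRS 17% of it."
FARMER_RATE = 0.10   # "But if you are a farmer, pay 10%."

def tax_owed(income, is_farmer=False):
    """Flat tax: everyone pays 17%, except the carved-out group."""
    rate = FARMER_RATE if is_farmer else FLAT_RATE
    return income * rate

# With a made-up income, the value of the special treatment is plain:
income = 80_000
subsidy = tax_owed(income) - tax_owed(income, is_farmer=True)
# subsidy = income * (0.17 - 0.10) = $5,600 per year, visible to every voter
```

The point of the sketch is the transparency: with one rate and one exception, the exception’s dollar value can be computed by anyone in one line, whereas the same favor buried in 70,000 pages is invisible.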
Just as politicians use the tax code to hide the special favors they grant their preferred constituents, they also use corporations to hide the fact that they are taxing the rest of us at confiscatory rates. When they need to raise money, politicians will talk about corporations paying their “fair share.” People then cheer these heroes of the rank and file who have the “political courage” to stick it to the dastardly corporate profiteers. But here’s the rub: No matter what the tax code says, corporations don’t pay taxes. Corporations collect taxes. Every dollar of tax a corporation hands over to the IRS comes out of people’s pockets — some in the form of higher prices for consumers, some in the form of lower wages or reduced benefits for workers, some in the form of lower returns to investors like you with your IRA or your grandmother with her pension fund. Every single dollar a corporation pays comes from the people.
In the end, it’s a shell game. All of this complication makes it painfully easy for the government to obscure what it is doing. And the easier it is to hide where the money is coming from, the easier it is to tax people in the first place. The easier it is to tax, the easier it is to spend. The easier it is to spend, the greater the debt grows. And let’s not forget, for the 70,000 pages of taxes levied on the American people, the government has still managed to run up a $20 trillion debt.
These are the evils that a complicated tax code brings.
Given this, it is pretty obvious why we have the Byzantine tax code that we do. Only something this ridiculous would enable politicians to dole out favors with such impunity. Only something this complicated would have the rest of us forgetting that we’re actually the ones who end up paying for government favors.
The sensible thing to do, of course, would be to acknowledge that the government is destined to collect 17% of GDP, then find the simplest way to collect that 17%. We should then go one step further and limit government spending to the amount it collects, and no more.
If all that sounds unrealistic, remember that such a plan would only stop the debt from getting worse. It wouldn’t help us to pay it down. And when it takes an unrealistic plan just to stop the government’s debt from getting worse, you know that something is rotten in the District of Columbia. But who didn’t know that already? The real indictment is that simplifying the tax code would seem unrealistic to anyone.
Does history repeat itself? Dan Carlin, the “political Martian,” says history can’t teach us what to do next, but it can teach us who we are. Watch the full interview here.
Debate: Is Ayn Rand right about rights?
[Here, Professor Matt Zwolinski provides three essays that argue there are problems with Ayn Rand’s Objectivist philosophy. After each, Professor Stephen Hicks responds with an essay of his own that clarifies and defends the Objectivist point of view.]
Ayn Rand’s Ethical Egoism — Matt Zwolinski
Ayn Rand is, quite famously, an advocate of ethical egoism — the idea that each individual’s own life is the ultimate standard of value for that individual. She is also, quite famously, an advocate of individual rights — the idea that each individual has a morally protected sphere of freedom against which other individuals must not intrude. Figuring out how, or whether, these two things fit together is one of the major puzzles involved in making sense of Rand’s philosophy. If my life is the standard of morality, then why should I refrain from interfering with your freedom if doing so will advance my interests?
In her “synoptic statement” on rights, Rand makes the following series of claims:
If man is to live on earth, it is right for him to use his mind, it is right to act on his own free judgment, it is right to work for his values and to keep the product of his work. If life on earth is his purpose, he has a right to live as a rational being: nature forbids him the irrational.
But there seems to be a fallacy of equivocation going on here. In the first three uses, Rand uses the term “right” to assert that certain actions are morally permissible (it’s not wrong to do them) or even obligatory (it would be wrong not to do them).[i] So, for example, when Rand says that it is right for man to work for his values, she seems to mean at least that it is not wrong for him to do so, and perhaps more strongly that it would be wrong for him not to do so.
The other kind of “right”
Rand’s fourth usage of the word “right,” however, is significantly different. When she says that man “has a right” to live as a rational being, she is not merely saying that it is right for man to live as a rational being. She is saying that man has a right to live as a rational being. And these are two very different claims.
To have a right is to have a certain kind of claim against others. That claim could be a purely moral one (in which case the right is a moral right), or it could be one enforceable by law (in which case it is a legal right). It could be a claim against others that they perform certain positive actions such as repaying a debt (in which case it is a positive right), or it might simply be a claim that others refrain from performing certain kinds of actions like taking one’s property without one’s consent (in which case it is a negative right).
The important point, for our purposes, is that rights in this sense are claims on other people. To say that one person, A, has a right against another, B, doesn’t say much at all about what it would be wrong or right for A herself to do. What it says, instead, is that it would be wrong for B to act (or fail to act) toward A in certain ways.
If any person has a right, then as a matter of moral logic, some other person must have a corresponding obligation.
And this is the puzzle for Rand and her followers: Where exactly are these obligations supposed to come from? In order to remain consistent with egoism, it seems that Rand must claim that A’s right against B must be grounded not in A’s interests, but in B’s. In other words, B only has an obligation to refrain from interfering with A if it is good for B to do so. But as Mike Huemer has argued, it’s very hard to see why this restraint will always turn out to be in best interests of B.
It certainly doesn’t look that way in “lifeboat” cases like the situation described in Joel Feinberg’s story of the lost hiker — cases that I think are not as easily dismissed as Rand believed them to be. But we don’t need to go to the lifeboat to find cases that give us reason to doubt Rand’s claim. Even in ordinary life, there would seem to be plenty of situations in which B can advance his real, rationally defensible interests by violating A’s rights: stealing her lost wallet, lying on a resume he submits to her business, or littering on her property.
Objectivists must, for each and every one of these cases, deny either that (1) the action is actually a rights violation, or (2) that B’s interests would actually be advanced by the violation. In certain cases, this might work — B might not correctly anticipate the guilt he will feel after stealing, or his chances of being punished. But whether the expected costs of a rights-violation outweigh the expected benefits is an empirical question. And as far as I can tell, neither Rand nor her followers have given us sufficient reason to believe that the answer to that question is always going to be that they do.
Zwolinski and Rand on Egoism and Rights — Stephen Hicks
Two points are most important here, one about content and one about method.
At first sight, rights do seem egoistic: I have a right to my life, my liberty, my property, and as a matter of robust, jealously-guarded principle I want those rights to be respected by others.
Rand in particular argues that our rights are based in our needs and capacities as human beings. Human life is a process of thinking, producing, and consuming, and to survive and flourish each individual must take responsibility for the process. The creation and consumption of human value requires freedom of thought and freedom of action — individuals need to think and discover what is good for them, they need to act on their knowledge to produce those good things, and they need to consume the goods they produce.
In a social context, other people can be beneficial to the process: we can learn from each other, act jointly to be more productive, and trade to mutual advantage as consumers.
But other people can also be threats to the process: censorship, kidnapping, enslavement, theft, and so on undercut the affected individual’s ability to think, act, and consume. Those actions are therefore social wrongs, on principle, so their opposites are social rights.
That is what Rand means in the lines in which “right” is repeated, which Professor Zwolinski sees as problematic (paragraph 2): rights are a type of moral principle; they are part of a family of concepts that link individual right to social right to political right. The connection is maintained by identifying what is moral in each increasingly narrow context.
But, as Zwolinski questions (paragraph 6), why does it follow egoistically that I should respect others’ rights? I want my rights to be respected by others, yes — but why should I want others’ rights to be respected by me? Where does the principled commitment to universal and symmetrical application come from?
Rand argues that as human beings we are not able to survive by instinct or by range-of-the-moment action. We are rational beings, and we survive and flourish by making principled, categorical identifications and acting on them. I need to be self-responsible. I need to be productive. I need to plan long-range. And I need to do all of that in a world in which much of my living is social. So what principles should I adopt in my dealings with others?
So the relevant questions about respecting others’ rights are these:
- Can I recognize that others are humans?
- Can I recognize that they have the same general needs?
- Can I understand that, as a general rule, their respecting certain principles in their dealings with me is good for me?
- Can I understand that, as a general rule, my respecting certain principles in my dealings with them is good for them?
- Can I understand that both or all of us will be better off if certain principles are respected?
- Can I grasp that the same facts that make those principles right for me also make them right for others?
Rand’s answer to all of those questions is Yes. Moral self-education, then, hopefully guided and encouraged by good parenting and other socialization, is a matter of thinking through those questions and testing various answers to them in one’s dealings with family members, neighborhood kids, schoolmates, and others as one grows—until one is in a position to conceptualize and commit to principles as a mature individual.
Rational egoism is thus Rand’s grounding of political rights.
(This is not yet to presuppose answers to questions about emergency situations, whether to be a selective predator, how to deal with non-respecters of rights, determining degrees of violations of rights, or the status of those not capable of grasping principles. Rand’s theory of rights is about contextual principles applied with practical wisdom; it’s not one of contextless absolutes to be mechanically followed. So more needs to be said.)
The emphasis on rational above indicates that for Rand epistemological matters are central to normative issues, for Rand is in a minority of thinkers who so emphasize the importance of fundamental philosophy. This brings us to a second important point.
Permissible to Whom?
In characterizing Rand’s position, Zwolinski asks at one point (paragraph 3) whether the claim of rights is to be interpreted as permissible or obligatory.[i] That distinction should give us pause, for what kind of morality frames things in terms of permissions and obligations?
If we are to speak of the permissible, then we should ask from whom we are seeking permission; and if we are to speak of the obligatory, then we should ask to whom or what we are so obligated. Yet if we know anything about Rand’s ethics, then we should sense that such a taxonomy is alien to it.
The point is that when interpreting a thinker’s position, it is weak methodology to state a thinker’s claim, interpret it by a distinction taken from some other philosophical framework, note that the resulting mix makes no sense, and then criticize the original claim.
Other moralities’ distinctions may be useful in criticizing a thinker’s position after one has figured out what it is. But when initially trying to interpret a position, we should beware of importing highly abstract distinctions from foreign moral theories.
Property and Value — Matt Zwolinski
Ayn Rand was a firm believer in property rights, holding them to be essentially a corollary of the right to life. After all, if the right to life is a right to act in order to preserve one’s life, then this right would be ineffectual if man did not also have the right to the product of his action — to that which he has produced.
The problem is that everything we produce is, ultimately, made out of raw materials that were not themselves produced by anybody. So even if it’s easy to justify why I should be morally entitled to the cake I’ve baked out of the flour and butter I owned, it’s not so easy to justify why I should be morally entitled to the patch of land I simply found and quickly put a fence around. In political philosophy, this is known as the problem of “original appropriation.”
The problem of original appropriation strikes many philosophers as serious because of the seemingly zero-sum nature of natural resources. There’s only so much land to go around. Therefore, whatever land you take and claim as your own leaves less land for me. Your interests might be served by your act of appropriation, but mine seem to be set back. Original appropriation, it has seemed to many philosophers, involves a real conflict of interests between the appropriators and everyone else.
Now, I think there are ways out of this problem — the most promising of which is developed in a wonderful essay by David Schmidtz. But Rand herself never grapples with the problem directly.
I suspect the reason why is that she didn’t see it as a genuinely serious problem. Rand did not believe that land and other natural resources were the true source of value. And thus, one person’s appropriation of some of that stuff did not really set back the interests of others in any serious way.
Mind and Value
For Rand, man’s mind is the fundamental source of values that sustain his life.
Physical stuff by itself can be no aid in man’s survival unless it is first understood by the mind and then put to work through deliberate, rational, productive action. Before man figured out what to do with it, crude oil was a pollutant, not a value. It was the human mind that transformed oil from an annoyance into a resource.
I think that there is a tremendously important insight in this analysis of value. But I also think it’s possible to stretch that insight too far. And I think that Rand, unfortunately, is guilty of doing precisely this.
After all, even if it’s true that nothing of value would exist without the human mind, it’s equally true that nothing (or at least almost nothing) of value would exist without physical resources for the mind to operate on. Both the human mind and physical resources are thus necessary for the production of value. Objective value is an aspect of reality in relation to man. So without the reality, or without the man, there is no value.
Thus, even if we accept Rand’s idea that natural resources have no intrinsic value in themselves, we must nevertheless recognize that they are a necessary component in the production of value. And so when we take those natural resources and put a fence around them, we are depriving others of something important. We are depriving non-owners of the liberty they once possessed to use that resource in their own productive activities. We are imposing upon them an obligation to refrain from using that resource without our consent — an obligation that we will enforce with the use of physical violence, if necessary. And this calls for justification.
I am an enthusiastic supporter of property rights. And thus I do believe that such justification can be provided. But — and here I return to my earlier point about rights and egoism — providing a justification to one person of another person’s property right in X would seem to require doing more than simply showing how such rights are good for the first person. Since A’s property right imposes an obligation on B, we need to show how such an obligation is good for B as well. If A’s property right in X is good for A but bad for B, then for B to respect that right would be an act of self-sacrifice, and fundamentally incompatible with his rational pursuit of his own self-interest.
Property Rights and Value: Zwolinski and Rand and Locke and Rousseau — Stephen Hicks
Professor Matt Zwolinski raises a fun and deep issue about property rights. It has a long history before Rand, with Locke and Rousseau staking out near-opposite positions, and with post-Rand thinkers such as Robert Nozick and David Schmidtz making strong contributions.
Why did Rand not engage with it? I agree with Zwolinski that from the perspective of her robust creation ethic, it is either trivial or a non-problem. So the question is whether it really is a problem and/or a more serious one than she judged.
Value results from raw materials plus human agency. How much comes from each? Raw materials can be more or less plentiful, and human agency can be more or less creative. So we can play around with the variables by considering examples.
- A writer uses 1,000 sheets of paper to write a great novel. In this case, the raw material is plentiful and the contribution of human creativity is huge, so we are not inclined to complain that her taking 1,000 sheets of paper leaves less available for the rest of us.
- A hiker discovers easily accessible platinum deposits in unowned territory, stakes them out, and becomes rich after relatively minimal effort. In this case, the raw material is relatively scarce and the contribution of human creativity is much less, so we are more likely to hear complaints that his appropriation is questionable.
So if one emphasizes the value-adding power of human creativity, as Rand and her great near-contemporary Julian Simon are noteworthy for doing, then one acquires an opportunity mindset. The issue of raw materials recedes in importance, as intelligent people can always create value out of what is available.
But if one is struck by the relative scarcity of certain raw materials, then, as Zwolinski points out, one is pushed into a zero-sum mindset, and that mindset tends to see others’ gains as one’s own deprivations and others’ rights as imposing unwanted obligations.
Perspectives on Property
Two points are worth making here, so let’s work with the most popular example—land—to get to the core assumptions, for as always in philosophy the basic assumptions are the most important.
Suppose I look at the Manhattan skyline, as Rand did from her apartment. Do I see opportunities for me, given what others have done with the land? Or do I see deprivation, as others got to Manhattan Island long before I did and acquired it all for themselves? If I scale out to the United States as a whole, I find that almost half of its land is owned by local, state, and federal governments and the rest by private individuals and organizations — all of it acquired long before I immigrated. Should I say that opportunities have been taken away from me and/or that obligations have been imposed on me?
The first important point about such examples is one made by Locke in the Second Treatise, where he states that “he who appropriates land to himself by his labour, does not lessen but increase the common stock of mankind.” (I see Schmidtz as working out in more welcome detail what was only sketched by Locke.)
If, for example, I had arrived in 1600 in what is now New York, then some opportunities would have been available to me then that are not available now. True. But some opportunities are available now that were not available then. At which time was the net value of the opportunities greater? If the net opportunities are greater now, then the language of deprivation and imposition is misplaced. (And if my goal is to acquire land in New York, then that opportunity is still available to me, as New York has a lively real-estate market.) So property rights are win-win, contrary to the zero-sum thinkers.
But here is what I take to be the second and deeper point. We can speak of the mutually-beneficial nature of property rights, and that is a value of them to each of us. But that value of property rights should not be taken as part of the justification for initial appropriation, because raw materials in their unowned state are not items to which anyone has a claim.
Here we can take Rousseau as the foil, with his famous line against appropriators that initially “the fruits of the earth belong to us all.” His assertion is that, prior to property rights, we all have a claim in common to everything that exists, so anyone who appropriates incurs an obligation to make good on his or her lessening the common stock held by the rest of us.
But initially the raw materials of the universe are unowned, not owned in common, which means that nobody has any sort of claim to them with respect to anyone else. It’s the difference between saying:
- The raw materials are unowned, so everybody has a claim to them.
- The initial raw materials are unowned, so nobody has a claim to them.
To put the point in metaphysical terms, when one comes into existence, one has no claims on anything in the world. A just-born child has no entitlements with respect to the world at large, including both the as-yet unowned raw materials and the properties of others.
The child’s parents have obligations to provide for it on its growth journey to adulthood, but the governing assumption is that everything has to be earned. That includes that first breath of air the child appropriates from the commons by his or her own effort—for which the child need present no justification. At the same time, the preexisting property arrangements are not an imposition upon the just-born child that must be justified to the child.
Force and Freedom — Matt Zwolinski
Ayn Rand endorses a form of the libertarian “nonaggression principle,” which holds that the use of force should properly be banished from human relationships. For Rand, force is evil because it prevents individuals from acting according to the dictates of their own reason.
Thus, force violates man’s fundamental right to life — his right to act in pursuit of his values according to his own judgments, uncompelled by the judgment of any other. As Rand puts it, “To violate man’s rights means to compel him to act against his own judgment, or to expropriate his values. Basically, there is only one way to do it: the use of physical force.”
For Rand, then, “the basic political principle of the Objectivist ethics is: no man may initiate the use of physical force against others.” But how exactly are we to understand the meaning of the key term “force” in this principle?
Traditionally, libertarians and Objectivists have taken one of two broad approaches to defining “force.” One approach, which we can call the “moralized approach,” defines force in terms of an underlying theory of rights. The other approach, the “nonmoralized approach,” defines force in a way that makes no essential reference to rights or other moral terms.
To see the difference, imagine a case in which A violates B’s rights, but does so without so much as physically touching B. Perhaps B leaves his car unlocked on the street, and A lets himself in and drives away with it. Has A initiated force against B? If we accept the nonmoralized definition of force, we will have to say “no.” After all, A didn’t touch B at all. The only way we can explain the way in which A’s action affects B is in terms of the property right B has in his car. But if this is our basis for claiming that A has initiated force against B, then we are implicitly relying on a moralized definition of force. A’s action initiates force against B because it violates B’s (moral) rights.
It matters a great deal which of these understandings Objectivists rely on to inform the nonaggression principle. But neither understanding is entirely without its own peculiar difficulties. If, for instance, we accept a nonmoralized definition of force, then we abandon the tight, conceptual connection between force and the violation of rights, and must accept the possibility that some violations of rights will not involve the initiation of force, and the possibility that some cases of the initiation of force will not involve rights-violations.
And this means that we must take seriously the socialist argument that property rights themselves involve the initiation of force. After all, if I put a fence around a piece of land and threaten to arrest anybody who walks across it without my consent, it certainly looks like I’m initiating force when I grab a peaceful trespasser and slap a pair of handcuffs on him. The only way to deny that my action constitutes the initiation of force, it seems, is to argue that it was really the trespasser who initiated force. But that move is available only if we abandon the nonmoralized conception of force, and adopt a moralized understanding instead.
Suppose we do that. Adopting a moralized definition of force allows us to explain why the individual who steals someone’s car is initiating force, and why the landowner who enforces his property right isn’t. So far, so good. But the moralized approach to force comes with a serious drawback of its own.
For if we define the initiation of force in terms of the violation of rights, then we cannot define the violation of rights in terms of the initiation of force, lest we be guilty of circular argument. In other words, if we say that force is just any activity that violates individual rights, we cannot turn around and then say that our rights are to be understood in terms of freedom from the initiation of force.
Both ways of understanding force, then, appear to generate problems for Rand’s use of the nonaggression principle. And Rand’s frequent claim that force severs the connection between man’s mind and his actions seems to lead to further difficulties: Is the claim that force eliminates our ability to act on the dictates of our reason or merely that it limits it? The former claim is quite implausible, but the latter forces us to notice that a great number of other things also limit this ability, such as, well, other people’s property rights.
As I have argued at greater length elsewhere, the non-aggression principle is a poor basis on which to build a libertarian philosophy. But for the reasons described above, Rand’s invocation of it appears to be especially problematic.
Force, Rights, and Zwolinski’s Questions for Rand — Stephen Hicks
Let’s start with four scenarios involving a man running on a field who is suddenly tackled to the ground by another man.
- The tackler, it turns out, was a policeman, and the tackled man was escaping from a house he had burgled.
- The tackler, it turns out, was a defensive football player, and the tackled man was an offensive football player carrying the ball.
- The tackler and tackled were playing football, but the tackled man was outside the field’s white borderline when he was hit by the tackler.
- The tackled man was a jogger, and the tackler was a weirdo who liked randomly assaulting people.
In case 1, the tackled man goes to jail. In case 2, the tackler and the tackled man try again. In case 3, the tackler’s team is penalized. In case 4, the tackler goes to jail.
Professor Zwolinski’s questions about force and rights again raise issues of content and method. Let’s focus on the method issues, as they are more relevant to his apparent puzzles. Zwolinski is in at least broad agreement with Rand that individual rights exist but has questions about how she derives them that seem to me driven by a methodological tangle.
In the four scenarios above, the physical actions are identical — one man tackles another to the ground — yet they have very different consequences. Understanding why those consequences are normatively appropriate requires attending to the broader complex context within which those actions and consequences occurred.
That in turn means that the proper place to start is not by specifying contextless definitions of force (e.g., as moralized or non-moralized) and then trying to deduce correct answers about particular circumstances. The method is not to present an abstract dichotomy of definitions, ask for a commitment to either, and then find a problematic case for whichever one is chosen.
Zwolinski is certainly correct that non-moralized definitions won’t work, and his objection here seems a variation on the classic Is-Ought problem: if we define force only non-morally, then we will face a gap when we want to define rights as moral principles. And at the same time we of course should heed Zwolinski’s warning about using moralized concepts in circular ways.
But the key content point is that all human action is “moralized.” We are always in a context of judging good and bad, right and wrong, better or worse. Consequently, by the time we get to high philosophy and are identifying principles such as rights, we are deeply embedded in moralized contexts.
(In his closing paragraph, Zwolinski was perhaps speaking loosely in saying that the NAP is a poor principle upon which to base a libertarian philosophy. But certainly Rand’s invocation of something like an NAP is not basic to her philosophy. It’s not even basic to her ethics or to her social philosophy. Rather it is a derivative, specifying a bridge principle between ethics and social philosophy and politics.)
Actions necessary for human life
Yet as Zwolinski also properly states, Rand begins by specifying the individual actions that are necessary for human life (thinking, production, etc.). She identifies ways in which others’ actions can be beneficial to our lives (teaching, friendship, economic trade, etc.). Then she identifies the types of actions by others that interfere with those necessary actions — and within that very broad category she identifies the subset of interferences that are major enough to justify physical retaliation (theft, rape, kidnapping, assault, etc.).
The process is empirical, and at each stage of identification an argument from cases is necessary to establish the principle involved. We see this argument among philosophers, for example, over how to define that final category of cases in which the retaliatory principle kicks in — where exactly is the demarcation?
John Stuart Mill offers the broader Harm Principle (On Liberty, I.9) while Rand specifies the narrower initiation-of-physical-force principle. Mill eschews the rights label while Rand embraces it. But the method for both is inductive: investigate a large number of particular cases and abstract the relevant similarities and differences. Or to put it in modern epistemological terms, their approach is empirical, bottom-up abstraction rather than rationalist deduction downward from abstract definitions through branching decision trees.
But even here “initiation of force” is, all by itself, not a definitive guide, as many initiations of force are legitimate. Parents initiate force regularly with their infants — every time the kid’s diaper needs changing, he or she is man-handled (or woman-handled) without consent.
Boxers are encouraged to initiate massive physical force upon each other until the bell rings. If you see your girlfriend about to step in the path of an onrushing bus, you will grab her and haul her back.
So we always need to identify what legitimate values are being pursued or possessed and by what means. Then we can exercise judgment whether the initiation of physical force in a particular case is an inappropriate interference with that legitimate pursuit or possession.
[i] This is what analytic philosophers refer to as the “deontic status” of an action.
The new idea on the Left today isn’t so much Marxism as Welfarism. That’s how Prof. Brandon Turner explains today’s politics. Full video on Facebook
Let’s take back the meaning of “pro-choice”
What comes to mind when you hear the term “pro-choice”? If it’s the number of deodorant options at your local convenience store, you must not follow the news very closely. “Choice” has become a euphemism for abortion rights specifically (just as “life” has come to stand for prenatal rights), but the concept of choice is actually a fundamental precept of living in a free society.
So I want to take back the term “pro-choice.” Truly supporting choice means supporting the right of individuals to choose what they want to do with their bodies and property and with whom they want to do it as long as that choice does not clearly affect the property rights of another. Here are some questions to consider:
Education
Let’s concede that government will fund education (though a liberty-minded person would argue that it is inappropriate to force citizens to pay taxes to support schools). Why should parents have to send their child to the school in their neighborhood? If you support choice, should you deny parents a choice of educational institutions?
Food and Drugs
Some city governments have recently imposed a tax on sodas because of the unhealthy nature of the beverages. In New York City, there was even a push by Democratic politician Felix Ortiz to ban salt. Thus one’s choice to consume sodas or junk food is hindered by the higher price the tax imposes.
When it comes to drugs, some conservatives want extreme government control over what individuals can put into their bodies. They sometimes argue that drug use causes negative externalities which justify government intervention, but these effects on third parties can be remedied by strict government penalties when one violates the rights of others or injures others under the influence of the drug.
Liberals, who are sympathetic to legalizing marijuana, are inconsistent when it comes to other drugs. But shouldn’t being pro-choice mean that we support an individual’s right to choose what to put in their bodies regardless of the potential internal harms they may cause? This is not to say that an individual who supports legalization is for the use of drugs. In fact, one can hate these drugs, but still hold the position that the government should not use money, human resources, and jail space fighting these drugs.
Helmet Laws
To whom does one’s head belong? There is no doubt that it belongs to the individual and nobody else. Yet, the government requires motorcycle riders to wear a helmet. If one supports choice, then one should support motorcycle riders who don’t want to wear a helmet — it’s their body and they should be able to do what they want to do with it, including not protecting their head with a helmet. Of course, a person who chooses not to wear the helmet in order to enjoy his motorcycle ride more must also be willing to deal with the consequences of that choice.
Discrimination (personal and business)
Most, if not all, people support an individual’s right to discriminate based on whatever criteria he or she chooses when it comes to whom he or she will “hook up with,” date, or marry. In other words, the vast majority is pro-choice when it comes to using race, ethnic background, attractiveness, religion, sexual orientation, and age as filtering devices.
Moreover, it is legal for one to say, “I will only marry someone of this religion” or “I am only attracted to this particular look,” or “I will only marry someone who is younger than I am.” The vast majority of people, regardless of political philosophy, are pro-choice when it comes to our personal lives.
However, why does this choice or freedom of association not exist for the individual who owns a business? What if a business owner wants to hire only employees who look like a particular Hollywood actor or actress or a famous model? The vast majority, conservative or liberal, would not support this freedom of choice. Why is it legal and considered morally acceptable for a person to decide who enters his or her home based on whatever criteria, but not for a business owner who wants to discriminate?
Organs and Bodies for Sale
Many, if not most, people support one’s right to give away (donate) his or her kidney or part of his or her liver. In fact, they would probably hold the donor in high esteem. So why not support the choice of an individual to sell his or her kidney to a willing buyer who is in critical condition (which is currently illegal)? If it’s my organ, should I not have the right to sell it to a willing buyer? Incidentally, I would guess that most individuals believe it should be legal to sell one’s eggs or sperm (which is legal).
Do we have a choice to sell the use of our body for money? Yes, it’s called a job. An employee is the willing seller of his or her labor, and the last time I checked, employees are bodies, not spirits. In fact, I bet most people also believe an adult should have the right to make an income by starring in pornographic films.
They might even believe, though not necessarily, that there is nothing immoral about pornography itself. So why are so many people anti-choice when it comes to using one’s body to make money from sex? It’s interesting that porn stars earn an income by having sex, but prostitution is illegal. It is inconsistent to support choice for willing adults when it comes to careers and ways of earning income, including sex (e.g., strippers, porn stars), except when one wants to be a gigolo or prostitute.
To be clear, I am not supporting the morality of these ways of earning an income. I am pointing out the logical inconsistency.
The foundation of what it means to be “pro-choice” is that a person’s body belongs to himself or herself and that the government should not interfere with what one chooses to do with his or her own body — it’s a property rights argument. So let’s take back the term “pro-choice” and apply it consistently across all levels of human action and interaction.
Did Trump’s attack on Syria violate the Constitution?
Donald Trump’s decision to bomb an airfield in Syria, along with his hints about overthrowing Bashar al-Assad, have led many to wonder: doesn’t Trump need authorization from Congress before acting? The answer, like many in politics, is yes and no.
A quick scan of the Constitution will tell you that Congress has the power to declare war as well as the power to grant “Letters of Marque and Reprisal.” For those not up on 18th-century lingo, those Letters provided the bearer with legal authority to capture enemy ships and property, legalizing actions that would otherwise be considered piracy.
The president, in turn, has the power to act unilaterally in emergencies. We know this from Madison’s famous decision to change Congress’s power from “make” to “declare” war so presidents would be able to “repel sudden attacks.”
Once Congress has provided presidents with authorization, they have the power as commander-in-chief to conduct military operations. Congress can maintain discipline through the “power of the purse” — containing presidential adventurism by closing up the purse strings.
Presidents at War
That’s the theory. But that’s not how it has worked in practice. In fact, Congress has issued only 5 declarations of war over the entire course of US history. And yet presidents have deployed the military more than 300 times.
Thomas Jefferson sent the American navy to the Mediterranean to fight the Barbary Pirates without prior congressional approval. James Polk marched the army up to the border with Mexico, all but daring the Mexicans on the other side to shoot. William McKinley and Theodore Roosevelt treated the American military like a “civilizing force,” sending it to the Caribbean and the Pacific to display America’s increasing might and open new markets for American products.
Executive power only expanded from that point on. WWI and WWII put enormous power in the hands of the executive to act unilaterally. By the time Truman became president, his predecessors had carved out a large enough space for unilateralism that he started and conducted the Korean War without ever receiving congressional approval.
After the “Imperial Presidencies” of Lyndon Johnson and Richard Nixon, Congress passed the War Powers Resolution, meant to hold executives more accountable. But, while presidents will acknowledge that resolution’s existence, they never acknowledge its constitutionality.
By the time George W. Bush became president, he had decades of precedent providing him with good reason to assume he could initiate military operations without congressional approval. And yet, after 9/11 Bush sought a congressional Authorization for the Use of Military Force (AUMF) to pursue terrorists and those who provide them sanctuary. He sought another AUMF in 2002 for military operations against Iraq.
This leads to a puzzle: Presidents in the 21st century know they can initiate operations unilaterally. Members of Congress know they can sit on the sidelines. For the most part, presidents get the glory if it goes well, and Congress wags its fingers if it goes poorly. Presidents know that Congress is likely to abdicate responsibility, so when they want to take military action they see little reason to spend the political capital needed to court the legislature.
In fact, for at least 100 years, there have been only two sorts of circumstances in which presidents have sought, and Congress has provided, congressional approval for military action:
- Direct Attacks on American People or Territory
This occurred for all of the declared wars as well as for the 2001 AUMF that followed 9/11.
- Vital US Interests at Stake
George H.W. Bush made this claim to Congress to receive authorization for the first war in Iraq. George W. Bush claimed Saddam Hussein still had WMDs in 2002, leading Congress to provide him with the authority to initiate hostilities if he deemed it necessary. Since WWII, Congress has only issued an AUMF 8 times.
Even when there is a pressing humanitarian crisis, like those in Kosovo, Sudan, or Syria, Congress has shown extreme reluctance to stand behind the president and provide authorization to use military force. It is a safe assumption that if there isn’t a clear US interest, clear public support, a clear exit strategy, or a fearful population, Congress tends to allow presidents the power to sink or swim on their own.
This is problematic for any constitutional system. It also facilitates reactive and poorly thought-out policy. Looking back on policy decisions over the last hundred years, we can see a cyclical pattern in which presidential adventurism leads to a period of relative isolation, during which time enemies (broadly understood) regroup and attack, leading to another round of presidential adventurism.
Congressional buy-in isn’t a magic bullet: Congress has contributed to plenty of mistakes. It does, however, force a president to deliberate about decisions and provide more clarity about the military objectives. This process is vital for producing better policy outcomes, fewer errors and more responsibility for the vagaries of war.
Without an attack on the United States or a clear US interest, however, Congress has proven unwilling to perform its constitutional duty. What happens if Congress doesn’t challenge the president? Without that check on executive power, presidents have long understood they have the authority — thanks to their constitutional powers and deference in the military to civilian control — to initiate nearly any operation, of any size, anywhere in the world.
The problem comes out most clearly in a quote from Congressman Jack Kingston from 2014. He accidentally told the truth when discussing the congressional decision to avoid giving Obama an AUMF: “A lot of people would like to stay on the sideline and say, ‘Just bomb the place and tell us about it later.’ … We like the path we’re on now. We can denounce it if it goes bad, and praise it if it goes well, and ask what took him so long.”
Congressional deference doesn’t impede presidential adventurism. It facilitates it.
Decisions about how to spend American treasure and spill American blood come from one end of Pennsylvania Avenue while the other looks on. For better or worse, it is likely that America’s military decisions in Syria will fall entirely on Trump’s shoulders.
Is Judicial Review Undemocratic?
America just got a civics lesson from a U.S. Senator on the role of the Supreme Court. In his opening statement during the nomination hearing of Neil Gorsuch, Senator Ben Sasse explained the proper (albeit uncommonly realized) role of a Supreme Court justice. According to Sasse, the Supreme Court, when it appropriately exercises the power of judicial review, defends the long-term will of the people. Sasse is right, and those who wish to defend limited government and the will of the people in the United States should be passionate about both defending judicial review and limiting it to its proper constitutional bounds.
Defending Judicial Review
Why isn’t judicial review undemocratic? Why is it all right for the elected representatives of the American people (i.e., Congress) to pass a law only to have it “struck down” by a panel of unelected, dour Ivy Leaguers in black robes (i.e., the Supreme Court)?
Before we get to the answer, a brief refresher in American civics: American constitutionalism as understood by the framers of the U.S. Constitution locates the “will of the people” not in any single law passed by Congress but in the fundamental law that is the U.S. Constitution. It is the Constitution that embodies the long-term will of the people.
The Constitution established an essentially popular government, but the perennial problem with popular government is the tendency of majorities to oppress minorities, particularly during temporary periods of political passion. The framers therefore institutionalized checks against the temporary ambition of the majority, such as the bicameral legislature (Article I, Sec. 2-3) and the executive veto (Article I, Sec. 7). At the same time, they respected the popular foundation of American political authority: the Constitution was originally ratified by the people of the several states (Article VII), and the fundamental law can be revised through amendments whenever a supermajority agrees (Article V).
So, the Constitution, taken as a whole, represents the will of the people bound by certain constraints to prevent tyranny of the majority. Any action of a congressman, president, or Supreme Court justice at odds with the U.S. Constitution therefore is at odds with the will of the people. We have a word for that: unconstitutional.
What shall we say then of judicial review? If Congress passes an unconstitutional law, that law cannot in any true sense represent the will of the people, especially if it were to represent only some temporary spasm of political desire on the part of a majority of the country. This, in any case, was Alexander Hamilton’s argument in Federalist 78. According to Hamilton, when Congress passes a law that it had no authority to pass, it effectively “enable[s] the representatives of the people to substitute their will to that of their constituents.” When this happens, the Supreme Court may lawfully act as “an intermediate body between the people and the legislature, in order, among other things, to keep the latter within the limits assigned to their authority.” In this way, Hamilton explains the essentially-democratic nature of the practice of judicial review: “If there should happen to be an irreconcilable variance” between the Constitution and a law of Congress, “that which has the superior obligation and validity ought of course to be preferred; or in other words, the constitution ought to be preferred to the statute, the intention of the people to the intention of their agents.”
Rather than being undemocratic, judicial review, rightly understood and rightly exercised, defends the long-term will of the people. As Sasse explained during the Gorsuch hearing, “When Congress passes an unconstitutional law, it is in fact the Congress that is violating the long-term will of the people, for the judiciary is there to assert the will of the people as embodied in our shared Constitution over and against that unconstitutional but perhaps temporarily popular law.”
The Limits of Judicial Review
While judicial review rightly understood constitutes an essential feature of the American political system, unrestrained judicial review constitutes a dangerous deviation from democratic principles. Hamilton explains in Federalist 78 that judicial review does not “suppose a superiority of the judicial to the legislative power. It only supposes that the power of the people is superior to both.” When the U.S. Supreme Court strikes down laws of Congress that are not in “irreconcilable variance” with the U.S. Constitution, the Court effectively substitutes its own will, rather than the long-term will of the people as embodied in the Constitution, as the final measure according to which all laws are judged.
To be clear, this renders the American polity an oligarchy instead of a democratic republic, and it is no better than Congress passing laws that it has no authority to pass. Both constitute an attempt by our governors to substitute their own will for the long-term will of the people as embodied in the Constitution. In fact, Madison and Hamilton were clear in the Federalist Papers that although all three departments of government play a role in the interpretation of the U.S. Constitution (interpretations which receive institutional force in powers such as the legislative power of Congress and the executive power of the president), the “people themselves…can alone declare its true meaning and enforce its observance” through such things as elections and, of course, through amendments to the U.S. Constitution. The point is that the Supreme Court does not, any more than the president or Congress, provide a final interpretation of the Constitution for which there can never be an appeal. If that were the case, “the people will have ceased to be their own rulers, having to that extent practically resigned their Government into the hands of that eminent tribunal.”
So, is judicial review undemocratic? No! Rightly understood, judicial review is an essential bulwark of American liberty. But wrongly understood, judicial review is an abuse of court power, an abuse made more dangerous by many Americans’ lack of awareness of the importance of the American people – not the legislature, the court, or the president’s legal counsel – being the final judge of the meaning of the Constitution, which is itself the will of the people.
 These were Lincoln’s words in his First Inaugural Address when he was responding to the Supreme Court’s decision in Dred Scott v. Sandford (1857) in which the court held the Missouri Compromise of 1820 to be unconstitutional because it violated an alleged constitutional right of people to own other human beings as property that was protected under the 5th Amendment.
The United States Supreme Court has, at various points, asserted, either implicitly or explicitly, that its constitutional interpretation, rather than the Constitution itself, is the supreme law of the land. One example of an explicit assertion to this effect occurred in Cooper v. Aaron (1958); for a discussion of this as a problem, see Edwin Meese, III, “The Law of the Constitution,” Tulane Law Review, Vol. 61: 979-990.
How to stop politicians from gerrymandering
Elected officials are regularly tempted to exercise their power in ways that benefit themselves and their friends at the public expense. A good example is gerrymandering, the practice of drawing district lines to help ensure a desired result in future elections. Both parties do it, and the practice dates far back in history.
Gerrymandering often results in strangely shaped political districts in which it is very difficult for voters to unseat incumbent politicians.
In a classic single-party gerrymander, the party in power packs opposition voters densely into as few districts as possible, enabling its own voters to hold a comfortable margin in as many districts as possible. When a legislature is under split party control, the theme is often bipartisan connivance: you protect your incumbents and we’ll protect ours. As is so common in our system, third-party and independent voters have no one looking out for their interests.
Gerrymandering can serve other strategic political purposes as well. Weak incumbents can be spared scrutiny of their performance by assigning them tracts that fall short of being coherent political communities, perhaps combining slivers of multiple metropolitan areas with little in common. It’s expensive and time-consuming for a challenger to campaign or advertise against an incumbent in such a district. Party bosses can also punish their own party’s lawmakers for being too independent-minded by drawing them unfavorable districts.
The process feeds apathy. Residents who have not even figured out which district they are in are less likely to keep track of how well their representative is serving their interests.
The Constitutional Background
Our Constitution puts states in charge of apportioning their own legislatures, while dividing the corresponding power over congressional districts between the states and Congress. The Supreme Court’s one-person-one-vote rulings require that districts within a state have equal, or nearly equal, populations. The Voting Rights Act of 1965, following the Equal Protection Clause, bans districting done for a racially discriminatory purpose, adding a sometimes-complex overlay of requirements.
Although the Supreme Court has been urged to ban politically motivated gerrymandering, it has thus far declined to do so. Its rationale: it could identify no principled and objective standard to apply that would not draw it into a multitude of complicated local disputes.
Fortunately, ideas for reforming gerrymandering are many. They fall into two main categories:
- Rules on who is responsible for drawing district lines
- Rules directing the shape or extent of districts
Who should draw the lines?
One of the ideas that recurs most frequently is to make the process bipartisan, or at least to avoid empaneling a majority of loyalists from a single party. The second-largest party thus winds up in a negotiating position, perhaps with one or more neutrals or tie-breaking votes in between.
A newer trend, which has caught on especially in Western states in recent years, is to entrust redistricting to a more fully independent commission of citizens not holding office. Elected officials themselves, their families, and political pros are frequently excluded.
In a category of its own is the system used in Iowa (as well as many countries outside the U.S.). It assigns redistricting to the same nonpartisan civil service staff that provides legislative services such as bill analysis at the capitol. Although Iowa’s system is often praised for its fair results, it may owe some of that success to features of the local political scene not replicated everywhere. For example, Iowa has a fairly even party balance and a legislative staff whose nonpartisan bona fides are accepted by lawmakers of both parties.
Under any of these systems, the law can go further by prescribing the powerful step of “blinding” the line-drawers to politics – that is, directing them not to consider such factors as current party registration, past voting records, or the residence of any individual, such as an incumbent.
What should districts look like?
The most essential task in reform is to provide clear and objective rules governing how districts are drawn. The three most widely accepted standards are as follows:
- Contiguity: All parts of a district should touch. Although this seems obvious, careful language helps prevent such tricks as circuitous connections over water.
- Compactness: Intuition tells us the difference between a district shaped like a turtle and one shaped like a tapeworm. But trusting to intuition is not necessary: at least two mathematical measures of compactness are widely employed. Colorado’s constitution prescribes the “total-perimeter test”: “Each district shall be as compact in area as possible and the aggregate linear distance of all district boundaries shall be as short as possible.” Other states use a “radius” or “length/width” test.
- Congruence: Where possible, districts should respect the boundaries of smaller political subdivisions, such as counties and towns. One convenient measure of congruence is the number of county or town splits in a plan, with lower numbers ordinarily better.
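A mathematical compactness measure of the kind described above can be made concrete with a short sketch. The example below is an illustration, not any state’s statutory formula: it computes the widely used Polsby-Popper score, 4πA/P², for a district given its boundary vertices. A circle scores 1; the long, thin “tapeworm” shapes characteristic of gerrymanders score near 0.

```python
import math

def area_perimeter(vertices):
    """Shoelace area and boundary length of a simple polygon,
    given as a list of (x, y) vertices in order around the boundary."""
    area = 0.0
    perim = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]  # wrap around to close the polygon
        area += x1 * y2 - x2 * y1
        perim += math.hypot(x2 - x1, y2 - y1)
    return abs(area) / 2.0, perim

def polsby_popper(vertices):
    """Compactness score in (0, 1]: 4 * pi * Area / Perimeter^2."""
    area, perim = area_perimeter(vertices)
    return 4 * math.pi * area / perim ** 2

# A squarish "turtle" district vs. a long thin "tapeworm":
square = [(0, 0), (10, 0), (10, 10), (0, 10)]
sliver = [(0, 0), (100, 0), (100, 1), (0, 1)]
print(polsby_popper(square))  # ~0.785 (pi/4)
print(polsby_popper(sliver))  # ~0.031
```

The same perimeter figure supports Colorado-style “total-perimeter” comparisons between rival plans: sum the boundary lengths of every district in each plan and prefer the smaller aggregate.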
Other criteria are sometimes prescribed, but if too many are introduced, and if the commission is given latitude to balance among them, then a dangerous degree of discretion is reintroduced into the process.
The Role of Technology
Technologically, gerrymandering is a bit of an arms race. Politicians with access to so-called big data can now efficiently sort voters down to precincts, city blocks, and even buildings. That is why the problem will get worse absent correction. Yet quantitative methods hold out hope for the reform side as well, and not only by providing objective, replicable measures of goals like compactness.
Geographic information systems (GIS) methods now allow members of the public using inexpensive software to analyze the full data set behind a map. In several states, that has meant members of the public could offer maps of their own or make well-informed critiques of legislators’ proposed maps. In one triumph for citizen data use, the Pennsylvania Supreme Court invalidated a map drawn by lawmakers as clearly inferior to a map that had been submitted independently by an Allentown piano teacher.
Redistricting reform makes sense as a safeguard against the entrenchment and insulation of a permanent political class. Voters should choose legislators, not the other way around.
 (Article I, Section 4 of the Constitution: “The Times, Places and Manner of holding Elections for Senators and Representatives shall be prescribed in each State by the Legislature thereof; but the Congress may at any time by Law make or alter such Regulations.”)
The Progressive War: Woodrow Wilson and the US Entry into WWI
One hundred years ago today, on April 6, 1917, the United States entered World War I. It was a difficult decision on the part of President Woodrow Wilson, but one that he believed held the potential to change the entire future of human civilization and to turn it away from its bloody, destructive past.
Since 1914, the war had been brutal, with a level of destruction that shocked even jaded observers, and the United States remained on the sidelines, vowing repeatedly that it had no reason to take part in the conflagration.
Now, however, it was at last going to fight.
The US entry into World War I is often regarded as the end of what was called the Progressive Movement — the years since 1901 that had seen great reform-minded activism embraced by the national government. In this interpretation, America joining the war amounted to nothing less than the betrayal of all progressive impulses and an abject surrender to the type of uncivilized militarism many progressives bitterly opposed and for which they blamed the war in the first place. Wilson, campaigning for reelection in 1916 and desperately wanting progressive support, acquiesced in allowing “He kept us out of war” to be one of his campaign slogans.
But in fact, the American entry into the war was the apotheosis of progressivism — the high-water mark of its crusading zeal — not a betrayal of its central tenets. America joining the war was clothed in progressive rhetoric with the goal being nothing less than ending war forever as a blight on humanity.
President Wilson had repeatedly hoped the belligerents would accept mediation, particularly during 1916, the “Year of the Offensives,” in which Germany and Britain bled each other dry on the fields of the Somme and Verdun. But they did not.
As historian Arthur S. Link notes, the British and the French even made it clear that they would regard any attempt by Wilson to mediate as a hostile act. The President grew furious with such refusals and became convinced that no participant in the war cared anything whatever for real peace: all they cared about was winning, regardless of the cost.
Any peace that could possibly come from these barbarous participants would be short and meaningless, only setting the stage for future conflict. With all the self-righteousness he could muster, Wilson convinced himself that only he could bring peace to Europe.
Progressivism at Home
The progressive mentality in the United States approached social and political problems not as conditions to be managed but as things a modern, rational government could fix once and for all. Whether it was dismal, unsanitary conditions in the nation’s meatpacking plants, rapacious corporations that destroyed free competition, or the chaos of a decentralized financial system that allowed millionaires to dictate banking policy, such challenges for America demanded creative and authoritative measures.
No longer were local ameliorative efforts to be endorsed; it was the national government that would bring about definitive permanent solutions. And now, under Wilson’s leadership, it would take on the most destructive and persistent problem that mankind had ever faced.
“The world must be made safe for democracy,” he told Congress in April 1917, adding that the United States had “no selfish ends to serve. We desire no conquest, no dominion.” This would be a type of war the world had never seen. True, it was Englishman H.G. Wells and not President Wilson who initially described the war as “a war for peace,” one that “shall stop this sort of thing forever.” But it summed up the president’s understanding.
For Wilson, this was no betrayal of progressivism. This would be its culmination.
European recalcitrance regarding peace led Wilson to the odd insistence that America participate in the war not as an ally of the British and the French, but as an “associated power.” The distinction was largely lost on London and Paris, which cared little for such semantics provided that once they arrived, American soldiers would shoot at the Germans. But for Wilson, the difference was crucial: America was not fighting for the same discredited goals for which other nations were fighting. America was fighting to end war permanently. The centerpiece of his vision was the creation after the war of a worldwide organization that would ensure peace, rationally and fairly. The League of Nations would be the Federal Reserve System on an international scale.
As American participation in the war ultimately showed (and as more recent presidents like George W. Bush and Barack Obama have learned), when a crusading determination to remake the world seizes the government, policy failure, disappointment, and disillusionment are often the results. Woodrow Wilson’s approach to World War I ultimately stands as a continual reminder of the need for a realistic understanding of what politics can achieve.