Category Archive: Debt/Spending
Comments Off on Entitlement reform key to fixing America’s fiscal future
In his first address to Congress, President Trump lamented that “the past Administration has put on more new debt than nearly all other Presidents combined.” With federal debt approaching $20 trillion, he is right to be concerned about the rapid accumulation in recent years.
However, the president did not mention Medicare and Social Security, two of the largest and fastest-growing federal programs, and he has previously stated that he sees no reason to reduce spending on them. Treasury Secretary Mnuchin reiterated last week, “We are not touching [entitlements] now, so don’t expect to see that as part of this budget.”
Without substantive reform, it will be exceedingly difficult to address the country’s long-term fiscal problems, and it will only get harder if needed changes are delayed.
Medicare and Social Security already account for roughly two-fifths of all federal outlays, and they will account for a growing share of the federal budget over the coming decade. Medicare, Social Security, and net interest payments on the debt will account for roughly 55 percent of federal outlays by 2027, an increase over their already significant share of 45 percent last year.
Source: Congressional Budget Office, “10-Year Budget Projections, January 2017,” Tables 1-2 and 1-3.
Entitlement spending growth is a major reason that budget deficits are projected to surge over the next decade. Although forecasting ten years in advance is notoriously difficult, the deficit is estimated to exceed $1.4 trillion by 2027 and accelerate further after that, with trillions added to the debt as a result. By 2045, debt held by the public will almost double, to 145 percent of GDP, according to the Congressional Budget Office. It is practically inconceivable that politicians would not step in before this happened. However, if left unaddressed, debt at these levels would severely hamper economic growth, reduce living standards, and put increasing pressure on net interest payments and other areas of the federal budget.
Source: Congressional Budget Office, “Long Term Budget Projections, January 2017,” Supplemental Table 1. Annual Data Underlying Key Projections in CBO’s Extended Baseline.
Efforts to root out waste, fraud, and abuse, or to make government more efficient, are certainly worth pursuing, but proposals that eschew any kind of entitlement reform will leave the main long-term drivers of the debt untouched.
Similarly, reducing regulatory barriers, improving the tax code, and generally developing a policy framework that allows the economy to grow more rapidly are good ideas. To some extent, this could attenuate structural fiscal issues, but even higher rates of growth cannot make them go away. According to one recent estimate, productivity growth would need to be twice projected levels just to stabilize the debt at slightly lower levels as a percent of GDP. Doubling productivity growth rates would be an impressive accomplishment, but there is a limit to how much growth alone can do to get the country out of its debt problem.
This is why entitlement reform is key. The unsustainable trajectory of these programs means that some reforms will have to be implemented: the only questions are when, and what kind of changes will be made. The longer reforms are put off, the larger and more abrupt the inevitable changes will have to be.
For example, the Social Security Trustees estimate that an immediate and permanent benefit reduction of 16 percent for all beneficiaries would be enough to make the program solvent for the full 75-year projection. If nothing is done until the trust fund becomes insolvent in 2034, an immediate 21 percent reduction in benefits would be necessary.
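The arithmetic behind this kind of estimate is straightforward: once the trust fund is exhausted, benefits can only be paid out of incoming revenue, so the required cut is whatever fraction of scheduled benefits revenue cannot cover. Here is a minimal sketch; the 79-percent coverage figure is a hypothetical placeholder, not an actual Trustees' projection:

```python
# Illustrative solvency arithmetic. The revenue-coverage figure below
# is a hypothetical placeholder, not an actual Trustees' projection.

def required_benefit_cut(revenue, scheduled_benefits):
    """Fraction by which benefits must fall so outlays equal revenue."""
    return 1 - revenue / scheduled_benefits

# Suppose dedicated payroll-tax revenue covers 79 cents of every
# scheduled benefit dollar once the trust fund is exhausted:
cut = required_benefit_cut(revenue=0.79, scheduled_benefits=1.00)
print(f"Required immediate cut: {cut:.0%}")  # -> 21%
```

The earlier reform happens, the smaller this fraction is, because the adjustment is spread over more years and more cohorts.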
Phasing in a gradual increase in the retirement age indexed to longevity, or using the chained CPI for cost-of-living adjustments, are measures that could go some way toward making the program sustainable without sudden, significant benefit cuts or tax increases. Kicking the can down the road will only increase the magnitude of the eventual disruption, when changes will have to be concentrated in fewer years and the burden will fall on fewer people.
Part of the political difficulty stems from the public. People are wary of reforms that could affect their benefits, and they lack understanding regarding which programs are the drivers of the country’s debt. In a recent poll, 46 percent of respondents said they thought foreign aid, which accounts for roughly one percent of the federal budget, contributes “a great deal” to the national debt, a higher proportion than for any of the other programs polled. It is laudable to take a hard look at spending at all agencies and to excise inefficient or wasteful spending, but this alone will not be enough to improve the overall fiscal picture.
Without real reform, the important task of placing entitlement programs back on a sustainable trajectory will be left for later generations—at which point the country will be farther down this unsustainable path.
Charles Hughes is a policy analyst at the Manhattan Institute. Follow him on Twitter @CharlesHHughes.
Comments Off on The black hole of Pentagon finance
The Pentagon suppressed a 2015 study exposing $125 billion—yes, billion—in administrative waste over a five-year period in order to protect its budget from being slashed. The Washington Post revealed the suppressed report earlier this month.
The numbers in the report are staggering:
- 23% of the Pentagon’s $580 billion budget ($134 billion) is spent on overhead and core business operations like accounting, HR, and property management.
- The Pentagon employs over 1 million people in its back-office bureaucracy.
- The average administrative job at the Pentagon costs taxpayers more than $200,000.
But none of this should come as a surprise given how government bureaucracies operate. The political arrangement of the military-industrial complex is very different from the way competitive markets work, which has important consequences. In competitive markets, profit and loss provide continual feedback as to whether companies are using their resources effectively or not. The result is that resources tend to be used where they create the most value.
But for government (in this case the Department of Defense), profit and loss are determined not by the market, but by a political actor’s ability to navigate politics. Decisions about where resources will go are made by bureaucrats, not consumers and entrepreneurs. This means there is no way to ensure that resources in the defense industry are being used where they are valued most highly. Success is determined by the size of the agency’s budget. This incentivizes bureaucratic bloat and administrative secrecy.
After all, it’s taxpayer money, so there is little accountability for wasteful spending. The result is that the Department of Defense overspends and under-delivers.
Impossible to Audit
Given the incentives at work, it shouldn’t be surprising that this report is not the most recent instance of waste and mismanagement. Consider that since 1997, the Government Accountability Office has been legally required to audit the financial statements of federal agencies. Despite this requirement, it has been unable to audit the Department of Defense because the DOD has been unable to provide accurate and credible financial documents.
This fundamental lack of basic accounting processes and controls means that the Pentagon is unable to keep track of its financial resources and expenditures in any meaningful way. Even fixing the accounting, however, would not change the underlying problems. The sheer size and complexity of the military bureaucracy, coupled with overly lofty foreign-policy goals, means thorough oversight and accountability are virtually impossible.
The only real solution would be to drastically reduce the size and scope of the military and related government agencies, which would remove many of the incentives for the DOD to overspend and to obfuscate its spending. This reduction, in turn, requires adopting a restrained foreign policy that minimizes the use of the military abroad and the significant resources necessary to fund such international adventures.
Comments Off on Fiscal and economic implications of higher interest rates
As the Mercatus Center’s Scott Sumner often says, one ought never to reason from a price change. Interest rates, like other prices, can change for all sorts of reasons; the implications of the change generally depend on the particular reason for such a change.
Consequently, there’s no simple answer to the question, “If the interest rate on bellwether bonds, such as a 30-year Treasury bond, increases by 200 basis points, will the average US citizen (or the US Treasury, or both), be better off or worse off?” The most correct answer is “It depends.”
Any nominal interest rate reflects two predominant influences: the state of economic productivity and the expected future rate of inflation. Of these influences, the inflation rate is far more variable. Most of the decline in nominal interest rates since the 1980s reflects a corresponding decline in inflation.
Lately, however, both productivity and inflation have been subdued. Inflation has been hovering around 1 percent, while annual total factor productivity growth has been bouncing between 0 percent and 0.6 percent. In light of such figures, today’s remarkably low long-term Treasury yield of just under 2.34 percent is hardly surprising. Still, it’s disturbing to realize that this value reflects market makers’ opinion that current low rates of inflation and productivity growth are likely to persist for some time.
To say that long-term Treasury rates mainly reflect the course of economic productivity and inflation doesn’t mean that those rates don’t themselves depend on monetary policy. Monetary policy is, of course, an important determinant of the inflation rate and of the public’s inflation expectations. In the long run, a looser monetary policy stance means higher inflation and therefore higher nominal interest rates, ceteris paribus. In the short run, however, looser policy can, and often does, lower both nominal and real interest rates. Its ability to do so—especially its ability to lower long-run rates—will be limited to the extent that it results in relatively rapid, upward adjustment in inflation expectations.
In any event, monetary easing alone can’t reduce rates for long, though it may appear to do so when it happens to coincide with a decline in either productivity growth or inflation expectations. It follows that, despite popular opinions to the contrary, easy money hasn’t had much—if anything at all—to do with the low rates that have prevailed since 2009. Had monetary policy really been easy all this time, spending growth and inflation would not have remained so subdued.
The more complicated truth is that, although the Fed has added trillions to the monetary base, the demand for both cash reserves and other relatively safe assets has also grown proportionately. That growth is in part a result of other Fed policies, including the decision to reward banks for holding reserves; the adoption and enforcement of Basel III’s Liquidity Coverage Ratio; and the more stringent regulation, if not outright prevention, of many once-conventional (and mostly prudent) kinds of bank lending. These and other measures have served to “shunt” available bank funds into a relatively limited set of markets, contributing to the “easiness” of money in those markets, while making it scarce elsewhere. Bubbles, perhaps; but no suds.
Having considered why rates are so low to begin with, it’s evident that they might rise in the near future owing to either an increase in the expected rate of inflation or an increase in the growth rate of productivity. Rates might increase by 200 basis points because the expected inflation rate increases by 200 basis points, with no change in the productivity growth rate; because the rate of productivity growth increases by 200 basis points, with no change in the expected rate of inflation; or because the two rates change by other values that sum to 200 basis points.
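The decomposition described above can be sketched with the simple additive (Fisher-style) approximation, in which the nominal rate is roughly the real, productivity-linked component plus expected inflation. All the scenario numbers below are illustrative:

```python
# Rough Fisher decomposition: nominal rate ~ real (productivity-linked)
# component + expected inflation. Changes expressed in percentage
# points (pp); 200 basis points = 2.00 pp. All figures are illustrative.

def nominal_rate_change(real_change, inflation_change):
    # Simple additive approximation, ignoring the small cross term
    return real_change + inflation_change

# Three ways a 200-basis-point rise could arise:
scenarios = [
    (0.0, 2.0),   # all expected inflation
    (2.0, 0.0),   # all productivity growth
    (0.5, 1.5),   # a mix summing to 200 bp
]
for real_chg, infl_chg in scenarios:
    total = nominal_rate_change(real_chg, infl_chg)
    print(f"real +{real_chg}pp, inflation +{infl_chg}pp -> nominal +{total}pp")
```

The point of the sketch is simply that the same observed change in the nominal rate is consistent with very different underlying causes, which is why the rate change alone tells us little.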
Some of those possibilities are of course less likely than others. For several decades now the growth rate of total factor productivity has seldom been as high or higher than 2 percent, or 200 basis points, and more recent experience suggests that it’s likely to remain well below that level for some time.
Notice that none of these possibilities depend on the Fed’s tightening its policy stance. On the contrary, whatever the more immediate interest rate effects might be, such tightening would almost certainly lead to reduced levels of both actual and expected inflation. Though the effects of Fed tightening on productivity are less predictable, those are also more likely to be negative than positive.
To come finally to the question that forms the topic of this colloquium, it should be clear by now that the general economic implications of an increase in long-term interest rates will depend on the underlying cause of the increase. An increase based on more rapid productivity growth should be a cause of celebration, for the simple reason that such productivity growth is desirable in itself. Higher interest rates will mean higher costs of borrowing, but those costs will be higher because there are more opportunities to use funds productively and because people can afford to bear the higher rates.
A substantial increase in rates based mainly or entirely on higher inflation is, in contrast, likely to do more harm than good. Even those experts who favor a rate of inflation close to 2 percent doubt that still higher rates are desirable. Because it tends to distort relative prices, inflation at such rates is likely to undermine both productivity and overall economic prosperity.
To the extent that it hasn’t been fully anticipated (and it is clear that markets today are not anticipating any substantial rise in inflation), a higher rate of inflation will also tend to reward debtors at the expense of creditors. In particular, it will reduce the government’s real debt burden at the expense of those who own non-indexed Treasury securities. The government might, therefore, benefit from an inflation-based increase in long-term interest rates, even though such an increase would make things worse for the average Joe.
Comments Off on A political economist explains the best way to shrink the government in 9 charts
Let’s say that you’re a policymaker interested in reducing the size of government. Strategically, is it easier to cut government regulation or roll back the welfare state (thereby reducing government spending)?
The Niskanen Center’s Will Wilkinson recently wrote a piece relating to this question that’s gotten a lot of attention: “What If We Can’t Make Government Smaller?” His argument rests on an inference from what’s known as Wagner’s Law, or the law of increasing state spending. This law suggests that as per-capita GDP rises in a country, so does the government share of GDP.
If this empirical regularity reflects a kind of law of politics, it suggests that it’s impossible to cut government spending as the economy continues to become more prosperous.
Wilkinson argues we should instead focus on deregulation and leave the welfare state alone.
Note: Whether one believes that the welfare state is just or efficient is a totally separate question from the one that Wilkinson’s piece raises. You could think that the welfare state is unjust and inefficient yet be persuaded by Wilkinson’s argument that shrinking it is just too difficult in the near future. Or you could support the welfare state yet not be persuaded that it’s invulnerable to political attacks. In fact, there’s an ongoing debate in political science on just this question. The general consensus seems to be that welfare states have remained stable in size, but that economic risks have become more privatized.
Questioning Wilkinson’s Conclusions About Reducing the Size of Government
I want to take a closer look at the data on this question of whether welfare state or regulatory state retrenchment is politically easier.
First, we need to make a distinction between welfare state transfers and the overall economic footprint of government. “Government consumption” measures what the government spends on wages and benefits and goods for its own use.
We see no Wagner’s Law in government consumption (Figure 1, data from Penn World Table 9.0).
In fact, in periods of economic prosperity, like the 1980s and 1990s, government consumption has fallen as a share of the economy. In times of economic stagnation, like the 2000s, government consumption has risen. Of course, it could be that government consumption significantly harms economic growth. Still, these data show little evidence for the view that it’s impossible to cut government consumption.
So what has risen over time? Welfare state transfers. Between 1985 and 2012, Medicare and Medicaid spending nearly tripled as a share of GDP and are projected to rise further. Social Security spending has also risen, more than quadrupling as a share of GDP between the mid-1950s and early 1980s before leveling off somewhat. Most of these increases are driven by the aging of the population and cost disease in the health care sector, not public demand for new programs.
What about regulation? Figure 2 shows how the number of pages in the federal register, which includes administrative rules, proposed rules, public notices, and presidential orders, has changed over time.
Figure 3 does the same for the number of pages in the federal administrative code.
Figure 4 shows how the inflation-adjusted budgetary cost of enforcing federal regulation has changed over time.
Finally, Figure 5 shows the number of economically significant final regulations by year for different presidential administrations.
However you measure it, federal regulation roughly octupled between 1963 and 2013. Over that same period, real GDP less than quintupled. The inflation-adjusted budgetary cost of enforcing regulation has also increased by a factor of about 10 over the last 55 years. Every recent presidential administration has added more regulation, but the Reagan administration is an outlier for its lighter regulatory touch. There’s certainly an indication that George W. Bush was a more avid regulator than Bill Clinton, and Barack Obama yet more avid than Bush.
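As a rough check on what these growth factors imply per year, the compound annual rates can be backed out from the totals. The calculation below is illustrative and not part of the underlying studies:

```python
# Implied average annual growth: a stock that octuples over 50 years
# grows at (8 ** (1/50) - 1) per year. Illustrative arithmetic only.

def cagr(growth_factor, years):
    """Compound annual growth rate implied by a total growth factor."""
    return growth_factor ** (1 / years) - 1

print(f"Regulation (8x over 50 yrs): {cagr(8, 50):.2%}/yr")
print(f"Real GDP (~5x over 50 yrs):  {cagr(5, 50):.2%}/yr")
```

Even a gap of roughly one percentage point per year, compounded over five decades, is enough for regulation to outgrow the economy substantially.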
By any measure, then, the federal regulatory burden has skyrocketed, even as a percentage of the economy. It is difficult, if not impossible, to believe that the state and local regulatory burden has fallen enough to compensate for this rise.
These data certainly don’t suggest that cutting federal regulation will be easier than trimming the welfare state.
Whether Providing a Social Safety Net Makes It Easier to Reduce the Size of Government
A final point to consider is whether cutting the welfare state would make it harder to cut regulation. Perhaps a social safety net makes voters more amenable to free markets.
The best way to examine this idea is to look at how changes in free markets correlate with changes in size of government.
I’ve looked at five-year changes in government consumption share of GDP and in economic freedom, excluding size of government and international trade (which is often controlled by international agreements, not domestic politics) for all western European countries and the Anglo-American democracies of North America and Oceania since 1990. (For 2010–2014 the change examined is just four years.)
Figure 6 shows a scatter plot of the two variables for all these countries.
Figure 7 limits the scatter plot to larger countries.
Figure 8 lags government consumption change by five years.
Figure 9 does the same for just the larger countries.
None of these figures suggest a strong, linear relationship between cutting government and either increasing or decreasing other elements of economic freedom, either contemporaneously or with a five-year lag. Now, this is pretty crude evidence and not definitive on the question, but it definitely casts doubt on the claim that bigger government leads to freer markets.
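For readers who want to replicate this sort of check, the core computation is just a correlation between paired five-year changes, optionally with one series lagged. The sketch below uses made-up numbers, not the Penn World Table or economic-freedom data behind the figures:

```python
# Sketch of the correlation check behind figures like these, using
# made-up numbers. The real analysis pairs five-year changes in
# government consumption (% of GDP) with changes in an economic-freedom
# score, optionally lagging the government series by five years.

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical five-year changes for a handful of country-periods:
gov_consumption_chg = [-1.2, 0.4, 2.1, -0.3, 0.8]    # pp of GDP
econ_freedom_chg    = [0.3, -0.1, 0.2, 0.05, -0.2]   # index points

r = pearson_r(gov_consumption_chg, econ_freedom_chg)
print(f"contemporaneous correlation: {r:+.2f}")

# To test a lagged relationship, shift one series before pairing:
lagged_pairs = list(zip(gov_consumption_chg[:-1], econ_freedom_chg[1:]))
```

A weak or unstable correlation under both pairings is what "no strong, linear relationship" means in practice.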
Governments do have a tendency to grow. However, the U.S. has cut government consumption significantly in the past and could do so again. The drivers of welfare spending are the aging of the population and rising health care costs, not political support for new programs.
And finally, there really is no evidence that cutting federal regulation is going to be easier than cutting spending.
Comments Off on Buy ’em out – a new strategy for cutting government
If you were looking for serious policy discussion, the 2016 election has been a massive disappointment. As revealed during the debates and in their many public statements, neither Hillary Clinton nor Donald Trump has a plan for addressing the public sector’s biggest problem: government has become so large that it is unmanageable and ineffective.
In 1930, total government expenditure was 10% of GDP. Of that, approximately 3% was federal spending, and 7% was state and local spending. Today, government expenditure is about 40% of GDP, with 25% of that spending federal, and the remaining 15% state and local.
Government has gotten much larger, as well as significantly more removed from ordinary citizens. The concentration of power at the federal level weakens democratic checks on politicians and bureaucrats, who are freed to use tax revenues to advance their narrow interests, rather than those of the nation. The only way to fix this problem is to restore limited government and strict federalism, as envisioned by the 9th and 10th Amendments to the US Constitution.
A Massive Thicket
Many federal programs must be retired.
The problem is that a massive web of politicians, bureaucrats, and interest groups stand in the way of shrinking government back to a manageable scale. Voters like smaller and more local government—and hence a lower tax bill—in the abstract. But special interests oppose it, each on their specific issue.
The result is a classic ‘concentrated benefits, dispersed costs’ problem. Each interest group will fight hard to protect its privileges. Voters would be happy in the aggregate to end political patronage, but individually it’s too costly for them to do so. The result is runaway government, with taxpayers footing the bill.
Because of this, many proposals to shrink government are dead on arrival. But there’s one that hasn’t been tried, which will work for voters and political insiders both. In brief, taxpayers can buy out special interests.
The Buyout Strategy
The theoretical groundwork for this strategy has already been laid. Nobel laureate James Buchanan wrote a scholarly article titled, “Positive Economics, Welfare Economics, and Political Economy,” in which he argued that the only way to ‘test’ whether policy proposals are welfare enhancing is if the interested parties, private and public, consent to the proposals. This means any program to shrink the state has to get the consent of those currently benefiting from state policy, even if that policy is bad for the nation as a whole. Fortunately, there exists space between private and national payoffs for political entrepreneurs to arrange mutually beneficial bargains. In particular, it would be in the interests of taxpayers and political insiders both if taxpayers paid insiders simply to stop doing what they’re doing.
Consider economists’ favorite example of political inefficiency: agricultural subsidies. Economists are virtually unanimous in claiming there is no socially beneficial aspect of agricultural subsidies. In fact, such subsidies are socially costly, because they direct resources towards the promotion of products that the market has deemed less valuable. It would be both in taxpayers’ and agricultural producers’ interests if the following deal were struck: keep paying agricultural producers the full amount of the subsidy, in dollars, whether they stay in the industry or not.
This is a windfall gain for agricultural producers. Before, they only got the money if they made agricultural products. Now they get the money without any strings attached. Many will use this as an opportunity to get out of the business and do something else. Perhaps some will simply retire. But this whole arrangement is good for citizens.
The social costs of agricultural subsidies lie not in the money changing hands, but in the political distortion of resource allocation. Now that the cash transfer is without condition, agricultural producers no longer have an incentive to continue supplying output that the market has deemed lower-valued than cost. Ordinary citizens are better off, because subsidies are no longer destroying wealth. Agricultural producers are also better off, because they get the cash value of the subsidies irrespective of how much they produce. Everybody wins, and in terms of economic efficiency, the nation is wealthier.
More and More Buy Outs
Policies like this can be tinkered with in order to sweeten the deal for taxpayers. For example, continue paying the full subsidies for ten years, and then phase them out over the following ten. Since without this program, the subsidies would probably have continued in perpetuity, the policy will also be deficit-reducing in addition to wealth-enhancing.
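The deficit-reduction claim can be checked with simple present-value arithmetic: a perpetual subsidy has a larger present value than one paid in full for ten years and then phased out. The subsidy amount and discount rate below are hypothetical:

```python
# Present-value comparison of a perpetual subsidy vs. a buyout that
# pays the subsidy in full for 10 years and phases it out linearly
# over the next 10. The $20B subsidy and 3% rate are hypothetical.

def pv_perpetuity(payment, rate):
    """Present value of a constant annual payment in perpetuity."""
    return payment / rate

def pv_buyout(payment, rate, full_years=10, phaseout_years=10):
    pv = 0.0
    for t in range(1, full_years + 1):
        pv += payment / (1 + rate) ** t
    for i in range(1, phaseout_years + 1):
        # Linearly declining payments during the phase-out
        remaining = payment * (1 - i / phaseout_years)
        pv += remaining / (1 + rate) ** (full_years + i)
    return pv

subsidy, r = 20e9, 0.03
print(f"Perpetual subsidy PV: ${pv_perpetuity(subsidy, r) / 1e9:.0f}B")
print(f"Buyout PV:            ${pv_buyout(subsidy, r) / 1e9:.0f}B")
```

Under any positive discount rate, the time-limited buyout costs taxpayers less in present-value terms than the perpetual subsidy it replaces, which is the sense in which the deal is deficit-reducing.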
The ‘buy them out’ proposal provides a general framework for reducing wasteful government activity on all margins. It can be applied to multiple issues: healthcare, education, even entitlement reform can be addressed by making mutually agreeable buyouts. The logic holds, whatever the specific application: these proposals will be windfall gains for political insiders, will reduce the government’s bill for ordinary taxpayers, and facilitate a more efficient allocation of the nation’s scarce resources.
Grand overhauls of public policy must take the status quo as given. The only way to deal with government bloat is to recognize political insiders are not going to forego their privileges without compensation. Taxpayers, working through their elected representatives, can and should buy out these insiders. Doing so may be the only incentive-compatible path back towards more local, responsive, and effective government.
Comments Off on The answer is a new government program. What’s the question?
The Sunday Washington Post had a long, hagiographic article about Senator Mark Warner’s critique of how capitalism “isn’t working” for the masses and his heroic attempts to fix it, which left me thinking I’m in an alternate reality.
The problem he sees is that the growing tendency of people to change jobs throughout their career has left people unprepared for retirement, and that we need to do more to make sure that workers have some sort of safety net to provide them with health care and income in their golden years.
That this was largely addressed decades ago with the introduction of Social Security and Medicare was completely missing from the article. Social Security is an incredibly progressive retirement program that provides everyone with a work history of at least ten years with a decent-sized benefit that doesn’t go up all that much for wealthier people who contributed much more. And Medicare is the largest government program there is, covering hospitalization costs, basic health costs and drug benefits for tens of millions of senior citizens. The government spends about $1.5 trillion each year on these two programs, and they make up the majority of our federal budget. There’s also plenty of evidence that they prevent seniors from indigence: the poverty rate for seniors is well below that of other age groups.
The current Administration also added an expensive entitlement that makes it much easier for people under age 65 who do not receive health insurance to obtain it, along with a healthy subsidy. For a family of four in Washington DC there is still a subsidy for an income of $80,000, which is well above the mean household income, and Medicaid completely covers those who don’t make enough money to buy their own health insurance. What more can we possibly do to make health insurance more affordable for the working poor?
The latest push of the Administration–and one that Senator Warner is leading–is to create some sort of government 401(k). The idea is an awful one–the rationale is that since we move around to so many jobs, and since many employers do not provide a retirement plan, the government should do it for them. Earlier this year the Department of Labor made it much easier for the states to set up retirement accounts that would be administered by the state as an option for workers at firms without a retirement plan.
It is a supremely bad idea. For starters, there is no evidence that a public option is better than a private option, and plenty of data showing the contrary. For instance, the college savings accounts run by the states are no different than what people would get if they went to their local Fidelity or Vanguard office and opened an account, save for the fact that the latter would not come with a tax break, and the money in the government account carries sharply higher management fees than are found in the private funds. The Department of Labor just spent a year trying to drive down management fees in retirement accounts, and yet it is embarking on a new plan that would invariably create millions of accounts with higher management fees than savers could get elsewhere.
Until recently, liberals staunchly defended defined-benefit pensions despite the fact that they disadvantaged people who had shorter job tenures and were more likely to change jobs, both of which tend to be truer for women than for men. That they now realize these don’t work in today’s economy is gratifying, but their insistence that the government create a vehicle to replace them is nonsensical.
If we want to nudge people to get a retirement account, we can do that without the state of Massachusetts inserting itself as a middleman. And politicians should stop pretending that there’s a senior citizen poverty crisis, no matter how flatteringly the Post may treat such efforts.
Comments Off on Volunteer Work Reveals The Inefficiencies of Government Aid
This piece was contributed by Sloane Shearman, a staff member at Learn Liberty.
“Any Problem, Any Time”
That’s the motto of the Community Help Centre in State College, Pennsylvania. While pursuing my undergraduate degree at Penn State, I volunteered as a counselor on their crisis and basic needs hotline. Many people assume that the hotline was a suicide hotline, catering to clients in severe emotional distress. At times it was, especially during the late hours of the night and around holidays. During the daytime, though, the hotline fielded calls from individuals in a wide variety of situations.
People were often referred by other assistance organizations, social workers, or friends in similar circumstances. Usual daytime calls might include clarifying the hours of the food bank, helping individuals sign up for fuel vouchers, or patiently listening to frustrated, distraught clients who had been turned away from one assistance program—and referring them to another.
Hotline counselors went through up to twelve hours of training per week for six months, and I was surprised to discover that much of that time was spent learning how to navigate the myriad private and public organizations in the community dedicated to helping people in need. I am embarrassed to admit that I was initially frustrated to spend so much time poring over the intricate, tedious details of the functions of these organizations, how they operate, and their criteria for providing assistance.
I quickly learned that Community Help Centre functions as an informational hub. As difficult and mentally taxing as it was for me to learn the ins and outs of this web of services, it is unimaginably worse for people in need.
Herein lies a crucial, but not immediately obvious, truth about working closely with people facing financial hardship: when responding to a person facing eviction, or hunger, or frozen pipes, it is not helpful to hand them a form and transfer them to the next stage of bureaucracy.
Yes, doing those things is better than not doing them, but those who wish to truly help others must recognize that these are times of financial and emotional crisis. While their problems are more tangible than those of people struggling with mental illness or grief, individuals in these situations often need just as much—and sometimes more—compassion and patience.
Like most people in that line of work, I was drawn to it by a desire to help others. While providing someone with the resources they needed gave me a deep sense of joy, often it seemed that all I could really do was lend an understanding ear.
“What am I supposed to do?”
The Community Help Centre emphasized helping clients solve their problems without formal assistance. Counselors were trained to inquire if clients had asked their friends, family, or faith communities for help before referring them to other organizations. While we understood and did not take lightly the toll this took on clients’ pride, there were several reasons this measure was necessary.
First, this prevented unnecessary strains on existing resources within the network. Second, individuals are able to transfer money and other resources more quickly than bureaucracies. Once aware of the problem, individuals are also more likely to investigate the problem and provide ongoing assistance or solutions in ways outside agencies are not. Finally, personal connections may be able to help those in need in ways that organizations simply cannot.
One call illustrated this final point for me and profoundly affected my perspective on aid distribution. As I answered the phone, a young woman quickly announced, “I don’t need money or food stamps—I just got a job! What I really need, though, is a bike to get there. Can you help me?” My moment of happiness for her quickly faded as I turned over several options in my head. She was new to the area without any existing support network. Furthermore, there were no programs designed to distribute bikes to those in need.
Resignedly, I gave her the contact information for local churches that had flexible funds, just as I always did for callers whose needs did not fall neatly into the purview of government programs. I wished her luck, but I couldn’t help but worry that the funds wouldn’t be there for her and that she would be calling back soon, this time for money and food stamps.
Why couldn’t I help her? Simple expenses like this could save taxpayer money in the long run, helping people overcome a momentary rough patch and enabling them to support themselves.
When resources are distributed by government agencies, their purpose inevitably becomes politicized. Politicians, answering to their constituents, would find it hard to articulate why bicycles are a proper public expenditure.
These disputes lead to rigidity that plagues government-provided assistance. Many cannot imagine having to resort to the social safety net to survive. The vast majority of clients we served did not make terrible life choices or need long-term assistance—there just wasn’t enough money in the budget for car repairs and rent, or a parent lost their job, or someone fell ill. Their needs are immediate and temporary, and government-run programs are not.
It was obvious that an early intervention would save money and spare clients the emotional distress that comes with these situations, but our clients weren't needy enough to qualify for help. For example, our case managers were aghast to learn that funds previously allocated to a program that helped individuals pay their rent had been slashed. The justification was that much of the money had been put toward building new homeless shelters in the area. Without context, it may seem obvious that sheltering the homeless is a more necessary expenditure than helping people pay their rent. But the reality is that rather than being helped to stay in their homes in the first place, families could receive help only once the floor had disappeared beneath them.
French economist Frederic Bastiat's essay "That Which Is Seen, and That Which Is Not Seen" explains what the Pennsylvania budget does not: taking money away from communities only to redistribute it back to them results in fewer resources than if individuals or private organizations were to provide charitable assistance themselves.
This piece is part of a two-part series on government, charity, and community. You can check out part one here.
Comments Off on A Proposal to Reform Welfare and Rebuild American Community
When Alexis de Tocqueville traveled across America, he was struck by the vitality of its civil society. “Wherever at the head of some new undertaking you see the government in France, or a man of rank in England, in the United States you will be sure to find an association,” he wrote. Indeed, “Americans form associations for the smallest undertakings.”
Today many fear that civil society is in retreat. Local covenants and centers of community (especially churches and religious organizations) are the institutions that, for centuries, led men and women to voluntarily serve the common good. But as government has grown in size and scope, an ever-expanding taxonomy of state responsibilities has supplanted the role of these important civic organizations.
Religion’s Role in Social Assistance
Healthcare and social assistance to the needy are among the most enduring aspects of America’s religious heritage. Protestants were stewards for the poor through privately supported hospitals, while Catholic brothers and sisters would often own and administer their own institutions using community fundraising and patient fees. To this day, Catholic hospitals care for one in six patients in the U.S.
Churches fulfilled a functional, day-to-day role of providing social insurance, which was sustainable because people had a transcendent, higher purpose—namely, worship. Without the ability to issue "individual mandates" or extract taxes, philanthropy had to be earned through reciprocal relationships rooted in trust and goodwill. Churches relied on their members' contributions and self-sacrifice, which measured the strength of their commitment to Christian ideals and bonded the community together.
Government Crowds Out Churches
This began to unravel in the early 20th century. First, the social spending of the New Deal crowded out significant amounts of church-based welfare. By one estimate, New Deal spending caused church charitable spending to fall by 30 percent.
Then came President Lyndon Johnson’s “Great Society” and “War on Poverty,” and with it the creation of Medicaid and the expansion of food stamps and other income supplements targeted at the poor. Spending on Medicaid and Social Security rose exponentially after 1990, following a landmark Supreme Court decision that greatly expanded eligibility and a misconceived federal budget that promised to match state spending on low-income hospitals dollar for dollar, essentially turning Medicaid into a money pump.
Data show that commitment to religious community in the United States has been steadily decreasing, in a way that appears correlated with spending on public welfare. Since 1990, America's non-religious population has grown from about 5 percent to over 20 percent and is climbing. A comparison of U.S. state rankings reveals a striking negative relationship between the generosity of the welfare system and the size of the self-identified "very religious" population.
A Nuanced Take on Welfare and Civil Society
But does this mean that the deterioration of community and tight-knit social networks—what economists called “social capital”—is inevitable? Robert Putnam’s Bowling Alone is surely the most extensive and influential survey on the decline of social capital. But its subject is exclusively the United States, which makes it difficult to extrapolate any relationship between government welfare programs and social capital.
When we look at the many studies on the determinants of social capital in Europe, the evidence by and large contradicts the U.S. narrative. Indeed, empirical studies tend to describe welfare as "trust enabling."
Consider Sweden, which has one of the most comprehensive welfare states in the world but also ranks near the top in measures of social capital. The typical response is to ascribe it all to ethnic or cultural homogeneity. Even granting that point, however, changes in welfare policy within the Nordic countries show no evidence that thick communities and welfare are substitutes.
The key factor appears to be the high level of decentralization in many social programs. For example, in Sweden delivering healthcare is the responsibility of County Councils, while welfare, disability, and programs for the elderly are controlled by municipalities. Swedes also have very high rates of union membership. Yet instead of being confrontational with the employer, the norm is mutual advantage. In turn, unions are entrusted to manage programs that in the U.S. would be centrally regulated, like unemployment insurance and parental leave.
Economists extol the virtue of this kind of decentralization, known as subsidiarity, for reasons of asymmetric information. That’s just jargon for the truism that, in tight communities, everybody knows everybody. I was astonished to learn, for instance, that 75% of Swedes report attending “study circles,” 10% on a regular basis. These are regular meetings of a dozen or so people organized by larger voluntary associations that “range from the study of foreign languages to cooking to the European Union question.”
As the political scientist Bo Rothstein wrote of the Swedish welfare state, “its main architects sought a social policy based on the idea of ‘people’s insurance’ that would supply all citizens with basic resources without incurring the stigmatization associated with poor relief.” And yet the so-called people’s insurance also found a way to co-exist with community life by assigning responsibility for provision to the local level.
A Modest Proposal to Repair Welfare
This suggests that a reform path for the current U.S. welfare system exists that can revive civil society without necessarily abolishing public assistance outright. The conservative-libertarian scholar Charles Murray, for example, has proposed replacing our existing patchwork of 80 or so centralized, means-tested programs with a simple lump-sum transfer known as a guaranteed basic income. Murray argues that such a system would pave the way for a flourishing of families and communities, as the least well-off members of society would be compelled to come together and pool their resources.
In the meantime, for reasons likely inherent to the structure of U.S. federalism, America has so far earned its title as the reluctant welfare state. In turn the country is becoming one that de Tocqueville would find increasingly hard to recognize.
Comments Off on Foreign Aid Is Taking Money From Poor People in Rich Countries and Giving it to Rich People in Poor Countries
After the 2010 earthquake in Haiti that killed around 250,000 people and displaced 1.5 million others, the billion-dollar state-led humanitarian relief effort failed to accomplish even the most basic tasks, like rebuilding houses. The situation remains dire, with 80,000 people still living in "temporary" tent camps.
In the new Learn Liberty video below, Professor Chris Coyne explains why humanitarian aid efforts so often go awry. Anytime the government allocates money for something, he says, there is intense rivalry among the groups trying to get a piece of it. This causes government agencies and NGOs (non-governmental organizations) to expend vast amounts of resources—which could be used for more productive projects—lobbying the government for funds. For instance, in the aftermath of the Haiti earthquake in 2010, more than 15,000 NGOs competed for $10 billion of US government relief funds. Economists and political scientists call this behavior rent-seeking.
Humanitarian aid efforts also fail because bureaucrats have little incentive to use the money they receive efficiently. Their goal is power, not profit, so they use funds to maximize their own responsibilities rather than create value for those in need. As a result, money is wasted. It's estimated that only 5 percent of the $9 billion spent in Haiti by January 2013 went directly to those affected by the earthquake.
Due to these misaligned incentives, governments and people would be better off just giving money directly to those in need.
Comments Off on Poverty Declined After Welfare Reform
This year marks the 20th anniversary of the landmark welfare reform that transformed antipoverty policy. The Personal Responsibility and Work Opportunity Reconciliation Act—known by the clumsy acronym PRWORA—was signed by President Bill Clinton on August 22, 1996, a couple of months before the presidential election.
Clinton’s decision to sign welfare reform into law divided the Democratic party. Two prominent members of his welfare policy team resigned in protest, and Democrats opposed to the Republican-written legislation issued apocalyptic predictions. The late senator Daniel Patrick Moynihan famously surmised that the new five-year time limit on eligibility for federal cash benefits “might put half a million children on the streets of New York in 10 years’ time.” Moynihan went on to lament: “We will wonder where they came from. We will say, ‘Why are these children sleeping on grates? Why are they being picked up in the morning frozen?’”
The reality of poverty after welfare reform is not that portrayed by critics. Children—in particular, those in single-mother families—are significantly less likely to be poor today than they were before welfare reform. The contrary impression arises because income and poverty trends are poorly conveyed by official statistics and by most analyses of poverty data. Household surveys underestimate the cash income of these families and do not count as income a variety of valuable noncash benefits, including food stamps, housing subsidies, and Medicaid (the receipt and value of which are also underestimated). Meanwhile, the rise in the cost of living tends to be overestimated, pulling up poverty trends over time.
Reliable indicators do show increasing hardship in some years, but they mostly reflect the business cycle rather than a steady rise. To the extent that some less reliable measures of hardship appear to have worsened after 1996, they generally did so among groups of Americans (such as childless households, the elderly, children of married couples, and even married college graduates) who never received cash welfare under the AFDC program.
Child poverty overall fell between 1996 and 2014, after taking into account refundable tax credits and noncash benefits other than health coverage. After including household heads’ live-in romantic partners in the family (i.e., cohabitation) as well, child poverty was lower in 2014 than at any point since at least 1979. After also using the best available cost-of-living adjustment to update the poverty line and including health benefits as income, child poverty overall was lower by 5 percentage points in 2014 than in 1996 and is now at an all-time low. And after partially adjusting these estimates for the tendency of families to underreport government benefits, the child poverty rate in 2012 was just over 5 percent, compared with nearly 20 percent as indicated by the official poverty measure.
What about “extreme” child poverty, defined as living on $2 or less per day? Extreme child poverty overall was the same in 2014 as in 1996—about one-half of one percent of children—once noncash government benefits and refundable tax credits are factored in. After correcting for underreporting of government benefits, in no year did the number of children in extreme poverty exceed one in 400. After correcting for underreporting, practically no children of single mothers were in extreme poverty in either 1996 or 2012. Fewer than one in 1,500 children of single mothers were in a household getting by on $2 a day per person for the whole year in 2012.
The question is not whether PRWORA was the single best welfare-reform policy that could have been imagined; policymaking never produces that result. The question is what would have happened in the absence of the welfare reform that we actually implemented.
Perhaps looser welfare policy would lower poverty in any given year but reduce upward mobility out of poverty—either over the course of childhood or once children become adults. More generally, to the extent that we reduce the cost of making poor decisions, more people will tend to make poor decisions. One does not have to believe that this logic demands that we eliminate all safety nets to acknowledge that these sorts of trade-offs are inevitable in designing welfare policy.
None of this is to say that TANF or other aspects of welfare policy cannot be improved or that our levels of deep poverty are sufficiently low. But policymakers should reject the increasingly conventional view that extreme poverty has dramatically increased and the view that welfare reform did more harm than good. Improving policy and reducing hardship require that we have a clearheaded view of our challenges.
Comments Off on Louisiana Floods Reveal Age-Old Broken Window Fallacy
As Moldy Carpets Inevitably Follow a Flood…
…economic derangement inevitably follows natural disasters. Here’s the derangement du jour:
Record-breaking floods in Louisiana have killed 13 people, damaged or destroyed 40,000 homes and landed 8,000 people in shelters. And yet flood-hit East Baton Rouge Parish – where 14 percent of residents live below the poverty line – and the rest of Louisiana are expected to ultimately benefit from the rebuilding effort as insurance funds and federal dollars start to pour in.
Read the entire, and entirely – and sadly – predictable, report here. (HT Craig Kohtz)
Wealth from wreckage. Riches from ruins. Development from destruction.
I’d say that Bastiat would spin in his grave, but Bastiat’s ghost – like Bastiat in the flesh – has heard this nonsense so frequently that its occurrence is no more of a surprise than are downed trees in a hurricane.
The most common form of this fallacy – committed, for example, by Mr. Krugman in his New York Times column of September 14th, 2001 – is the proclamation that the effort to rebuild destroyed buildings and to replace destroyed resources results in so much extra spending that the number of human wants that are satisfied by newly mobilized resources is greater than was the number of human wants that were being satisfied before the blessed destruction occurred.
This form of the fallacy is Exhibit A for economists who perform the great social service of warning people to look for the unseen and to avoid seeing only that which is most immediate and obvious. The advice is to look at reality as competent economists look at reality and not as typical, economically uninformed reporters, pundits, and politicians look at reality. The advice is to ask questions that come naturally to the minds of competent economists. "From where do the resources come to rebuild the destroyed homes, factories, roads, and bridges? Does nature, having at one moment cruelly unleashed resource destruction upon humanity, at the next moment munificently bestow upon humanity new stocks of resources out of nothing, ex nihilo, inspired to do so merely by observing humans spend money at a faster rate than money was spent before?"
If we are to believe typical reports that flood out after natural disasters strike, the apparent answer is "Yes." "In fact," we are further assured, "not only are the resources used for the rebuilding free, the total stock of economic resources is enlarged by the natural disaster, so that society has a larger combination of resources and goods and services after the natural disaster than it had before."
This claim is absurd, of course. But it is widely believed to be true.
A different form of the fallacy involves looking at only the locale affected directly by the natural disaster and concluding that whatever happens to that locale in the wake of the disaster is the only relevant outcome to consider. For example, it’s said that one source of Baton Rouge’s coming good fortune is the dollars that Uncle Sam will transfer to that locale as federal disaster-relief aid – dollars that the people of Baton Rouge will then spend to acquire the real resources they’ll use to rebuild.
It very well could be that the amount of real resources that such federal aid transfers to Baton Rouge is so large that it leaves the people of that place economically better off than they were before the tragic floods struck. But Baton Rouge's gain is other people's loss. For Americans as a whole—for humanity as a whole—the floods must still be reckoned to make us poorer. Such reports as the one linked above never are accompanied by corresponding reports with headlines such as "Baton Rouge's Flood of Good Fortune Makes All Other Americans – and America At Large – Poorer." But such a report should be a natural accompaniment to the ever-predictable "XYZ's Natural Disaster Is Really Its Blessing In Disguise."
A variation on this form of the fallacy involves a focus on the private insurance funds that the (insured) people of the disaster-struck area will receive. If the insurance is properly run, these funds do not on net make the people of the ravaged area wealthier. Not only does proper insurance require customers to pay deductibles, but against whatever ‘gain’ is received by the victims, post-disaster, must be weighed both the premiums that they paid pre-disaster for the insurance and the premium hikes that will likely occur post-disaster. And either way, whether the insurance is properly or improperly run, society at large is unquestionably made poorer by the natural disaster – no matter how much faster people might, because of the natural disaster, spend money.
Finally, it’s true that clever folk can tell stories of how natural disasters might change humans’ work effort, risk-taking propensities, inclination to innovate, and other human characteristics that result in an increased flow of output of real goods and services. But it remains the case that had the human characteristics changed in these happy ways without the resource destruction caused by a natural disaster – as they have changed in the past – then society would be richer still had the natural disaster not struck.
Comments Off on Why a “Job-Killing” Trade Program Deserves Americans’ Support
The debate over the effects of free trade in general, and NAFTA in particular, on the American economy is again central to American politics.
Nothing new there.
However, both the Republican and Democratic presidential candidates criticizing free trade? That is something worth talking about! Indeed, both former presidential contender Bernie Sanders (and lately, in some respects, Hillary Clinton) and Donald Trump have opposed free trade agreements. And not because they don't understand economics or because they don't have economic advisors.
They do it because so many Americans feel that NAFTA and other trade agreements made their lives worse.
And these Americans might vote.
I would like to add to the discussion started in this blog post by offering a middle-ground perspective on this topic, a perspective similar to the one I presented when writing the book review for Offshoring of American Jobs: What Response From U.S. Economic Policy? by Jagdish Bhagwati and Alan S. Blinder, edited by Benjamin M. Friedman.
Free Trade (Think NAFTA) Has a Positive Impact on the World
When discussing free trade, very few economists would disagree that its overall impact, especially in the long run, is significantly positive. We can easily show this graphically, mathematically, or simply intuitively.
We know, without being economists, that specialization makes people more productive. If Mexico specializes in manufacturing refrigerators, it can build more refrigerators than the US. And if the US specializes in manufacturing the machinery the factories in Mexico need to produce refrigerators, then the US can produce more machinery than before. (Actually, this is not far from the truth. In 2015, we imported from Mexico goods valued at $295 billion while exporting to Mexico goods valued at $236 billion. The resulting trade deficit with Mexico was about $59 billion, or roughly a third of one percent of our GDP.)
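As a quick back-of-the-envelope check on those trade figures, the deficit and its share of output can be computed directly. The import and export values come from the text; the 2015 U.S. GDP figure of roughly $18.1 trillion is an assumption added here for illustration:

```python
# Sanity check of the 2015 U.S.-Mexico goods trade figures cited above.
imports_from_mexico = 295e9   # U.S. goods imports from Mexico, 2015 (from the text)
exports_to_mexico = 236e9     # U.S. goods exports to Mexico, 2015 (from the text)
us_gdp_2015 = 18.1e12         # assumed 2015 U.S. GDP (~$18.1 trillion)

deficit = imports_from_mexico - exports_to_mexico
share_of_gdp = deficit / us_gdp_2015

print(f"Goods trade deficit: ${deficit / 1e9:.0f} billion")  # → $59 billion
print(f"Share of GDP: {share_of_gdp:.2%}")                   # → 0.33%
```

The point of the exercise is scale: even a deficit that sounds enormous in isolation amounts to about a third of one percent of annual U.S. output.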
I do not want to list all the benefits that NAFTA brought to the US; many others have done that elsewhere. I can just say that, for example, for every well-paying job that NAFTA created in Mexico, a family that might have considered crossing illegally into the US abandoned that plan. Yes, that's almost nothing, some will say. But it is just one piece of the puzzle.
Doesn’t NAFTA Destroy American Jobs?
I do not want to list the problems linked to NAFTA either, but I do want to point to the main issue: job creation vs. job destruction. People associate free trade and NAFTA with the loss of good paying manufacturing jobs.
We can tell Americans that the goods we import from Mexico and China are much cheaper thanks to NAFTA, and therefore everybody benefits. It’s just a reality.
But many have a father, mother, uncle or aunt, grandfather or grandmother, son or daughter who lost their $20-50 per-hour job. And that impacted them more than the 10 to 40 percent they save each time they buy something made in Mexico or China.
The catch is that the disappearance of manufacturing jobs in the US (a loss of about eight million jobs between 1990 and 2010) is not due to free trade alone. Many policymakers also "blame" technology for it. And indeed, they are right. But we all know that correlations are tricky, and NAFTA was implemented just as technological change was driving structural shifts in the workforce. Then came China and the Great Recession.
Bad luck for free trade.
Until Americans perceive that the benefits arising from free trade exceed the costs (despite the short-term growing pains), they will continue to oppose it. Sacrificing a few million workers for the good of the majority looks good mathematically, but not politically and maybe not even morally. And there are other problems associated with free trade besides lost jobs that need to be addressed.
It is the job of the believers in free trade to find a solution to all this disarray—to reconcile the benefits with the very real human costs.
It’s admittedly a difficult task. But if we don’t try, the US economy will slow down and all the gains from free trade the world accumulated all these years will be lost.