This piece was adapted from an academic journal article of the same name by Adam Thierer.
Imagine a society that always encouraged you to make the right choices without forcing you to make those choices. In this type of world you’d often end up doing the “right thing” without really having to think about it. Making progress would, theoretically, involve much less risk and failure than it ever did before.
According to many economists, this sort of scenario is possible, at least on a small scale. It’s called “nudging.”
Nudge advocates seek to improve decisions about health, wealth, and happiness by applying insights from human psychology and behavioral economics to public policy decision making.
For a very simple example of what a nudge might look like in real life, suppose the grocery store’s check-out lanes were stocked with fruits and vegetables, while all candy was confined to a “junk food” aisle. This would be a nudge: you could still buy the candy, but you would be encouraged to purchase the healthy food instead.
This piece will address the trade-offs associated with determining optimal default rules and devising laws and regulations that seek to engineer better “choice architecture.”

A Few Critiques of Nudge Theory

Critics of nudge theory often attack this reasoning by noting that policymakers simply don’t have enough information to make such decisions on behalf of everyone else.
It’s a valid concern, but an equally compelling critique of nudge theory attacks the underlying assumption that better choices must be “architected” at all to avoid undesirable outcomes.
That is, advocates of either “soft” or “hard” paternalistic interventions often fail to appreciate the enormous value in allowing experiments—personal, organizational, and societal—to run their course and to “learn by doing.”
I want to address how nudge theory often ignores or devalues the way ongoing experimentation facilitates greater learning, innovation, resiliency, and progress. This sort of experimentation, of course, includes the possibility of failure.
Put simply, when we fail, we can learn a great deal from it.

Risk and Failure: What Behavioral Theorists Overlook

Policymakers and regulatory proponents often seek to short-circuit that process of ongoing trial-and-error experimentation, believing that they can anticipate and head off many mistakes through preemptive, precautionary steps.
What they overlook, though, is that when it comes to human health, wealth, and happiness—and to social progress and prosperity more generally—there is no static equilibrium, no final destination. There is only a dynamic and never-ending learning process.
Learning from experience provides individuals and organizations with valuable information about which methods work better than others. Even more importantly, learning by doing facilitates social and economic resiliency that helps individuals and organizations develop better coping strategies for when things go wrong.
Behavioral theorists and nudge advocates often fail to incorporate these insights into their analysis and policy proposals.

A Better Way to Think About Risk

To more fully appreciate what we learn by failing, we must reconsider the way we think and talk about risk. Risk is often mistakenly conflated with harm itself when, in reality, risk represents only the potential for harm or the “potential for an unwanted outcome.”
But even that definition is too limiting because risk often involves the potential for desired outcomes as well as unwanted ones. Every thrill-seeker knows this. For example, someone who skydives from an airplane realizes that the potential for grave harm exists, but they also derive tremendous satisfaction from what they regard as a thrilling experience.
Much the same holds true for anyone who has ever started a business. Saint Thomas Aquinas once noted, “If the highest aim of a captain were to preserve his ship, he would keep it in port forever.” Of course, no captain does so, because captains have higher aspirations.
Like captains of ships, businesspeople know the possibility of failure exists when they “take a risk” with a new venture or investment, but so too does the potential for enormous reward. That is why they steer the ship of industry out of port and into the brave unknown (and potentially quite risky) waters. If they didn’t, progress would never occur and society would suffer. As Sir Alfred Pugsley, one of the foremost modern experts on structural engineering, once noted, “A profession that never has accidents is unlikely to be serving its country efficiently.”

Why Embrace Failure? The Benefits of Learning by Doing

The value of “failing better” is often overlooked in public policy debates. This is especially the case in discussions of behavioral economics and nudge theory.
Individuals, organizations, and nations all benefit from the knowledge they gain from making mistakes. “Humiliating to human pride as it may be,” the Nobel prize-winning economist Friedrich Hayek once wrote, “we must recognize that the advance and even preservation of civilization are dependent upon a maximum of opportunity for accidents to happen.”
Progress and prosperity are inextricably linked to trial-and-error experimentation, including the freedom to fail in the process. “We could virtually end all risk of failure by simply declaring a moratorium on innovation, change, and progress,” notes engineering historian Henry Petroski. But the costs to society of doing so would be catastrophic. Somewhat paradoxically, therefore, we must tolerate a certain amount of risk and short-term failure if we want long-term success.
Firms learn important lessons when their business plans fail and the public rejects their goods or services. The economist Joseph Schumpeter famously described the “perennial gales of creative destruction” that constantly renew capitalist economies. If we disallowed risk-taking and propped up every failing business model, society would never discover new and better ways of innovating and satisfying consumer needs.

Why Failure is Vital to Progress in Engineering

Let’s consider some instructive examples of this process in action. First, we can learn about the importance of “learning by doing”— and failing—from the history of structural and mechanical engineering. Structures such as skyscrapers and suspension bridges, and machines such as ships and airplanes, are remarkably sophisticated contraptions. What’s more remarkable, however, is the fact that so few of them fail today.
Of course, that has everything to do with the fact that previous iterations of each did fail and that we learned so much from those failures.
This uncomfortable reality will not sit well with many safety-conscious regulatory advocates who might suggest that past failures simply mean that society must redouble its efforts to find preemptive, precautionary solutions and avoid similar accidents and calamities in the future.
But, again, that ignores the fact that strict application of that principle would mean many life-enriching, and even life-saving, innovations would never come about.

Conclusion: Avoiding Failure Leads to Failure

If we hope to prosper both as individuals and as a society, we must preserve the general freedom to “learn by doing,” and even to fail frequently in the process. Both individuals and institutions learn how to do things better—both more efficiently and more safely—by making mistakes and dealing with adversity.
Facing up to challenges and failures is never easy, but it helps us learn how to cope with change and continuously devise new systems and solutions to accommodate those disruptions.
Rigid precautionary principle thinking and policymaking, by contrast, interrupts this learning process and leaves us more vulnerable to the most serious problems we might face as individuals or a society. Paradoxically, then, we can conclude that individuals, institutions, and countries that overzealously seek to avoid the possibility of certain short-term failures are actually prone to potentially far more dangerous and systemic failures in the long term.