Medicare is the federal system that covers hospitalization, physician care, drugs, and a limited amount of nursing home care for men and women over age 65. President Lyndon Johnson started it in 1965 on a modest financial scale, when the elderly were a small fraction of the adult population and when drugs and surgeries to treat diseases of old age were far fewer and less complex. In the 42 years since then, the program has grown into a major entitlement, with spending of almost $400 billion, more than 3 percent of American GDP. Of even greater concern is the projected growth in this program during the next several decades.
If past growth in Medicare is a reasonable guide to future growth, and assuming that real GDP grows at an annual rate of 2.5 percent, Medicare spending as a share of GDP will double by 2020 and increase some three to four times by 2050, to 10 percent or more of GDP. Dollar spending on Medicare patients would exceed a trillion dollars by 2020. Less than half of the projected increase would be due to the further aging of the population; the majority would result from the expected continuing growth in spending on hospitalizations, surgeries, and drugs for the elderly of given ages.
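The projection can be reproduced with simple compound-growth arithmetic. The sketch below is illustrative only: the starting values (roughly $0.4 trillion of Medicare spending on a $13.3 trillion GDP) and the 8 percent annual Medicare growth rate are assumptions chosen to match the figures in the text, not official estimates.

```python
# Illustrative back-of-the-envelope projection of the Medicare share of GDP.
# Assumed inputs (not official data): $0.4T Medicare spending, $13.3T GDP
# (about a 3% share), real GDP growth of 2.5%/yr, Medicare growth of 8%/yr.

def project_share(medicare0, gdp0, g_medicare, g_gdp, years):
    """Compound each series forward and return (medicare, gdp, share)."""
    medicare = medicare0 * (1 + g_medicare) ** years
    gdp = gdp0 * (1 + g_gdp) ** years
    return medicare, gdp, medicare / gdp

# 13 years takes us from 2007 to 2020; figures in $ trillions.
med, gdp, share = project_share(0.4, 13.3, 0.08, 0.025, 13)
print(f"2020: Medicare ${med:.2f}T, share of GDP {share:.1%}")
```

Under these assumed growth rates, Medicare spending passes $1 trillion by 2020 and its GDP share roughly doubles, consistent with the figures in the text.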
Much of the increased spending would occur even with the most efficient health delivery system, since senior citizens along with younger adults put a high value on living longer in reasonably good health. The value placed on longer life and good health generally rises as incomes grow; indeed, economic analysis and past experience indicate that the willingness to pay for better health will increase in the future at least as rapidly as incomes do.
Still, there is no doubt that while the American health care delivery system has many strengths, including an encouragement to medical innovation, medical costs for the elderly could be significantly reduced with no reduction in quantity or quality if various inefficiencies in the system were corrected. Numerous proposals have been advanced to make the Medicare delivery system more efficient. Former Secretary of State and of the Treasury George Shultz and Professor John Shoven, in an important forthcoming book, Putting Our House in Order, review several of these proposed reforms and advance their own thoughtful reforms for Social Security as well as Medicare and Medicaid. We discussed various reforms of American spending on medical care in our blog posts of January 13 of this year, and I will not repeat them now.
Instead, I discuss why drugs should have an important role in inefficient as well as efficient medical delivery systems. Medicare only started to cover spending on drugs in 2003, and the coverage has various defects. These include a deductible that is much too low, and a "doughnut" in which there is no coverage at all for additional spending in the middle ranges of drug spending (see my discussion on Feb. 13, 2005 of these and other defects of drug coverage, with suggestions for how to make that coverage more effective). It is no surprise that spending on drugs by the elderly was not part of Medicare at the beginning, since drugs were a minor part of their total medical spending in 1965. However, discoveries since then have led to various blockbuster drugs and many less revolutionary advances in medications. These include drugs to lower blood pressure and cholesterol, to treat Parkinson's disease and other disorders of the nervous system, to help thin the blood, to overcome erectile dysfunction, to fight AIDS, and to reduce testosterone to combat the spread of prostate cancer. The share of their total medical spending that seniors devote to drugs has moved steadily upward during the past 40 years; it is now more than 12 percent and before long might approach 20 percent.
Drugs should be part of an effective health delivery system not only because of the continual introduction of new drugs, including the growing importance of genetically based drugs, but also because drugs have a very attractive cost structure, especially for the growing elderly population. As the number of persons over age 65 increases during the next several decades, it would be useful to have a health delivery system in which costs do not rise as rapidly as the number of persons treated. Surgeries do not have this property: their cost tends to increase in proportion to the number of surgeries performed, since each one requires a more or less fixed number of surgeons and supporting personnel. Hospitals also have few economies of scale with respect to the number of in-patients treated once a relatively small efficient bed size is reached.
Drugs have a totally different cost structure. They typically have very high fixed costs of research and development and low marginal costs of adding additional users. To develop a new drug to treat depression, Alzheimer's, or another serious medical problem usually requires hundreds of millions of dollars of spending on research and successive stages of clinical trials, including many failures before a successful treatment emerges. Once a valuable drug is developed, however, the cost of producing each pill is usually small, certainly a tiny fraction of the fixed costs of development. This means that additional users can be added with relatively little increase in total costs, and with a decline in average cost per user.
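The cost structure described here, a large fixed development cost spread over many users plus a small marginal cost per pill, can be shown with a short calculation. The $500 million fixed cost and $1 annual marginal cost per user below are assumptions for the sketch, not figures from the text.

```python
# Minimal sketch of the drug cost structure: high fixed R&D cost, low
# marginal cost per additional user. Figures are illustrative assumptions.

def avg_cost_per_user(fixed, marginal, users):
    """Average cost per user: fixed cost spread over all users, plus marginal cost."""
    return fixed / users + marginal

for users in (1_000_000, 10_000_000, 50_000_000):
    cost = avg_cost_per_user(500e6, 1.0, users)
    print(f"{users:>10,} users -> ${cost:,.2f} average cost per user")
```

Total cost barely rises as users are added, while average cost per user falls steeply, which is why a growing elderly population raises drug spending far less than proportionally.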
This property of the cost of producing drugs has two extremely important implications for Medicare costs. The first is that drugs are an efficient way to treat diseases and disorders that hit a large number of men and women, since the fixed costs can then be spread over a larger number of users. This makes them particularly valuable to the elderly, who are a growing share of the population in the United States and all other developed countries, and in many developing countries as well, including China. Moreover, the U.S. and world populations are also increasing, which increases the demand for all drugs, including those that help treat older persons.
Drugs are also valuable in inefficient delivery systems that have trouble choking off medical treatments that would not pass a benefit-cost calculation. This would characterize systems with highly subsidized medical care, with excessively low deductibles, or with rules that cannot deny treatments to the very elderly and those close to dying who would benefit only a little from receiving treatment. Surgery, hospitalization, and close physician supervision are expensive ways to treat seniors who do not benefit much from this care, since the cost of these procedures tends to rise in proportion to the number treated. On the other hand, while treating seniors with drugs sometimes also may not add much in the way of benefits, the additional cost per user would be much smaller than the average cost per user. This property of the cost of using drugs makes them particularly useful in the American medical system, since that system errs on the side of generosity toward the elderly and others compared to the health care systems in most developed countries.
This advantage of drugs in inefficient health delivery systems does not argue against the need for major reforms of Medicare to make it more efficient. It recognizes, however, the value of second-best solutions in a political environment where reforms of health care are likely to come slowly because they run up against many powerful vested interests.
The Medicare Challenge--Posner's Comment
Becker makes the ingenious suggestion that the effect of adding drug coverage to the Medicare program is to prevent spending on drugs from growing as rapidly as the number of persons covered by Medicare. The reason is that the marginal cost of drugs tends to be very low; most of the costs of drugs are fixed costs of research and development. Hence the larger the number of persons eligible for Medicare drug benefits, the lower the average cost of drugs.
Nevertheless the net effect of the addition of drug coverage on total Medicare spending is likely to be a substantial expansion in the total cost of Medicare. As of January of this year, 25 million persons had enrolled in Medicare Part D (the drug part), and the total annual expense to Medicare is estimated to reach $36 billion this year. As the program is only two years old, further increases in enrollment and usage can be expected, irrespective of increases in the eligible population, since more than 40 million persons receive Medicare benefits.
The net addition to Medicare costs will be less than the cost of Medicare drug coverage if drugs are a net substitute for other covered treatments. But they may not be, because there is also a complementary relation between drugs and other forms of treatment, such as surgery; to the extent that drugs reduce the pain, discomfort, or disability of surgery, they may increase the demand for surgery by reducing its nonpecuniary costs, a cost reduction that though real will not be reflected in the Medicare cost figures.
In addition, by increasing the demand for drugs, Part D will increase the net expected profits from new drugs, and thus increase the incentive to create such drugs, with the heavy fixed costs that, as Becker points out, are entailed by the development of new drugs.
Still another problem with Medicare drug coverage is that people have less aversion to popping a pill than to being operated on or otherwise confined in a hospital. The cost of surgery, as it appears to most people, includes a significant nonpecuniary element that of course is not reimbursed by public or private health insurance. Taking drugs does not impose such costs unless a drug has serious side effects. Hence the Medicare drug subsidy should cause a greater percentage increase in demand than the traditional Medicare subsidies did.
Drugs also provide an attractive but costly substitute for life-style changes designed to improve one's health. If the choice is between giving up rich food and taking a pill paid for by Medicare, the latter may be preferred though the social cost may be higher; the subsidy confronts the consumer with false alternatives from an overall social perspective, just like monopoly pricing.
The addition of drug coverage to Medicare entrenches the worst feature of the Medicare program, which is the lack of a means test. There is no reason why people who can afford to purchase health insurance that will cover their medical expenses in their old age should be subsidized by the taxpayer. There is, however, no political will to require a means test. More broadly, there is no political will to reduce public expenditures on health care. The focus of politicians is not on containing costs but rather on what has the opposite effect: expanding coverage. Congress is moving to require health insurance companies to cover mental illnesses, despite uncertainties about the efficacy of treatments for mental illness and the "cosmetic" element in the treatment of such illness because of the lack of a clear distinction between being mentally ill and simply being less happy, focused, energetic, outgoing, and, in short, successful than one would like to be. Especially because of the optional character of treatment for borderline mental "illness," demand for such treatments will be highly responsive to a subsidy, assuming the insured person who demands such treatments can shift some of the cost to the other members of his insurance pool. Notice too how the movement to require insurance coverage for mental illness will interact with Medicare drug coverage to further expand drug usage by the elderly.
In addition, of course, the politicians want to extend health-insurance coverage to the 40 million plus uninsured Americans, and this will increase the demand for medical services. What politicians say in a presidential-campaign year is not, however, a reliable guide to their intentions. I suspect that when the new Administration takes office in January of next year it will find that fiscal constraints preclude any significant expansion of the gargantuan federal subsidies for health care (including such indirect subsidies as imposing mandates on private insurance companies); but neither can we expect meaningful measures of cost containment.
"Sports doping"--the use of anabolic steroids and other drugs to increase athletic performance, as Barry Bonds, Roger Clemens, and other prominent professional athletes have been accused of doing--is intensely controversial. A recent article in Nature--Barbara Sahakian and Sharon Morein-Zamir, "Professor's Little Helper," Dec. 2007--discusses the parallel phenomenon of "intelligence doping." The term refers to the use of drugs to enhance cognitive performance. These are drugs like Adderall and modafinil (sold under the brand name Provigil) that are used to treat genuine disorders, such as attention deficit disorder in the case of Adderall and narcolepsy in the case of modafinil. But they can also be used by normal people, including students and academics, to improve cognitive functioning by increasing concentration, memory, wakefulness, and mental energy generally. Coffee has many of the same effects, but they are much weaker.
As in the case of sports doping, there is concern that the use of these drugs may have long-term adverse effects on the health of the user. There is even less evidence of such effects, however, than in the case of sports doping. But this may be because these drugs are newer--which means that they are just the first wave of cognition-enhancing drugs and that the subsequent waves will be more effective.
Becker and I blogged about sports doping on August 27, 2006. We pointed to the arms-race character of the practice. Because of the importance attached to winning an athletic event, anything that increases an athlete's performance, such as taking steroids, places pressure on other athletes to do likewise. The result is expense, and also possible ill health, without any certain improvement in the quality of athletic competition as perceived by fans. That is not necessarily a compelling argument for trying to ban sports doping; indeed I consider the argument weak because of the difficulty and hence cost of monitoring drug use, especially the newer enhancement practice of "gene doping," and because of the existence of borderline enhancement practices (borderline between "natural" and "artificial"), such as training at a high altitude in order to increase one's production of red blood cells, which in turn enables a greater absorption of oxygen, or undergoing eye surgery to increase visual acuity.
If fans object for whatever reason to sports doping, then sports leagues and team owners will have an incentive to ban the practice; the argument for criminalizing the practice would then depend on whether purely private sanctions could achieve an adequate level of deterrence. Suppose teams, leagues, and players all want to ban sports doping whether because of health concerns or fans' preferences, but that detection is extremely difficult, so that the probability of catching an athlete doing sports doping is very low. Then the optimal punishment may be more severe than the team or league could impose. The argument is the same as for why embezzlement is a crime, rather than the government's leaving it to the bank to punish the embezzler by firing him or suing him for the money he stole.
Fans appear to be ambivalent about banning sports doping, because they are concerned with absolute rather than just relative performance, and so enjoy the additional spectacle created by "bionic" athletes. In fact neither the teams (and leagues) nor the players' unions seem enthusiastic about banning the practice, which suggests that it does not decrease--it may actually increase--the incomes of the teams and (on average) the players.
The case for banning intelligence doping is even weaker than the case for banning sports doping. One reason is that there is a strong positive externality from increased cognitive functioning, since smart people usually cannot capture the entire social product of their work in the form of a higher income. Like other producers, they pass part of the benefit of their production on to consumers as consumer surplus. An example is patentable inventions. Because patents are limited in duration, usually to 20 years, any benefits that a patented invention generates after the patent expires inure to persons other than the patentee. Even if there were no positive externality--even if the user of an intelligence-enhancing drug captured the entire incremental income generated by that use--there would be a social benefit, since the user is part of society, and hence no economic argument for banning.
What is a possible source of concern is that because there is competition based on intelligence, for example to get into good schools or win academic prizes or achieve success in commercial fields such as finance that place a premium on intellectual acuity, the availability of intelligence-enhancing drugs places pressure on persons who would prefer not to use them because of concerns over their possible negative health consequences to use them anyway. There is also a danger that such drugs produce only very short-term effects, for example on exam performance, that may exaggerate a person's long-term ability. (This is one of the reasons for objecting to exam coaching.) But against this is the fact that it is even more difficult than in the case of sports doping to draw a line between permitted and forbidden uses of cognition-enhancing drugs. It is hard to define "normal" cognitive functioning in a meaningful sense. Should people with an IQ above 100, which is the average IQ, be forbidden to use such drugs, but people below that level permitted to use them until it brings them up to 100? That would be absurd. The person with an IQ of 120 would argue compellingly that he should be allowed to take intelligence-enhancing drugs in order to be able to compete for good school placements and jobs with people having an IQ of 130. And so on up.
Of course the naturally gifted will object to any "artificial" enhancements that enable others to compete with them. But it is not obvious why their objections should be given weight from a public policy standpoint. It is not as if allowing such enhancements would be likely to discourage the naturally gifted from developing and using their gifts (it might have the opposite effect, by creating greater competition for them), let alone discouraging bright people from seeking out other people to marry and produce children by.
According to the economists' analysis of externalities, a substance should be taxed, and in extreme cases banned, only if it raises the earnings and other benefits to users at the expense of harm imposed on others. For this reason, excessive drinking is fined and punished in other ways as well, since heavy drinkers are more likely to get into accidents that harm others. Similarly, a substance should be subsidized if it benefits others while benefiting users. Posner's example of substances that encourage greater innovation fits this category. Substances that raise benefits to users by the same amount as they raise their overall productivity should be neither banned nor encouraged.
In economies with competitive labor and products markets, individuals tend to receive earnings and other benefits that equal their "marginal product"; i.e., equal their contribution to total output. Substances or anything else that raise the productivity of individuals who receive their marginal product should be neither taxed nor subsidized. For example, sleeping pills enable users to get a better night's sleep, which allows them to get to work on time and have a clearer mind while at work, and thus become more productive while at work. There is no reason to tax or subsidize this use of sleeping pills because individuals taking the pills would generally be paid the full increase in their productivity, and they would bear the full costs. The vast majority of substances that people take to affect their cognition fit into this category of meriting neither ban nor encouragement since they primarily affect the productivity of users without having significant external effects on others, either positive or negative.
Exceptions are substances that raise the benefits of users in situations that have a "zero sum" aspect, so that the user's gain corresponds to an approximately equal loss to others. Sports competitions have important zero sum aspects, since fans care a lot about who wins and loses in addition to the quality of the play, so that increases in the performance of every player that do not change outcomes bring relatively little benefit to fans. This importance of relative performance to fans provides the justification, put forward in our discussion of sports doping on August 27, 2006, for major league baseball and other organized sports leagues to place limits and bans on the use of certain drugs among their players. For these drugs may have long run negative consequences for the health of users without raising fan welfare by much.
However, I do not see any reason why governments should be involved in enforcing any bans imposed by sports leagues, and it was absurd for Congress to hold hearings on whether either Roger Clemens or his former trainer was lying. Governments may be helpful to sports leagues in enforcing their bans by punishing violators, but governments might equally be helpful to companies in getting their workers to reduce absenteeism. Still, that is no reason for governments to punish workers who do not show up for work, because there are no externalities outside the workplace of the individual company or league that warrant government intervention.
Other examples where relative performance helps to determine benefits include entrance to college based on test scores, such as the SAT test, or patent races to see who comes up first with the technique or process that several competitors are seeking to discover. South Korea and other countries have tried to use laws to cut down on private tutoring and other investments that increase the likelihood that a student may succeed in gaining entrance to top universities, where the number of acceptances remains constant. Presumably, these countries would want to ban students from taking various stimulants that improve their performance, perhaps at a risk to their health, but such bans are difficult to enforce.
Some economists have claimed that superstar situations involving singers and other entertainers, novelists, money managers, and lawyers have this zero sum character since a person can make a huge income by being only slightly better at what he does than others. Since fans and customers prefer to listen to the best singers or have their money managed by funds that seem to be the best, someone who is only a little better than the competition can attract a very large fan base, or a lot of money to be managed. They may earn only a little from each fan or on each dollar invested, but make it up through the millions of fans they have and the billions of dollars they manage.
The superstar phenomenon discovered by my late colleague Sherwin Rosen is real and important in modern societies with huge economies of scale in communication, but it is not a zero sum situation. A superstar still collects only the value of his net contribution to output. The value is large not because of externalities, but because a superstar may add only a little utility to each fan, while adding this little to each member of a huge fan base.
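The superstar arithmetic can be made concrete with a small illustration. The per-fan values and audience sizes below are assumptions chosen for the sketch: a performer who adds only fifty cents of value per fan, but to 20 million fans, creates vastly more total value than a slightly less able rival with a small audience.

```python
# Illustrative superstar arithmetic: total value = small per-fan value
# times fan base. All numbers are assumptions for the sketch.

def total_value(per_fan_value, fans):
    """Total value the performer creates: per-fan contribution times audience size."""
    return per_fan_value * fans

star = total_value(0.50, 20_000_000)   # $0.50 of extra value per fan, 20M fans
rival = total_value(0.45, 200_000)     # slightly less value per fan, tiny audience
print(f"star: ${star:,.0f}, rival: ${rival:,.0f}")
```

The star's earnings dwarf the rival's despite only a small quality edge per fan, yet each dollar still corresponds to value delivered to some fan, which is why the outcome is not zero sum.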
All in all, even aside from enforcement issues, I see little reason for governments to ban the use of Provigil and other stimulants that improve cognitive performance. There are some situations where this improvement mainly benefits users at the expense of harm imposed on their competitors. For the most part, however, potential users are the best judge of whether they should use stimulants since they bear the lion's share of the costs as well as receive the benefits.
Hardly a day goes by during this housing crisis that the media does not report on families in foreclosure proceedings, or in arrears on mortgages that had close to zero down payment requirements and low "teaser" interest rates. The many excuses offered by some home owners for their plight, and echoed eagerly by the authors of these human interest stories, are that the borrowers did not understand that these introductory interest rates might rise a lot after a few years, or that they would have negative equity in their homes if housing prices stopped rising and began to fall. An obvious alternative explanation for their behavior is that they gambled that the good times would continue indefinitely.
This type of response to failed decisions is not unique to the present housing crisis, but is part of a strong trend toward shifting responsibility to others. Women who sign a pre-nuptial agreement specifying the amount of their husband's pre-marital wealth that would be theirs in the event of divorce often try to have the agreements overthrown in divorce litigation. They claim that they did not understand what the agreements meant, or that their husbands took advantage of them in other ways to get them to sign the agreements. Usually they signed simply because that was the only way they could marry the men they very much wanted to marry, perhaps in part because the men were wealthy.
Many criminals who confess to or are convicted of serious crimes try to have the courts excuse or mitigate their behavior. They allege that they had uncaring or abusive parents, or that fathers, relatives, stepfathers, or other adults molested them as children. Abusive treatment is awful, but the vast majority of abused children nevertheless become law-abiding and responsible adults. That is a major fact that courts should pay attention to.
Successful attempts to shift the responsibility for bad decisions toward others and to society more generally create a "moral hazard" in behavior. If individuals are not held accountable for decisions and actions that harm themselves or others, they have less incentive to act responsibly in the first place since they will escape some or all of the bad consequences of their actions. It does not matter greatly whether this moral hazard resulted from the shifting of blame for unsuccessful actions to the "small print" in a contract, to an abused childhood, to a mental state, or to many other efforts to shift responsibility away from oneself.
An important foundation of the philosophy behind the arguments for private enterprise, free economies, and free societies more generally, is that these societies rely on and require individual decision-making and responsibility. This philosophy not only emphasizes the moral hazard reasons to require individual responsibility, but also "the use it or lose it principle", a colloquial expression indicating that various mental and physical capacities wear down and erode if they are not used on a regular basis. This principle implies that people who are accustomed to having other persons or governments make their decisions for them lose the ability to make good decisions for themselves. Free societies lead to better decision-making partly because men and women accumulate more experience at making decisions that affect their well-being and that of others.
Of course, I recognize that not all individuals are equally capable of making decisions in their own interests. Clearly, the mentally retarded have trouble understanding complicated decisions. People sometimes get fooled by how contracts and transactions are presented to them, perhaps because of cognitive quirks. College-educated persons generally manage their financial assets better, and respond more successfully to many types of economic, health, and other stresses, than persons with less schooling. For example, educated residents of New Orleans reacted more effectively to the challenge of the Katrina hurricane than did high school dropouts. Similarly, the anarchy in Russia following the collapse of communism greatly lowered the life expectancy of all Russian men except those men with a college education. These men continued to improve their life expectancy throughout the economic crisis that engulfed Russia.
Still, greater practice in making decisions, and greater responsibility for the consequences of one's decisions, usually significantly improves decision-making by the vast majority of adults, regardless of limitations in their education and cognition. Moreover, many of the decisions and actions that do not work out well are not due to low education, inability to understand what is going on, or biased and incorrect information. For example, the subprime mess that continues to devastate financial institutions in the United States and elsewhere is not due to the limited information given to borrowers, since this crisis has also financially ruined many highly educated and sophisticated bankers, hedge fund managers, and others with years of experience dealing with complicated financial assets. Borrower and lender alike, regardless of their financial experience, were caught up in the atmosphere brought on by a bubble that seemed to promise perpetual good times in financial markets.
What if anything should governments do to help out in the present financial crisis, mindful of the many kinds of moral hazard that are lurking, but also mindful that the financial structure is delicately balanced? Despite the moral hazard risks, interventionist policies might be justified, not because some borrowers or lenders were taken advantage of, but because these interventions would help the economy recover more quickly and ensure that the recession is neither prolonged nor deep. Still, it is difficult to see the merits in the Fed's efforts to help the sale of Bear Stearns to JPMorgan Chase by guaranteeing many billions of dollars of the company's mortgage and other assets.
Becker makes two principal points in his interesting post: that free enterprise encourages people to take responsibility for their actions and thereby make better decisions; and that there is "a strong trend toward shifting responsibility to others."
I would qualify these points as follows. Free enterprise requires individuals to make a variety of decisions, concerning both production and consumption, that in a socialist system are the responsibility of government officials. It does not follow that people in free-enterprise societies "take responsibility," in some psychological sense, for their actions. The tendency to blame others when things go wrong is deeply rooted in human nature and I imagine no less common in America than in any other country. In fact, in a free-market system, competition places significant limitations on the freedom of choice of consumers, investors, and workers.
But has the tendency toward shifting responsibility for our actions to other people perhaps become more common over time? Maybe so, with the erosion of belief in free will. In the traditional sense of that concept, a sense most highly developed (so far as I know) in Christian theology, uncoerced decisions, such as a decision to commit or refrain from committing a crime, are deemed to be uncaused. They are deemed the "free" choice of the person making them, so that if he makes the wrong choice he has no one to blame but himself. (There is an odd exception: some Christians believe that a person can be "possessed" by the devil, in which event he is not responsible for his actions until the devil is exorcised.) I find it hard--maybe for lack of imagination--to believe that decisions have no cause. I assume that they are determined by the balance of advantages and disadvantages as it appears to the decider, though he may not be fully conscious (or conscious at all) of the considerations that are moving him. Those considerations are influenced by background, intelligence, experiences, and other factors most of which are not, in any meaningful sense, within a person's "control."
On this view, to call a person "responsible" for a decision (such as the decision to take out a no-down-payment mortgage with an adjustable interest rate) is just to say that his process of weighing the pros and cons of the decision was not overborne by force or fraud or thwarted by a mental deficiency. The decision may not have been blameworthy in any very deep sense; it may have been foreordained by psychological factors. Becker mentions "greed." Why are some people greedy? Because they choose to be bad? Or because their psychology, which they are not responsible for, has produced in them an abnormal demand for money? All "freedom" means is not being subject to certain kinds of coercion. Freedom so understood expands the opportunities open to people, but how they exploit their opportunities is the product of the interaction of their genetic and financial endowments, their upbringing and other environmental factors, and their good and bad luck.
Moral hazard is thus not a defect of the will, but a rational response to one's opportunity set. If one has medical insurance without deductibles or copayments, the marginal cost of medical care will be low (even zero), so one will consume more of it. If one is confident that in the event of a flood or an earthquake there will be a government bailout, one will buy less or no flood or earthquake insurance. The government's bailing out of investment companies, banks, and mortgagors will induce those entities to take more investment risks in the future than they otherwise would, and so will increase the risk of future housing bubbles and credit crunches. This has, I think, always been so. That is, there was never a time when, because people were averse to taking advantage of opportunities to shift costs to other people, moral hazard was not a social problem.
Criminals will sometimes try to place the blame for their crimes on a bad upbringing. That is nothing new. A criminal (or his lawyer) will make any argument that might reduce his sentence; he would be irrational not to do so. And it is plausible that a bad upbringing, along with a low IQ, increases the likelihood that a person will become a criminal, by reducing his alternative legal opportunities. But as Becker points out, most people with a bad upbringing (and equally most people with low IQs) do not become criminals. This has, to my mind, a practical rather than a moral significance. It suggests that the threat of punishment can deter even a person who has had a bad upbringing. So by adding that threat to the considerations that a person will weigh in deciding whether to commit a crime, society can reduce the crime rate. We may even want to punish the criminals with the bad upbringings more heavily than other criminals, in the belief that they can be deterred only by a threat of heavier punishment. On this approach to crime and punishment, we punish criminals not because they "freely" chose to do bad things, but because by punishing them we can at tolerable cost reduce the prevalence of activities that generate net negative social costs. We make people do the "right" thing not by appealing to the exercise of their free will but by increasing the cost to them of doing the wrong thing. Fortunately, few judges, whether or not they believe in a strong sense of free will, allow the excuse of a bad upbringing to mitigate punishment.
As for the people who took out risky mortgages in the expectation that house prices would continue to rise, they should not be bailed out (that is the moral hazard problem) by government even, I think, if they were victims of fraud. But if they were victims of fraud, they should have legal remedies against the people who defrauded them. Of course, if there were no legal remedies against fraud, people would be more careful--but they would be too careful; they would incur high costs of self-protection. It is cheaper to punish fraud, just as it is cheaper to punish burglary than to tell people to fortify their houses.
The death on February 27 of William Buckley provoked a surprising outpouring of praise, not limited to conservatives. The praise was mixed with hyperbole. He was credited with having created modern American conservatism, with having united free-market economists with social and other noneconomic conservatives, with being the person without whom there would never have been a Reagan presidency, and with being a formidable intellectual.
I doubt that any of those things is quite true. He was colorful, rich, good-natured, a skillful polemicist and influential "public intellectual" (in my book Public Intellectuals: A Study of Decline he ranked number 20 in "media mentions" for the period 1995 to 2000, long past his period of greatest influence), and a bricoleur, defined by Wikipedia as "a person who creates things from scratch, is creative and resourceful: a person who collects information and things and then puts them together in a way that they were not originally designed to do." What he put together were conservative Catholicism; McCarthyism; belligerent, even militaristic, anticommunism (roll back the Iron Curtain rather than contain the Soviet Union), a position related, like his McCarthyism, to his religiosity, which made communism particularly odious to him; defense of the southern states' resistance to racial integration; hostility to big government; individualism (the basis of his hostility to the "nanny state"), as expressed for example in his advocacy of legalizing marijuana and other mind-altering drugs (though I don't know when he began advocating legalization); and entrepreneurship. All but repealing the drug laws were ingredients of an American conservatism of the 1950s that was outside the mainstream of the Republican Party of the time, though it stopped short of the John Birch Society.
Apart from his libertarian streak, Buckley's policy positions were not, for the most part, sound. Joseph McCarthy appeared on the scene after the communist penetration (which was considerable) of the government had been eliminated by the Truman Administration. The southern states' rights movement was disreputable. Containment was probably the most sensible response to Soviet expansionism. And religion is not, in my opinion anyway, a good basis for public policy. Moreover, Buckley was a journalist, working under deadlines that resulted in most of his opinions being merely asserted rather than also well-supported. His policy positions were not fully coherent: His enthusiasm for rolling back the Iron Curtain did not sort well with his dislike of big government, since wars and heavy defense expenditures increase the size of government, as President Eisenhower was well aware.
The suggestion in the obituaries that he united free-market economists with other conservatives is especially misleading. Free-market economists have always been on a different track from the kind of political and social conservative that Buckley exemplified. He was a friend of free markets, but on moral grounds rather than because he thought the market a more efficient method of allocating resources than the government, though he thought that also.
The conservative economic movement has had two major streams, which are convergent. One is the Austrian school, whose best-known exemplar was Friedrich Hayek. Hayek argued powerfully that socialism doesn't work, because it does not enable the aggregation of the information required to operate a modern economy; for that, the price system is necessary, because prices impound and transmit information far more effectively than a centralized economic controller can do. Hayek's insight was vindicated by the collapse of the communist system. But his influence has been mainly in Europe, where it has been considerable, especially in the nations transitioning from communism.
The other stream, largely independent of the Austrian, originated with maverick economists, such as Milton Friedman, Aaron Director, and George Stigler, who at the height of the 1930s depression, when free-market economics was in the dog house and the Soviet Union's collectivist economy was widely admired including among economists, had the temerity (like Hayek) to argue that collectivist regulation of the economy was inferior to leaving the regulation of economic activity to the market. The school expanded slowly after World War II; Ronald Coase, a brilliant English economist who moved to the United States, was an influential critic of regulation. While Director and Stigler mounted a strong challenge to conventional views of antitrust, Stigler and especially Friedman challenged a wide range of governmental policies.
Other economists, and even a few economics-minded law professors, joined the free-market movement. But the movement received virtually no hearing during the 1960s, the era of the "Great Society" programs of Lyndon Johnson. However, the stagflation of the 1970s exposed the failure of conventional "liberal" (in the welfare-state sense) policies, promoted increased acceptance of free-market economics, and stimulated the deregulation and privatization movements, which began in the Carter Administration and expanded in the Reagan and (first) Bush Administrations, continuing into the Clinton Administration, notably with welfare reform.
All this had nothing to do with William Buckley. Most of the causes dearest to his heart were unrelated to economic policy, such as his beliefs about the proper strategies for defending against the Soviet Union, expelling Soviet agents from the federal government, or defeating our current enemies. Buckley was a strong opponent of abortion, whereas economists, while they can tote up the costs of forbidding or permitting abortion, do not, as economists, have any position on whether a fetus should have the same legal status as a newborn child. Economists might think that particular religious beliefs, such as Calvinism, with its emphasis on frugality and saving, promote social welfare, but they have no position on the truth of religion. They value markets because markets are efficient, not because people have a moral entitlement (as John Stuart Mill believed) to engage in any and all conduct that does not create a palpable harm to other people ("my rights end where your nose begins"). Markets to an economist are just instruments, and for solving particular problems there are sometimes better instruments.
What is true is that a political movement based solely on free-market economics could not have achieved political power under conditions of modern American democracy. Modern conservatism, to the extent that it is a coherent movement, combines free-market economics (to a degree) with political and social conservatism (tough on crime, strong on national defense, friendly to religion, critical of liberal social values, hostile to trial lawyers and judicial activism). It was not a movement created by Buckley, able journalist and polemicist though he was.
William F. Buckley and Economics--Becker
I agree with Posner that Buckley was not a major originator of ideas. He was, however, an absolutely superb public intellectual who had the courage to be a conservative when that was still highly unpopular, especially in the New York City circles that Buckley inhabited. He persuaded many young college students that a conservative stance is a respectable position intellectually, and to this end founded the political youth movement Young Americans for Freedom. He influenced Barry Goldwater and Ronald Reagan, although he was a much less important influence on their thinking, or on that of Margaret Thatcher and other conservative political leaders, than more creative thinkers like Milton Friedman and Friedrich Hayek.
I did not know William Buckley personally, but I admired his multi-talented contributions. He was a media entrepreneur, as seen from the influence of The National Review, a magazine that he founded when conservative magazines were not popular, and supported financially for many years. He wrote a widely read syndicated column On the Right, and hosted for many years Firing Line, a weekly television program that debated public policy issues. I have only occasionally read National Review or his columns, or watched his television programs, but I usually enjoyed and admired them when I did. When young he had repugnant views on several issues, like racial matters and McCarthyism, but he had the intellectual honesty to eventually repudiate most of these opinions. Over time he became the favorite conservative of liberals for his wit, use of language, and urbane debating skills.
Although a friend and skiing companion of Milton Friedman, Buckley had little interest in economic issues per se; he was not much concerned with the effects of taxation, regulation, and other economic policies. He recognized and accepted that his support of strong armies would lead to a much bigger government. He had a well-publicized conversion toward legalizing the use of drugs, a conversion influenced by Milton Friedman's strong stand in favor of legalized drugs.
Discussions by economists of regulation, competition, and other economic issues generally concentrate on their contribution to economic efficiency, and on the conditions under which competition and private enterprise promote efficiency. Friedman, Hayek, and George Stigler, three leading free-market economists of the twentieth century, were very much interested in these issues, but they also took a much broader view. For example, in Capitalism and Freedom, Friedman claims that greater freedom should be the goal of economic activity, and also discusses the connection between political and economic freedom: "economic freedom is an end in itself ... economic freedom is also an indispensable means toward the achievement of political freedom." Hayek's Road to Serfdom and Constitution of Liberty argue that a private enterprise system is crucial for the achievement of political and social freedoms.
Stigler in his Five Lectures on Economic Problems references with approval the classical economists' much broader approach toward the contributions of individual choice and private enterprise. By making individuals responsible for their decisions, competition and private enterprise force individuals to become more self-reliant, since this type of economic system provides much stronger pressures to act responsibly than do systems where individuals have their choices made for them by governments. In essence, this approach argues that competitive economic systems do not just (usually) satisfy individual preferences in an efficient way, but that these systems also change preferences themselves in valuable directions. Hayek claimed in Individualism and Economic Order that Adam Smith, and presumably Hayek as well, believed that "man was by nature lazy and indolent, improvident and wasteful, and that it was only by the force of circumstances that he could be made to behave economically or carefully to adjust his means to his ends."
Buckley was not interested in the technical discussions of economists, and the nitty-gritty economic issues they analyzed, but he was drawn to a generally free market position because he was attracted by the broader virtues of free markets and capitalism in encouraging economic, political, and civil freedoms. He did not care as much as economists do that, for example, agricultural price supports make for inefficient food production, or that tariffs on imports raised the cost to consumers of various goods. On the other hand, he did care very much that these and other interferences tend to stifle various freedoms. In this way he was, I believe, greatly influenced by the broader perspective of the effects of an economic system on individual responsibility taken by classical economists and the leading free market economists of his time.
Until the mid-1960s, female high school graduates were less likely than male graduates to go to college, and female college students were far more likely to drop out than were male students. The direct reason for this difference was that many younger women married and then dropped out of school, mainly to start having families. Perhaps a more basic reason for this gender difference in education was that women participated much less in the labor force in those days, and hence many women did not believe a college education would be useful.
All this has changed radically since 1970. Female high school graduates are now no less likely to enter college than are male graduates, and a much larger fraction of girls than boys finish high school. These two facts imply that considerably more women enter college than men. The fraction of college students who are female is further increased by the greater propensity of women who enter college to finish and graduate. About 57% of American college students are women, and they constitute about 60% of those who graduate. Similar trends toward making women a majority of college students apply to many European countries, and some Asian countries as well.
What explains this reversal from underrepresentation of women in college to overrepresentation (see the related discussion in my blog entry for July 17, 2006)? One important cause is that marriage and child-rearing exert a much weaker pull out of school for women than in the past, since women marry and start families at much later ages than 40 years ago. This increase in age at marriage is related to the decline in birth rates, and to the increased time that women want to spend working rather than caring for children and running households.
A college education is more attractive to women who spend greater time in the labor force, since going to college significantly raises earnings of women as well as men. The financial attractiveness of a college education has grown sharply for both sexes since the 1970s because of the large rise in the earnings premium from a college education. The average hourly earnings of college-educated persons grew from about 40% higher than those of high school graduates in 1980 to about 80% higher in recent years. This trend toward a much higher college education premium is found in many other countries as well as the United States.
Although quantitative evidence on non-earnings benefits is more limited, the advantages of a college education in improving health, raising children, managing financial assets, responding to adversity, and in other areas of life have also grown along with the growth in the college earnings premium. This implies a widening advantage of a college education even for women who spend a significant portion of their time raising children and managing a household. In addition, the propensity of college-educated women to be married has increased a lot relative to the marital rates of women with less education, so that graduating from college no longer significantly reduces a woman's chances of marriage.
Since these forces pushing women toward a college education have been strong during the past several decades, it is no surprise that a much larger fraction of young women now enter and complete college than a half century ago. This does not, however, fully explain why women are more likely than men to be in college since most of these forces have been just as powerful for men, and college-educated men still spend a larger fraction of their time working in the labor force than do college-educated women.
An important reason why women not only closed the education gap with men, but also changed the direction of that gap, relates, I believe, to the better performance of women in school. The average grades of women at every education level exceed the average grades of men, while the variation around the average is larger for men. Persons with low grades find school unpleasant since their teachers criticize them, and they come to believe that they are failures. Since many more boys than girls in high school have low grades because both average grades are lower and the variance in grades is greater for boys, more boys than girls find high school unpleasant and drop out before graduating. Dropouts truncate the grade distributions of graduates at the lower end, so that average grades of boys who graduate high school are closer to the average grades of girls who graduate than are the averages for all boys and girls in high school.
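The truncation argument above can be sketched with a small simulation. The parameters below (means, variances, and the dropout threshold) are purely illustrative assumptions, not actual data: boys are given a lower average grade and a higher variance than girls, and students whose grades fall below a cutoff are assumed to drop out.

```python
import random
import statistics

random.seed(0)

# Hypothetical grade distributions on a 4-point scale (illustrative only):
# boys have a lower mean and a higher variance than girls.
boys = [random.gauss(2.6, 1.0) for _ in range(100_000)]
girls = [random.gauss(3.0, 0.7) for _ in range(100_000)]

# Assume students whose grades fall below a threshold drop out.
cutoff = 1.5
boy_grads = [g for g in boys if g >= cutoff]
girl_grads = [g for g in girls if g >= cutoff]

gap_all = statistics.mean(girls) - statistics.mean(boys)
gap_grads = statistics.mean(girl_grads) - statistics.mean(boy_grads)

# More boys than girls fall below the cutoff, because the male
# distribution has both a lower mean and a fatter lower tail...
print(f"dropout rate, boys:  {1 - len(boy_grads) / len(boys):.1%}")
print(f"dropout rate, girls: {1 - len(girl_grads) / len(girls):.1%}")

# ...so truncating the lower tail narrows the female-male grade gap
# among graduates relative to the gap among all students.
print(f"grade gap, all students: {gap_all:.2f}")
print(f"grade gap, graduates:    {gap_grads:.2f}")
```

Under these assumed parameters the male dropout rate is several times the female rate, and the average-grade gap among graduates comes out noticeably smaller than the gap among all students, which is the convergence Becker describes.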
The same process operates at the college level. Men have much lower grades in college and find the experience less pleasant, so they drop out of college in much larger numbers than women, and are much less likely to graduate. That many more men than in the past continue on to college after high school indicates that they are aware of the rise in financial and other benefits from college. That they drop out of college in large numbers presumably indicates that they are either discouraged by their low grades, or they just do not like being students.
Why women at all ages do better in school than men is not so easily understood. It is unlikely that women do better mainly because they expect to remain in school longer (this would be causation from remaining in school longer to better grades), since women had better average grades than men even when they were more likely to drop out of school. One line of explanation argues that women are more diligent, less rebellious, and more docile students. Whatever the explanation for the remarkable shift in college attendance rates of men and women during the past 40 years, this shift is likely to have major implications for future changes in the gender gap in average earnings, the fraction of heads of businesses who are women, and other measures of gender differences in achievement.
It is no surprise that female enrollment in college has increased over the last half century. The later age of marriage and childbearing and the greatly increased job opportunities of women explain the trend. Another factor, stressed by Becker in his pathbreaking economic analysis of the family, is increased emphasis on quality rather than quantity of children; parental education is an important factor in the quality of children.
The fact that women tend on average to get better grades in college helps to explain their lower dropout rate, but this is nothing new; even in the era when women dropped out of college to marry and have children, they had higher grades than men. That women are better students than men is pretty much a constant--and a puzzle.
When one observes members of one group outperforming another in a competitive environment in which, therefore, substitution of inputs is possible, a possible explanation is discrimination against the members of the superior group. If a college wants to have the best students it can attract, and the women attending the college have better grades than the men who attend it, something is wrong: the school could increase the quality of its student body by admitting more women and fewer men. That it does not do so may be because it values other gender-dependent factors. For example, female students may prefer a lower ratio of female to male students than a purely meritocratic admissions policy would produce, and this preference may influence the college's admissions decisions. But this is unlikely to be a good explanation for the superior female academic performance today. The incentive to discriminate against female college applicants was much stronger in the old days, yet the female-male performance gap has not (so far as I can discover) diminished.
Women might outperform men academically because they work harder, and they might work harder because they have more to gain from completing college successfully and doing so with high grades. But as Becker points out, since male participation in the labor force continues (and probably will continue) to exceed that of women, and since there is a large wage premium for college graduates, men actually have more to gain from completing college than women do. Yet not only do they drop out at a higher rate, but male college enrollment has not increased nearly as rapidly as female college enrollment has. Women are not just catching up with men on the educational front; they are becoming better educated than men.
So there are two puzzles: why women get better grades than men, and why men have a lower elasticity of response to the effect of education on earnings than women do. At this stage of our knowledge, the answers to these questions must be highly speculative; what follows, then, is guesswork.
The first question is, though, I think, a little easier than the second. From the standpoint of most teachers, right up to and including the level of teachers of college undergraduates, the ideal student is well behaved, unaggressive, docile, patient, meticulous, and empathetic in the sense of intuiting the response to the teacher that is most likely to please the teacher. Those are traits less characteristic of boys than of girls. Moreover, there is more variance in IQ among boys than girls--to exaggerate, more morons and more geniuses--and both the morons and the geniuses are difficult for most teachers, the morons for obvious reasons, the geniuses because they are easily bored in a class geared to the comprehension of the average student. So girls are easier to teach, and so are "rewarded" (not deliberately) with higher average grades.
Nothing in the suggested answers to the first question, however, can explain why males should be less responsive to the growing value of a college education than females. One possibility is that there is nothing more that men can do to improve their academic performance, given genetic limitations. Notice the curious fact that the more the men in the lower tail of the male IQ distribution drop out at some stage in their academic career, the higher the average grades of the men who remain in school should be; the "genius" tail pulls up the average, while the "moron" tail, being depleted because of dropouts, pulls it down less than it would if the students in that tail did not drop out disproportionately and thus cease to figure in the determination of grades. Maybe the "genius" tail, because of the publicity that its members attract, has obscured the fact that women may on average be more intelligent, or at least have innately a suite of qualities more supportive of academic performance, than men. The key is "innately." If aggressiveness and other psychological or cognitive qualities that inhibit male academic performance are innate, men may have maxed out long ago, while until recently women were held back by factors extraneous to ability, such as lack of demand for women in high-skilled jobs.
Another possibility is that the decline of the conventional "patriarchal" family since the 1960s has been harder on boys than on girls. Because of rampant divorce and illegitimacy, a boy's biological father is less likely to be a continuous presence during the boy's formative years, and this is only one factor in what appears to be a decline in the disciplining of children. If docility is as I have suggested a factor in academic performance, a decline in discipline is more likely to harm the academic performance of boys than of girls because the former need more discipline to instill docility in them. It is difficult to test this hypothesis empirically, however, because grade inflation bedevils any effort to use changes in average grades over time as a measure of the trend in academic performance.
But, to repeat, these suggested answers to the puzzle of the gender education gap are highly speculative--a stimulus (I hope) to further thought, not the end of the inquiry.