There is concern about the possibility of a flu pandemic as lethal as, or more lethal than, the 1918-1919 Spanish flu pandemic, which may have killed as many as 50 million people worldwide; 500,000 died in the United States. A strain of avian flu first detected in 1997 has infected some 150 million birds, including chickens, ducks, and geese, mainly but not only in eastern Asia. More than 100 human beings have been infected, of whom about half have died. The victims were infected by contact with diseased birds rather than by contact with infected humans. As long as the only transmission is from birds to humans rather than from humans to humans, there will be no human pandemic. But the flu virus is notoriously mutable; if the current strain of avian flu mutated into a form transmissible from one infected person to another, it might spread rapidly through the human population. Stocks of vaccine for immunizing people against the avian-flu strain, and of drugs (mainly Tamiflu) for treating already infected people, appear to be inadequate. The Swiss pharmaceutical manufacturer Roche, the only producer of Tamiflu, has been reluctant to license its production to other manufacturers.
The probability of a pandemic is unknown, but probably significant because of the vast number of infected birds and the increasing number of infected human beings, in whom the virus might mutate into a form in which it was transmissible to other human beings. Flu pandemics have been frequent. There were two in the twentieth century besides the Spanish flu pandemic. They occurred in 1957-1958 and 1968-1969, and each killed more than a million people worldwide. All three twentieth-century pandemics involved strains of avian flu. There was also the swine-flu pandemic scare in 1976; the failure of a pandemic to materialize has engendered some skepticism concerning the likelihood of an avian flu pandemic. One of the most foolish forms of commentary on issues of public safety is to note the number of false alarms and infer from that number--entirely illegitimately--that there is nothing to fear.
The world in general and the United States in particular are unprepared for a flu pandemic. Although the current strain of avian flu was discovered eight years ago, vaccine development and production are just beginning, along with stockpiling of Tamiflu. Apparently there is at present only enough vaccine for 1 percent of the U.S. population. Roche has only a limited capacity for producing Tamiflu and, as mentioned, is reluctant to license other pharmaceutical firms to produce the drug. The President recently announced a $7.1 billion program for improving the nation's defenses against flu pandemics, but it will take years for the program to yield substantial protection.
So we are seeing basically a repetition of the planning failures that resulted in the Hurricane Katrina debacle. The history of flu pandemics should have indicated the necessity for measures to assure an adequate response to any new pandemic, but until an unprecedented number of birds had been infected and human beings were dying from the disease, very little was done.
The causes are the familiar ones. People, including policymakers, have grave difficulty taking measures to respond to risks of small or unknown probability. This is partly because there are so many such risks that it is difficult to assess them all, and the lack of solid probability estimates makes prioritizing the risks inescapably arbitrary; and it is partly because politicians have truncated horizons that lead them to focus on immediate threats to the neglect of more remote ones that may be more serious. ("Remote" in the sense that, if the annual probability of some untoward event is low, the event, though it could occur at any time, would be unlikely to occur before most current senior officials leave office.) But by the time a threat becomes immediate, it may be too late to take effective response measures.
There is also a psychological or cognitive impediment--an "imagination cost"--to thinking seriously about risks with which there is little recent experience. Wishful thinking plays a role too. There is the inverse Chicken Little problem: the illogical reaction that because the swine-flu pandemic never materialized, no flu pandemic will ever materialize. Another example of wishful thinking is the argument that most people afflicted by the Spanish flu in the 1918-1919 pandemic died not of flu, but of bacterial diseases such as pneumonia that the flu made them more vulnerable to. But, first, it is far from clear that "most" died of such diseases, and, second, the current strain of avian flu appears to be more lethal than the Spanish flu. Only about 1 percent of Spanish flu victims died, whereas 50 percent of known human victims of the current avian flu have died. That percentage is probably an overestimate because many of the milder cases may not have been reported or may have been misdiagnosed; but it is unlikely that the true fatality rate is only one-fiftieth of the current reported rate. It is estimated that even a "medium-level" flu pandemic could cause up to 200,000 U.S. deaths and a purely economic impact (that is, ignoring the nonpecuniary cost of death and illness) of more than $150 billion.
A specific problem with respect to preventing flu pandemics is the difficult economics of flu vaccines. Because of the frequent mutations of the virus, a vaccine may be effective for only one season, in which event the manufacturer must recover his entire investment in the vaccine in just a few months. The expected cost of the vaccine to the manufacturer is increased by his legal liability (a form of products liability) for injuries due to the side effects of the vaccine. If a large population is vaccinated, a percentage of the population, amounting to a very large number of people, will in the normal course experience illness in the months following the vaccination. Many of them will be tempted to sue, and uncertainty about the causation of an illness may enable a number of persons to recover damages who would have become ill anyway. This problem can be solved in a variety of ways: by requiring proof of negligence rather than imposing strict liability for side effects of vaccination; by increasing the burden of proving causation in vaccination suits; or by the government's undertaking to indemnify the producers for damages attributed to the vaccine. Even if such steps were taken, there would be a strong case for the government's financing vaccine development and procuring large quantities of vaccines for distribution as needed.
Measures along these lines are now being taken; and the government's agreeing to indemnify manufacturers for damages resulting from vaccine side effects would be a natural evolution from the National Vaccine Injury Compensation Program, created in 1986, which provides relatively modest "no fault" compensation for injuries caused by vaccination but does not preclude lawsuits against the manufacturers of the vaccine. However, measures not begun until the threat of a pandemic is imminent may be too little, too late.
A difficult question is compulsory licensing of patented or other proprietary flu vaccines. On the one hand, compulsory licensing would speed the production of vaccine; on the other hand, it would reduce the incentive of firms to develop new vaccines in the first place. The answer may be to combine compulsory licensing with generous research subsidies.
Hurricane Katrina and now the danger of an avian flu pandemic--one an actual, the other a potential, catastrophe for which the nation failed or is failing to prepare adequately--underscore the need for institutional reforms that will overcome policy myopia based on inability to plan seriously for responding to catastrophes of slight or unknown probability but huge potential harm.
Posner raises most of the important issues. I will make just a couple of points. Companies are reluctant to invest in developing vaccines or other protections against the avian and other types of flu not only because of their legitimate fear of excessive litigation by persons who claim to have been harmed by the vaccine. In addition, and more important in combating pandemics, they will be forced during a pandemic to allow the production of generics and other much cheaper substitutes for the effective drugs they develop, despite any intellectual-property rights they are supposed to have.
This has already happened with Roche's Tamiflu drug, which apparently offers some protection against avian flu. The Taiwan government has forced Roche to license the island's health department to produce Tamiflu as long as Roche does not supply enough to meet the needs of the Taiwan population. Roche has also caved in to demands by Indonesia and a couple of other countries, saying its patent does not prevent their production of Tamiflu. Yet so far there have scarcely been 100 human cases of avian flu. Can you imagine the pressures on any company to either give away its vaccine or allow cheap generic versions if a widespread pandemic develops that could kill millions of persons in many nations, as happened during the 1918-19 flu pandemic and the others discussed by Posner?
These negative incentives for companies to develop vaccines are all the more regrettable since the world's population would be willing to pay enormous amounts to have a vaccine available if a serious deadly pandemic developed. Economists have estimated from people's decisions about various life-threatening risks the amounts they would be willing to pay to reduce their risk of dying from an accident or from a disease. A young person in the United States is estimated to be willing to pay about $500 for a 1/10,000 decline in the probability of dying at each age. This means that 10,000 such young persons would be willing to pay in the aggregate about $5 million for such a decline in their risk of dying. The $5 million figure in this example is what economists call "the value of life" (to young persons for such risks).
Suppose a million individuals in the US alone were at risk of dying during a major pandemic. If $5 million is taken as the value of a life, this gives a total willingness to pay by the million persons of about $5 trillion, or about half of US GDP, for an effective vaccine to avoid getting sick and dying from the pandemic. This is a very rough estimate that may be too large since some very elderly persons would die, and they generally put less value on living a bit longer. On the other hand, it is more likely a gross underestimate of what the world would be willing to pay since many millions of persons would also die outside America. Moreover, it may be a large underestimate even for the US since people would generally pay more to avoid the very large risks posed by a deadly pandemic than to obtain the 1/10,000 improvement in risk that motivated the $5 million estimate I gave.
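The back-of-envelope arithmetic in the last two paragraphs can be sketched in a few lines. All figures below are the illustrative ones from the text (the $500 per-person payment, the 1/10,000 risk reduction, the hypothetical one million Americans at risk), not empirical estimates:

```python
# Sketch of the willingness-to-pay arithmetic behind the "value of life".

def aggregate_wtp(wtp_per_person, persons):
    """Total willingness to pay when each of `persons` people pays wtp_per_person."""
    return wtp_per_person * persons

# 10,000 young persons, each paying $500 for a 1/10,000 cut in their risk of
# dying, together pay for one "statistical life":
vsl = aggregate_wtp(500, 10_000)
print(f"value of a statistical life: ${vsl:,}")      # $5,000,000

# Scale up to a hypothetical pandemic putting one million Americans at risk:
total = aggregate_wtp(vsl, 1_000_000)
print(f"aggregate willingness to pay: ${total:,}")   # $5,000,000,000,000
```

The $5 trillion total is what makes the weak private incentives to develop vaccines so striking: the implied social value of an effective vaccine dwarfs any plausible development cost.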
So the world's population would be willing to pay a lot for an effective vaccine against avian flu, but companies are given weak incentives to spend a lot on developing such vaccines. That is the challenge posed to effective public policy, and I agree with Posner that so far the US and other governments have failed to meet the challenge.
An amazing number of comments, some however needlessly uncivil.
A very interesting comment suggests that, if seven years of exclusivity are enough to induce substantial expenditures on developing orphan drugs despite their small market, we should reexamine the need for the 20-year patent term. An even more radical possibility would be to jettison patent protection in favor of some variant of the Orphan Drug approach, a form of intellectual property protection that is much simpler than patent protection. But I recognize the force of the criticisms of the Act in the excellent comment by "SteveSC."
Another comment asks whether the alternative uses to which resources would be put if there were no Orphan Drug Act would contribute as much to social welfare, or more; the commenter says that "it does not seem that a drug like Viagra is nearly as useful as say, one treating cancer." This proposition has great intuitive appeal, but it is (speaking of useful) useful to distinguish between the utilitarian and economic perspectives. Economists generally measure the welfare effects of a new product by willingness to pay rather than by subjective satisfaction (pleasure, happiness, freedom from pain, etc.). From that standpoint, a drug like Viagra that has a huge potential market might be more "valuable" than a drug that treated a cancer from which only a tiny number of people suffer. I am not suggesting that the economic criterion of welfare should be the only one employed by government. But I insist on the relevance of the economic perspective--and here I quote the commenter who said "It is in fact appropriate to ask--ad absurdum--whether an Act resulting in pharmaceutical companies spending billions to find a cure for a disease whose only victim [was] Bill Gates, instead of spending them on research that might benefit thousands or even millions of Americans, would in fact have negative benefits [for] the rest of us." No offense intended to Mr. Gates; it is nevertheless a worthwhile question.
A pair of excellent articles by Geeta Anand on the front page of the Wall Street Journal for November 15 and 16 discusses the little-known but very costly Orphan Drug Act of 1983. The Act is designed, mainly by providing expanded intellectual-property protection (there are also tax incentives and research subsidies, but they are considered less important), to encourage the creation of drugs for the treatment of rare diseases, defined as diseases that afflict no more than 200,000 Americans at any given time. Partly because different cancers are classified as different diseases, an estimated 25 million Americans have a rare disease as defined by the Act.
A company that is first to obtain the Food and Drug Administration's approval to sell such a drug has the exclusive right to sell it for seven years. Although this is shorter than the term of a pharmaceutical patent (normally 20 years), establishing patent eligibility is a far more difficult and protracted undertaking and a patent once obtained is subject to court challenges that often succeed in invalidating it.
The expansion in intellectual-property rights brought about by the Orphan Drug Act makes the following economic sense: The incentive to create an intellectual work is a function of the size of the potential market for it. The reason is that, by definition, the principal costs of such a work are fixed costs, incurred before the first sale is made; in the case of orphan drugs, they are the cost of R & D plus (what is often greater) the cost of clinical testing, and they greatly exceed the costs of actually producing the drug. The larger the market, the lower the fixed costs per sale, and so the less the seller has to charge in order to recover those costs. If fixed costs are 100 and variable cost (the cost of producing one unit of the product) is 1, then if there are 10 customers the producer must charge each at least 11 (100 divided by 10, plus 1) to break even, but if there are 100 customers he can break even at a price of 2 (100 divided by 100 plus 1). Hence the rarer a disease, and thus the smaller the potential market for a drug to treat it, the higher the price that the producer must charge in order to break even. His ability to charge that high price will depend on his ability to exclude competition; a producer allowed to duplicate the new drug could undercut the price charged by the original producer yet make a large profit because he would not have borne any R & D costs. The higher the break-even price and therefore the greater the profit opportunity for a competitor, the likelier that competition will quickly erode the price and prevent the original producer from recovering his fixed costs. Giving the original producer more than the usual protection against competition that the law provides to creators of intellectual property is thus a method of increasing the incentive to create drugs that have only a small potential market because relatively few people suffer from the diseases that the drugs treat.
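Posner's fixed-cost arithmetic can be sketched as follows. The numbers are his illustrative ones; discounting, uncertainty, and profit margins are ignored:

```python
# Break-even price for a product with large fixed costs: the producer must
# spread those costs over however many customers the market contains.

def break_even_price(fixed_cost, variable_cost, customers):
    """Lowest price per unit at which total revenue covers all costs."""
    return fixed_cost / customers + variable_cost

print(break_even_price(100, 1, 10))          # 11.0 with 10 customers
print(break_even_price(100, 1, 100))         # 2.0 with 100 customers

# The orphan-drug case from the following paragraph: $500 million in
# development costs spread over only 50 patients (variable cost ignored):
print(break_even_price(500_000_000, 0, 50))  # 10000000.0, i.e. $10 million each
```

The inverse relation between market size and break-even price is the whole economic rationale for the Act's extra exclusivity: the smaller the market, the higher the price the producer must sustain, and the more damaging any early competitive entry is.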
This is not just a theoretical point. The fixed costs of a new drug are indeed high, even if the industry-sponsored figure of $800 million is, as I believe, an exaggeration. This means, moreover, that even without a threat of competition, the incentive to develop a new drug that would have very few buyers would often be insufficient to induce that development. Suppose a drug cost $500 million to develop and had only 50 potential customers. Then each would have to pay (over his lifetime) $10 million (actually more, because of discounting to present value) to enable the producer to cover its fixed costs. Health insurers might be unwilling to pick up such a tab.
The success of the Orphan Drug Act in encouraging the creation of orphan drugs (more than 200 such drugs have been approved since the Act was passed, compared to only 10 in the preceding decade), which in 2003 had total worldwide sales estimated at roughly $28 billion, confirms the economic analysis and shows that intellectual-property protection can have important incentive effects. But has the Act produced a net gain in economic welfare? That is less certain. Of course many people have benefited from the drugs. But the costs per benefited person are frequently astronomical; that is implicit in the rationale for giving producers of such drugs increased protection against competition. The costs are especially high for those orphan drugs, apparently the majority, that alleviate symptoms or prolong life but do not cure the disease, so that the patient has to take them for the rest of his or her life. The Wall Street Journal articles give an example of a woman who suffers from Gaucher disease and spends (or rather her health insurer spends) $601,000 a year for the drug, Ceredase, and its administration. Because by definition the percentage of people who suffer from rare diseases is small, it is feasible for health insurance to cover such extraordinary expenses, provided the insurance pool is large. And Ceredase is at the high end of orphan drug expense.
Resources for medical research are finite. The Orphan Drug Act sucks large research expenditures into creating treatments for rare diseases. Without the Act, those resources would be channeled by the market into other investments that might produce a higher social return. The English economist Arnold Plant pointed out many years ago that if the law protects some monopolies, as by granting patents or equivalent intellectual-property protection, the profit opportunities that such protection creates (Ceredase generates an estimated 25 percent annual rate of return on investment for its producer, Genzyme Corp.), which are not generally available in the economy, may attract into the monopoly markets resources that would produce greater consumer welfare if invested in production in competitive markets. As a result of competition, the price of television sets is much less than the price that people would be willing to pay if the sale of television sets were monopolized; the difference is "consumer surplus" and is a measure of the net value that the industry creates. For all one knows, the consumer surplus that would be generated if the resources now devoted to developing orphan drugs were channeled into competitive markets would exceed the net benefits of those drugs, bearing in mind that there are few beneficiaries. The number of people who take orphan drugs is far fewer than the total number of people with rare diseases. Indeed, apparently only 200,000 Americans are taking such drugs. Assuming that most global expenditures on orphan drugs are for Americans (I'm just guessing--I do not have U.S. figures), this would be an average expenditure of $100,000 ($100,000 times 200,000 equals $20 billion). Few people would be willing, if only because few people would be able, to spend anywhere near this much on drugs.
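As a quick check on the per-patient figure in the paragraph above, under Posner's own stated assumptions (he is explicit that the $20 billion US share of the roughly $28 billion worldwide market is a guess):

```python
# Rough per-patient orphan-drug spending implied by Posner's assumptions.
us_spending = 20_000_000_000   # assumed US share of ~$28 billion in global sales
us_patients = 200_000          # Americans estimated to be taking orphan drugs

per_patient = us_spending / us_patients
print(f"${per_patient:,.0f} per patient per year")   # $100,000
```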
As the economist Tomas Philipson points out, however, if people who do not suffer from rare diseases derive a benefit from orphan drugs--whether because they are altruistic or because they fear that they or members of their families might develop such a disease--then the total social surplus created by the Orphan Drug Act may exceed the consumer surplus. Yet if the R & D expenditures induced by the Act were channeled instead into developing drugs for equally serious but much more common diseases, this might well be preferred by most people.
I agree with most of Posner's discussion that as usual is presented very clearly. But I appear to differ on one issue that I believe is important.
The Orphan Drug Act of 1983 greatly expanded research on rare diseases, which has resulted in the discovery of many more drugs that successfully treat such diseases. Yet R&D spending on these drugs still takes only a small share of total spending on R&D by biotech and pharmaceutical companies. This is why I doubt, though I cannot prove it without much additional research, that the R&D spending on orphan drugs stimulated by this Act significantly affected spending on finding treatments for more common diseases. It probably mainly increased total spending on medical R&D by a modest amount.
If this conclusion is correct, is it a mistake to have the Orphan Drug Act give greater intellectual-property protection to drugs that treat rare diseases because these diseases would attract little research effort without better protection? I follow Posner initially and ignore the tax benefits and research subsidies provided by the Act. Suppose that only because of the better patent protection provided for seven years, a biotech company develops a drug that treats a rare disease with a small market, and charges a high price, as in some of the examples given in the Wall Street Journal articles. Assume to start the analysis that persons with the disease treated by this drug pay for treatments from their own resources, and that enough of them can pay so that the biotech company can cover, perhaps more than cover, its development and production costs.
Surely not only the biotech company, but also persons with the disease, are better off because the drug was developed as a result of the Act, even though they have to pay a lot. If they were not better off, they would not be willing to pay the high price demanded. Since it is a win-win situation, in such cases it is obviously helpful to persons with rare diseases to have an Act that stimulates the development of drugs that treat their diseases.
The analysis is not greatly different if private health insurance providers voluntarily cover rare diseases, as the woman with Gaucher disease chronicled by the Wall Street Journal discovered after great effort. As Posner indicates, insurance companies might be willing to cover rare diseases since such coverage does not raise premiums very much for other persons who are insured. This would be a strictly business decision by the insurance industry if made without political pressure. So voluntary private insurance coverage of persons with rare diseases does not materially change my favorable evaluation of the Orphan Drug Act.
The hard cases arise when the high prices charged for drugs that treat rare diseases are paid not by persons with the diseases, but by the government through Medicaid, Medicare, or other publicly funded health programs. Then taxpayers rather than persons with these diseases or private insurance companies may foot most of the cost of developing drugs that treat rare diseases. Should taxpayers be asked to pay $100,000 per year (Posner's estimate of the average cost of the drugs developed for rare diseases) for drugs that can keep persons with rare diseases alive for many years? The answer is not obviously yes, although as Tomas Philipson has argued, government coverage might be justified if taxpayers are concerned about the welfare of persons who are unfortunate to have these diseases, or as a way to provide insurance protection against the risk of being born with rare genetic defects.
Medicare pays enormous sums to hospitals, nursing homes, and drug companies to keep elderly persons alive sometimes for only a few additional months. Yet the justification for doing this seems weaker than using government funds to pay for expensive drugs that enable young persons with rare diseases to live fairly normal lives for many years rather than dying at young ages. Perhaps Medicare should not pay a lot to keep elderly persons alive for a short period, but I do not believe the case for government payment of the cost of treating persons with rare diseases can be analyzed in isolation from a more general consideration of what type of health care should be provided out of government funds.
The Act also would look less favorable if, as is likely, biotech and other drug companies sometimes reclassify the markets for new drugs to help them qualify for the Act's benefits. I also have doubts about the wisdom of the provision that allows drug companies to immediately write off their R&D spending on orphan drugs.
So an overall evaluation of the Orphan Drug Act is not easy. Still, it might well be desirable to give stronger patent protection to drugs with small markets that treat rare diseases in order to induce the development of such drugs.
Response on Riots in France--BECKER
Although there is some dispute for the US about the magnitude of the effects of the minimum wage on the unemployment of low-skilled persons, the French results on the effects of its minimum wage are far clearer. The clearer results for France (see the work of Bernard Salanie) probably reflect the fact that the French minimum has been much higher relative to the average wage than the American minimum has been. Note that higher minimums might raise family income inequality not only because they reduce employment, but also because many of those benefiting from higher wages are teenagers and others in fairly well-off families.
For the most part (with one or two exceptions) I avoided attributing causes to the French riots. Instead, I stressed that the French labor market system in effect discriminates against Muslim and other immigrants. I suspect that the low wages, high unemployment, and poor education of African immigrants contributed to the riots there, but at this point we can hardly be confident about that conclusion.
Let me add that unemployment rates in France and the US are determined in similar ways. It is not true that persons who have been unemployed for more than six months are no longer considered unemployed in American data. They are considered to be unemployed if they do not have jobs and are looking for work.
I do believe there is often anti-competitive behavior by economic and other elites. It usually operates through discriminatory legislation. Many of the labor laws in Europe favor the elites--in this case, the insiders with good jobs--and hurt outsiders who are looking for decent pay and employment.
I do not believe there is anything in Islamic law, and my wife who teaches Islamic history confirms this, that prevents good Muslims from giving allegiance to non-Islamic states. However, when Muslims are segregated into separate suburbs, as in most French cities, it presumably is easier to generate hostility to the ruling authorities, and even support for more radical forms of Islam.
In the US, new immigrants do generally have jobs, even when they are unskilled. Moreover, immigrants can rise over time in the economic ladder more easily here than in Europe. This does not mean there will be no riots by any American immigrant groups, but I believe they are less likely.
Much has been written about the rioting by mainly Muslim youths of African descent in France, but few discussions have related it to the race riots by African-American youths in the 1960's. The lessons from those earlier riots are disturbing, but they have a couple of reassuring aspects as well.
Many economists have recognized for more than a decade that the generous minimum wages and other rigidities of the French labor market caused unemployment rates that have remained stubbornly high since the early 1990's. Immigrants, youths, and other new entrants into the labor market have been hurt the most since they have had the greatest difficulty finding jobs. The overall French unemployment rate is now almost 9 per cent--compared to about 5 per cent in the US--with a rate over 20 per cent for young persons. About 40 per cent of the unemployed have been without a regular job for over a year, a rate that is far higher than the American long-term unemployment rate. The French have intentionally avoided collecting separate economic data on Muslims, but the Muslim unemployment rate is estimated by labor economists in France at more than 20 per cent, with the unemployment rate for young Muslims probably exceeding 30 per cent.
The French labor market is sick, and needs reforms to make it more flexible, so that "insiders" with jobs have less of an advantage over "outsiders" looking for work. These reforms include making it easier for companies to let go of workers without expensive severance pay packages, lower minimum wage levels--the French minimum is one of the highest anywhere--reduced regulatory barriers to the formation of new companies, and lower social security and other taxes on employees. If the riots help exert greater pressure on French politicians to greatly free up the French labor market, they will have been of some value not only to Muslim youths, but also to all other French men and women who have been priced out of jobs.
An old and well-established rule of life is that the thoughts of young men turn to mischief when they have lots of time on their hands. Muslim and other African youths in many poor outer-city suburbs, the notorious banlieues, clearly have had lots of free time because many drop out of secondary school before receiving a diploma, and then they cannot easily get jobs. Seemingly small events, such as the violent accidental deaths of two youths in the French case, often set off a series of reactions that spread by word of mouth, and in these modern days also by cell phones and the internet. Copy-cat behavior--burning cars has been a favorite activity in the French riots--has spread to different poor French suburbs with African immigrants outside Paris, and then to the banlieues surrounding other cities.
The race riots in the US during the 1960's also started from what in retrospect looks like misinformation and relatively minor events. Yet there were more than 750 riots during the period 1964 to 1971 (the Watts riot was in 1965) that killed over 200 persons and injured thousands of others. After more than 10,000 incidents of arson, many black communities were in ruins.
Sociologists and economists have not succeeded in explaining which cities had riots and which avoided them. The likelihood of a riot is not explained by differences among cities in the black unemployment rate, in black incomes relative to those of whites, in rates at which blacks were advancing economically, in the education of blacks relative to whites, and so on for many other variables. Cities with relatively many blacks were more likely to have riots, and Northern cities were far more likely to have race riots than cities in the South, even though blacks were more numerous and worse off in the South.
Segregation of blacks into largely separate neighborhoods is an important factor, but practically all cities in the North with significant numbers of blacks have been highly segregated. It is interesting that Marseilles is one of the few major French cities that essentially escaped any rioting (at least so far). Its large Muslim population is not segregated into poor suburbs, but Muslims live in many different parts of Marseilles.
Although the cities and neighborhoods that experienced American race riots in the 1960's cannot be well explained even in retrospect, the economic position of blacks in rioting cities did suffer badly. The economic historian Robert Margo and a colleague at Vanderbilt examined the effects of the '60's riots on employment, incomes, and property values. They found that from 1960 to 1970 median black family income dropped by about 9 percent, and the median value of black-owned homes dropped even more, in cities with major riots compared with similar cities without such riots. From 1960 to 1980, male employment in cities with severe riots dropped several percentage points compared with otherwise similar cities.
This analysis suggests that the suburbs with riots in France will also suffer compared to Muslim and other African immigrant communities that did not riot. One bit of good news from the American riots for France and its Muslim population is that they have not recurred on a large scale in the subsequent more than 30 years. For example, the riots in black communities of Los Angeles in 1992, which began after a videotape shown on television graphically depicted the beating administered by LA policemen to a black man, Rodney King, caused considerable damage, but they did not spawn many copy-cat riots in other American cities. Perhaps the negative effects of 1960's rioting on the jobs and wealth of blacks influenced their behavior during later tense periods.
It is worth noting that whereas black families advanced a lot economically relative to white families during the 1960's and 1970's, my colleague Derek Neal has shown that the economic position of black families relative to that of white families has fallen considerably since 1980. This is despite greater affirmative action, which may have benefited only a small number of blacks. The main causes of the decline since 1980 are a further fall in the stability of black families, and the widening skill differential in earnings that started in the late 1970's. This decline in the relative position of blacks did not lead, as I indicated earlier, to any resumption of large-scale rioting. Although black unemployment has remained about twice that of whites, young blacks have been far more likely to find jobs than young Muslims in France are.
Perhaps these riots will give greater power to the few politicians in France who recognize that important economic reforms are needed to help all young Frenchmen get jobs, and to allow them to advance in the economic hierarchy when they demonstrate the requisite talent and ambition. Economics cannot predict with any confidence how such reforms will affect the prospects of further riots, but these reforms would surely improve the position of young immigrants, regardless of their religion or country of origin.
It is tempting to attribute the recent riots to the failure of the French (more broadly the Continental) economic model, in particular job protections (mandated fringe benefits, minimum wage, and tenure) that make employers reluctant to hire (because labor costs are so high and bad workers so difficult to fire). The least productive workers are hurt worst by such a system--hence the enormous unemployment rate among French of African (mainly Algerian) origin--20 percent or higher.
But the United States, with its much more open economy, has its own history of race riots. The riot in 1965 in the Watts district of Los Angeles resulted in 34 deaths. Race riots in Detroit and Newark in 1967 resulted in another 70 or so deaths. The race riots that broke out in April 1968 after Martin Luther King, Jr. was assassinated spread to 110 cities, the worst hit being Washington, D.C. And in 1992 the beating by police of Rodney King led to another major race riot in Los Angeles. The recent French riots, however, have been more widespread even than those of April 1968, though they have involved remarkably few deaths (one, at this writing) and apparently very little looting.
Riots either of the American race-riot variety or the recent French ethnic-riot variety (most Algerians are white rather than black) are mysterious phenomena. They are not concerted, and so, in contrast to political riots such as the one that occurred at the 1968 Democratic Convention in Chicago, they are difficult to understand in instrumental terms, as efforts to extract concessions from the government. Although the April 1968 race riots involved looting, the net economic effect, according to a study by economists William Collins and Robert Margo mentioned by Becker, was to depress the value of black-owned property. Undoubtedly insurance rates for stores in black neighborhoods rose as well and were passed on in part to consumers.
Other things being equal, one would expect unemployment to increase the likelihood and scope of a race riot, because the unemployed have lower opportunity costs both of rioting and of being jailed. Becker, however, cites a study that finds that the likelihood of a race riot in the United States is not correlated with black unemployment. Residential segregation can be expected to increase the likelihood of rioting, because it produces a concentration of people having similar propensities. The rioters don't have to assemble far from their homes in order to form a critical mass of rioters; the need to agree on a time and place at which to assemble would reduce the likelihood of a spontaneous riot. Poor information, which allows inflammatory rumors to spread, is still another plausible causal factor in riots; likewise youth, because young people have less aversion to risk and violence than mature people; and of course anger, which may be induced or aggravated by discrimination and inequality. But as for economic differences between France and America that can be traced to our more open labor markets, probably the only one significant for the likelihood of riots is the much higher French unemployment rate, and even its significance is somewhat doubtful in view of the lack of correlation between riot propensity and black unemployment in the U.S. history of race riots.
Several other differences between France and the United States may be as important as or more important than the difference in unemployment rates. One is that the French appear to have a much greater propensity to riot, or to engage in other riot-like direct action, than the citizens of other countries. French truckers and farmers are notorious for direct action, as in blocking roads, in order to enforce their demands. In 2003, a plan to reduce civil servants' pensions provoked wildcat strikes by tens of thousands of civil servants. Why the French have this propensity I don't know (it probably is not French economic policies, which are similar to those of most European countries), but it suggests a lower riot threshold than in the United States.
Another relevant consideration is that the French, like most Europeans, are much less welcoming to foreigners than Americans are. This is one reason that we have not experienced and are unlikely to experience riots by Muslims, even though there are several million of them in the United States. Direct comparison with France is difficult, however, because Muslims are a far higher proportion of the French population (roughly 10 percent to our roughly 1 percent). There is little discrimination against American Muslims, in part because most of them are solidly middle class. No doubt our free labor markets have enabled them to achieve middle-class incomes. But it is possible that even if the French had free labor markets, French insularity would result in discrimination. After all, that was the U.S. experience with blacks: our race riots invariably occurred in northern states, in which blacks had the same legal access to jobs and education as whites but were nevertheless subjected to serious private discrimination in the prime riot era of the 1960s.
Another factor in the recent French riots may be the French refusal to engage in affirmative action. The French are reluctant even to collect statistics on the number of people in France of various ethnicities, their incomes, and their unemployment rates. No effort is made to encourage discrimination in favor of restive minorities (as distinct from women, who are beneficiaries of affirmative action in France) and as a result there are very few African-origin French in prominent positions in commerce, the media, or the government. Affirmative action in the United States took off at approximately the same time as the 1967 and 1968 race riots, and is interpretable (so far as affirmative action for blacks is concerned) as a device for reducing black unemployment, creating opportunities for the ablest blacks to rise, promoting at least the appearance of racial equality, and in all these ways reducing the economic and emotional precipitants of race riots. Of particular importance, affirmative action was used to greatly increase the fraction of police that are black, while the "community policing" movement improved relations between the police and the residents of black communities. French police, traditionally brutal, have by all accounts very bad relations with the inhabitants of the Muslim slums. The French riots are a reminder that affirmative action, although offensive to meritocratic principles, may have redeeming social value in particular historical circumstances.
Thanks for very good comments. First, my apologies to Senator McCain for a typo in the spelling of his name.
Campaign contributions do involve rent seeking, but they are not all socially wasted if they lead to desirable political outcomes. If I knew of an effective way to cut down contributions without affecting outcomes I might support it. But no one has ever come up with proposals that do not have huge loopholes. Since I indicated in my discussion that total spending on campaign contributions is rather small, I believe we are better off allowing great freedom to contribute rather than trying to cap contributions, or in other ways cut down on their wasteful components.
It is not useful to say that democracy should involve cooperation rather than competition. Every system of government must have a method for actually reaching political decisions, as opposed to simply describing our hopes for how decisions should be reached. Most modern approaches to democracy since Joseph Schumpeter's discussion in Capitalism, Socialism, and Democracy have involved important aspects of political competition as a way of reaching political decisions.
The reason political freedom is useful is not that top leaders always emerge, but that one avoids the sometimes terrible policies of totalitarian leaders. A free press and political competition do not guarantee good outcomes, but they help reduce the likelihood of the really awful outcomes produced by a Stalin, Mao, or Hitler.
Advertising in politics or the economy helps unknown leaders or unknown firms break into a political or economic market. To me that is a great advantage of all advertising. Nor is it clear that advertising raises prices of the products advertised (see the discussion of the theory and evidence in G. S. Becker and K. M. Murphy, "A Simple Theory of Advertising as a Good or Bad", Quarterly Journal of Economics, Nov. 1993).
I agree that I should have been more careful in distinguishing self-financing of campaigns from campaign contributions more generally. The evidence I cited on the small effect of campaign contributions on outcomes refers to general contributions. James Snyder gives a good summary of this evidence in an article in the Winter 2003 issue of the Journal of Economic Perspectives.
A rich set of comments. Several suggest that the solution to the "soft bribery" problem is to require that all campaign contributions be anonymous; then no one could prove that he had contributed to a particular candidate. The problem is that, since "soft bribery" is an important motive for contributions, the total amount of contributions, and hence of political advertising, will fall, and so there will be reduced dissemination of political information. That is a loss. I do not know whether it would exceed the gain from reducing the amount of soft bribery, but it might well. The brunt would be borne by new entrants, who need to advertise more in order to make a dent in the "brand recognition" of incumbents. In addition, the wealthy, who are the big donors, are not a monolith; they have competing interests and therefore provide virtual representation for many ordinary people, such as the employees of the big corporations. Also the wealthy do not have the votes; their political advertisements are aimed at average people. Furthermore, if some candidates court the wealthy, this will drive others to raise money from the nonwealthy, something that the Internet has made easier to do, as we learned in the 2004 presidential election. The nonwealthy give less per capita, of course, but there are vastly more of them.
Still another point is that even the wealthy do not care solely about policies likely to benefit them. They also care about leadership, always a major focus in a presidential election.
I agree with the comment suggesting that increased political advertising could reduce turnout. Politicians are interested not in maximizing turnout but in winning, and a winning strategy may be to depress turnout if higher turnout would produce more votes for one's opponent. Negative advertising might provoke counter-advertising that is also negative, the net effect of which would be to reduce turnout but to the advantage of the candidate who initiated the negative campaign.
I do not agree, however, that advertising in commercial markets is likely to depress output (the analog of turnout in the electoral market). The comment that argues this points out that in a cartelized market, that is, in a market in which the sellers have agreed not to compete in price, there is a tendency for nonprice competition, including advertising, to increase, as sellers vie to engross the largest possible share of the profits generated by the cartel price. I don't see how forbidding advertising in such a setting would result in higher output; it would simply increase the sellers' profits at the cartel price. On the contrary, by reducing the erosion of cartel profits through nonprice competition, the advertising ban would tend to make the cartel last longer. But in any event there is no price competition in the political market because politicians can't buy votes directly. Advertising (broadly defined) is the only permitted method of competition.
Finally, I disagree with the suggestion, common though it is, that unlimited campaign spending impairs democracy by giving political power to the wealthy, or more precisely to any individuals or groups able and willing to spend disproportionately to support particular candidates or policies. The suggestion confuses democracy with equality. Democracy is the political system in which the principal officials are forced to stand for election at short intervals. The identity and policies of the officials may well be influenced by the underlying distribution of income and wealth in society, but that does not make the society less democratic.