Chicago's Approach to Big Boxes - BECKER
The City Council of Chicago recently passed an ordinance that makes Chicago the largest city in the United States to impose special wage and fringe benefit requirements on "big box" retailers. The ordinance requires that beginning next July, companies with more than $1 billion in annual sales and stores in Chicago with at least 90,000 square feet of space will have to pay Chicago employees a minimum of $9.25 an hour in wages and $1.50 an hour in fringe benefits, such as health insurance. By 2010 these will rise to $10 an hour in wages and $3 an hour in benefits. These minimums far exceed Illinois' minimum wage of $6.50 per hour. About 40 existing stores in the city would be affected.
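The gap between the ordinance's floors and the state minimum can be made concrete with a little arithmetic. The hourly figures below are from the post; the 2,000-hour year is an assumption of mine (a 40-hour week with two weeks of vacation), used only to annualize them:

```python
# Wage-floor arithmetic for the Chicago "big box" ordinance.
# Hourly figures are from the post; the 2,000-hour full-time year is assumed.

HOURS_PER_YEAR = 2000

floors = {
    "Illinois minimum (wages only)": 6.50,
    "Ordinance, initial":            9.25 + 1.50,   # wages + mandated benefits
    "Ordinance, by 2010":            10.00 + 3.00,
}

for label, hourly in floors.items():
    print(f"{label}: ${hourly:.2f}/hr -> ${hourly * HOURS_PER_YEAR:,.0f}/yr")

# Note that the 2010 floor of $13.00/hr is exactly double the state's $6.50.
assert floors["Ordinance, by 2010"] == 2 * floors["Illinois minimum (wages only)"]
```

On these numbers, the ordinance's initial floor raises a covered retailer's per-worker labor cost by $4.25 an hour over the state minimum, and by $6.50 an hour once fully phased in.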
The ordinance was supported by 35 of the 49 aldermen on the Council despite the vehement opposition of Mayor Richard Daley, who in the past could dictate the Council's policies. The mayor is right to be opposed, for it is indeed a bad ordinance, and it will hurt the very groups, African-Americans and other poor or lower-middle-class individuals, that supporters claim would be helped.
The ordinance will raise the cost to Wal-Mart, Target, Home Depot, and other big retailers with mega stores of using low-skilled labor in Chicago. Even without it, large cities are not attractive to mega retailers because space for large stores and for the parking they require is much more expensive in cities than in suburbs and smaller towns. These big box stores are much more common in suburbs of large cities than in the cities themselves partly for this reason, and partly because many suburban communities offer tax and other financial subsidies to these stores in order to induce them to locate there.
Even if retailers with mega stores were trying to cater at least in part to the Chicago market, this ordinance makes them more likely to open up in suburbs that could be reached by some Chicagoans as well as by those living in the suburbs. Large retailers that continue to operate in Chicago will reduce their use of low-skilled workers by replacing some of them with more skilled employees, and with machinery and other capital. Retailers will also try to avoid being covered by the ordinance by reducing their space to just below 90,000 square feet.
In a city like Chicago the burden from these responses to the ordinance will fall disproportionately on African Americans and Latinos since fewer jobs will be available to workers in the city with less education and lower skills. In addition, prices in Chicago of items sold relatively cheaply by stores like Wal-Mart and Target will rise because fewer of these stores will open in the city. The mega stores that remain will raise their prices because their costs will go up. Since city customers of these stores are mainly families with modest incomes who seek low prices rather than elaborate service, they more than the affluent classes will be hurt by the rise in prices and reduced availability of big box outlets.
Who would favor such a bad ordinance that will harm the very groups it is claimed to help? Support for the ordinance from more conventional supermarket chains and clothing stores is easy to understand since the mega stores drain away customers and force prices down. The absence of opposition from low-income consumers who shop at these stores is not surprising since they are not well organized to exert political pressure on the City Council.
The strong backing of the ordinance by Chicago unions is also to be expected. Unions always favor increases in minimum wages, even when, as in this case, the minimum applies only to some employers. Any increase in the minimum wage would raise the demand for unionized skilled workers who would substitute for the less skilled employees displaced by the minimum.
Unions have an additional reason to try to raise the costs of big box companies like Wal-Mart, since these companies do not have unions and aggressively oppose them. Higher costs forced on non-union companies reduce the competition they offer to unionized companies. Perhaps of even greater importance, this ordinance helps demonstrate that unions have the political clout to make operations more costly and difficult for large non-union retailers. To ward off further discriminatory ordinances, these companies could be forced to adopt a more favorable stance toward unionization of their employees.
It is more difficult to understand the aggressive support of the Chicago ordinance by most African-American members of the Council and other leaders of the African-American community. However, it should be noted that some of those who represent predominantly African-American communities voted against the ordinance, including Leslie Hairston who represents the 5th Ward (where I live). Not only will fewer jobs be available for African-Americans, but also the prices they pay for food, clothing, and many other retail goods would go up.
One explanation for why most African-American leaders support the ordinance is that they are politically allied with unions and possibly other groups that benefit from this ordinance. These leaders may recognize that their constituents will generally be harmed by the ordinance, but in return for taking this hit they expect the support of unions on issues like more generous Medicaid support that help low income families.
Clearly, this ordinance might raise serious Federal constitutional issues because of its discriminatory treatment of large retailers. Since to my knowledge the City Council has not offered any plausible reason for basing the ordinance on square footage of floor space, it is likely to be considered a violation of equal protection of the laws.
Still, ordinances like this one are dangerous not only because of their direct harmful effects, but also because they encourage future legislation that could apply similar and additional requirements to stores like McDonald's and other smaller stores. It also encourages interference in other markets, such as the proposal now before the Chicago Council that would require residential developers to include a certain percentage of "affordable" housing units in their developments. So Mayor Daley is right to oppose this ordinance, and he should veto it, even if the veto will be overridden.
Becker's comprehensive analysis leaves me with little to add, especially as I am not permitted to comment publicly on the constitutionality of the "big box" ordinance because (if it does go into effect) its constitutionality is likely to be challenged, and in my court to boot.
The first-order economic analysis of minimum wage laws shows that they reduce employment by raising the price of labor; the Law of Demand teaches that an increase in the price of a good reduces the quantity of it that is demanded. A second-order analysis complicates the picture. Price affects supply as well as demand. An increase in the price of labor might attract into the labor force individuals who, at the existing price, prefer to go to school, engage in crime, work part time, or subsist on welfare. If, moreover, there is a large sector exempt from the law, the law's main effect may be to shift workers to the exempt sector rather than to reduce overall employment. The higher wages in the covered sector, by driving up employers' costs in that sector, will tend to reduce the demand for the products and services produced by those employers and to increase the demand for substitute products and services produced in the exempt sector, which in turn will increase the demand for labor in that sector.
What seems relatively clear, however, is that the brunt of the disemployment effect of the minimum wage will be felt by marginal workers. For example, some teenagers whose marginal product (that is, their contribution to the employer's profits) was just at or only slightly above the minimum wage will, if the minimum wage is raised, be replaced by slightly more productive teenagers from affluent households who were not attracted to working when the wage was lower.
The smaller the sector covered by the minimum wage law (and the coverage of the "big box" ordinance is very limited), the more dramatic the disemployment effects of the law are likely to be. The demand for labor as a whole is inelastic, but the demand for labor by an individual company or a small group of companies is likely to be quite elastic, not because the company can easily substitute capital for labor, but because it cannot pass on increased costs to its customers if it has many competitors who have lower labor costs by virtue of being exempt from the minimum wage. Such a company, assuming it faces an upward-sloping average-cost curve (meaning that its average cost rises with its output--the normal assumption about a firm's cost structure in a market with many firms, because if its costs were invariant to its output it could expand indefinitely), can control its labor costs only by reducing its output and thus laying off workers. One especially draconian way of doing this is by relocating the firm's plants or other facilities from the jurisdiction imposing the high minimum wage to a jurisdiction that has a lower minimum wage. Becker points out that this may be a consequence of the Chicago ordinance because it does not reach Chicago's suburbs. It is a reason for believing that state minimum wages are likely to have fewer disemployment effects than local minimum wages, and the federal minimum wage fewer disemployment effects than state minimum wages.
At the current minimum wage in Illinois of $7.75 an hour, an employee who works 2000 hours a year (a 40-hour week with two weeks of annual vacation) and is paid the minimum wage earns only $15,500 a year. This is a pittance, though if the minimum-wage employee's spouse is employed at a significantly higher wage, the family's income may not be at a hardship level. Similarly, the minimum-wage employee may be an elderly person who receives social security and Medicare and may have a company pension in addition. These possibilities show that minimum wage laws, even if they had no disemployment effects, would be a clumsy instrument for combating poverty. A better approach than raising the minimum wage would be increasing the earned-income tax credit (negative income tax), which is a method of increasing the earnings of marginal workers without confronting their employer with a higher cost of labor and thus inducing the employer to discharge those workers whose marginal product is lower than the minimum wage. But this would be difficult for an individual city or even state to do; it would require federal action.
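The annual-earnings arithmetic above can be checked directly; all the inputs (the hourly wage, the 40-hour week, the two weeks of vacation) are from the paragraph itself:

```python
# Annual earnings of a full-time minimum-wage worker, using the post's figures.

hours_per_week = 40
weeks_worked = 52 - 2       # two weeks of annual vacation
hourly_wage = 7.75          # Illinois minimum wage cited in the post

annual_hours = hours_per_week * weeks_worked     # 2,000 hours
annual_earnings = hourly_wage * annual_hours

print(f"{annual_hours} hours/yr x ${hourly_wage}/hr = ${annual_earnings:,.0f}/yr")
```

The calculation confirms the $15,500 figure, which is what makes the subsequent point: even with zero disemployment effects, a wage floor is a blunt anti-poverty tool, since it says nothing about the household income of the person earning it.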
Concern has been voiced in some quarters that Israel should not be punishing Lebanon for the acts of Hezbollah, because Lebanon's army has not attacked Israel and it is unclear whether Lebanon has the ability to disarm or otherwise restrain Hezbollah. (There is also, however, doubt whether Lebanon has the will to do so.) In other words, Israel's conduct is being criticized as an exercise of collective punishment (likewise its military measures in Gaza), which involves punishing a collective for the act of an individual member, even if some or all of the other members of the collective bear no responsibility for the act. Israel has responded that since Hezbollah is a part of the Lebanese government, its acts are the Lebanese government's acts. That may be, but is to one side of the issue of the appropriateness of collective punishment. Israel has also defended its actions as targeted exclusively on Hezbollah, with any harms to Lebanese who are not part of Hezbollah's armed wing being inevitable accidents of war.
Without taking sides, but assuming for the sake of argument that Israel is engaged, in part anyway, in the deliberate infliction of collective punishment, I want to discuss the economics of collective punishment, which is a conventional legal tool that is efficient in many of its applications. An important modern example is the employer's liability for injuries resulting from acts by its employees within the scope of their duties. The employer may have exercised due care in the selection, training, assigning, monitoring, and disciplining of the employee who caused the accident, but if the employee was at fault and therefore is liable to the victim, the employer is also liable no matter how faultless its behavior. And usually it is the employer that ends up paying the entire judgment in the suit by the victim, because the employee is more often than not judgment-proof. The law allows the employer to seek indemnity from the employee for any judgment the employer is required to pay the victim of the employee's tort, because the employee is the primary wrongdoer. But the judgment-proof problem renders the employer's right of indemnity of little or no value in most cases.
Another important example of collective punishment in law is the rule that all members of a conspiracy are criminally liable for the crimes committed by any member within the scope of the conspiracy, provided the crime was foreseeable. So if one member of a drug gang beats up a defaulting customer, the other members are apt to be guilty of assault and battery as well, even though they had nothing to do with the beating. A related rule, the felony-murder rule, makes a criminal guilty of first-degree murder if a killing occurs in the course of his crime, even if the killing is by someone else and he did not authorize or even expect it--as in the case where a policeman in the course of trying to thwart the crime accidentally kills a bystander.
The theory behind these rules--the theory behind collective punishment in general--is that someone other than the actual perpetrator of a wrongful act may have more information than the government has that he could, if motivated, use to prevent the act. The employer may have been faultless in the particular case, but knowing that it is liable anyway will give it a strong incentive to exert control over its employees to prevent accidents--even by such indirect measures as reducing its work force by substituting robots or other mechanical devices for fallible human workers. Similarly, conspirators have an incentive to police their members to avoid getting themselves into unnecessary trouble; and the perpetrators of a bank robbery, for example, have an incentive to avoid being armed or provoking bank guards or police.
Collective punishment can properly be criticized when the cost of punishment to the innocent members of the collective is disproportionate to the benefits. This would be true if the government executed the family members of murderers. Such a measure would create powerful incentives for family members to monitor each other's behavior, and the murder rate would drop. Or would it? The law would deter the formation of families; and it might even induce families to murder members whom they thought likely to commit murders, since the family might be better able to conceal a murder within the family than the family member who was murdered would have been able to conceal his own murders. In addition, even if the family-responsibility law were effective in reducing the murder rate, the rate of killing might rise; for suppose there were 10 percent fewer murders, but for every murder that did occur an average of two family members would be executed.
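That last point can be made concrete with hypothetical numbers. The 10 percent deterrence rate and the two executions per murder are from the paragraph above; the baseline of 100 murders is an assumed round number of mine, chosen only to make the comparison easy:

```python
# Hypothetical illustration: a family-responsibility law that deters some
# murders can still raise the total number of killings.
# The 10% deterrence and 2 executions per murder are from the post;
# the baseline of 100 murders is an assumed round number.

baseline_murders = 100
deterrence = 0.10                     # fraction of murders deterred by the law
executions_per_murder = 2             # family members executed per murder

murders_under_law = baseline_murders * (1 - deterrence)   # 90 murders
executions = murders_under_law * executions_per_murder    # 180 executions
total_killings = murders_under_law + executions           # 270 killings

print(f"Without the law: {baseline_murders} killings")
print(f"With the law:    {total_killings:.0f} killings "
      f"({murders_under_law:.0f} murders + {executions:.0f} executions)")
```

On these assumptions the law cuts murders from 100 to 90 but produces 270 total killings, nearly tripling the death toll even while "working" as a deterrent.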
The example, while extreme, illustrates the essential point about collective punishment: that it is an extremely costly method of punishment, because several or many people are punished for the wrongful act of one. For example, if the cost of punishment to a person punished is X, then if he is a member of a group of ten, all of whom are punished collectively for his act, the punishment cost is 10X rather than X. So collective punishment is properly regarded as highly exceptional. It is most likely to be optimal if either the collective punishment is very mild or the cost to the punisher of failing to prevent the wrongful act is very great, and in either case if in addition the alternative of individual punishment is inadequate. The first case is illustrated by mild collective punishment of children. It includes things like a parent's punishing both his squabbling children because he cannot figure out which one was at fault, or a teacher's keeping the entire class after school because he cannot determine which child threw a spitball at him. These are easy cases because the innocent member or members of the collectively punished group have the necessary information, and ability to act effectively on it, for preventing the misbehavior; they can do so at much lower cost than the punisher because the punisher cannot readily obtain the information necessary to identify the actual wrongdoer; yet the costs to the group of the punishment are slight.
The second case--optimal collective punishment when the cost of failing to prevent the wrongful act is great--may be illustrated by Israel's policy of demolishing the houses of the families of suicide bombers. The suicide bomber himself is not deterrable, the harm he does is great, and the punishment method, while severe, is mild relative to the harm that a successful suicide bomber can inflict.
Because warfare is inherently indiscriminate, innocent persons whose only connection to the fighting is that they live in the combat zone are unavoidably "punished," but this is not collective punishment as a deliberate policy. For one thing, those persons will usually have no ability to restrain the combatants on their side. As for the conflict in Lebanon, however, a nation is undoubtedly responsible for predatory acts committed against another nation by groups operating openly on the nation's territory. That responsibility is an example of the kind of collective responsibility that warrants collective punishment for its breach, as in the somewhat parallel case of the employer's liability for the torts of its employees when they are committed within the scope of employment. But how do you "punish" a nation? The nation is the collective of its citizens. Punishing the nation means punishing its citizens even if there is nothing they can do or could ever have done to prevent the actions for which they are being held responsible. Assessment of the reasonableness of the punisher's course of action would then depend on such factors as the alternatives open to the punisher, the amount of damage inflicted by the group that the collectively punished population failed to prevent, the amount of damage that collective punishment inflicts on that population, and the likelihood that the punishment will succeed in getting the punished nation to take effective steps to prevent similar attacks by the rogue group in the future. The last point is vital because it is extremely difficult for one nation to prevent an attack mounted by a terrorist group from the territory of a nation that has acted as the group's willing or unwilling host. That is why that nation is responsible for restraining the group and why, therefore, it may be a proper candidate for collective punishment.
A final point. I said earlier that a law imposing capital punishment on family members who failed to prevent one of their members from committing a murder would discourage family formation. In other words, collective punishment tends to cause defection from the group. This may be in the punisher's interest: if Lebanese flee southern Lebanon so as not to be "collectively punished" for the acts of Hezbollah, Israel will have a freer hand in dealing with Hezbollah there.
Collective punishments are among the "negative" incentives used to reduce crime, military aggression, and other injurious acts. There is often a strong case for such collective punishment to deter harmful acts. Punishing the individuals or groups who commit these acts through police, armed forces, and the judiciary is the first line of defense against such socially harmful behavior. Sometimes, in addition, "positive" incentives are used to encourage the help of private enforcers. This is accomplished by offering payments to whistleblowers who report white collar crime, to spies who give information on the military intentions of potential enemies, and to individuals who provide information on wanted criminals or unsolved crimes.
In his discussion in favor of collective punishment, Posner uses the example of employers who may be held liable for injuries due to acts by their employees while performing their duties. Employer punishment is often appropriate for the reasons Posner gives. A weaker example, frequently discussed, is penalizing the owners of bars for automobile accidents or other injuries caused by persons who became drunk at their bars. Similarly, some states hold the hosts of parties partly responsible for any automobile accidents or other injuries caused by guests who had too much to drink at their parties.
I believe that collective responsibility in these drunk-driving examples and in many other situations is inappropriate because those being punished have little ability to deter the injurious behavior that is being discouraged. Can party hosts be expected to keep track of how much each of their guests has drunk, especially at large cocktail parties? That seems to me to be an unwise use of negative incentives unless the goal is to discourage cocktail parties themselves. Otherwise, it is best to punish only the individuals who get drunk at parties and afterwards injure others. They are the ones who can best keep track of how much they drink.
It is easier for managers of bars than for party hosts to keep track of the number of drinks ordered by different patrons. However, punishing bar owners for serving more than, say, four drinks to patrons who later commit acts that injure others would give heavy drinkers an incentive to bar hop, and have their quota of four drinks at each of several bars. That might cut down the amount of heavy drinking, since bar hopping is more costly than drinking at a single bar, but it also punishes heavy drinkers who take care not to drive afterwards or engage in other actions that cause injury to others. It surely would not make much sense to collectively punish the set of bars where patrons accumulated their excessive drinking. So my conclusion is that in this case too the preferable policy is to punish only intoxicated persons who cause injuries to others, and not to attempt collective punishment of bar owners.
To take a different example, parents should often be held responsible for harms to others caused by their younger children. Parents can discourage crimes and other anti-social acts of these children by the upbringing they provide, and also by the punishments they administer to children who engage in such acts. Since after a certain age, perhaps sixteen or eighteen, parents have much less control over children, parental responsibility for children's acts should diminish, and children's responsibility should increase as the children age.
At one time, children were responsible after the death of their parents for any debts the parents left. Children were also punished for other anti-social behavior of their parents. This type of collective punishment has been eliminated by developed nations, presumably because children typically do not have the power to deter their parents from contracting debts or committing crimes. The only justification for such collective punishment of children in these cases would be that parents care about their children, and that caring parents would be less likely to enter into debts they cannot pay, or engage in anti-social acts, if their children were held responsible for parental behavior. But such collective punishment of children would have little effect on selfish parents, and it would increase the suffering of their children, who already are harmed by having selfish parents.
To take a different political example than the Lebanese one that Posner uses, should the German people have been held collectively responsible for the atrocities committed by Hitler and other Nazis? It was inevitable that many German people suffered from World War II, although the bombing of Dresden and some other cities by the Allies was probably unnecessary. Collective punishment of leading Nazis was appropriate, as was the requirement that Germany pay reparations for property taken, for some of the damages caused by German occupations of various countries, and for the murder of millions of Jews, Poles, Russians, and other groups.
However, it would be more far-fetched to hold the German people responsible for the election of Hitler since he took steps to prevent the German people from voting him out of office. Moreover, people who voted for Hitler in the first place could not have easily anticipated the full dimensions of the horrors he would inflict on the world.
A report released last Tuesday by the American Council on Education, and discussed in various media articles this week, indicates that over 55% of college students are women. This reflects a continuing upward trend in women's share of enrollments over the past 30 years. It is ironic that an earlier 1992 study claimed that colleges were biased against women because women were too intimidated to speak up, the type of course work emphasized favored men, and so on. In a political response to at most a minor problem, Congress unwisely passed "gender equity" legislation during the 1990s.
The gender gap in enrollments is especially large for lower income African-Americans and Latinos, and is negligible for children from middle and upper income white families. I do not believe there is reason to be concerned about the overall growth in the relative number of women college students (good for them), but I continue to worry about the performance of African American and Latino young men.
On pretty much all objective measures, women deserve to have greater college representation than men because they study harder, get better grades, are more likely to graduate from high school, complete their school work in a more timely fashion, write better, and in other ways outperform young men. Schools competing in trying to get the best students naturally respond to this, and end up selecting larger numbers of young women than young men. Women still remain a minority, however, in the sciences, engineering, business, and economics.
The trend toward increased college enrollment of women will continue the growth in the education of women in the labor force compared to that of men. This should further narrow the gender gap in earnings, a gap that has already narrowed greatly since the mid-1970s. Since the education of younger women is exceeding that of men, will the gender earnings gap begin to reverse signs, so that women will earn more than men?
In answering this question, first note the study by Francine Blau and Lawrence Kahn, which shows that the gender pay convergence slowed during the 1990s even though the education of women in the labor force continued to grow relative to that of men. This slowdown in convergence is consistent with my belief that the earnings of the average woman in the labor force will not rise above those of the average man, although an increasing fraction of women in the labor force will have higher hourly earnings than men. While the gap between the education of women and men in the labor force will continue to grow, the commitment of women to careers will remain below that of men, despite the claims about their career ambitions from the selected college women in the media stories on the enrollment gap.
Women will continue to have much greater responsibilities for child care than men do. That means sometimes long periods of being out of the labor force, more reluctance to work overtime, less willingness to take jobs that require much out-of-town travel, greater likelihood of taking absences to care for sick children, and other behavior that lowers both hourly earnings and hours worked. All these differences continue to be found in Sweden, perhaps the country with the greatest degree of gender equality, and they would apply with even greater force to the United States and other countries.
Although young women do considerably better on average than men in school, the variance in performance among men is much greater than the variance among women, and admission policies should depend on variance as well as mean performance. Due to their greater variance, many more men drop out of school, have failing grades, study little, have disciplinary problems, and the like. Greater variance also implies, however, that many more of the outstanding students are men. Certainly after many decades of teaching economics, I would confirm that my female students have done better on the average, while the men were more likely to be at both tails of the distribution; that is, men were more often both very bad and very good. I would hasten to add, however, that I have had a considerable number of very exceptional female students too.
Larry Summers got into trouble by suggesting that the gender difference in the variance of achievement (a difference in performance that is found on many dimensions of behavior) might have a genetic basis. It surely might, for reasons put forward by many biologists, but I believe (and I am confident that Summers would agree) that the difference in variance is mainly explained by interactions between genetic and environmental forces. Young girls may be discouraged from high achievement, or young women may recognize that they will have and want childcare responsibilities, and realize that this will cut down on their career commitments.
Of great social concern is the very poor performance by African American and Hispanic young men compared to young women of the same race or ethnicity. African American young men not only drop out of high school at higher rates and are less likely to go to college, but they are also far more likely to end up as delinquents, in jail, murdered, unemployed, and in other bad circumstances. This to me is the most serious racial issue in the United States, and it is only partly reflected in the much higher college enrollment rates of African American women than men.
Perhaps African American boys are more affected than girls by the absence of fathers in their households, or negative peer pressure is more harmful to boys, or drug selling and other crimes are more appealing to them compared to school, and so on. I am not going to try to solve such a major problem in this post, except to indicate that legalizing drugs would help African American young men, and so too would any steps that can be taken to stabilize the family structure of African Americans.
The final issue I address is whether it is proper for colleges to use an affirmative action plan for men; that is, to have easier admission standards for male applicants to bring enrollments closer to 50-50 for men and women. I believe it is a perfectly legitimate strategy. Since the US higher education system is highly competitive, different schools should be allowed to choose their policies on these and most other issues. Then they compete for students and for funds from donors by offering different programs, including the ratio of female to male students.
An additional factor in this case, one that usually does not apply to affirmative action programs to help racial or ethnic minorities, is that the group facing higher standards, female applicants, often wants schools to try to get more male students so that their social life will be better. After all, most schools that formerly had students of only one sex (such as Princeton and Vassar) have become coed to provide a better social, and perhaps also intellectual, life. Since affirmative action toward men would be supported not only by men but also by many women, easier standards for male applicants seem to be a desirable policy for many colleges.
I am in broad agreement with Becker's excellent analysis.
As discrimination declines, replaced by affirmative action, explanations for lagging achievement that are based on discrimination lose their plausibility. They were never entirely plausible, given Jewish achievement in the face of fierce discrimination, though Steven Pinker argued in a recent issue of the New Republic that discrimination against Jews in the Middle Ages, by forcing them into middleman occupations where intelligence is a more valued asset than in farming or soldiering, resulted in the more intelligent Jews having a higher birth rate (because they were better off) than the less intelligent Jews. Through the operation of natural selection, then, discrimination can be "credited" with some of the responsibility for the high average IQ of Jews today--even its genetic component. (Hitler may have had something to do with this as well, as it is plausible that the most intelligent European Jews saw the handwriting on the wall earliest and left Europe in the 1930s before it was too late.)
As Becker points out, the mean performance of women in college and university is superior to that of the men, but the variance of male performance is greater and as a result there are more male geniuses. There is no reason why the difference in variance should result in higher average male earnings; that higher average is probably the result of women's spending less time in the work force because of pregnancy and child care. Women's greater proclivity for child care may well have a biological basis, as may the difference in variance that I mentioned. In the "ancestral environment"--the term that anthropologists use to describe the prehistoric period in which human beings reached approximately their current biological state--women who were "steady" would have tended to have the maximum number of children, while natural selection might favor variance in male abilities because variance would produce some outstanding men who would tend to reproduce more than other men (including the "steadies") in the polygamous conditions of prehistoric society.
If the explanation based on evolutionary biology is correct, women will continue to be "underrepresented" in high-achievement positions in many fields; why anyone should care is beyond me. But it doesn't follow that their average earnings will continue to be significantly lower than those of men. Women's lesser commitment to the labor market may be balanced by their greater ability than men to perform most jobs, assuming academic performance is a good proxy for aptitude for today's desirable jobs. With the decline in the importance of physical strength and stamina as a job qualification, women may be able to perform most jobs better than men on average, though men may continue to dominate the top--but also the bottom--tier of the labor market.
The achievement lag in black males is troublesome from a social standpoint, as it seems correlated with definite social pathologies, such as enormous overrepresentation in criminal activities. Moreover, it is a matter of a lower mean rather than less variance. If and to the extent that that lower mean is a result of lower IQ, not much can be done, because IQ has a strong genetic component--and what is not genetic may still be innate rather than cultural (a product of conditions in the womb, for example). The genetic and environmental influences on abilities interact, as Becker says, but in addition the genetic can influence the environmental: many low-IQ mothers may be unable to take care of themselves adequately in pregnancy, contributing to their children's having innate intellectual deficiencies due to poor maternal nutrition or health care.
Differences in the mean achievements of racial or gender groups must be kept in perspective. General intelligence (IQ) follows a bell-shaped distribution, and two bell-shaped distributions that have different means will still overlap to a great extent unless the means are very far apart. The differences will be greatest in the tails of the distributions.
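The overlap claim can be made concrete with a small sketch. The means, standard deviations, and cutoff below are hypothetical numbers chosen purely for illustration (an IQ-style scale with mean 100 and standard deviation 15), not estimates of any actual group differences.

```python
# Illustrative only: hypothetical bell-shaped (normal) distributions.
from statistics import NormalDist

a = NormalDist(mu=100, sigma=15)   # baseline group
b = NormalDist(mu=105, sigma=15)   # slightly higher mean, same variance
c = NormalDist(mu=100, sigma=16)   # same mean, slightly more variance

# Overlapping coefficient: the shared area under the two density curves.
print(f"overlap of a and b: {a.overlap(b):.2f}")   # ~0.87

# Share of each distribution above a far-tail cutoff (3 SDs for group a).
cutoff = 145
tail = lambda d: 1 - d.cdf(cutoff)
print(f"tail ratio b/a: {tail(b) / tail(a):.1f}")  # ~2.8
print(f"tail ratio c/a: {tail(c) / tail(a):.1f}")  # ~1.8
```

A five-point difference in means leaves the two distributions about 87 percent overlapping, yet nearly triples representation above the cutoff; a one-point difference in standard deviations alone nearly doubles it. This is the sense in which group differences are greatest in the tails.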
The achievement lag of Hispanic males may be a transitional phenomenon; they may still be adjusting to an American male culture that is quite different from the "macho" culture of Latin America, which is not conducive to vocational achievement under modern American conditions.
Like Becker, I view affirmative action as a matter of choice for colleges and universities, at least when the institutions are private rather than public. Higher education is highly competitive, and I am reluctant to have the government tell its institutions what policies are best. Academic freedom implies a high degree of academic autonomy, including autonomy in the administration of the institutions of higher education. Personally, however, I would like to see a few of the top colleges abolish all preferences unrelated to academic merit--no athletic scholarships, no affirmative action, no favoritism for the children of professors or of major donors, and no legacy admissions. That would be a useful experiment in the benefits and perhaps costs of meritocracy. It would have the incidental effect of giving us a better idea of the extent of real differences across race and gender in academic capability.
Articles by Eric Lipton in the June 18 and 19 issues of the New York Times discussed the "revolving door" phenomenon with specific reference to the Department of Homeland Security. According to Lipton, although the Department is only three and a half years old, already more than two-thirds of its senior executives have quit for jobs in the private sector, mostly working for companies that have or seek contracts with the Department, which has an annual budget of some $40 billion. These executives, some of whom had come to the Department from the private sector for brief stints in government service, are paid multiples of their government salaries when they leave to join or rejoin the private sector. Although departing government employees are forbidden to lobby their former government employer for a year, the prohibition is particularly porous in the case of the Department of Homeland Security because a former employee is permitted, from the start, to lobby any unit in the Department for which he did not work. The Department is a conglomerate of 22 formerly separate agencies, with overlapping responsibilities, and there are subunits within each of the agencies.
Should the revolving door be stopped or slowed? Two considerations favor the revolving door. First, people who have served in government have useful information about government's needs and procedures; that information can enable a better matching of government contractors with the agencies that purchase their services. Second, the opportunity for lucrative private employment after a stint of public service reduces the cost to the government of obtaining able employees. The compensation of government employees includes not only their government salaries but also the enhanced private earning capacity that they acquire by their government service.
But these points are persuasive only with regard to career government employees, in the sense of people who worked for the government--initially at least at a junior level--for many years. They accrue valuable knowledge over the course of their employment and the prospect of eventual private-sector employment substantially increases in real terms their meager compensation as government employees. Not that there isn't a loss; many of the ablest and most experienced government employees leave government well before normal retirement age, while the least able stay till or beyond that age because of the difficulty of firing government workers.
The system can also produce transitional crises, as illustrated by the hemorrhaging of government security personnel in the wake of the September 11, 2001, terrorist attacks. The attacks caused a surge in private demand for security personnel, resulting in a sudden and substantial loss of experienced CIA, FBI, and other security officers to the private sector at greatly increased salaries. The increased ratio of private to government salaries represented a windfall for these officers because it had not been anticipated. In the long run, however, these windfalls become anticipations that will enable the government security services to hire abler people, because those people will foresee superior private-sector opportunities. In other words, as a result of the continuing concern with terrorism, working for government security agencies confers more human capital than it did before 9/11. Meanwhile, however, there is tremendous turnover in government security agencies, and a resulting decline in the quality of those agencies, as senior officers vacate their positions for the private sector and are replaced by inexperienced juniors. The impact on quality is aggravated by the disruptive effect of rapid turnover in any organization.
The exodus of Department of Homeland Security officials to the private sector, about which Lipton wrote, is a distinct phenomenon. Many of them are people who had come to work for Tom Ridge in the White House when he was the President's homeland security advisor and who went with him to the Department when it was formed in March 2003 and he became its first head. Many did not have extensive or relevant government experience. Moreover, the Department has from the outset been grossly mismanaged. The fault lies mainly in the design and structure of the Department and in the haste with which it was created; but no one considers it, even given these constraints, a well-managed enterprise. The companies that have hired these officials do not care, however, because they are not hiring DHS officials for their managerial expertise. They are hiring them in the hope that doing so will facilitate the obtaining of contracts with the Department.
The Department needs contracts, of course, and its former officials doubtless have a good sense of how a contractor can make an attractive pitch to the Department; otherwise the contractors would not have hired these officials at high salaries. But whether the officials are actually knowledgeable about the Department's needs is another matter. Many of them were birds of passage, who never became real experts on security. There is warranted suspicion that many of them got their high positions in the Department by reason of political contacts, and those contacts may enable them to land contracts for their new employers that are not in the government's best interest. So the first reason I gave for why "revolving door" practices may serve the public interest is probably absent in the case of senior officials. And likewise the second. The prospect of subsequent reemployment by the private sector probably attracts few able nongovernment people to government jobs. It is disruptive to give up one's private job to work for government for a short time with the aim of then returning to the private sector at a higher level--a level one might well have attained in the ordinary course of promotions and job changes had one remained in the private sector.
Moreover, there is what economists call a "last period" problem that is more serious in the "bird of passage" case than in the case of the career government employee. An individual in the last period of his employment (or a company that is about to go out of business) is not restrained in his self-interested behavior by concern that his employer will fire him (or, in the case of the company, that its customers will desert it). Any government employee who has decided to seek private employment may be tempted to make decisions that will make him more attractive to prospective private employers; the added problem with the "birds of passage" is that their entire government service is last period because they know they are going to return to the private sector soon. All their decisions as government officials may be influenced by a desire to position themselves for as lucrative a reentry into the private sector as possible.
What might be done to alleviate the revolving-door problem? One possibility would be to restructure the civil service so that it paid better and, as important, reached higher in the government system. In the United Kingdom, civil servants occupy the highest posts in government just below the ministerial level. The opportunity to become a permanent undersecretary is an inducement to the ablest civil servants to remain in government service for their entire career, or at least a very long one. In our government quite junior officials, such as assistant secretaries of departments and even many deputy assistant secretaries, are appointed from outside the ranks of the civil service. Many of these are the birds of passage; and the diminished promotion opportunities for civil servants make a civil service career much less attractive for able and ambitious people than it would otherwise be.
The major exception of course is the military, a branch (realistically) of the civil service in which one can rise to a very high rank, because there is no lateral entry into the uniformed service. The CIA and FBI are other exceptions, since among their top officials ordinarily only the director himself is appointed from outside the agency staff.
Of course there would be costs in strengthening the civil service--one being that the able people it attracts might be more productive in the private sector. But the challenges faced by the American government at present are so acute that we must take steps to improve governmental efficiency, and reform of civil service may be one of them.
Quit rates of secretaries and other lower-level federal government employees are considerably below those of comparable workers in the private sector, while high-level government officials quit at much higher rates than their private-sector counterparts. What explains this difference, and is it good or bad?
The first part of the question is easy to answer: differences in quit rates are due to differences in the ratio of federal compensation to compensation in the private sector at low and high job levels. Federal employees at lower-level jobs may not make more than their private-sector counterparts, but their economic situation is quite good when all other characteristics are taken into account. Government workers at these levels have great job security, since they cannot be fired after a short probationary period except for the grossest forms of misbehavior, such as frequent absences from work, racial or sexual remarks, and the like. In addition, they get many holidays, good vacations, generous pensions and health benefits, and are usually not under much pressure at work. The full set of characteristics offered to these federal employees is very attractive, which is why lower-level jobs attract many applicants and must be rationed through tests and in other ways.
By contrast, federal employees at higher-level jobs--including the senior executives in the Department of Homeland Security whom Posner discusses--are paid considerably less than comparable private-sector executives. In order to attract and keep high-quality employees, the federal government must provide enough compensating advantages in the form of prestige, power, working conditions, and other benefits. Presumably for that reason there is little turnover of federal judges, since most of these judges could earn much more as practicing lawyers--even judges who, unlike Judge Posner, are not particularly energetic.
For many high-level federal officials, government service is a short-term option that may provide interesting experiences, including learning about various policy issues. But after a while the much higher compensation in the private sector becomes too tempting--of course, their short stay in government may have been anticipated--and many officials quit the federal government after only a few years (or less).
It would be possible to reduce turnover of federal officials by significantly raising their pay, so that it becomes closer to what they could receive in the private sector. As with federal judges, turnover might be low even with pay that remained considerably below that in the private sector, though not as far below as at present. Some talented men and women like working on public problems of great importance, the security of job tenure would appeal to some, and so on.
Members of Congress currently receive salaries of $165,200 per year--leaders in the House and Senate receive a little more--plus generous retirement and health benefits. That is a lot relative to the pay of most employees in the private sector, and there does not seem to be a shortage of men and women who want to be elected to Congress. Under present rules, it is not possible to pay senior officials in Homeland Security or other agencies more than members of Congress receive, even when higher pay is necessary to fight off the appeal of employment in the private sector. To pay top public officials much more than they already get would require a change in these rules, which would not be politically easy.
Yet for the sake of this discussion, suppose it were possible to cut down the turnover of federal officials in the Department of Homeland Security and elsewhere. Would that be desirable? I believe that having more experienced employees is as valuable to the federal government as it obviously is to the private sector. Turnover of executives with years of experience at the same private company is low--clearly much below that among federal employees--because these companies value that experience. Longtime executives accumulate useful knowledge about the organization and practices of the firms they have worked for over the years--what economists call firm-specific capital. I see no reason why such knowledge should be much less important in the federal government. As Posner indicates, Great Britain and some other countries do manage to retain high-level officials of good quality over much of their working lives.
Turnover of federal officials is undesirable for another reason that is not really applicable to the private sector. There is much "rent-seeking" by private companies that try to get special treatment and subsidies from the federal government as it spends huge resources. Hiring former federal employees into top executive positions helps these companies improve their rent-seeking position vis-à-vis competitors. In particular, if competitors all hire former top executives of the Department of Homeland Security, that may simply offset one another's rent-seeking advantages without adding social value. At the same time, losing many top executives weakens the Department.
So I favor higher pay for top federal officials in sectors that experience heavy turnover. However, without major changes in pay scales, I am skeptical about the advantages of developing much more stringent rules that prevent federal officials from going to work for suppliers of services or products to the agencies that employed them. The risk of such rules is that they eliminate one of the major present attractions of federal employment at high-level jobs. Adding such rules to the low pay would then make it even more difficult for federal agencies to attract able and honest top-level executives.
None of the comments addressed the deadweight costs involved in maintaining the estate tax. Some estimates suggest that far more than $1 is spent on avoiding the tax for each dollar collected. This is not surprising, given the attention paid to trusts, generation-skipping trusts, and other methods of legal avoidance. Such a high ratio of costs to collections hardly qualifies the estate tax as an attractive tax.
I am also concerned about trying to equalize opportunities for children from different families. But the estate tax contributes little to that goal, since most of the inequality that is passed on from generation to generation takes the form of earnings: children of higher-income families earn sizably more than others. Compared to the effects of parental background on earnings and inequality, the contribution of the estate tax is small.
Children who inherit a lot may not work hard and may lead lives of little value, but I believe it should be left to parents to decide how much they want to leave their children.
Moreover, if the desire is to affect inequality in the children's generation, the tax should be on inheritances, not on estates. So what is the estate tax accomplishing when it is expensive, unimportant in affecting inequality, and does not directly address the inequality of inheritances?
Still, while I am against the estate tax, as I said in my post, I could tolerate a tax on very large estates. But the minimum estate that would be taxed should be far higher than it is at present. In addition, the tax rate should be no higher than about 20%, so that less effort would be put into using lawyers and accountants to reduce estate tax liability.
Whatever is the case with the Catholic Church and the Republican Party, they are not foundations. I was referring to foundations that moved from being on the left to becoming conservative or libertarian.
Even without an estate tax, a basis would have to be established for assets that are transferred to heirs. If it is the original purchase price, then there is no less incentive to trade before death than there would be for the heirs after death. In either case, capital gains would be taxed at the capital gains rate.
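A toy calculation may clarify the basis point; the purchase price, values, and the 15% capital gains rate below are hypothetical numbers chosen for illustration, not actual tax parameters.

```python
# Hypothetical figures: an asset bought for $100,000, worth $500,000 at the
# owner's death, and sold by the heir for $600,000.
purchase_price = 100_000
value_at_death = 500_000
sale_price = 600_000
cap_gains_rate = 0.15   # assumed rate, for illustration only

# Carryover basis: the heir inherits the original purchase price as basis,
# so all appreciation is eventually taxed, whether the owner sells before
# death or the heir sells after it.
carryover_tax = (sale_price - purchase_price) * cap_gains_rate

# Stepped-up basis: the basis resets to the value at death, so gains that
# accrued during the owner's life escape the capital gains tax entirely.
stepped_up_tax = (sale_price - value_at_death) * cap_gains_rate

print(f"tax with carryover basis:  ${carryover_tax:,.0f}")   # $75,000
print(f"tax with stepped-up basis: ${stepped_up_tax:,.0f}")  # $15,000
```

Under carryover basis the timing of the sale does not change the total gain that is taxed, which is the sense in which trading before death carries no extra tax incentive.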
I do not agree that the act of giving makes someone liberal in the sense of a government interventionist. One can give to support the education of children from poor families, to scientists working on cures for cancer or diabetes, and so forth without being a proponent of large government and extensive regulation. Indeed, the less one believes that governments can do the job, the more one will support private charities.
Very wealthy men and women like Warren Buffett and Bill Gates would frequently create charitable foundations even if that did not help them avoid estate taxes on their wealth. After all, the estate tax was negligible when Rockefeller and Carnegie created their large foundations. Nevertheless, a sizable estate tax that exempts charitable giving has encouraged the creation of many large private charitable foundations in the United States.
Private giving to various causes is a substitute for public giving, and private giving tends to be more effective because of the competition among different foundations and other charities. In this respect I believe the US market for giving is much better than the European approach, because financing of the arts, higher education, hospitals, and many other activities comes to a considerable extent from governments in Europe, while in the US these activities depend much more on fees for services and private donations. Competition among private donors is as conducive to efficiency and productivity growth in these fields as is competition among producers of cars and other such goods.
Although this will not be the main focus of my discussion, I support the abolition of the estate tax, or at least its significant weakening to cover only very large fortunes, perhaps along the lines of the recent bill on reform of the estate tax passed by the House of Representatives. The present estate tax is too discouraging to the accumulation of moderate amounts of capital in small businesses and in other forms. It also encourages legal and accounting spending, devoted solely to finding and exploiting loopholes, that is considerable relative to the tax revenue the estate tax raises (about $30 billion in 2005). If the concern is about inequality caused by inheritance, the tax should not be on estates, which may be divided among a number of heirs, but on the amounts inherited by individuals.
Decentralized private charitable giving can be encouraged through the tax system even without an estate tax, as long as charitable contributions can be generously deducted from income taxes. Indeed, as I argue below, there are advantages to having foundations created while donors are alive rather than mainly after their deaths. However, if the estate tax were abolished or substantially weakened, it would be desirable to raise significantly the cap on deductible giving in the federal tax code--set presently at 30% of adjusted gross income--to a much higher percentage, perhaps 100% or even higher (if higher than 100%, there might be a loss carryover provision).
This brings me to my main topic, Warren Buffett's announced gift of over $30 billion to the Bill and Melinda Gates Foundation. It is unusual in that this amount and any interest earned are supposed to be entirely spent by the time the Gateses have either died or no longer run their foundation. Even rarer for such a large gift is that Buffett is not creating a foundation in his own name, but instead is giving the money to a foundation created and soon to be managed by his friend Bill Gates.
Most foundations continue long after the death of their creators, and after the replacement of the initial managers and Boards. This commonly leads to a shift away from the donors' intent. Partly that is desirable, given unanticipated changes in the social and economic environment and in the resources supplied by other donors and by governments. Most people recognize the need for foundations to react to these fundamental forces, but another type of change over time in foundation goals is more disturbing. Many of the large foundations are created by self-made businessmen who usually have strong beliefs in capitalism and a competitive market system. Such "conservative" views often guide the early days of their foundations' activities, partly because the donors choose managers and Boards who are sympathetic to their beliefs.
Over time, however, foundations frequently become more liberal, as management and Boards change. Although the market for foundation executives is highly competitive, these executives typically come from education and family backgrounds that are similar to those of modern journalists. As a result, foundation executives also tend to have a "liberal" outlook on the role of government, and the most pressing social and economic questions. This liberal outlook often clashes with the views of the original creators of the foundations they manage.
Examples of the shift from conservative to liberal over time include the large Ford and Pew Foundations. Some foundations created after the death of a conservative donor, such as the MacArthur and Packard Foundations, have had from the start a much more liberal orientation than their donors. I do not know of any prominent examples that moved the other way, from initially liberal to becoming conservative over time. A solution to the problem of the drift over time away from the intent of donors is for donors to require that their foundations give away all their assets by a set date, or shortly after their deaths or those of managers they trust. The Olin Foundation is the most prominent large foundation that was directed to give away its assets by a set date, and it succeeded. It accomplished that goal, under the leadership of William Simon and James Piereson, while giving its money in a thoughtful manner, such as funding a number of prominent Law and Economics programs at law schools.
Warren Buffett believes in capitalism and competitive markets, but he is not a "conservative." Still, he is concerned that over time his huge gift would be used in ways he would not approve. So he is following the example set by Olin and a few other foundations and installing a sunset provision that requires all his charitable assets to be spent by the time the Gateses have either died or withdrawn from an active role in their foundation.
Even more unusual is his decision not to set up a new foundation under his own name, but to give the bulk of his fortune to the Bill and Melinda Gates Foundation. He has chosen to place his philanthropic money in the same manner as he invests the money in his funds: namely, by picking managers whom he believes will use the money well and yield a high return. The only difference between spending charitable and investment monies is that the former yields returns not in the form of profits, but in effectiveness at furthering the donor's aims. The Gates Foundation has so far been spending most of its money on promoting health in developing countries by attacking diseases, such as malaria, that are more common there. From the little I know about this foundation, it has spent its money relatively well. So it is not surprising that Buffett has confidence in this particular foundation.
What Buffett is doing may be wise, but it is very uncommon, because most large donors want their names on the foundations they create, and also on the organizations sometimes set up by recipients of their gifts. Even the Olin Foundation generally attached Olin's name to the Law and Economics Centers it created. That Buffett could resist the temptation to have a foundation monument in his name testifies not only to his wisdom but also to the inner confidence of the man.