President Bush has suggested that spreading democracy is the surest antidote to Islamist terrorism. He can draw on a literature that finds that democracies very rarely go to war with each other, although a conspicuous exception is the U.S. Civil War, since both the Union and the Confederacy were democracies.
Hamas, which has just won a majority in the parliament of the Palestinian proto-state, is a political party that has an armed terrorist wing and is pledged to the destruction of Israel. Can that surprising outcome of what appears to have been a genuinely free election be squared with the belief that democracy is the best antidote to war and terrorism?
The first thing to note is that one democratic election is not the equivalent of democracy. When Hitler in 1933 was asked by President Hindenburg to form a government, the processes of democracy appeared to be working. The Nazi Party was the largest party in the Reichstag; it was natural to invite its leader to form a government. Within months, Germany was a dictatorship. So the fact that Hamas has won power fairly and squarely does not necessarily portend the continuation of Palestinian democracy.
But suppose Palestine remains democratic. What can we look forward to? I don't think the question is answerable unless democracy is analyzed realistically. The great economist Joseph Schumpeter sketched in his 1942 book Capitalism, Socialism, and Democracy what has come to be called the theory of "elite" or "procedural" or "competitive" democracy. In this concept, which I have elaborated in my book Law, Pragmatism, and Democracy (2003), and which seems to me descriptive of most modern democracies, including that of the United States, there is a governing class, consisting of people who compete for political office, and a citizen mass. The governing class corresponds to the selling side of an economic market, and the citizen mass to the consuming side. Instead of competing for sales, however, the members of the governing class compete for votes. The voters are largely ignorant of policy, just as consumers are ignorant of the inner workings of the products they buy. But the power of the electorate to turn elected officials out of office at the next election gives the officials an incentive to adopt policies that do not outrage public opinion and to administer the policies with some minimum of honesty and competence. It was Fatah's dramatic failure along these dimensions that opened the way to Hamas's surprisingly strong electoral showing. Hamas cleverly coupled armed resistance to Israel with the provision of social welfare services managed more efficiently and honestly than the services provided by the notoriously corrupt official Palestinian government, controlled by Fatah.
In troubled times, such as afflicted Germany in the early 1930s and Palestine today, democratic elections provide opportunities for radical parties that offer an alternative to the discredited policies of incumbent officials. The worse the incumbent party, the better even an extremist challenger looks. The German example suggests that moderation of a radical party when it takes power is not inevitable. The party may continue its radical policies and even use its initial popularity to destroy democracy. Hitler and Mussolini took power in a more or less orderly democratic fashion and Lenin by a coup, but in all three cases the consequence of the seizure of power by a radical party was the opposite of moderation. Hitler and Mussolini remained popular until their policies failed dramatically; there is no theoretical or empirical basis for supposing that popular majorities in all societies are bound to favor more enlightened policies than a dictator or oligarchy would.
How then to explain the empirical regularity that democracies rarely war with each other, and the concomitant hope that if Palestine were democratic it would stop trying to destroy Israel? The answer lies in considering what is required for democracy to take root rather than to make a rapid transition to dictatorship. Democracy is unstable unless anchored by legally protected liberties, including freedom of speech, freedom from arbitrary arrest, and property rights. The liberties in turn tend to be unstable without a measure of democracy. When there are no liberties, a one-sided election can result in a quick extinction of democracy, because there is nothing to prevent the winner from calling an end to the electoral game in order to perpetuate his control. When there is no democracy, rulers are not effectively checked, and corruption and other abuses flourish. The combination of democracy and liberty, as in the U.S. Constitution, provides an auspicious framework for prosperity, resulting eventually in dominance of the society by a large middle class. Middle-class people don't have much taste for offensive wars or violence in general. They are not specialized to such activities, which benefit primarily monarchs and aristocrats (who internalize martial values), impoverished adventurers, and (closely related to the adventurers) political and religious fanatics. (This is in general, not in every case; the Germany that Hitler took over was a middle-class republic, democratic though imperfectly so.) As Samuel Johnson said, people are rarely so innocently engaged as when trying to make money, since in a well-ordered society they can do that only through trade, which wars disrupt.
So democracy itself is not a panacea for the world's political ills and dangers. But if the Palestinians are able to develop a genuinely republican government and move rapidly toward embourgeoisement, there is some hope for the eventual emergence of a peaceful Palestinian state.
There is another point, special to the Palestinian situation, that provides a further ray of hope. With Hamas in power, its members are paradoxically much more vulnerable to Israeli military power than they were when Fatah was in power. The Hamas leaders then were scattered and hidden and efforts to fight them risked killing innocent civilians and discrediting the Palestinian government, with which Israel was trying to make peace. Given Fatah's inability to suppress Hamas, Israel could not crush Hamas by bombing the government buildings occupied by Fatah. Once Hamas is the government, however, further violence toward Israel by Hamas members can be met appropriately by massive military force directed against the organs and leaders of the government. This threat may cause Hamas to avoid attacks on Israel. Hamas's victory may be the best thing that has happened to Israel in years.
The election victory of Hamas was a bombshell throughout the Middle East and the rest of the world. I will comment briefly on its implications for democracy among the Palestinians, economic development, and relations with Israel.
I agree with Posner that it is hard to tell whether free elections will continue among the Palestinians. One free election means very little in forecasting the future; even the Weimar Republic had several elections before Hitler destroyed Germany's young democracy. Contested government, a free press, and other free institutions are far more likely to persist when they have been practiced for a long time. James Madison argued against Thomas Jefferson's proposal to continually change the American Constitution, and in favor of a stable constitution because of "that veneration, which time bestows on everything, and without which perhaps the wisest and freest governments would not possess the requisite stability" (Federalist No. 49).
Economically, the Palestinian Authority is a basket case: no foreign investment, little foreign trade, and emigration of the more talented, educated, and ambitious Palestinians to elsewhere in the Middle East, or to America. The Authority is barely kept afloat by aid from Europe, other Arab nations, and the United States that amounts to about $1.5 billion per year.
Hamas now has to choose between two radically different paths. In many respects the easiest one would be to maintain its charter, which calls for the "obliteration" of Israel. Surely, however, that would further discourage foreign investment and would likely speed up the out-migration of talented Palestinians. In addition, it would mean the end of aid to the Palestinian Authority from America, and possibly from much of Europe as well. A sizable reduction in foreign aid may force Hamas to try to introduce economic reforms, but these cannot succeed as long as its goal is to eliminate Israel.
The wiser course would be for Hamas to become more flexible and greatly moderate its hostile actions and rhetoric toward Israel. After all, Ariel Sharon while in power shifted from a hard-line policy toward the Palestinian Authority to a more moderate position, and the Israeli economy is in far better shape than the Palestinian economy. Hamas showed that it could win an election by downplaying its hostility to Israel and instead emphasizing its ability to run the government more efficiently and with less corruption than Fatah. However, unless the new government can significantly improve the dismal Palestinian economic situation, Hamas's popularity is likely to erode. Yet the only way to retain human capital, attract foreign direct investment, and widen foreign trade, all essential for significant economic progress, is to reach a stable settlement with Israel.
For these reasons, I am more optimistic than Posner and many others about the chances that Hamas's victory will improve rather than worsen relations with Israel. Perhaps, as Posner argues, it will become easier for Israel to retaliate against Hamas leaders when they are physically more concentrated either in the Palestinian Parliament or in executive offices. But I do not consider that a crucial consideration.
My cautious optimism is based on the economic pressures Hamas will face as it tries to govern the Palestinians. Yet the Middle East is the most unpredictable region of the world. So I would not bet a lot on my analysis, especially in the near term, but I do disagree with the pessimistic views among the media and politicians about what Hamas will do after its astonishing political victory.
A varied set of comments, some of them interesting and insightful. I have a few reactions.
Some of you questioned whether for-profit colleges provide a useful education. The only decisive way to get at this is to calculate how much earnings increase as a result of attending for-profit colleges, and then compare that gain to the cost of attending in order to estimate rates of return on this investment. Rate-of-return estimates can also take into account that relatively many individuals who attended commercial colleges default on government-backed loans. Thousands of rate-of-return calculations have been made for traditional public and private non-profit colleges and universities, but I am not familiar with any for commercial colleges (that does not mean there are no such estimates; only that there are not many).
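The rate-of-return logic described above can be sketched numerically. The sketch below treats attendance as an investment with up-front costs followed by an annual earnings gain, and solves for the internal rate of return. Every figure in it (tuition, forgone earnings, the size and duration of the earnings gain) is a hypothetical assumption chosen for illustration, not an estimate from any study.

```python
# Illustrative sketch of a rate-of-return calculation for college attendance.
# All dollar amounts and horizons are hypothetical assumptions.

def irr(cash_flows, lo=-0.99, hi=1.0, tol=1e-6):
    """Internal rate of return by bisection, assuming a single sign
    change of the net present value between lo and hi."""
    def npv(r):
        return sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid) > 0:      # investment still profitable at this rate
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Two years of tuition plus forgone earnings (hypothetically $15,000 each),
# then a hypothetical $4,000 annual earnings gain over 30 working years.
flows = [-15000, -15000] + [4000] * 30
rate = irr(flows)
print(f"estimated annual rate of return: {rate:.1%}")
```

Adjusting for loan-default risk, as the post suggests, would simply mean shrinking the positive cash flows before computing the return.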
I do not believe signaling is an important factor in explaining returns to higher education in general, or to commercial colleges in particular. The signaling interpretation of the benefits of going to college originated in the 1970s and had a run of a couple of decades, but is seldom mentioned any longer. I believe it declined because economists began to realize that companies rather quickly discover the productivity of employees who went to college, whether Harvard or the University of Phoenix. Before long, their pay adjusts to their productivity rather than to their education credentials. I agree with one of the comments that such credentialism is more likely to survive among public-sector employees.
The federal government already punishes proprietary colleges and others whose graduates have high default rates on their loans. As I understand the procedure, if default rates get above a certain level, students at these schools have trouble getting loans. Of course, that makes it much more difficult to attract students.
Studies do show that retraining adults over age 40 generally produces very little in the way of higher earnings. But for-profit colleges mainly enroll students in their twenties and thirties, not much older than that.
Someone asked why states like New York oppose for-profit colleges. As someone pointed out, the University of Phoenix had to fight hard to get accredited in many states, and is still denied the opportunity to enroll students in New York and about fifteen other states. I suggested in my post that the answer is opposition from public and private non-profit colleges that do not want the competition. It is common for companies in many industries to restrict the entry of competitors if they can. Why should traditional colleges be any different? They only express their opposition in more highfalutin and self-righteous language.
An article in the New York Times yesterday discussed the moratorium imposed last week by the New York State Board of Regents on new for-profit, or commercial, colleges in that state. Commercial colleges have been growing rapidly nationally, and the Times' article discusses problems that have been found with some of them in New York and elsewhere. Despite various abuses, I believe that for-profit colleges and universities fill an important need, and the moratorium imposed by New York is unwise and should be lifted.
Government-run schools dominate higher education in most countries, including the U.S., where some 70-75 per cent of undergraduates attend public colleges and universities. To be sure, private non-profit colleges and universities make important contributions in some countries, such as the University of Chicago, Stanford University, and Swarthmore College among many others in the United States, Keio University and numerous other little-known schools in Japan, and Insead in France. During the past thirty years, the number of for-profit colleges and universities has grown rapidly from negligible numbers, especially in the United States but also in China and elsewhere in Asia. The Career College Association, an association of for-profit postsecondary institutions, lists over 2000 members, and it does not even include the best-known for-profit colleges, the University of Phoenix and DeVry University. Phoenix is the largest accredited private university and among the oldest of the for-profit universities. It was founded in 1976, enrolls 100,000 online students and even more students at 170 campuses in over 30 states, and is publicly listed with a market capitalization of several billion dollars.
According to the Times' article, commercial colleges enroll about 7 per cent of students in higher education in NY State. This is even without the University of Phoenix, which has not yet been allowed to enroll students in that state. Other states also have rapidly growing enrollments in for-profit colleges, although I do not have figures on their enrollment shares.
What explains the boom in commercial colleges, given the difficulties of competing against highly subsidized, taxpayer-financed institutions and against private non-profit institutions with considerable endowments and exemptions from property and income taxes? To me, the obvious answer is that commercial colleges are meeting a need not met by these other institutions. For-profits generally enroll lower-income and older students who are disproportionately African-American and from other minority backgrounds. They offer specialized programs with classes that often meet in the evening and at other convenient times. Such opportunities are usually less available at cheaper government-run colleges and non-profit institutions.
In addition, for-profit institutions have taken the lead in providing online education that offers the greatest flexibility for working students. Students can take online courses in the evening, on weekends, before they start work, or at other times that are convenient for them. Online courses do not allow the direct interaction between students and faculty that is available in classrooms, but virtual classrooms provide opportunities to chat with other students no matter where they are located. In addition, they often provide direct and immediate access to faculty who answer questions and provide other information. No wonder that hundreds of online for-profit institutions continue to operate even after the crash several years ago of internet-based companies. Some of these online institutions offer degrees, including advanced degrees, while most offer specialized training in particular areas, or refresher courses for out-of-date professionals.
Students at certified for-profit colleges have long been eligible for federally backed loan programs, and are also eligible for most state programs that provide financial assistance, such as New York State's extensive tuition-assistance program. Since for-profits enroll relatively many students from poor backgrounds with modest earnings, it is no surprise that their students take a disproportionate share of federally backed loans and state grants. For example, according to the Times' article, they get 17 per cent of the tuition assistance provided by New York while enrolling only 7 per cent of the students.
The Times concentrates on a few examples of corrupt practices uncovered in New York State and elsewhere. In addition, it is well known that students who went to proprietary colleges have higher rates of default on federally backed loans than students who went to state or non-profit institutions. For-profit institutions have been accused of false advertising about their programs, of very low standards for admission, and even of changing students' answers to make them eligible for state aid.
However, no one to my knowledge has conducted a good study that analyzes the frequency of misleading advertising, or of deceptive and dishonest practices, at commercial institutions of higher education compared with state and private non-profit institutions. Many of the private and public non-profit colleges and universities are guilty of shoddy teaching, misleading claims in their handbooks and advertising about what students would learn at their institutions, admitting students to PhD programs in fields where jobs are almost impossible to find upon graduation, and other false, misleading, or immoral practices. The late George Stigler, a Nobel Prize-winning economist, wrote a humorous essay entitled "A Sketch of the History of Truth in Teaching" (reprinted in his collected essays The Intellectual and the Marketplace) in which he basically argues that if traditional universities were held to the same standard of truth as private companies, they would be subject to large and numerous lawsuits.
Some economists have argued that non-profit organizations perform better from a social perspective than for-profit firms when customers have difficulty assessing various hidden qualities of the services provided. But that argument does not seem important in comparing the performance of non-profit and for-profit institutions of higher education. Students can usually quickly evaluate the type of teaching they receive, and they can also learn whether graduates of their institution get good jobs. Many students at commercial institutions may overestimate their abilities and the job market they will face upon graduation or completion of a program, but that is also likely for students at the most prestigious universities who major in subjects where few jobs are available, such as Icelandic Literature or Medieval European History.
Commercial colleges have grown rapidly in a highly competitive industry where other colleges are greatly subsidized. This suggests that they generally are filling a useful niche inadequately covered by traditional colleges and universities. Sure, lying and cheating by these institutions should be attacked by private and public lawsuits, but government moratoriums and other orchestrated attacks should not be the way non-profits are allowed to fight off new and tough competitors.
I agree with what Becker has written on this important subject. I want to approach the subject from a slightly different angle, however, which is to consider why higher education in the United States is dominated by public and nonprofit private institutions (abroad, almost all higher education is government-operated) and what this implies about the reasons for the growth of the profit-making institutions.
A nonprofit enterprise is one that (1) enjoys an exemption from taxation and (2) operates under a nondistribution constraint--that is, any surplus of revenues over expenses cannot be distributed as profits to the firm's "owners." The points are related. To enjoy a charitable exemption from taxes, an institution must not only have a purpose deemed worthy (such as promoting education, health, religion, the arts, and so forth), but must also devote all its resources, including income on endowment, to its charitable purpose.
The nondistribution constraint is indeed constraining, because it means that the institution cannot raise money in the equity markets. It can compete with profit-making competitors only if it can attract investment from donors. Generally, this requires that it have many affluent alumni, as they are the principal donors to colleges and universities (partly out of gratitude, partly for the less altruistic reason that they derive prestige from having attended a distinguished institution and they want to help it maintain its distinction). There is a chicken and egg problem. To attract children of well-to-do families, and other children who have good earning prospects, the school has to offer an attractive program, good living and athletic facilities, and a distinguished faculty, but all those things cost money, which is hard for a nonprofit institution to raise unless it already has wealthy alumni. This may be why the very successful nonprofit colleges and universities tend to be quite old. They have had a long time to "grow" alumni who make generous contributions. Brandeis University, founded in the 1940s, is one of the few prominent private universities that is not very old--and it has had great trouble building up an endowment (though in part this is because of the elimination of Jewish quotas at other prominent universities--those quotas were one of the major factors in the decision to create Brandeis).
The result is a tendency for nonprofit colleges and universities to be quite expensive. Access to them by kids who are not well off and do not have good earnings prospects is further restricted by the practice of "legacy admissions," an important part of the fund-raising strategy of the classy nonprofit institutions.
Public colleges and universities take up much of the slack by subsidizing tuition; there are also federal and state loan programs for college tuition. But tuition expense at public institutions has been rising, at the same time that these institutions have begun angling for more affluent students by becoming semi-private--sometimes more than semi: for example, the University of Michigan, though state-owned, now derives only about 10 percent of its revenue from the state.
The rise of the profit-making college and university, described in Becker's post, can therefore be interpreted as a response to the increasing scarcity of places in nonprofit and public colleges and universities for students who for whatever reason do not have good prospects as high earners--prospects that would make them attractive to the nonprofit and public institutions and able to afford the tuition those institutions charge. Because such students cannot be relied on for future alumni donations, the capital required for their education must be raised from nonaltruists, i.e., profit-making investors; hence the increasing adoption of the for-profit form. Nonprofit institutions catering to the low end of the market have also emerged in recent years, but they may be at a competitive disadvantage vis-à-vis profit-making firms, as they may find it difficult to raise capital without an alumni base.
Is fraud and other malfeasance more likely in the new profit-making institutions? I think so, for two reasons. First, the consumers served by these institutions are less sophisticated than the consumers (the students and their families) of the educational services provided by the established institutions. Second, established institutions have more "reputation capital" at stake than a new enterprise; hence fraud or other misconduct is more costly to them, and so they make greater efforts to prevent it. This has nothing to do with any differences in "greed" across different organizational forms, but merely with differences in the cost of engaging in misconduct, which is greater for the nonprofit and public institutions because of their clientele and reputation. But reputation capital is as important to established profit-making institutions, such as the University of Phoenix, as to nonprofit ones. However, the rapid growth in the number of profit-making colleges and universities means that a disproportionate number of these institutions are new and therefore not yet established, and that would suggest that fraud may indeed be on the increase, as the New York authorities believe.
Even so, that is no reason to shut down the profit-making educational sector, which may have discovered a demand for college education that the nonprofits had overlooked. Given the private as well as social return to higher education, the contribution of for-profit colleges and universities should not be disparaged.
Overall, high-quality comments on my discussion of tenure. A few responses.
Professors at the vast majority of colleges and universities do very little research, long-term or any other type. Serious research is concentrated at 50-100 universities. So it is hard to see the length of time it takes to complete major research as an argument for tenure at the remaining 3000 or so colleges and universities. Moreover, Bell Labs in its heyday and other corporate research centers have encouraged long-term research without giving tenure. Good organizations, whether universities or corporations, will see the potential of original research, whereas bad ones will not, with or without tenure.
I do not know enough about what Boston University offered. If it did offer both tenure and non-tenure options, the data would tell us something useful about the value placed on tenure, although more risk-averse professors will tend to choose the tenure route.
I agree with some of the comments that renewable multiyear contracts may well be the way to go in academia. This would provide better incentives to professors during their prime years. It would also help get around the foolish federal law that prevents universities from forcing older professors to retire, except in extreme circumstances.
Perhaps university administrators desire tenure because their evaluation by higher-ups is shortsighted. But good colleges and universities would be better managed than that, given the competitiveness of the market for higher education in the United States. In fact, the survival of tenure in such a competitive higher-education market often makes me wonder whether the arguments against tenure are overlooking some important reasons why tenure may improve performance and efficiency.
James Miller was a student of mine, and a very good student indeed. He is also politically conservative in the sense that he believes in the advantages of free markets and a private enterprise system. Unfortunately, most faculties, including many economics departments, do not appreciate such views. That said, I must add that I do not know the situation at Smith, and why he was initially denied tenure, although it is interesting and relevant to the answer that he apparently won his appeal.
A number of interesting comments, as usual. I respond to several of them here.
On whether unions promote efficiency, a commenter was correct to point out that unions can benefit members, but they do so by restricting competition among workers. While this may raise the wages of unionized workers, it harms nonunionized workers (as well as consumers). If because of unionization an employer's wage bill rises, its demand for labor will decline, which means that fewer workers will be employed. By the way, in response to another comment, the decline in unionization in the private sector seems to me better evidence that union-protected employment is less efficient than employment at will than a study would be. It is the real market test.
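The step from a higher wage bill to lower employment can be sketched with a constant-elasticity labor-demand curve. The elasticity value, wage premium, and employment figure below are purely hypothetical assumptions chosen to make the mechanism concrete.

```python
# Hypothetical constant-elasticity labor demand: L(w) proportional to w**e.

def employment_after_wage_rise(employment, wage_increase, elasticity=-0.5):
    """New employment after wages rise by wage_increase (a fraction),
    holding the demand curve fixed. The elasticity is an assumed parameter."""
    return employment * (1 + wage_increase) ** elasticity

before = 1000                                      # workers at the old wage
after = employment_after_wage_rise(before, 0.15)   # 15% union wage premium
print(f"employment falls from {before} to roughly {after:.0f}")
```

With these assumed numbers, a 15 per cent union wage premium shrinks employment by several per cent; the displaced workers compete for jobs in the nonunion sector, which is the harm to nonunionized workers noted above.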
One commenter suggests that tenure increases the incentive of workers to invest in specialized skills. This may be true, but observation suggests that employers are able to encourage such investment without granting tenure. All sorts of nontenured private-sector workers, including doctors, lawyers, and engineers, invest in specialized skills. Becker explains the mechanism: specialization in firm-specific skills may make a worker more dependent on his employer, but it also increases the worker's value to the employer.
One comment perpetuates the very natural error of thinking that Einstein was employed by Princeton University. He was employed by the Institute for Advanced Study, which is located in Princeton, New Jersey, but is not part of the university. Princeton has garnered a great deal of prestige from the co-location of the Institute!
Another and more germane misunderstanding is that tenure is guaranteed employment. That is not correct. If a college shuts down, it does not have to continue paying the tenured faculty. And I think, without being certain, that if a university closes a department, it doesn't have to retain that department's faculty on its payroll. In effect what tenure guarantees is that you won't be replaced--even by a better candidate!
Incidentally, I do not suggest that a university or other employer should be forbidden to offer a tenure contract if the employee is willing to accept a compensating reduction in wage. The problem is asymmetric information. If the employee asks for such a contract, the employer may wonder whether the employee has private information that he is not sharing--for example, that he doesn't intend to work hard any more.
I agree that tenure protects academics against being fired because of their unpopular ideas, but there are other forms of retaliation that are almost as effective. If there is a market for the unpopular idea, the fired professor can find another job. If there is no market, he's likely to be ostracized by his peers. I would like some examples of where tenure made the difference between production and suppression (presumably temporary) of a genuinely important idea.
One comment misunderstands me as advocating abolition of tenure for civil servants. Not so. All I said was that I didn't think the Supreme Court in the name of the First Amendment should have abolished the spoils system. I emphasized that when performance measures are unavailable, which they often are for public services, the creation of a "high commitment" environment, including tenure, as a substitute for high salaries to compensate for risk of being fired for nonobjective reasons, may be optimal. A spoils system may well be less efficient than a tenure system, yet the tenure system may be less efficient than employment at will in settings in which performance measures are feasible.
I do think tenure for judges makes sense, because without it the judiciary would be excessively politicized. I do not have tenure in my part-time teaching job at the University of Chicago, and I think that's fine.
Most Americans employed in the private sector do not have any job protection. They are what are known as "employees at will." They can quit or be fired at any time for any reason other than a reason forbidden by law, such as race. Unionized workers (now a very small percentage of the private-sector work force) have some job protection; they can be laid off if their employer experiences a fall in demand and therefore doesn't need as many workers, but they can be fired only "for cause," normally some form of deficient job performance. In the public sector, most employees below the top political level have extensive job protection (including teachers), except in the military and other national-security employment, such as the CIA. Generally, civil employees of the government can be discharged only for cause, which often is very difficult to prove. The Supreme Court has largely abolished, in the name of free speech, the "spoils" system whereby state and local government jobs were given to the political supporters of the party in power. Federal judges can be removed (barring physical or mental disability) only by the cumbersome process of impeachment by the House of Representatives and conviction by the Senate. An important category of job-protected workers that bridges the public and private divide is tenured professors, who cannot be fired without cause. Finally, in Europe most workers have far more extensive job protections than American workers do.
The question I wish to address is whether this pattern makes any economic sense. One way to pose the question is to ask why--since employment at will is the cheapest form of employment contract--aren't all employees employees at will? In the otherwise dissimilar cases of unionized workers and public employees protected by the Supreme Court's interpretation of the First Amendment against political firing, tenure (employment protection) is imposed from the outside. Employers would like greater flexibility, but outsiders--unions or judges--impose tenure for their own reasons. Unions worry that without tenure protection, employers will pick off the union's supporters; the Supreme Court worries that without tenure protection public employees will be afraid to express political views opposed to those of their superiors, and so freedom of expression will be curtailed. But surely the curtailment would be slight, since few public employees will engage in public disagreement with their superiors even if they can't be disciplined for doing so. Moreover, there is a tradeoff between professional competence and personal loyalty. A slightly less able employee who is loyal to his superiors because of political compatibility or even nepotism will work more harmoniously with them, and the reduction in friction may offset a (modest) competence deficit.
Tenure is an efficient system in what organizational economists call a "high commitment" workplace. Contrast two types of enterprise. In one, the contribution of the individual employee to the enterprise's output is readily measured. Ordinarily this will be a business firm. Revenues, costs, and ultimately profits provide objective measures of performance. The individual employee's contribution to those measures may be more difficult to measure, especially when employees work in teams. But reasonable estimates are usually possible--employees and their superiors negotiate reasonable goals for the coming year relating to sales, markups, and cost reductions, and progress toward those goals is measured throughout the year. Employees can therefore be paid a salary or wage that approximates their marginal product. With their productivity continuously measurable, there is no need for job protection.
Or so it seems; for even in a firm, there may be some benefits to providing a degree of job protection. Suppose employees are in a position where by sharing their know-how they could increase the productivity of other employees. They may be reluctant to do this if they fear losing their jobs because they have helped the other employees become more productive than they. Some firms deal with this problem by making an employee's annual bonus depend not only on his own contribution but also on the overall performance of the firm that year. This is a more flexible method than giving workers tenure.
The sharing problem is sometimes offered as an argument for how unionization might actually increase productivity. But it is a weak argument. If tenure is an efficient employment contract, employers will institute it without union prodding. The steep decline of unionization in the private sector is a convincing "Darwinian" refutation of the argument one used to hear that unions actually promote efficiency.
Although performance measures are generally most feasible for business firms, some governmental or other noncommercial activities lend themselves to such measures. Criminal-investigation agencies such as the FBI provide good examples. An FBI agent can be evaluated by the number of arrests he makes weighted by convictions (arrests that do not lead to convictions are not productive), with the convictions in turn weighted by the length of the sentence and the value of any property recovered as a result of the prosecution. Note that the measure here, as in a firm, is not a simple quantitative measure of contribution to output, but rather is a value measure.
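The value measure sketched above can be made concrete. The following is an illustrative sketch only, not anything from the post: the weights, the function name, and the case data are all hypothetical, chosen just to show how unconvicted arrests drop out while convictions are weighted by sentence length and property recovered.

```python
# Hypothetical scoring of an FBI agent's arrests, per the value measure in
# the text: arrests without convictions earn no credit; convictions are
# weighted by sentence length (years) and property recovered (dollars).
# All weights and data below are invented for illustration.

def agent_score(cases, sentence_weight=1.0, recovery_weight=0.001):
    score = 0.0
    for case in cases:
        if case["convicted"]:
            score += (sentence_weight * case["sentence_years"]
                      + recovery_weight * case["recovered_dollars"])
    return score

cases = [
    {"convicted": True, "sentence_years": 5, "recovered_dollars": 20_000},
    {"convicted": False, "sentence_years": 0, "recovered_dollars": 0},  # no credit
    {"convicted": True, "sentence_years": 2, "recovered_dollars": 0},
]
print(agent_score(cases))  # 5 + 20 + 2 = 27.0
```

The point of the sketch is that the score is a value measure, not a raw count: the second arrest contributes nothing, and a large recovery can outweigh a long sentence depending on the weights chosen.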
In activities (some of which may be team production within business firms) in which performance measures are infeasible, usually because either the value of output or the employee's contribution to that output cannot be quantified, other methods of employee motivation than performance-based compensation must be sought. The "high commitment" workplace is a recognition that, fortunately, employees have other motivations for working productively besides the hope of salary increments, such as identification with the goals of the employer, as when judges and (other) civil servants internalize a "public service" ethic that induces them to work productively for a modest wage with limited hope of advancement. Tenure in such a setting both encourages sharing and discourages "influence activities," a term organizational economists use to refer to the kind of jockeying for position that occurs in the workplace when the absence of objective performance measures opens the door to worker competition based on personality, connections, and intrigue.
Even in a high-commitment environment, additional motivation may be provided by a tournament-style promotion system. Even if an employee's output cannot be measured with any precision, it may be possible to identify the best employee because the gap between his contribution and that of the next best may be large enough to be perceived without being quantifiable. Promoting the best employee to the next rank is therefore a method of incentivizing employees to do their best.
Both judicial and academic tenure are defended as needed to encourage independent thought and prevent political retaliation for unpopular views. This rationale is more persuasive in these contexts than in that of ordinary public employees, but it is not very satisfactory. In most nations, including nations that we consider our peers, the judiciary is insulated from political pressures, but the judicial career is much like that of other employees. Judges start at the bottom rung of the judiciary when they are appointed and work their way up by impressing their superiors. The U.S. federal judicial system (also the British judiciary, and those of the other former British possessions) is unusual in being a system of lateral appointments (from practice or the academy, generally) with very limited promotion. The difference may be due to the fact that the Anglo-American and especially the U.S. legal system gives much more discretionary authority to judges than foreign systems do, so that identifying the "best" for promotion is difficult and even arbitrary.
I do not think tenure makes a great deal of sense any longer in the academic setting, and I expect to see it gradually abandoned. (It has already been abandoned in England, for example.) If a university wishes to offer its faculty protection against political retaliation for unpopular views, it can do that by writing into the employment contract that politics is an impermissible ground for termination. Tenure is no longer needed because of an absence of performance measures. These measures exist in abundance. Quality of teaching is readily measurable by student evaluations, provided care is taken to prevent teachers from courting popularity by easy grading and light assignments and student evaluations are supplemented by faculty observation of the classroom. Quality of research is readily measurable by grants, prizes, and above all by citations to the professor's scholarly publications, weighted by the quality of the journal in which the citations appear.
In some fields, such as mathematics, academic output generally falls off significantly at a relatively young age, and there is fear that without tenure these faculty would be turned out to pasture long before retirement age. But this is no different from the situation in professional sports, modeling, and other youthful occupations, where it is handled by an alteration in the wage profile. If a career in mathematics entails a sharp fall-off in market wages after, say, age 40, the academic market will compensate by offering disproportionately high wages to young mathematicians; otherwise, talented mathematicians will choose professions, such as economics, in which math skills are valued but productivity does not decline steeply with age.
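The compensating-wage-profile argument can be illustrated with some simple present-value arithmetic. All the numbers below (wages, discount rate, career lengths) are invented; the sketch just shows that if a mathematician's market wage drops sharply after age 40, the early-career wage must rise above the flat alternative for the two careers to have equal lifetime value.

```python
# Hypothetical arithmetic for the compensating wage profile in the text.
# Compare a flat 40-year career (ages 25-64, wage 100 in $1000s) with a
# mathematician whose wage falls to 60 after age 40, and solve for the
# early wage that equalizes the present values of the two careers.

def present_value(wages, r=0.03):
    """Discounted value of an annual wage stream, first payment at t=0."""
    return sum(w / (1 + r) ** t for t, w in enumerate(wages))

pv_flat = present_value([100] * 40)              # flat alternative career
pv_late = present_value([0] * 15 + [60] * 25)    # value of the post-40 years
annuity_15 = present_value([1] * 15)             # value of $1/yr for 15 years

# early * annuity_15 + pv_late = pv_flat  =>  solve for early:
early = (pv_flat - pv_late) / annuity_15
print(round(early, 1))  # exceeds 100: young mathematicians earn a premium
```

Under these made-up numbers the market would have to pay young mathematicians well above the flat wage of 100; the exact premium depends on the discount rate and the assumed fall-off, but the direction of the adjustment is the point.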
One reason for the superior productivity of U.S. workers compared to European workers is that Europe's more extensive tenure protections encourage laziness by reducing the cost of laziness to the worker. But that is not the principal problem. Tenure removes the stick but not necessarily the carrot. More productive professors can be paid more and, even if their university has a lock-step compensation system, can obtain prestige and outside income by outstanding performance. The greater cost of tenure is simply in forcing retention of inferior employees. The 80-year-old mathematician may be working hard, but he may be incapable of achieving the output of the 25-year-old mathematician who would take his place were it not for tenure. Note how governmental prohibition of compulsory retirement at a fixed age aggravates the inefficiency of tenure--and is no doubt contributing to its eventual abandonment.
Perhaps the strongest argument for academic tenure is that without it academics would be reluctant to undertake promising projects with a high risk of failure. But the situation is no different in "knowledge" firms such as software and pharmaceutical-drug producers, which encourage their scientists to undertake high-risk projects--and do not think it necessary to offer tenure. If most good new ideas are produced by young academics, then an institution that raises the average age of faculty, namely tenure, seems likely to reduce academic productivity. An interesting empirical project, therefore, would be to study the effect of England's abolition of tenure on the average age and productivity of English university faculties.
The traditional justification for academic tenure is that otherwise professors would be unwilling to express unpopular views for fear of being fired. This argument is extremely weak in the United States, where several thousand colleges and universities compete for professors. In fact, tenure became common at American universities only in the 1920s. It is possible for academics with extremely unpopular views to gain an appointment with tenure at various institutions, as seen from the tenure of faculty who deny the Holocaust, or of a Ward Churchill at the University of Colorado, with his outrageous views on terrorism and other issues. The case for tenure is stronger in countries where governments control all universities and can block academics with unpopular opinions from gaining and keeping appointments. Yet even that argument has become weaker with the rapidly growing international market for good academics.
Are there other persuasive arguments for academic tenure? Some have been made in the economics literature, including the alleged difficulty of judging the quality of teaching and research, and the non-profit nature of universities. I have not found any of them persuasive--for example, there is rather widespread agreement in most departments about who the good teachers are, and also, to a large extent, about who has produced the more influential research.
The American Constitution gives federal judges lifetime tenure so that they will be free to decide cases without fear of political reprisals for unpopular decisions. In posts on March 12th and 19th of 2005 I argued against lifetime tenure for judges as encouraging them to remain too long, especially now that judges are likely to live into their eighties, and within a couple of decades into their nineties. A single long term of 14 to 20 years would eliminate any political influence over their decisions stemming from fear of losing their positions. It would also weaken opposition to the appointment of judges with strong views, since they would not be deciding cases for thirty years or more.
Civil servants have tenure because of similar political considerations, but top-level government officials presently do not have tenure and are selected by the administration in power. It is hard to see why low-level government employees need tenure, since they do not make politically sensitive decisions. Perhaps tenure would be justified at certain intermediate levels, but that would at best cover only a small fraction of all government officials.
Companies often give de facto tenure to employees who have worked for them for a long time, except when the companies get into financial difficulties. This is readily explained since long-term employees usually have made significant investments in what is called firm-specific human capital. This term means knowledge and skills of employees that are more valuable at the company where they have worked for many years than at other companies. In order to encourage such investment, and to discourage inefficient quits, companies give a combination of implicit tenure and higher wages to their long-term employees.
Note that tenure alone would not be sufficient to encourage these investments and discourage quits. It has to be combined with higher earnings for long-term employees. Indeed, high enough earnings for employees with much firm-specific investment would be sufficient to discourage quits even without tenure. But bargaining between workers and employers should lead to at least de facto tenure, because that is more efficient when employees are more productive at this firm than at other firms. It is efficient because both workers and companies are better off if workers with much relevant firm-specific investment stay at the companies where they have worked for many years.
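The efficiency claim here is just a statement about surplus: a worker with firm-specific capital produces more at the current firm than anywhere else, so some wage between the outside offer and current productivity makes both sides prefer continuation to separation. A toy numerical sketch, with invented numbers:

```python
# Invented numbers illustrating why retaining a worker with firm-specific
# human capital is efficient: productivity at the current firm exceeds the
# worker's best outside option, so a wage in between splits the surplus and
# makes both firing and quitting unattractive.

productivity_here = 120       # annual output at the current firm ($1000s)
productivity_elsewhere = 100  # worker's productivity (and wage) elsewhere

surplus = productivity_here - productivity_elsewhere  # 20 per year at stake
# Any wage strictly between 100 and 120 beats separation for both sides;
# here the bargain splits the surplus evenly.
wage = productivity_elsewhere + surplus / 2

firm_gain = productivity_here - wage         # firm nets 10 more than replacing him
worker_gain = wage - productivity_elsewhere  # worker nets 10 more than quitting
print(wage, firm_gain, worker_gain)  # 110.0 10.0 10.0
```

This is also why tenure alone is not enough: if the wage were held at the outside level of 100, the worker would be indifferent to quitting even with complete job security, and the surplus from the firm-specific investment could be lost.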
Firm-specific investment provides some of the "commitment" that Posner discusses, since employees are obviously more committed to companies where they are more productive. Similarly, the company would be more committed to these employees than to other employees. Commitment is also related to loyalty, which in any organization is extremely important--both loyalty from employees to the organization, and from the organization to its employees. That can be encouraged by higher earnings as performance improves, and by good and considerate treatment of employees. It would be in the self-interest of organizations to keep their loyal employees, so no explicit tenure rule seems desirable to encourage the retention of loyal members of an organization, no matter what work or profession they engage in.
Since de facto tenure is in the self-interest of companies as well as workers, the value of tenure does not provide justification for laws against firing older workers, or laws that require costly severance pay to long-term employees. Union contracts that make long-term employees less subject to layoffs may in some circumstances provide useful codifications of implicit tenure. However, this could be inefficient when more senior union members have a disproportionate influence over union bargaining. In any case, one would expect companies without unions to have an incentive to codify hiring and firing rules when that is efficient. In fact, most large non-union companies already have such rules.
Sorry for this delay in responding about organ markets, but I have been tied up with other matters. On the whole the comments led to a very sophisticated discussion, with a minimal number of personal attacks. Let me respond briefly.
Barter arrangements like LifeSharers may well improve the present transplant system, but they have all the disadvantages of barter. That is why money and markets have replaced barter throughout the world.
I indicated that opt-out systems do not have a large effect on organ donations. The reason is mainly that family members often overrule the implied wishes of their deceased relatives. I suspect that many people do not make the effort to opt out because they expect their parents or children to have the ultimate say.
A common concern among the critics is that the poor will both give too many of their organs and lack access to transplants. I have more confidence than these critics do in the ability of the vast majority of poor people to make decisions in their self-interest. Moreover, market forces rather than rich persons would determine the price of organs, in the same way that rich people do not presently set the price of maid services.
Most organ transplants are paid for by private insurance, Medicaid, or Medicare. Since that would continue, and since I indicated that market-determined organ prices are unlikely to add much to the total cost of transplants, the poor should not be at more of a disadvantage in getting transplants if organs were sold than they are under the present system. Indeed, they are likely to be at less of a disadvantage once the supply of organs equals the demand, for the rich and famous can now sometimes use influence to get priority, and they can travel to countries where they are assured of getting a transplant.
Someone argued for a tax on people who do not agree to make their organs available, rather than a market-clearing price for organs. Taxing is an alternative way to clear the organ market--and the markets for steel, apples, and other products as well. But the rich would have far more influence over the magnitude and form of such taxes than they would over organ prices that raise supply to equal demand. Minimizing the use of force and other government powers in determining supply is a general advantage of market-determined prices that fully applies to organs.
At least one of you claimed that the price of organs would be high because of the so-called "endowment" effect, while others thought it would be too low because the poor would be duped. It cannot be both, and I believe it would be neither, partly for the reasons given in my post and above.
Many decisions in life have important elements of irreversibility, in the sense that the cost of reversing them may be very large or even infinite. These include going to law school, marrying a particular person, joining the armed forces for several years, etc. The irreversibility of organ donation for a live transplant will make people consider that issue carefully. The present system is worse in this regard, since family members are often pressured into making quick decisions about providing organs for live transplants, because their relatives would die without a transplant and cannot expect an organ soon, given their place in the queue.
I have been at several international conferences of transplant surgeons, and have argued there for markets in organs. While I believe most but far from all of these surgeons oppose such a change, they all recognize the very serious problems with the present system. Surgeons and hospitals fight sometimes over who has access to available organs, they see many patients die because they cannot get organs, and they often must perform a transplant surgery at a time that is not optimal for a person receiving the transplant because a matching organ becomes available at a particular moment.
So I believe much more of the concern expressed in some comments should relate to persons in need of transplants who either die because they cannot get them, or wait for years in ill-health before they receive suitable organs.